I’ve read that oversampling makes LPFing easier. Doesn’t a delta-sigma modulator do oversampling already?
Why do we need to, for example, 8x oversample a 44.1 kHz song and then pass it through a delta-sigma modulator?
WHY on earth not adjust the time scale so all the graphs match, and then keep the time per division on screen?
The last two could be down to “pressed Print Screen at an unlucky moment on a low-res scope”.
Edit: A long time window could also cause undersampling or aliasing, since the scope spreads a fixed number of samples over the whole capture.
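Rough numpy sketch of that edit, with made-up numbers (10 kS/s effective rate, 9 kHz tone), just to show how a too-low effective sample rate folds a tone down to a lower apparent frequency:

```python
# Toy aliasing check (made-up numbers, not any specific scope): a fixed-depth
# capture spread over a longer window means a lower effective sample rate.
# A 9 kHz tone sampled at only 10 kS/s shows up as a 1 kHz alias.
import numpy as np

f_tone, fs_effective = 9_000, 10_000            # tone is above Nyquist (5 kHz)
n = np.arange(1_000)
x = np.cos(2 * np.pi * f_tone * n / fs_effective)

spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), d=1 / fs_effective)
print("apparent frequency:", freqs[np.argmax(spectrum)], "Hz")   # ~1000 Hz, not 9000
```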
The issue, though, is more complicated: oversampling makes sine waves look smoother.
However, just like upscaling images, and despite what TV would imply, you can’t add information that isn’t there, so the upsampling “filters” in effect make assumptions about what the signal they’re upsampling actually is.
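As a sketch of where that assumption lives (assumed numbers: 8x upsampling of a 1 kHz tone at 44.1 kHz, a generic windowed-sinc interpolation filter, not any particular DAC’s filter):

```python
# Classic 8x upsampling: insert 7 zeros between samples, then low-pass filter.
# The low-pass (interpolation) filter is the "assumption" -- it fills in the
# in-between values as if the signal were band-limited below the original
# Nyquist frequency (22.05 kHz).
import numpy as np
from scipy import signal

fs, ratio = 44_100, 8
t = np.arange(256) / fs
x = np.sin(2 * np.pi * 1_000 * t)            # original samples

# Zero-stuff: new rate is fs * ratio; the spectrum now contains images of the tone.
up = np.zeros(len(x) * ratio)
up[::ratio] = x

# Interpolation filter: windowed-sinc low-pass with cutoff at the old Nyquist.
# Its impulse response is exactly the "ringing" waveform discussed below.
taps = signal.firwin(255, cutoff=fs / 2, fs=fs * ratio)
y = signal.lfilter(taps, 1.0, up) * ratio    # gain of `ratio` restores amplitude

print(len(x), "samples in ->", len(y), "samples out at", fs * ratio, "Hz")
```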
As a result, the filters also add ringing to the transient response, which is why on impulse plots you’ll see significant ultrasonic ringing both before and after the impulse. There was a bit of a religious debate around pre-ringing being bad in the late ’90s/early ’00s, and a lot of filters were developed to minimize that at the expense of more post-ringing. We seem to be past that, but there isn’t clear agreement on what a good upsampling filter looks like.
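If you want to see that pre-/post-ringing trade-off for yourself, here’s a rough sketch comparing a generic linear-phase low-pass with a minimum-phase version derived from it (hypothetical filter parameters, not any specific DAC’s):

```python
# Feed an impulse through a linear-phase FIR (symmetric taps, ringing before and
# after the peak) and through a minimum-phase version of the same magnitude
# response (ringing pushed after the peak). Parameters are assumptions.
import numpy as np
from scipy import signal

fs = 44_100 * 8                                        # assumed 8x oversampled rate
h_lin = signal.firwin(257, cutoff=20_000, fs=fs)       # linear phase: symmetric taps
h_min = signal.minimum_phase(h_lin, method='homomorphic')

impulse = np.zeros(1024)
impulse[0] = 1.0
ring_lin = signal.lfilter(h_lin, 1.0, impulse)         # pre- and post-ringing
ring_min = signal.lfilter(h_min, 1.0, impulse)         # mostly post-ringing

peak_lin = np.argmax(np.abs(ring_lin))
peak_min = np.argmax(np.abs(ring_min))
print("energy before peak, linear phase: ", np.sum(ring_lin[:peak_lin] ** 2))
print("energy before peak, minimum phase:", np.sum(ring_min[:peak_min] ** 2))
```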
Sorta kinda. AFAIK, delta-sigma oversampling works on the basis that the more samples you take, the closer the modulator can approximate the “ideal” output, a bit like how an FFT breaks down a waveform better the more points it has to work with.
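Here’s a toy first-order delta-sigma modulator, just to illustrate the “more samples, closer to the ideal value” idea; the structure and numbers are assumptions, not any real DAC’s design:

```python
# Toy 1-bit, first-order delta-sigma modulator: the quantiser only outputs
# +1/-1, but the feedback loop pushes the quantisation error to high
# frequencies, so averaging (low-pass filtering) many output samples lands
# close to the "ideal" input value.
import numpy as np

def delta_sigma_1st_order(x):
    """1-bit, first-order delta-sigma modulation of samples in [-1, 1]."""
    integ, prev_out = 0.0, 0.0
    out = np.empty_like(x)
    for i, s in enumerate(x):
        integ += s - prev_out                     # integrate input minus fed-back output
        prev_out = 1.0 if integ >= 0 else -1.0    # 1-bit quantiser
        out[i] = prev_out
    return out

osr = 64                                # assumed oversampling ratio
x = np.full(osr * 256, 0.3)             # constant "ideal" value to encode
bits = delta_sigma_1st_order(x)
print("mean of 1-bit stream:", bits.mean())   # ~0.30: averaging recovers the input
```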