About oversampling

WildStriker

New Member
Hi,

I was wondering if anyone could explain the process of OVERSAMPLING an analogue signal during the ADC (analogue-to-digital conversion) process?
 
No matter what anyone tells you about oversampling, it's nothing more than a mathematical 'trick' to synthesize a sampled analog waveform with more resolution than it would otherwise have. The "oversamples" are not actual samples of the analog wave; rather, they are fake samples that are interposed along with the real samples. The values of these oversamples are interpolated and inserted between the real samples. When filtered, the resulting waveform is smoother and has less quantization noise than if the oversamples weren't used. The development of the technique is highly mathematical, and uses discrete Fourier transforms and infinite impulse response filtering. You have to study discrete math to really get a grasp on it.
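
If you want to see the basic idea without all the math, here's a rough numpy sketch (everything here is illustrative, not any particular converter's algorithm): insert zero-valued 'fake' samples between the real ones, then low-pass filter so they take on interpolated values.

```python
import numpy as np
from scipy import signal

fs = 8000                          # original sample rate, Hz (illustrative)
t = np.arange(0, 0.01, 1 / fs)
x = np.sin(2 * np.pi * 440 * t)    # the 'real' samples

L = 4                              # oversampling factor
up = np.zeros(L * len(x))
up[::L] = x                        # zero-stuff: 3 fake samples per real one

# Low-pass at the original Nyquist frequency; the filter replaces the zeros
# with interpolated values, giving a smoother waveform at 4x the rate.
h = signal.firwin(63, cutoff=fs / 2, fs=L * fs)
y = L * signal.lfilter(h, 1.0, up)
```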

Be careful when asking about oversampling; many people think they understand it when in fact they don't.
 
BrownOut, I believe you are referring to oversampling when recreating the analog signal from the digital word samples with a D/A. However the OP's question was about oversampling and A/D conversion.

A/D oversampling (sampling beyond the Nyquist lower limit of two samples per cycle of the highest frequency of interest) can be used to minimize the anti-alias filter requirements at the A/D input. To avoid aliasing any signal and noise beyond the desired signal frequency limit into the signal pass-band, the A/D input must have a low-pass filter that rolls off above the frequency of interest. If the sampling frequency is near the Nyquist limit, then the filter must have a sharp roll-off (be of a high order) to properly suppress any higher-frequency signals/noise and prevent aliasing.

If a large amount of oversampling is used (such as with Sigma-Delta A/D converters), then the Nyquist frequency is moved well above the signal frequency and a much simpler LP analog filter (often just a single pole) can be used. The rest of the required LP rolloff can then be done with a digital filter at the A/D digital output as part of the normal digital signal processing, which can readily generate a high-order rolloff with no additional components.
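
To put rough numbers on that, here's an illustrative scipy calculation (all frequencies and the attenuation target are made up for the example):

```python
from scipy import signal

fpass = 20e3   # highest frequency of interest, Hz
astop = 60     # stopband attenuation needed to suppress aliases, dB

# Sampling near the Nyquist limit (48 kHz): anything above 28 kHz aliases
# into the passband, so the analog filter must be 60 dB down by 28 kHz.
n_tight, _ = signal.buttord(fpass, 28e3, gpass=1, gstop=astop, analog=True)

# 16x oversampling (768 kHz): the first alias band starts near 748 kHz,
# so a very gentle roll-off reaches the same 60 dB.
n_easy, _ = signal.buttord(fpass, 748e3, gpass=1, gstop=astop, analog=True)

print(n_tight, n_easy)   # something like 23 vs. 3 (Butterworth order)
```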
 
Are there good sources online that explain this?

What online books or websites can anyone recommend?
 
BrownOut, I believe you are referring to oversampling when recreating the analog signal from the digital word samples with a D/A. However the OP's question was about oversampling and A/D conversion.

Nope. I'm talking about A/D conversion. When converting from analog to digital, samples are 'faked' and interpolated into the sample stream. Like This says.


Also, look at this. This is an oversampling ADC and uses an interpolation filter to accomplish what I've described. The math can get complicated, but they show some nice graphics that give you the idea.

One important concept in oversampling is that the Nyquist frequency still determines the maximum frequency that gets processed. That's why CDs are still limited to about 22 kHz or so (44.1 kHz sample rate / 2 = 22.05 kHz).
 
Both your references refer to oversampling in DAC (digital to analog conversion) not oversampling A/D (analog to digital) conversion, which is the OP's question. The title of your second reference is "Oversampling Interpolating DACs" which is clearly about digital to analog conversion, not A/D conversion. I do understand the difference between oversampling in a DAC and oversampling in A/D. Do you?

An additional advantage of oversampling with an A/D is that the samples can be averaged to improve the apparent resolution of the A/D. That's how Sigma Delta A/D converters achieve accuracy of 20 bits or more with a basic 2-bit conversion process.
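
Here's a quick numpy sketch of that averaging effect (a plain averager rather than a true Sigma-Delta modulator, with an assumed noise level):

```python
import numpy as np

rng = np.random.default_rng(0)
vin = 0.30123          # DC input in volts (arbitrary)
lsb = 1 / 256          # step size of a coarse 8-bit quantizer

# Oversample, quantize coarsely, then average the batch. With >= ~1 LSB of
# input noise, each 4x increase in N buys roughly one more bit.
for n in (1, 16, 256, 4096):
    codes = np.round((vin + rng.normal(0, lsb, n)) / lsb)
    print(n, codes.mean() * lsb)   # converges toward 0.30123
```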
 

The first reference says, "Oversampling is the process of increasing the sampling frequency by generating new digital samples based on the values of known samples..." One generates digital samples by A/D (analog-to-digital) conversion. The second reference says, "Oversampling and digital filtering eases the requirements on the antialiasing filter which precedes an ADC," and follows with a brief discussion of the advantages of oversampling as it pertains to what is captured from the analog stream. I included this for the graphics, which show how the technique of oversampling, whether used in ADC or DAC, produces a sampled signal with better resolution and more modest filter requirements.
 

hi D,
Out of interest, I completed a working prototype a few weeks ago, now undergoing field trials, using oversampling with a 12-bit ADC and averaging 1000 samples per output value.
The results were quite surprising and repeatable; the basic span was 3000 counts, and it was easily stretched to 30,000.
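
Roughly, the arithmetic looks like this (a numpy sketch; the counts are from the actual unit, but the noise level is my assumption):

```python
import numpy as np

rng = np.random.default_rng(1)
true_counts = 1234.5678        # signal expressed in 12-bit ADC counts

# 1000 conversions with roughly a count of noise riding on the signal
raw = np.round(true_counts + rng.normal(0, 1.0, 1000))

# Sum the batch and rescale: a 0..3000 count span becomes 0..30,000
stretched = raw.sum() / 100
print(stretched)               # ~12345.7, one extra digit of resolution
```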

Regards.
 

Hi Eric, are you sampling a static signal and averaging it to cancel random noise? When I hear "oversampling" I think of it from a DSP standpoint. I guess it has other meanings.
 
Hi Eric, are you sampling a static signal and averaging it to cancel random noise? When I hear "oversampling" I think of it from a DSP standpoint. I guess it has other meanings.

hi D,
The intention is not just to cancel background noise on the 'pseudo-static' signal but to enhance the resolution of the 12-bit ADC.
 
You still seem to be mixing apples and oranges somewhat.

The first reference is referring to generating new digital samples by interpolating between the available digital samples for output by a DAC, which is different from any oversampling that the A/D may have previously done to generate these samples. The interpolation is a mathematical function of the digital samples in the digital domain.

The second reference refers to A/D oversampling in the first sentence (only) as a setup to discussing oversampling in DACs in the rest of the article. The second sentence is "The concept of oversampling and interpolation can be used in a similar manner with a reconstruction DAC." The article is about DAC oversampling. Some of the graphs may apply to both types of oversampling but that's not discussed in the paper.

Oversampling in the DAC process consists of interpolation between the available digital samples that were previously taken. It is a mathematical process.

Oversampling in the A/D process consists of actual (not fake or calculated) samples taken at greater than the Nyquist rate. The oversampling process in this case is not, in itself, mathematical, although mathematical functions, such as filtering and interpolation, may be performed on these digital samples.

Edit: Upon reflection I believe our differences are that I am referring to two distinct types of oversampling (digital interpolation and real) and you are referring to only the digital interpolation type.
 
I have another question concerning oversampling. How is an analogue low-pass filter used in the initial ADC process, and then how is a digital filter used?
 
An analog filter is used at the input to the A/D to minimize any noise and signals above the frequency of interest. Otherwise these will alias into your signal passband, causing noise and corruption of the digitized signal.

If the A/D oversamples the input significantly above the signal Nyquist frequency (a sample rate twice the highest frequency of interest), then a digital filter can be used at the A/D output to provide additional filtering of noise and signal components above the desired signal frequency. This can lessen the requirements on the analog input filter.
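
As a concrete sketch (illustrative rates; scipy's decimate() stands in for whatever digital filter the real system uses):

```python
import numpy as np
from scipy import signal

fs_adc = 256e3                      # oversampled A/D rate
t = np.arange(0, 0.05, 1 / fs_adc)
# Wanted 1 kHz signal plus out-of-band junk that a simple single-pole
# analog input filter would only partially attenuate.
x = np.sin(2 * np.pi * 1e3 * t) + 0.2 * np.sin(2 * np.pi * 60e3 * t)

# Digital low-pass then downsample by 16 (to 16 kHz). decimate() filters
# before discarding samples, supplying the rest of the roll-off digitally.
y = signal.decimate(x, 16, ftype='fir')
```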
 

But if an analog low pass filter is required, how is it used in the ADC process?
 
An analog low-pass filter is always required. It attenuates only frequencies outside the bandwidth of interest. It is used before any A/D conversion takes place. Digital filtering takes place after A/D conversion and is a more precise way to apply filters. Digital filtering can be used for any number of processes, not only low-pass filtering.

The reason for the analog filter on the front end is just to keep out signals that might otherwise contaminate the digital samples that get processed by the digital processing.
 
Hello,

One of the most important things about oversampling is the noise. An A/D converter cannot oversample to gain even one more bit, no matter how many samples are taken, if there is no noise present in the input signal. This means the pre-filter cutoff frequency has to be very carefully considered to allow some higher-bandwidth noise to enter the system. What's even worse, if that noise does not have adequate fluctuation, converted values that happen to lie close to the true values come out very accurate, while others are extremely inaccurate. The problem is that this last condition is hard to see unless a careful calibration check is done after the circuit is built, because users start seeing values popping out like 2.123456 volts and think they have 7-digit accuracy. Although that one sample can be accurate to 0.000001 volt, samples just a little different from that absolute value may come out with a precision of only 0.001, perhaps. The only way to detect this is with a calibration check, looking for non-linearity within the range of interest.
Some references also note that various uC pins might generate the necessary noise as they switch 0 to 1 and back in normal operation. That's true, but then again they could also introduce systematic noise, which could cause very bad linearity.
Because of all this, I like to tell people that the best place to start in understanding A/D oversampling is to try to understand the role that noise plays in the system, before doing anything else.
 
OK, I get it. Read a paper on it. Oversampling + noise creates order out of chaos.

Huh! Who'da thunk it?

Well actually, noise is part of the oversampling. Without noise there is no oversampling. The noise is what randomly kicks the bits up or down to other values, which eventually average out to a new value.
Look at it this way:
If we sample 1 V and get a count of 1000 every single time, how can that possibly ever average out to a higher or lower value? Answer: it can't. 1000+1000+1000+1000 = 4000, and that divided by 4 brings us back to 1000, even though the actual signal would have come out to 1000.1 counts. It takes noise to dither the readings above and below the count of 1000 so that the samples 'average' out to in-between values, even though each individual sample can still only take whole values. The problem comes in, however, when the noise is not random; then we end up with systematic errors that destroy the linearity.
It's like moving your head back and forth to see something more clearly.
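
Here is that exact example as a little numpy experiment (the noise amplitude is just a plausible pick):

```python
import numpy as np

rng = np.random.default_rng(2)
true_value = 1000.1     # the signal, in ADC counts

quiet = np.round(np.full(4096, true_value))                 # no noise at all
dithered = np.round(true_value + rng.normal(0, 0.5, 4096))  # ~0.5 LSB rms

print(quiet.mean())      # 1000.0 -- no amount of averaging finds the 0.1
print(dithered.mean())   # ~1000.1 -- the noise lets the average resolve it
```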
 
I got that. But oversampling is defined by Fs > Fn, where Fs is the sampling frequency and Fn is the Nyquist rate (twice the highest signal frequency). There are other things that happen as a result of oversampling besides increasing the virtual quantization resolution (easing of filtering requirements, lowering noise).

But expanding the range of the A/D requires oversampling + noise + averaging (which I left out earlier).
 