Oversampling - magnitude vs. time resolution


dknguyen

BASICALLY:
I think that being able to produce a unique reconstruction of a signal from its samples (the goal of the Nyquist theorem) is different from being able to accurately reconstruct a signal from its samples (something separate from the Nyquist theorem?). Yes? No? So now I'll go into my thinking about this.

=================
I'm just going over sampling theory more carefully and thought of something. The Nyquist theorem states that if you sample at a frequency >2x the bandwidth of the input signal, your output waveform will be unique. So if your sampling frequency is exactly 2x the bandwidth your signal is unique, but that doesn't necessarily mean it's accurate. Just because it's unique doesn't mean it couldn't be all mangled or choppy, right?
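
(As a rough picture of what the ideal theorem actually promises, here is a minimal sinc-interpolation sketch in Python; the 3kHz sample rate, 1kHz tone, and time grid are just assumed example values.)

Code:
import numpy as np

# Ideal (Whittaker-Shannon) reconstruction sketch: a band-limited tone sampled
# above twice its bandwidth is rebuilt between the samples with sinc pulses.
# All numbers here are assumed example values.
fs = 3000.0                           # sample rate, > 2 x 1 kHz
Ts = 1.0 / fs
n = np.arange(0, 0.02, Ts)            # sample instants over 20 ms
x_n = np.sin(2 * np.pi * 1000.0 * n)  # 1 kHz tone, inside the bandwidth

t = np.arange(0, 0.02, 1e-5)          # dense grid for the reconstruction
x_hat = sum(xk * np.sinc((t - tk) / Ts) for xk, tk in zip(x_n, n))
# x_hat tracks sin(2*pi*1000*t) closely except near the edges of the record,
# where the finite sum of sinc pulses (the theorem assumes an infinite one)
# leaves some error.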

Then there is oversampling, where you sample faster than 2x the bandwidth. Now when this is done, you can either use the greater number of samples to "fill in the holes" (increase the resolution in time) on the graph to make the picture look more complete, or you can do what delta-sigma ADCs do and "average" consecutive samples to increase the resolution magnitude-wise.

Like you could end up with an output that is:

- almost like the input signal, with very small spacings between the samples, but where the magnitude accuracy of each individual sample is not as good as it could be
- or a signal with larger time spacing between the samples, but where the magnitude of each individual sample is more accurate

I'm just wondering kind of where the balance is chosen. This leads back to the sigma-delta ADCs because the sampling rate is not the same as the output rate or the bandwidth and it's messing with my head.
==================
EXAMPLE:

1. ONE SAMPLE PER DATA POINT- You take X samples in time and those X samples are the representation of the line.
2. MULTIPLE SAMPLES PER DATA POINT- You sample at a faster rate and you take 100 samples per data point. The magnitude of each data point is the average magnitude of the 100 consecutive samples. The time that you label the data point with is the average time of the 100 samples (i.e. if the 100 samples were taken between 1s and 2s, the data point would be considered to be at 1.5s). Yet all the samples used to form this data point happened at different points in time and had different magnitudes. It would lead to inaccuracy, wouldn't it? Or is this taken into account by the Nyquist sampling frequency? This really simplifies how things actually work, but I'm trying to visualize it.
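
(A minimal numerical sketch of this "multiple samples per data point" idea, assuming a 100kHz raw rate, a 50Hz test tone, and blocks of 100 samples; none of these numbers come from a real ADC.)

Code:
import numpy as np

# Average each block of 100 raw samples into one data point and label the
# point with the mean of the block's sample times, as described above.
fs_raw = 100_000                              # raw sampling rate (assumed)
t_raw = np.arange(0, 1, 1 / fs_raw)           # 1 second of samples
x_raw = np.sin(2 * np.pi * 50 * t_raw)        # example 50 Hz input
x_raw += 0.05 * np.random.randn(len(x_raw))   # a little noise to average out

block = 100
usable = (len(x_raw) // block) * block
t_pts = t_raw[:usable].reshape(-1, block).mean(axis=1)  # time label per point
x_pts = x_raw[:usable].reshape(-1, block).mean(axis=1)  # averaged magnitude

# Each averaged point has less noise (better magnitude resolution), but any
# detail inside a 1 ms block is gone (coarser time resolution). As long as the
# 1 kHz output rate is still above twice the signal bandwidth, the tone itself
# survives the averaging essentially untouched.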

THE QUESTION: Or does Nyquist take care of all these magnitude/time resolution issues? So that if the signal bandwidth is 1kHz, then even if you sample at 1000kHz for one second, is it more accurate to use the data to form 1000 data points rather than 1 million data points?
 
Nyquist and the related sampling theorems assume that you have infinite-resolution (bits) ADCs/DACs for everything (along with ideal sinc pulses that exist from -infinity to +infinity). As soon as you wander into the real world, you wander outside the theoretical realm, and you have the fun task of modeling these errors - or you just keep on adding bits until your customers stop complaining...
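
(For a rough feel for the "adding bits" side, here is the usual ideal quantization-noise arithmetic in Python; N = 12 bits and an oversampling ratio of 64 are just assumed example values.)

Code:
import math

# Ideal SNR of an N-bit quantizer with a full-scale sine input, plus the
# processing gain from oversampling and filtering back down to the signal band:
#   SNR ~= 6.02*N + 1.76 dB + 10*log10(fs / (2*BW))
# (plain oversampling gain only - sigma-delta noise shaping does even better)
N = 12                       # ADC bits (assumed example)
osr = 64                     # fs / (2*BW), the oversampling ratio (assumed)
snr_nyquist = 6.02 * N + 1.76
snr_oversampled = snr_nyquist + 10 * math.log10(osr)
print(snr_nyquist, snr_oversampled)   # ~74.0 dB -> ~92.1 dB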

It's kind of like the laws of conservation of energy - they set ideal bounds, but they don't tell you what kind of structure, bearings, and materials you need to optimize a mechanism. If you want to model a sigma-delta converter, you're going to have to 1) connect the functional blocks together, and 2) grind through the numbers and equations.
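
(In that spirit, a minimal first-order sigma-delta sketch in Python - integrator, 1-bit quantizer, feedback, and a crude block-average decimator; the rates and test tone are assumed example values, not any real part's architecture.)

Code:
import numpy as np

def sigma_delta_1st_order(x):
    """1-bit output stream of a first-order modulator: integrator plus
    comparator, with the quantized output fed back."""
    integ = 0.0
    fb = 0.0
    y = np.empty_like(x)
    for i, xi in enumerate(x):
        integ += xi - fb                    # integrate the input-minus-feedback error
        y[i] = 1.0 if integ >= 0 else -1.0  # 1-bit quantizer
        fb = y[i]
    return y

osr = 64                                    # oversampling ratio (assumed)
fs_out = 48_000                             # desired output rate (assumed)
fs = fs_out * osr
t = np.arange(0, 0.01, 1 / fs)
x = 0.5 * np.sin(2 * np.pi * 1000 * t)      # 1 kHz test tone at half scale
bits = sigma_delta_1st_order(x)

# Crude decimator: average each block of 64 one-bit samples. Real converters
# use sinc/FIR decimation filters, but the averaging shows the idea.
usable = (len(bits) // osr) * osr
decimated = bits[:usable].reshape(-1, osr).mean(axis=1)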

Take a look at this pile of pages:
**broken link removed**
MT-001 goes into detail about the quantization noise issues, and there are also some sigma-delta tutorials - MT-022 and MT-023.
 
You are correct. For example, suppose your maximum sampling frequency is 40KHz. Nyquist says your highest input frequency would then be 20KHz. Now suppose you input a sine at 20KHz. You should then be able to accurately get that sine wave back. Where people have problems with this scenario is when you input a signal at that frequency and it's NOT a sine wave. You and I know that if it is not a sine, Fourier says that it has frequency content higher than 20KHz, and therefore it will come back mangled. Tell this to the audio people who picked 44.1KHz for CDs, and it will become clear why audiophiles disagree with that sampling rate. So bringing in a square wave at 20KHz sampled at 40KHz will yield a sine wave back at 20KHz, assuming the proper antialiasing filters were applied. Hardly an accurate representation of the original signal.
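
(A quick way to see this is to list the square wave's Fourier components and check which ones clear a 20KHz anti-aliasing cutoff; the short Python check below is just that arithmetic.)

Code:
# An ideal square wave at f0 contains only odd harmonics, with amplitudes
# falling off as 1/k: f0, 3*f0, 5*f0, ...
f0 = 20_000                                        # 20 kHz fundamental
harmonics = [(k, k * f0) for k in (1, 3, 5, 7)]    # (order, frequency in Hz)
passed = [(k, f) for k, f in harmonics if f <= 20_000]
print(harmonics)   # [(1, 20000), (3, 60000), (5, 100000), (7, 140000)]
print(passed)      # [(1, 20000)] -> only the fundamental survives a 20 kHz
                   # anti-aliasing filter, so a sine comes back out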
 
Analog said:
So bringing in a square wave at 20KHz sampled at 40 KHz will yield a sine wave back at 20KHz, assuming the proper antialiasing filters were applied. Hardly an accurate representation of the original signal.
Firstly, the ear is one of the first things to deteriorate with age, so unless you're under 20 the chances are you won't be able to hear all the way up to 20KHz.

Secondly, even 12KHz square, triangle, and sine waves will all sound the same, since you won't be able to hear the harmonics, which will all be above 20KHz.

Having said this, oversampling does increase the quality; a 12KHz sine wave sampled at 120KHz will look a lot cleaner than one sampled at 44KHz.
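
(The difference is just samples per cycle; a one-line Python check, using the 44.1KHz CD rate mentioned above:)

Code:
# Samples captured per cycle of a 12 kHz tone at the two rates discussed.
print(120_000 / 12_000)   # 10.0 samples per cycle
print(44_100 / 12_000)    # ~3.7 samples per cycle (44.1 kHz CD rate)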
 
That only really stands for single frequencies though, Hero; complex instruments and audio are incredibly difficult to model, especially through human perception. These harmonics might not be consciously perceived, but according to some of the information on the following page
**broken link removed**
the listener can attribute a higher overall quality to audio at a much higher sampling rate than can generally be 'heard'; this is most notable in stereo signals.
 