BASICALLY:
I think that being able to produce a unique reconstruction of a signal from its samples (the goal of the Nyquist theorem) is different from being able to accurately reconstruct a signal from its samples (something separate from the Nyquist theorem?). Yes? No? So now I'll go into my thinking about this.
=================
I'm just going over sampling theory more carefully and thought of something. The Nyquist theorem states that if you sample at a rate greater than 2x the bandwidth of the input signal, the reconstruction of the waveform from the samples is unique. So if your sampling rate just barely clears 2x the bandwidth, your signal is unique, but that doesn't necessarily mean it's accurate. Even though it's unique, couldn't it still come out all mangled or choppy?
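To test that worry on myself, here's a minimal sketch (assumptions on my part: numpy, a single 100 Hz tone standing in for a bandlimited signal, and ideal Whittaker-Shannon sinc interpolation as the reconstructor). Connecting the raw samples with straight lines does look choppy, but the sinc-interpolated curve lands back on the original tone, so the choppiness is a plotting artifact rather than lost information:

```python
import numpy as np

f_sig = 100.0                       # highest frequency in the "signal"
fs = 250.0                          # sampling rate, > 2 * f_sig
t_n = np.arange(64) / fs            # 64 sample instants
x_n = np.sin(2 * np.pi * f_sig * t_n)

# Whittaker-Shannon reconstruction: x(t) = sum_n x[n] * sinc((t - n/fs) * fs)
t = np.linspace(t_n[0], t_n[-1], 2001)
x_rec = np.array([np.dot(x_n, np.sinc((ti - t_n) * fs)) for ti in t])

# Compare against the true tone, away from the record's edges (the real
# theorem assumes an infinite sample train; truncating it hurts the edges).
mid = slice(500, 1501)
err = np.max(np.abs(x_rec - np.sin(2 * np.pi * f_sig * t))[mid])
print("worst mid-record error:", err)   # small compared to the unit amplitude
```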
Then there is oversampling, where you sample faster than 2x the bandwidth. Now when this is done, you can either use the greater number of samples to "fill in the holes" (increase the resolution in time) on the graph to make the picture look more complete, or you can do what delta-sigma ADCs do and "average" consecutive samples to increase the resolution magnitude-wise.
Like you could end up with an output that looks (both options are sketched right after this list)
- almost like the input signal, with very small spacing between the samples, but where the magnitude accuracy of each individual sample is not as good as it could be,
- or like a signal with larger time spacing between the samples, but where the magnitude of each individual sample is more accurate.
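Here's a sketch of those two renderings of the same record (my assumptions: numpy, and white noise standing in for the per-sample magnitude error):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 8_000
t = np.arange(fs) / fs
clean = np.sin(2 * np.pi * 5 * t)               # slow 5 Hz signal
raw = clean + rng.normal(0.0, 0.10, t.size)     # per-sample error: std 0.10

K = 64                                          # samples folded into each point
avg = raw.reshape(-1, K).mean(axis=1)           # 125 coarse-in-time points
t_avg = t.reshape(-1, K).mean(axis=1)           # centered time stamps

print("fine grid  : 8000 pts, per-point std", np.std(raw - clean))     # ~0.10
print("coarse grid:  125 pts, per-point std",
      np.std(avg - np.sin(2 * np.pi * 5 * t_avg)))  # ~0.10 / sqrt(64)
```

So averaging buys magnitude accuracy roughly with the square root of the number of samples per point, paid for in time resolution.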
I'm just wondering where that balance gets chosen. This leads back to the sigma-delta ADCs, because there the sampling rate is not the same as the output rate or the bandwidth, and it's messing with my head.
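Here's my attempt at the back-of-the-envelope numbers relating those three rates (these are the standard textbook formulas as I understand them; the actual rates below are made-up assumptions):

```python
import math

bw = 1_000            # signal bandwidth in Hz (assumed)
fs = 1_000_000        # oversampled/modulator rate in Hz (assumed)
osr = fs / (2 * bw)   # oversampling ratio: 500 here

# Plain oversampling + averaging: in-band quantization noise power drops
# in proportion to OSR, i.e. ~3 dB (half a bit) per doubling of OSR.
gain_plain_db = 10 * math.log10(osr)

# First-order delta-sigma noise shaping pushes noise out of band faster:
# ~9 dB (1.5 bits) per doubling of OSR.
gain_ds1_db = 30 * math.log10(osr) - 10 * math.log10(math.pi ** 2 / 3)

print(f"OSR = {osr:.0f}")
print(f"plain averaging gain  : {gain_plain_db:.1f} dB (~{gain_plain_db / 6.02:.1f} bits)")
print(f"1st-order delta-sigma : {gain_ds1_db:.1f} dB (~{gain_ds1_db / 6.02:.1f} bits)")
```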
==================
EXAMPLE:
1. ONE SAMPLE PER DATA POINT - You take X samples in time, and those X samples are the representation of the waveform.
2. MULTIPLE SAMPLES PER DATA POINT - You sample at a faster rate and take 100 samples per data point. The magnitude of each data point is the average magnitude of the 100 consecutive samples, and the time you label the data point with is the average time of the 100 samples (i.e., if the 100 samples were taken between 1 s and 2 s, the data point would be considered to be at 1.5 s). Yet all the samples used to form this data point happened at different points in time and had different magnitudes. Wouldn't that lead to inaccuracy? Or is this taken into account by the Nyquist sampling frequency? This really simplifies how things actually work, but I'm trying to visualize it.
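Here's scheme 2's bookkeeping written out (decimate_by_averaging is just an illustrative helper name I made up, and the numbers mirror the 1 s to 2 s example):

```python
import numpy as np

def decimate_by_averaging(samples, times, k):
    """One output point per k input samples: mean magnitude, mean time."""
    m = (len(samples) // k) * k                  # drop any ragged tail
    vals = samples[:m].reshape(-1, k).mean(axis=1)
    ts = times[:m].reshape(-1, k).mean(axis=1)   # the "centered" time label
    return ts, vals

fs = 100.0
t = 1.0 + np.arange(100) / fs        # 100 samples at 1.00, 1.01, ..., 1.99 s
x = np.sin(2 * np.pi * 0.2 * t)      # a slow signal, well within Nyquist
ts, vals = decimate_by_averaging(x, t, 100)
print(ts[0])   # 1.495 s -- just shy of the 1.5 s label in my example,
               # because the last sample lands at 1.99 s, not 2.00 s
```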
THE QUESTION: Or does Nyquist take care of all these magnitude/time resolution issues? So if the signal bandwidth is 1 kHz and you sample at 1000 kHz for one second, is it more accurate to use the data to form 1,000 data points rather than 1 million data points?
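To make my own question concrete, here's a sketch with those numbers (my assumptions: numpy, a pure 1 kHz tone standing in for the "1 kHz bandwidth" signal, and plain block averaging as the decimator):

```python
import numpy as np

fs = 1_000_000                        # sample at 1000 kHz for one second
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 1_000 * t)     # 1 kHz tone

# 1,000,000 -> 1,000 points (1 kS/s output, BELOW the 2 kS/s Nyquist rate).
# Each average now spans exactly one full period of the tone, so it
# averages out to ~zero: the signal is destroyed, not made more accurate.
x_1k = x.reshape(-1, 1000).mean(axis=1)
print("peak of the 1,000-point version  :", np.max(np.abs(x_1k)))   # ~0

# 1,000,000 -> 10,000 points (10 kS/s output, still above Nyquist): the
# tone survives, with only a small droop from the boxcar average.
x_10k = x.reshape(-1, 100).mean(axis=1)
print("peak of the 10,000-point version :", np.max(np.abs(x_10k)))  # ~0.98
```

If I'm reading this right, the averaged output rate itself still has to clear Nyquist for the bandwidth, which I'm guessing is exactly what the decimation filter in a delta-sigma ADC is arranging.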