I don't have letters behind my name to make me an expert, but some years ago I did write an article for one of the trade journals (I forget which one) about the "Nyquist Frequency Fable". Two misconceptions are common:
1) That sampling at 2x the highest frequency present is adequate. What Nyquist actually says is the converse: sample any slower than that and you're guaranteed to get aliasing. As others have pointed out, around 10x the highest frequency is a good guide.

2) The theory behind the sampling theorem presumes sampling both the real and imaginary parts at 2x the maximum frequency. In the time domain, the imaginary part is just the real part phase-shifted by 90°. So to truly sample both real and imaginary parts at 2x the highest frequency, you're actually sampling (in the time domain) at 4x. That guarantees you don't miss the signal entirely by sampling exactly at the zero crossings. So 4x is pretty much a minimum, and 10x is a good real-world guide.

And remember that the theorem speaks of the highest frequency component actually present. What's the highest frequency component in a perfect 1 kHz square wave? (There isn't one; the odd harmonics go up forever.) This is why you must filter in the continuous time domain BEFORE sampling. I was once asked by a manager type to digitally filter a pressure sensor that was pulsing (due to an upstream pump) at almost, but not quite exactly, our sampling frequency. It took a lot of effort to convince him that we really needed an analog filter upstream, since the sample rate was non-negotiable.
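To make the pump anecdote concrete, here's a minimal numpy sketch (the rates are made up for illustration; the real ones weren't given). Once sampled, a pulsation just below the sample rate is sample-for-sample identical to a slow wobble at the difference frequency, so no digital filter can remove one without removing the other:

import numpy as np

fs = 100.0      # sample rate, Hz (hypothetical)
f_pump = 98.0   # pump pulsation just below the sample rate, Hz (hypothetical)

n = np.arange(200)                            # two seconds of sample indices
sampled = np.sin(2 * np.pi * f_pump * n / fs)

# The sampled sequence equals a tone at the alias frequency
# f_pump - fs = -2 Hz, i.e. a slow 2 Hz wobble:
alias = np.sin(2 * np.pi * (f_pump - fs) * n / fs)
print(np.allclose(sampled, alias))            # True

Since the two sequences are identical after sampling, the fix had to go upstream of the ADC, in the analog domain.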
Steve Hendrix

At 2021-11-22 13:45, Dave Daniel via groups.io wrote:
> The Nyquist sampling theorem (or Nyquist-Shannon or Whittaker-Nyquist-Shannon, etc., etc.) postulates that one must sample at a sampling rate of at least twice the highest frequency contained in the signal being sampled in order to prevent aliasing.