Thanks for writing, ron.
I actually found myself spending too much time making bandpass filters before realizing that it isn't attenuating audio frequencies that I need, but faster processing of a narrow part of the spectrum, and it doesn't need to be accurate, just precise. What's missing is my FHT/FFT expertise.
To answer your question, I'm building on the cheap (an FPGA, at least on paper, seems powerful but not cheap) a USB-powered device that will be ESP32 driven (unless this post really knocks down the processing needed, in which case an ESP8266). With a microphone, it needs to sense relatively short bursts of sound (this is why samples/second matter), but only in a narrow part of the audio spectrum. That's why I'm trying to find a way to feed the analog pin whatever comes in, then have the frequency-determining portion process only part of the spectrum instead of all of it. Like I said above, processing the whole spectrum is overkill: it drastically increases the processing time per pass and, worse, makes each bin 'wider', lessening discrimination.
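For reference, here's roughly the acquisition side I have in mind, just a minimal sketch assuming the Arduino-ESP32 core, a mic biased to mid-rail on GPIO 34 (ADC1), ~10 kHz sampling, and 256-sample blocks; the pin, rate, and block size are placeholders, not decisions:

```cpp
// Minimal fixed-rate sampler (Arduino-ESP32 core assumed).
// Assumed: mic output biased around mid-rail on GPIO 34 (ADC1),
// ~10 kHz sample rate, 256-sample blocks. All three are placeholders.
const int MIC_PIN    = 34;           // ADC1 channel
const int BLOCK_SIZE = 256;          // samples per processing block
const float SAMPLE_RATE = 10000.0f;  // Hz
const unsigned long SAMPLE_PERIOD_US = (unsigned long)(1000000.0f / SAMPLE_RATE);

int16_t samples[BLOCK_SIZE];

void setup() {
  Serial.begin(115200);
  analogReadResolution(12);          // readings come back as 0..4095
}

void loop() {
  // Grab one block at a roughly fixed rate using busy-wait timing.
  unsigned long next = micros();
  for (int i = 0; i < BLOCK_SIZE; i++) {
    while ((long)(micros() - next) < 0) { }    // wait for the next sample slot
    next += SAMPLE_PERIOD_US;
    samples[i] = analogRead(MIC_PIN) - 2048;   // strip the mid-rail DC offset
  }
  // ...hand `samples` to the narrow-band detection step here...
}
```

A timer interrupt (or the I2S peripheral reading the ADC via DMA) would give steadier timing, but busy-waiting is enough to show the flow.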
The thing with accuracy vs. precision is that all I care about is whether the same frequency repeats. Accuracy is honestly something I don't expect from it. Precision (repeatability) is more important.
Thinking of it like this might help... I'm looking for a signal, and if I had no idea what frequency it was going to be at, I would need to search the entire spectrum. Because nothing is free, I'd miss it: with finite processing, each pass takes too long, and the signal duration is likely well under the time of one scan. But then I find out it will be within a certain range of frequencies. I still don't know the exact frequency, but it doesn't matter, as long as I can pick it up.
Now, in theory, I should be able to tell the FHT/FFT algorithm to stop looking at everything and evaluate only the frequencies in that part of the analog input signal. It might be thought of as an in-code bandpass filter(?). This way, I have no passive filter losses and no need to build an active bandpass (neither of which would speed things up anyway).
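For what it's worth, the usual way to get that in-code bandpass effect without running a full FFT is the Goertzel algorithm: you evaluate only the bins you care about, and the cost is roughly one multiply-add per sample per bin, so a narrow band of a handful of bins can come out cheaper than computing the whole spectrum. A rough sketch, not tied to any library, with the band and step size below just made up:

```cpp
// Goertzel-style single-bin power: evaluates the spectrum only at the
// frequencies you ask for, instead of computing every FFT/FHT bin.
#include <math.h>
#include <stdint.h>

float goertzelPower(const int16_t *x, int n, float freqHz, float sampleRate) {
  int   k     = (int)(0.5f + (n * freqHz) / sampleRate);  // nearest bin index
  float w     = (2.0f * M_PI * k) / n;                    // bin angular frequency
  float coeff = 2.0f * cosf(w);
  float s0 = 0, s1 = 0, s2 = 0;
  for (int i = 0; i < n; i++) {
    s0 = coeff * s1 - s2 + x[i];
    s2 = s1;
    s1 = s0;
  }
  return s1 * s1 + s2 * s2 - coeff * s1 * s2;  // magnitude-squared of that one bin
}

// Scan just the band of interest, e.g. 2.0-2.5 kHz in ~50 Hz steps
// (frequencies are placeholders):
// for (float f = 2000; f <= 2500; f += 50) {
//   float power = goertzelPower(samples, BLOCK_SIZE, f, SAMPLE_RATE);
//   // compare `power` against a threshold or against neighbouring bins
// }
```

The bin width is still set by sample rate / block length (about 39 Hz with the placeholder numbers above), but none of the processing time goes to parts of the spectrum I'd just throw away.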
Another example that might help is to think of an audio communication signal. Say notes or frequencies are the symbols, and whether one is above or below the other determines the formatting/protocol of the information. You wouldn't care what the exact frequency was, just about identifying the presence of a series of quick tones whose information lies in the 'pulses' at frequencies relative to each other. You'd need fast, discriminating processing, right? But because the tones communicate through their position relative to each other, tightly packed in the frequency domain, you don't need difficult absolute frequency discrimination.
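If it came to that, the decoding could be as simple as comparing the power in two of those bins, reusing the goertzelPower() sketch above; the tone frequencies and threshold here are entirely made up:

```cpp
// Toy "which tone is higher" decoder: returns 1 if the upper tone dominates,
// 0 if the lower one does, -1 if neither is present. Reuses goertzelPower()
// from the sketch above. TONE_LOW / TONE_HIGH / THRESHOLD are placeholders.
const float TONE_LOW  = 2100.0f;   // Hz, hypothetical
const float TONE_HIGH = 2300.0f;   // Hz, hypothetical
const float THRESHOLD = 1.0e6f;    // minimum power to count as "a tone is here"

int decodeSymbol(const int16_t *block, int n, float sampleRate) {
  float pLow  = goertzelPower(block, n, TONE_LOW,  sampleRate);
  float pHigh = goertzelPower(block, n, TONE_HIGH, sampleRate);
  if (pLow < THRESHOLD && pHigh < THRESHOLD) return -1;  // no tone this block
  return (pHigh > pLow) ? 1 : 0;  // relative comparison, no absolute accuracy needed
}
```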
Come to think of it, doing the above in reality (above/below human hearing) sounds interesting...
Does this help?