Forum Discussion
Altera_Forum
Honored Contributor
14 years ago

--- Quote Start ---
Why is it too fast? You need to keep in mind that the input bandwidth to the ADC needs to be consistent with its sample rate. So if your ADC is designed to operate at 40 MHz and it has an input filter with 20 MHz of bandwidth, then regardless of what clock rate you operate it at, it will be letting in 20 MHz worth of noise. To eliminate that noise, you can either change the filter and the sample rate, or you can sample at 40 MHz and then use digital decimation filters.
--- Quote End ---

Well... in a lock-in amplifier, after demodulation, I need to build an LPF to keep the DC component, and the cutoff frequency is 1 kHz (so I will keep the 154 ± 1 kHz signal). In that case, isn't 40 Msps too high a sampling frequency? The filter would need an extremely high order, and that would make the system very slow. That was my concern...

--- Quote Start ---
It depends. If you never know what is being filtered, then you have to design for worst-case input signals. If you control the type of input signal and know what it should be (and monitor it in the hardware), then you can optimize for just the signals you expect. That is part of the modeling task :) For example, will the sensors have amplifiers, so that the ADC input is always driven over its full amplitude range, or will there be some minimum and maximum amplitude over which it operates? Systems with ADCs that have fewer bits will often be driven by automatic-gain-control (AGC) amplifiers that keep the amplitude at the input to the ADC constant.
--- Quote End ---

Hmm... I guess I didn't make myself clear enough, or I am not following... For example, if the input of the filter is 32 bits, the coefficients are 16 bits, and the order is 32, then the full-precision output will be 32 + 16 + log2(32) = 53 bits. But if my output is 32 bits, I need to cut the 16 LSBs and the top 5 MSBs; is that correct?

Thanks!!
Allison
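As a sanity check on the bit-growth arithmetic in the post, here is a minimal Python sketch. The 32-bit input, 16-bit coefficients, and 32 taps are the values from the post; the 16-LSB / 5-MSB split of the 21 discarded bits is one plausible reading of the question, not a confirmed design choice:

```python
import math

def fir_output_width(input_bits, coeff_bits, taps):
    """Full-precision output width of an N-tap FIR:
    each product is input_bits + coeff_bits wide, and summing
    `taps` products can grow the result by ceil(log2(taps)) bits."""
    return input_bits + coeff_bits + math.ceil(math.log2(taps))

full = fir_output_width(32, 16, 32)   # 32 + 16 + 5 = 53 bits
target = 32                           # desired output word length
drop = full - target                  # 21 bits must be discarded
lsb_drop = 16                         # e.g. truncate/round 16 LSBs...
msb_drop = drop - lsb_drop            # ...and drop 5 MSBs (guard bits)
print(full, lsb_drop, msb_drop)       # 53 16 5
```

Whether the top 5 bits can be safely discarded depends on the actual signal: with unity-gain (e.g. low-pass) coefficients the sum never uses the full guard-bit range, so dropping MSBs is fine; otherwise saturation logic is needed.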
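To illustrate the decimation route suggested in the first quote, here is a hedged pure-Python sketch of a single averaging (boxcar) decimator stage, a crude stand-in for the CIC-plus-compensation-FIR chains typically used on FPGAs. The 40 Msps → 400 ksps chain and ×10 factors are illustrative, not from the original design:

```python
def decimate_boxcar(samples, factor):
    """Average non-overlapping blocks of `factor` samples and keep one
    output per block: a crude low-pass-and-downsample (boxcar) stage.
    Real FPGA designs cascade CIC stages plus a compensating FIR."""
    n = len(samples) - len(samples) % factor   # drop any ragged tail
    return [sum(samples[i:i + factor]) / factor
            for i in range(0, n, factor)]

# Illustrative chain: 40 Msps -> 4 Msps -> 400 ksps via two x10 stages,
# so the final narrow (~1 kHz) low-pass runs at a far lower rate and
# needs far fewer taps than a single filter clocked at 40 Msps would.
x = [1.0] * 1000                      # dummy DC input after demodulation
stage1 = decimate_boxcar(x, 10)       # 100 samples at 4 Msps
stage2 = decimate_boxcar(stage1, 10)  # 10 samples at 400 ksps
print(len(stage1), len(stage2))       # 100 10
```

This is the point of the quoted advice: decimating first keeps each stage's filter short, instead of building one extremely high-order LPF at the full 40 Msps rate.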