Altera_Forum
Honored Contributor
15 years ago

Clock frequency vs. Latency issues for an IIR filter
The LPM arithmetic & logic functions in Quartus have an option to specify the required output latency in number of clock cycles. An obvious choice is to set this value to the minimum, i.e. 1, in order to minimize the propagation delay.
Does this mean that, for each module, the output will be updated one clock cycle after the input, no matter what clock frequency we use?

Suppose I have to implement the following control algorithm in my FPGA:

Vout[k] = C1*( Vin[k] + Vin[k-1] ) + C2*Vstate[k] + Vout[k-1]

It resembles an IIR filter and involves adders, multipliers, delays, etc. My aim is to reduce the overall input-to-output latency while keeping the throughput greater than or equal to the sampling frequency. What clock frequency should I supply to each of these modules? My input-output (ADC-DAC) sampling frequency is 20 MHz and the FPGA oscillator clock is 50 MHz.

I understand that to generate Vx[k-1] from Vx[k], we need to delay Vx[k] by exactly one sampling instant (1 / 20 MHz = 50 ns), so the delays would necessarily be supplied with this 20 MHz sampling clock. But what about the adders, multipliers, etc.? Can I not use a pll_clock = 100 MHz (Tpll = 10 ns) for these, and finish the entire equation within 10 ns * 5 steps = 50 ns?
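To sanity-check the idea, here is a small Python sketch: a software reference model of the difference equation above, plus the timing-budget arithmetic for the proposed 100 MHz compute clock. The coefficient values and the 5-cycle serial schedule (one add, two multiplies, two adds) are my own illustrative assumptions, not anything fixed by the LPM functions themselves.

```python
def iir_step(vin_k, vin_km1, vstate_k, vout_km1, c1, c2):
    """Reference model: Vout[k] = C1*(Vin[k] + Vin[k-1]) + C2*Vstate[k] + Vout[k-1]."""
    return c1 * (vin_k + vin_km1) + c2 * vstate_k + vout_km1

# Example coefficients (hypothetical, for illustration only).
C1, C2 = 0.5, 0.25

# One step of the recursion with arbitrary sample values.
vout = iir_step(1.0, 1.0, 0.0, 0.0, C1, C2)   # 0.5*(1+1) + 0.25*0 + 0 = 1.0

# Timing budget: can 5 serial operations on a 100 MHz clock
# fit inside one 20 MHz sampling period?
SAMPLE_PERIOD_NS = 1e9 / 20e6    # 50.0 ns between samples
PLL_PERIOD_NS    = 1e9 / 100e6   # 10.0 ns per compute clock cycle
STAGES           = 5             # assumed schedule: add, mult, mult, add, add
total_ns = STAGES * PLL_PERIOD_NS

# Fits exactly: 5 * 10 ns = 50 ns, i.e. zero slack in this schedule.
assert total_ns <= SAMPLE_PERIOD_NS
```

Note that this only checks the arithmetic of the schedule, not whether each LPM operator actually closes timing at 100 MHz with a 1-cycle latency setting; that still has to come out of the Quartus timing analysis.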