Forum Discussion
Altera_Forum
Honored Contributor
12 years ago

On Xilinx, when you were interfacing a DDR/QDR memory, they provided a tool that generated an interface module you could include in the design. It contained code that would somehow relearn and reset the I/O delays each time the system was restarted. If you wanted, you could then query the delays set on each I/O. Usually the delays were almost identical, but sometimes not. Since the module took care of it every time, we didn't keep close track of whether or how the delays changed from one restart to another or from one board to another.
In another case we had two FPGAs on the board with a high-speed parallel bus between them. On each startup, one FPGA sent calibration patterns while, on the receiving FPGA, I wrote code to detect the edges of the data eye and adjust the input delays to center the sample point. Here again, calibration was done on each restart rather than once-and-done at development time.

I'm not necessarily advocating the Xilinx way as better; I'm just saying it's what I'm accustomed to. If the assumption that the boards are consistent is valid, then the once-and-done approach sounds more efficient: you don't need to burn logic resources running the calibration, and it may be simpler in the silicon if Altera doesn't need to route clock and control signals to each IOE. Coming from Xilinx, though, I'm not used to that assumption being made.
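For what it's worth, the eye-centering logic I described boils down to something like the sketch below (shown in Python for readability; the real thing was RTL). Everything here is hypothetical illustration: `sample_at` stands in for whatever hardware mechanism checks the calibration pattern at a given input-delay tap, and `num_taps` for the delay line's tap count.

```python
def center_sample_point(sample_at, num_taps):
    """Scan every delay tap, find the longest contiguous run of taps
    where the calibration pattern is received correctly (the data eye),
    and return the tap at the center of that run.

    sample_at(tap) -> bool is a hypothetical probe: True if the
    received pattern matches at that tap setting. Assumes the eye
    does not wrap around the end of the tap range.
    """
    passes = [sample_at(tap) for tap in range(num_taps)]

    best_start, best_len = 0, 0   # best (longest) passing window so far
    run_start, run_len = 0, 0     # current passing window being scanned
    for tap, ok in enumerate(passes):
        if ok:
            if run_len == 0:
                run_start = tap
            run_len += 1
            if run_len > best_len:
                best_start, best_len = run_start, run_len
        else:
            run_len = 0

    if best_len == 0:
        raise RuntimeError("no passing taps found: calibration failed")
    return best_start + best_len // 2


# Example with a simulated eye: taps 10..20 pass, everything else fails.
eye = lambda tap: 10 <= tap <= 20
print(center_sample_point(eye, 32))  # -> 15, the middle of the eye
```

In hardware you'd typically do the same scan in a small state machine at startup, then freeze the chosen tap, which is exactly the per-restart cost the once-and-done approach avoids.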