Forum Discussion
Altera_Forum
Honored Contributor
11 years ago

Then change your external clocks to be 8ns. You could also do:
set_max_delay -from [all_inputs] -to [all_outputs] 8.0

Note that .sdc constraints are meant to emulate the system. For example, let's say you have a processor that runs on a 20ns clock period. It takes 8ns for data to get out of the processor and across the board to the FPGA. The FPGA decodes those values and sends them on to the next chip, which is also clocked off the 20ns clock and has a setup time of 4ns. Now you took your 20ns period, subtracted out the 8ns and 4ns to realize you had to get through the FPGA in 8ns, and are trying to tell the FPGA that low-level number directly. But from an .sdc perspective you should set your virtual clocks to 20ns, set the input max delay to 8ns and the output max delay to 4ns, and the tool will determine that the FPGA needs to get data through in 8ns.

There are some benefits to this:

- It's nice to show the whole system. I've too often seen FPGAs with an 8ns Tpd constraint. I'll ask the designer where that came from, and they'll have no idea, as someone did the calculations in a notebook ten years ago. In the example above, you can document it all, i.e.:

set_input_delay -clock clk_in -max 8.0 [get_ports cpu_d[*]] ;# 8.0ns comes from cpu datasheet page 47. Tco of 8ns

- Secondly, if something changes, it's easy to modify the constraints. If you change the board layout so the delays from the CPU to the FPGA are 0.5ns longer, then just change the 8 to 8.5. If you buy a new CPU that has a faster Tco, then plug in that new value. If you want to see if you meet timing with a 15ns period, then just modify the clock values.
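To make the system-level approach concrete, here is a sketch of a complete .sdc for the CPU-to-FPGA-to-next-chip scenario described above. The clock and port names (`clk_in`, `virt_clk`, `cpu_d`, `next_d`) are illustrative assumptions, not from a real design; only `clk_in` and `cpu_d` appear in the post itself.

```tcl
# Sketch only: names other than clk_in/cpu_d are hypothetical.

# The real 20ns clock arriving at the FPGA clock pin.
create_clock -name clk_in -period 20.0 [get_ports clk_in]

# A virtual clock modeling the same 20ns clock at the external
# CPU (driving the FPGA inputs) and the downstream chip
# (capturing the FPGA outputs). No target port: it exists only
# off-chip.
create_clock -name virt_clk -period 20.0

# Data takes 8ns to leave the CPU and cross the board
# (CPU Tco plus trace delay -- documented, not hidden in a notebook).
set_input_delay -clock virt_clk -max 8.0 [get_ports cpu_d[*]]

# The downstream chip needs data valid 4ns before its clock edge
# (its setup time plus any board delay on that side).
set_output_delay -clock virt_clk -max 4.0 [get_ports next_d[*]]

# The analyzer now derives the internal requirement itself:
# 20ns period - 8ns input delay - 4ns output delay = 8ns
# available through the FPGA, with no hand-calculated set_max_delay.
```

If the board, CPU, or target period changes later, only the one number that actually changed needs editing; the 8ns through-the-FPGA budget is recomputed automatically.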