Forum Discussion
Altera_Forum
Honored Contributor
14 years ago

set_max_delay is the correct override. I believe the original PCI bus worked like that: all data was sent within the clock period, but the OE timing had to land in the middle of the clock window, so that all drivers were off before a new device started driving the bus.
Remember that set_max_delay and set_output_delay work together. For example, if you do a "set_output_delay -clock ext_10ns_clk -max 3 [get_ports data_out*]", you're saying data_out* drives an external register clocked by ext_10ns_clk, and the external delay to get there is 3 ns. There will be a setup relationship between the internal clock driving your output register and this external clock. Most likely both clocks are the same frequency, so let's say it's a 10 ns setup relationship.

When you apply a "set_max_delay -from [get_keepers oe_reg*] 7", you're overriding the setup relationship from oe_reg to the external register, so instead of 10 ns it is now 7 ns, but the external delay is still there. So, in this example, the set_max_delay makes the setup relationship 7 ns, and 3 ns of that is used externally, so the OE has to get its data out in 4 ns, basically like a 4 ns Tco. You should see a value called oExt in report_timing that shows this external delay.

I mention this because users often think of set_max_delay as a complete override, as if it simply acted like a Tco. That's true if there is no external delay (and no phase shift or falling edge, but that's another story). But if there is an external delay, it is used in set_max_delay's calculation.
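Putting the pieces together, the scenario described above might look like the following SDC sketch (the clock-port and register names here are hypothetical, and the create_clock line is an assumption about how ext_10ns_clk was defined; the two constraints from the post are quoted verbatim):

```tcl
# Assumed definition: a 10 ns (100 MHz) external clock that captures the bus
create_clock -name ext_10ns_clk -period 10.0 [get_ports ext_clk]

# 3 ns of delay outside the FPGA (trace + external setup) on the data pins
set_output_delay -clock ext_10ns_clk -max 3 [get_ports data_out*]

# Override the default 10 ns setup relationship on the OE path to 7 ns.
# The 3 ns external delay still applies, so the internal path from oe_reg
# to the pin must complete in 7 - 3 = 4 ns (an effective 4 ns Tco).
set_max_delay -from [get_keepers oe_reg*] 7
```

With these constraints, report_timing on the OE path should show the 3 ns external portion (the oExt value mentioned above) subtracted from the 7 ns set_max_delay requirement.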