Forum Discussion
Altera_Forum
Honored Contributor
9 years ago

Hi Rookie2017,
You're right that the maximum output delay constraint is there to guarantee the setup time at the SDRAM's input register. Think of the maximum output delay as "how long must the launched data signal be stable (at the FPGA's data output pin) before the launch clock edge (at the FPGA's clock output pin)".

In the initial case, assume that your clock and data traces have just the right lengths, so that the term data_trace_delay_max - clk_trace_delay_min becomes zero, and therefore max output delay = setup_time. Makes sense, right? The launched signal must be stable before the launch clock, so that the setup time of the SDRAM register is not violated. That's no problem, since you already added a 180° phase shift to the clock.

Now when your data trace becomes longer (while the clock trace stays the same length), the data arrives later at the SDRAM. It moves closer to the clock edge, and therefore grows into the region where the SDRAM's setup time might be violated. Logically, the maximum output delay must become larger, i.e. the FPGA must somehow ensure the launched data signal is stable a bit longer before the clock edge arrives. How can you achieve that? Either the fitter plays with the output delay of that pad (if available), or you have to change the 180° phase shift to something else.

I think the reason for your confusion is that you think of the maximum output delay as an "available" time. It's easier to think of it as a "required" time. Also, the min/max output delays do not actually define the time between the clock edge and the data transition; rather, they define a time window around the clock edge in which the data must not change.

Best regards,
GooGooCluster
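To make the arithmetic above concrete, here is a minimal sketch of the "required" max output delay. All delay numbers are hypothetical examples, not values from this thread; the helper name is mine. The quantity computed is what you would pass to the SDC command `set_output_delay -max` in the Timing Analyzer.

```python
# Sketch of the "required" maximum output delay described above.
# All numbers (in ns) are hypothetical, not taken from this thread.
def max_output_delay(data_trace_delay_max, clk_trace_delay_min, setup_time):
    """How long the data must be stable at the FPGA's data output pin
    before the launch clock edge at the FPGA's clock output pin:
    trace-delay skew plus the SDRAM's setup time."""
    return data_trace_delay_max - clk_trace_delay_min + setup_time

# Equal-length traces: the skew term is zero, so the required
# max output delay collapses to the SDRAM's setup time.
print(max_output_delay(0.5, 0.5, 1.5))  # -> 1.5

# Longer data trace: data arrives later at the SDRAM, so the FPGA
# must hold it stable longer before the clock edge; the required
# max output delay grows by the extra trace delay.
print(max_output_delay(1.0, 0.5, 1.5))  # -> 2.0
```

Note that this value is a requirement placed on the FPGA's output path, not a time budget available to it, which is exactly the "required vs. available" distinction made above.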