Contradiction between two DDR constraint examples
Hello,
On page 12 of this document (Example 1):
https://www.intel.com/content/dam/www/programmable/us/en/pdfs/literature/an/an477.pdf
The example lists:
Minimum setup time = 0.9 ns
Minimum hold time = 2.7 ns
Based on this, the example calculates the max and min input delays as follows:
Calculated input maximum delay = tco(max) of external device − rising edge of RX_CLK = 2.8 ns − 2 ns = 0.8 ns
Calculated input minimum delay = tco(min) of external device − rising edge of RX_CLK = 1.2 ns − 2 ns = −0.8 ns
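For what it's worth, the arithmetic in Example 1 can be checked with a quick sketch (plain Python; the variable names are my own shorthand, not Intel's):

```python
# Example 1 (AN 477, page 12): input delays derived from the external
# device's tco, measured against the 2 ns rising edge of RX_CLK.
# All times in ns; names are my own shorthand.

tco_max = 2.8   # slowest clock-to-output of the external device
tco_min = 1.2   # fastest clock-to-output of the external device
clk_edge = 2.0  # rising edge of RX_CLK used as the reference

input_max_delay = tco_max - clk_edge
input_min_delay = tco_min - clk_edge

print(round(input_max_delay, 2))  # 0.8
print(round(input_min_delay, 2))  # -0.8
```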
However,
If we follow Intel's presentation on constraining source-synchronous DDR interfaces (Example 2), the calculation yields a different result:
tui = 8.0 ns, Tsu = 0.9 ns, Th = 2.7 ns
Calculated input maximum delay = tui/2 − Tsu = 8/2 − 0.9 = 3.1 ns
Calculated input minimum delay = Th − tui/2 = 2.7 − (8/2) = −1.3 ns
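The same kind of sketch for Example 2's window-based formulas (again plain Python, my own names) shows the two methods disagreeing on the same setup/hold numbers:

```python
# Example 2 (presentation): input delays derived from the data unit
# interval (tui) and the setup/hold requirements.
# All times in ns; names are my own shorthand.

tui = 8.0  # data unit interval
tsu = 0.9  # minimum setup time
th  = 2.7  # minimum hold time

input_max_delay = tui / 2 - tsu
input_min_delay = th - tui / 2

print(round(input_max_delay, 2))  # 3.1
print(round(input_min_delay, 2))  # -1.3
```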
Am I missing something, or is there an error in one of the examples?