Forum Discussion
The JTAG TDI input is sourced from the TDO output of the prior device in the JTAG chain, and both the source and destination devices receive the common JTAG clock TCK routed from the JTAG header, although with differing trace delays. This seems to match exactly the setup on the left-hand side of Figure 10 on page 9 of https://www.intel.com/content/dam/www/programmable/us/en/pdfs/literature/manual/mnl_timequest_cookbook.pdf.
Page 10 then gives the formula passed to the set_input_delay -max constraint for this scenario as:
$CLKAs_max + $tCOa_max + $BDa_max - $CLKAd_min
Rewriting this, it is: $tCOa_max + $BDa_max + ($CLKAs_max - $CLKAd_min)
Note that $CLKAs_max - $CLKAd_min is the difference between the clock delay to the source and the clock delay to the destination, NOT just the delay to the destination (FPGA).
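To make that difference concrete, here is a small Python sketch of the page 9 arithmetic. All delay values are made-up numbers purely for illustration, not from the cookbook:

```python
# Hypothetical board delays in ns (made-up numbers for illustration only).
tco_a_max = 5.0       # tCOa_max: source device clock-to-output, max
bd_a_max = 1.0        # BDa_max: data trace (board) delay, max
clk_to_src_max = 2.0  # CLKAs_max: clock trace delay to the source device, max
clk_to_dst_min = 0.5  # CLKAd_min: clock trace delay to the destination (FPGA), min

# Page 9 formula: the input delay accounts for the *difference* in clock
# arrival time between source and destination.
input_delay_max = tco_a_max + bd_a_max + (clk_to_src_max - clk_to_dst_min)
print(input_delay_max)  # 5.0 + 1.0 + (2.0 - 0.5) = 7.5

# Subtracting only the destination clock delay (ignoring the source's)
# gives a smaller, and per page 9 incomplete, value.
input_delay_dst_only = tco_a_max + bd_a_max - clk_to_dst_min
print(input_delay_dst_only)  # 5.0 + 1.0 - 0.5 = 5.5
```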
However, on page 20, in the JTAG constraints proc called set_tdi_timing_spec_when_driven_by_device, the formula passed to set_input_delay -max is:
$previous_device_tdo_tco_max + $tdi_trace_max - [get_tck_delay_min]
where the get_tck_delay_min procedure only includes the delay of the tck clock to the destination device (FPGA) but does not take into account the delay of tck to the sourcing device.
Based on the page 9 scenario formula, I would instead have expected the JTAG formula used for set_input_delay to be something like:
$previous_device_tdo_tco_max + $tdi_trace_max + ($tck_delay_to_source_device_max - $tck_delay_to_FPGA_min)
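A quick Python sketch of the two formulas side by side (function and variable names are mine, and the numbers are arbitrary) suggests the two differ by exactly the tck delay to the source device, which is the term I think is missing:

```python
def cookbook_input_delay_max(tco_max, trace_max, tck_delay_to_fpga_min):
    # Page 20 proc's approach: subtracts only the destination (FPGA) clock delay.
    return tco_max + trace_max - tck_delay_to_fpga_min

def expected_input_delay_max(tco_max, trace_max,
                             tck_delay_to_source_max, tck_delay_to_fpga_min):
    # Page 9 approach: includes the source device's clock delay as well.
    return tco_max + trace_max + (tck_delay_to_source_max - tck_delay_to_fpga_min)

# With hypothetical delays (ns), the discrepancy between the two formulas
# is exactly tck_delay_to_source_max.
diff = (expected_input_delay_max(5.0, 1.0, 2.0, 0.5)
        - cookbook_input_delay_max(5.0, 1.0, 0.5))
print(diff)  # 2.0 == tck_delay_to_source_max
```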
So I am trying to understand the discrepancy: why does the proc set_tdi_timing_spec_when_driven_by_device only use the delay of the tck clock to the destination (FPGA) and not the delay to the sourcing device?
Thanks