Confused by JTAG TDI constraint in the Intel Quartus Prime Timing Analyzer Cookbook
I found the Intel Quartus Prime Timing Analyzer Cookbook at https://www.intel.com/content/dam/www/programmable/us/en/pdfs/literature/manual/mnl_timequest_cookbook.pdf.
I tried using the JTAG constraints on p18, but noticed some items that confused me. The procedure set_tdi_timing_spec_when_driven_by_device sets the TDI input delay as follows:
set tdi_in_max [expr $previous_device_tdo_tco_max + $tdi_trace_max - [get_tck_delay_min]]
set_input_delay -add_delay -clock_fall -clock altera_reserved_tck -max $tdi_in_max [get_ports {altera_reserved_tdi}]
However, I am confused by the use of [get_tck_delay_min]. It results in very large negative delays being passed to the set_input_delay command.
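To illustrate with made-up numbers (these values are hypothetical, not from the cookbook or a real board):

# Hypothetical values for illustration only:
set previous_device_tdo_tco_max 5.0   ;# ns, TCO of the upstream device's TDO pin
set tdi_trace_max               0.5   ;# ns, TDO-to-TDI board trace delay
# Suppose [get_tck_delay_min] reports 10 ns for the TCK path to the FPGA:
set tck_delay_min              10.0
set tdi_in_max [expr {$previous_device_tdo_tco_max + $tdi_trace_max - $tck_delay_min}]
# tdi_in_max is now -4.5 ns, i.e. a large negative input delay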
If this is a system-synchronous interface, I would normally expect the skew between the JTAG clock (TCK) arriving at the driving device and at the FPGA to be used, not the absolute delay from the header to the FPGA. For example, earlier in the cookbook on p9-10 the following example is given, and the difference between the clock delay to the source and the clock delay to the destination is used, as expected:
set_input_delay -clock clkA_virt \
-max [expr $CLKAs_max + $tCOa_max + $BDa_max - $CLKAd_min] \
[get_ports {data_in[*]}]
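Mapping that same pattern onto the TDI path, I would have expected something like the sketch below, where tck_to_prev_device_max is a hypothetical variable of my own (the cookbook does not define it under that name) for the maximum TCK trace delay to the upstream driving device:

# What I expected, following the p9-10 system-synchronous pattern
# (tck_to_prev_device_max is my own hypothetical variable name):
set tdi_in_max [expr $tck_to_prev_device_max \
                   + $previous_device_tdo_tco_max \
                   + $tdi_trace_max \
                   - [get_tck_delay_min]]
set_input_delay -add_delay -clock_fall -clock altera_reserved_tck \
    -max $tdi_in_max [get_ports {altera_reserved_tdi}]

That way only the TCK skew between source and destination enters the calculation, just as $CLKAs_max - $CLKAd_min does on p9-10.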
Is it possible that the TDI set_input_delay is calculated incorrectly in the cookbook, or is there something I am missing?
Thanks