Forum Discussion
This is closer to what you should have:
# virtual clock representing the clock at the upstream device, assumed identical to the clock that device forwards with the data
create_clock -name vir_clock_in -period 20
# input clock to the FPGA; remove -waveform option if data enters FPGA edge-aligned with the clock
create_clock -name input_clock -period 20 -waveform {10.0 20.0} [get_ports input_clock]
# if you're using a PLL
derive_pll_clocks
# input delays calculated from the upstream device datasheet plus board delays
# (if the board traces are not perfectly matched); the device spec is typically Tco(max) and Tco(min)
set_input_delay -clock vir_clock_in -max <max_delay> [get_ports input_data]
set_input_delay -clock vir_clock_in -min <min_delay> [get_ports input_data]
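To make the max/min calculation concrete, here is a sketch of how you might plug in the numbers. All values below are made up for illustration; substitute the Tco figures from your upstream device's datasheet and the trace-delay difference from your board layout:

```tcl
# Hypothetical numbers, in ns -- replace with your own datasheet/board values
set tco_max 5.5   ;# upstream device clock-to-output delay, max (datasheet)
set tco_min 1.0   ;# upstream device clock-to-output delay, min (datasheet)
set bd_max  0.3   ;# board: data trace delay minus clock trace delay, worst case
set bd_min  0.1   ;# board: data trace delay minus clock trace delay, best case

# delay seen at the FPGA pin = upstream Tco + board trace difference
set_input_delay -clock vir_clock_in -max [expr {$tco_max + $bd_max}] [get_ports input_data]
set_input_delay -clock vir_clock_in -min [expr {$tco_min + $bd_min}] [get_ports input_data]
```

If the traces are perfectly matched, the board terms drop out and the input delays reduce to Tco(max)/Tco(min) alone.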
See this online training for details, especially on the calculations for input delay max/min:
https://www.intel.com/content/www/us/en/programmable/support/training/course/ocss1000.html
#iwork4intel
- JSmit123, 6 years ago
New Contributor
Your vir_clock_in is not related to input_clock. How does that work?
And according to Altera/Intel, creating a virtual clock is not mandatory. I still can't see what is wrong with my SDC file.