Forum Discussion
Altera_Forum
Honored Contributor
16 years ago
I wouldn't initially take the approach of trying to adapt your clock phase based on the timing results. Those timing results will likely change based on your timing constraints and design changes.
Try this approach. Let's assume an ideal world with no delays in the FPGA and no delays on the board. The datasheet says that you need 1.5ns of setup time and 0.5ns of hold time. So let's take that away from our clock period and see what margin we have left: 10ns - 1.5ns - 0.5ns = 8ns. Let's split the difference between the setup and hold time: 8ns / 2 = 4ns. We want 4ns of margin on both our setup and hold time, so our SRAM clock phase would be -0.5ns - 4ns = -4.5ns. So if we were only taking into account the output to the SRAM, we might choose a phase shift of -4.5ns.

However, what about the input? The datasheet for the SRAM says there is a worst-case tCO of 3.5ns. With our -4.5ns clock phase, that data will show up 4.5ns - 3.5ns = 1ns before the rising edge of our regular, non-shifted system clock. Hmmm, we gave our SRAM a setup time of 5.5ns, but we're only giving the FPGA a setup time of 1ns. That's not very nice. Maybe we should give the FPGA some more setup time. So we have to make a decision here:

1 - We can push our SRAM clock further out (which eats into our output hold margin) and thereby push the delayed SRAM input data into the following clock cycle. You would have to compensate for this by adding a clock cycle of latency in your design. This is a perfectly valid choice, in which case I would use a phase shift similar to what nekojiru suggested (like -1.66ns).

-or-

2 - We can pull our SRAM clock back in a bit (which eats into our output setup margin) and thereby increase our setup margin on the input data coming back from the SRAM. Let's redo our calculation, and this time split the margin between the output setup and input setup times: 10ns - 3.5ns - 1.5ns = 5ns <- this is our margin. 5ns / 2 = 2.5ns <- this is how much margin we give to the setup time in each direction. -3.5ns - 2.5ns = -6ns.

Now in reality, I think, as you've calculated, the data delay inside the FPGA is going to be longer than the clock delay.
So I would rather see that -6ns be more like -5ns. Does this help at all?
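The two balancing strategies above boil down to a few lines of arithmetic. Here's a hypothetical Python sketch using the datasheet numbers from this thread (the variable names are my own, not from any vendor tool):

```python
# All times in ns; a negative phase means the SRAM clock edge leads
# the system clock edge by that amount.

PERIOD = 10.0   # 100 MHz system clock
T_SU   = 1.5    # SRAM setup time (datasheet)
T_H    = 0.5    # SRAM hold time (datasheet)
T_CO   = 3.5    # SRAM worst-case clock-to-out (datasheet)

# Option 1: balance the SRAM's output setup margin against its hold margin.
margin_out   = PERIOD - T_SU - T_H        # 8.0 ns to distribute
phase_1      = -(T_H + margin_out / 2)    # -4.5 ns
# Input setup the FPGA then sees: data arrives T_CO after the early SRAM edge.
fpga_setup_1 = -phase_1 - T_CO            # only 1.0 ns

# Option 2: balance the output setup margin against the input setup margin.
margin_both  = PERIOD - T_CO - T_SU       # 5.0 ns to distribute
phase_2      = -(T_CO + margin_both / 2)  # -6.0 ns

print(f"Option 1 phase: {phase_1} ns (FPGA input setup {fpga_setup_1} ns)")
print(f"Option 2 phase: {phase_2} ns")
```

In an ideal zero-delay world this reproduces the -4.5ns and -6ns figures; internal FPGA data delay is what then nudges the -6ns toward -5ns.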