Forum Discussion
In static timing analysis, you're always analyzing the Data Arrival Path versus the Data Required Path. (99% of the time the Data Arrival is your source clock + data path, and the Data Required is your latch clock path.) So you've got two signals racing against each other. Let's pretend we're sending clock and data off chip, so the Data Arrival is the data path going off chip and the Data Required is the clock path going off chip.

Now let's say they use identically laid out paths in silicon, matched down to the ps. Rise-fall variation is one component that will make them differ, but on-die variation is usually a bigger one. Say we're looking at the slow timing model, and the very worst the data output path could be is 10ns. Just because that path is at the very worst point of the PVT corner doesn't mean another output (say the clock) is too; it might be better. TimeQuest models this with two "sub-models" within a corner, what I call a fast and a slow sub-model. So when doing setup analysis, the data would be at 10ns but the clock might only be at 9.5ns. That then flips for hold analysis: the data might be at 9.5ns and the clock at 10ns. The fact that it cuts both ways is the killer, since the on-die variation is effectively doubled when it eats into your margin. (In this example the sub-models differ by 500ps, but we lose a full 1ns of margin.)
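To make the arithmetic concrete, here's a minimal sketch of that double-counting, using the 10ns / 9.5ns numbers from above. The variable names and the simple subtraction model are my own illustration, not anything TimeQuest actually exposes:

```python
# Hypothetical per-corner sub-model delays for two nominally identical
# off-chip paths (values taken from the example above).
SLOW = 10.0  # ns, slow sub-model delay within this corner
FAST = 9.5   # ns, fast sub-model delay within the same corner

# Setup analysis: data path uses the slow sub-model, clock path the fast one,
# so the data shows up 0.5 ns later than the clock edge it races against.
setup_penalty = SLOW - FAST  # 0.5 ns of margin lost on the setup side

# Hold analysis: the sub-models flip -- data fast, clock slow -- so the data
# now shows up 0.5 ns *earlier* relative to the clock.
hold_penalty = SLOW - FAST   # another 0.5 ns lost, this time on the hold side

# Because the pessimism cuts both ways, the usable data-valid window
# shrinks by the sum of the two, not by the sub-model difference alone.
total_margin_lost = setup_penalty + hold_penalty
print(setup_penalty, hold_penalty, total_margin_lost)  # 0.5 0.5 1.0
```

That last line is the point of the example: a 500ps fast/slow spread inside one corner costs 1ns of total window, because setup and hold each take their own 500ps bite.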
Now, when you look at a single path, you're not analyzing it against anything else, but TimeQuest basically acts as if you were, i.e. it applies the fast and slow sub-models within the corner. That all being said, 1ns of variation seems really high. Maybe it's because you're using local routing and not a global clock network?