Forum Discussion
Altera_Forum
Honored Contributor
13 years ago

--- Quote Start ---

Why should the q[1] period be anything other than exactly 160 ns? Provided the base clock is exactly 40 ns, this holds. I think you are missing the point of what TQ is supposed to do. TQ does not calculate your design's clocks: the period and phase of each clock are inputs you feed to TQ so that it can calculate the behavior of other signals, for example their relationship with the latching clock when they reach a register. So if you use create_generated_clock or create_clock and declare a clock period different from the real one, TQ will use the value you specified, and all analysis results will be based on that value.

From what you say, I believe you intend to evaluate the jitter of the q[1] signal. In that case you only need to specify the 40 ns base clock in TQ and analyze the timing of q[1].

--- Quote End ---

Thanks very much. You mentioned: "if you use create_generated_clock or create_clock and declare a clock period different from the real one, TQ will use the value you specified, and all analysis results will be based on that value." I think this matches my understanding, namely that TQ calculates the clock based on my description.

But let me take another example. Suppose I need the FPGA to generate a very slow clock for a chip outside the FPGA. Since the clock is very slow (e.g. period = 10240 ns), I build a counter driven by the 40 ns clock, and q[7] then has a 10240 ns period. However, by the nature of a digital counter, there should be an error between the true clock period and the ideal period. If I want to check the true clock, to see whether the error is small enough for my chip, can I check it in TQ, or is there no way to do that? Thanks very much.
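One point worth separating out here (a sketch, not from the original thread): a binary counter bit q[n] divides the base clock by an exact integer (2^(n+1)), so when the target period is an integer multiple of the base period there is no counter quantization error at all, only the error and jitter of the base oscillator itself. A quantization error appears only when the desired period is not an integer multiple of the base period. The small Python sketch below (hypothetical helper names) illustrates this arithmetic for the 40 ns / 10240 ns case discussed above:

```python
# Sketch: period and quantization error of a counter-based clock divider.
# A counter output bit toggles once per 2**n base cycles, so the output
# period is always an integer multiple of the base clock period.

BASE_PERIOD_NS = 40.0  # base clock period driving the counter

def divided_period(divide_ratio: int, base_period: float = BASE_PERIOD_NS) -> float:
    """Output period of a counter stage dividing the base clock by divide_ratio."""
    return divide_ratio * base_period

def quantization_error(target_period: float, base_period: float = BASE_PERIOD_NS) -> float:
    """Difference between the closest achievable period (an integer
    multiple of base_period) and the desired target period."""
    best_ratio = max(1, round(target_period / base_period))
    return best_ratio * base_period - target_period

# q[7] of a binary counter divides by 2**8 = 256: 256 * 40 ns = 10240 ns.
print(divided_period(256))          # 10240.0 -> hits the target exactly
print(quantization_error(10240.0))  # 0.0     -> no counter-induced error
print(quantization_error(10250.0))  # -10.0   -> 10250 ns is not a multiple of 40 ns
```

Note that TQ itself never computes this: as the quoted reply says, a create_generated_clock constraint only tells TQ what period to assume; verifying that the assumed period matches the real hardware divider is up to you (or a simulation).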