Forum Discussion
What kind of messages are you seeing in Quartus when this happens? And why does your SDC file have an extension of .fdc instead of .sdc? Perhaps you have not added the SDC files in the Quartus Timing Analyzer settings, so the analyzer has no constraints until you manually read in the correct file(s) in the Timing Analyzer GUI.
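For reference, constraint files are registered with the project in the .qsf; a minimal sketch (the filename here is a placeholder, not from the original project):

```tcl
# .qsf entry registering a constraint file with the project;
# the Timing Analyzer reads every SDC_FILE assignment it finds here
set_global_assignment -name SDC_FILE top.sdc
```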
#iwork4intel
It is .fdc because I use the same file within synplify pro for synthesis.
The .fdc/.sdc files are present in the .qsf, don't worry.
I added a couple of false paths, and for some reason that speeds up the Timing Analyzer a lot
(roughly 20x faster).
Maybe when there is a lot of negative timing slack,
the internal timing database of Quartus grows (RAM and/or disk space) and slows down exponentially?
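For context, false paths of the kind mentioned above are declared in the SDC/FDC file roughly like this (the clock and register names are placeholders, not taken from the original project):

```tcl
# Example SDC false-path constraints (names are illustrative):
# cut all paths between two unrelated clock domains
set_false_path -from [get_clocks clk_a] -to [get_clocks clk_b]

# cut paths into a quasi-static configuration register
set_false_path -to [get_registers *config_reg*]
```

Each false path removes its endpoints from analysis, which is consistent with the observed speed-up when many failing paths no longer have to be enumerated.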
I have also set up to 16 cores to be used, which speeds things up as well.
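The core count mentioned above is set per project in the .qsf; a minimal sketch:

```tcl
# .qsf assignment allowing the tools to use up to 16 parallel processors
set_global_assignment -name NUM_PARALLEL_PROCESSORS 16
```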
Linux job stats after I close the job:
CPU time : 43799.84 sec.
Max Memory : 19792 MB
Average Memory : 4198.15 MB
Total Requested Memory : 48000.00 MB
Delta Memory : 28208.00 MB
Max Processes : 21
Max Threads : 92
Run time : 76589 sec.
Turnaround time : 76591 sec.
- AEsqu, 5 years ago
Contributor
There is an interesting fitter report table that shows the added delay to meet hold timing:
*.fit.rpt
Estimated Delay Added for Hold Timing Details (Delay Added in ns)
Note: This table only shows the top 100 path(s) that have the largest delay added for hold.
Is there a way to extend the table to, for example, 5000 paths instead of 100?
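I'm not aware of a knob for that specific fitter report table, but one possible workaround is to query the hold paths directly from the Timing Analyzer Tcl console (quartus_sta), where report_timing accepts a path count (the output filename is a placeholder):

```tcl
# In the quartus_sta Tcl console, after opening the project:
create_timing_netlist
read_sdc
update_timing_netlist

# report the 5000 worst hold paths instead of relying on the
# fitter report's fixed top-100 table
report_timing -hold -npaths 5000 -detail full_path -file hold_paths.rpt
```

This reports post-fit hold slack rather than the delay the fitter added, so it is a complement to the fitter table, not an exact replacement.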
- AEsqu, 5 years ago
Contributor
Well, I was a bit too enthusiastic; the Timing Analyzer still ran for 24 minutes.
- AEsqu, 5 years ago
Contributor
Wow,
I archived the Quartus 20.1 project under Linux and extracted it under Windows 10 with Quartus 20.2,
and the run time has dropped from 6 hours to 1h15m!
Either Quartus 20.2 is much faster than Quartus 20.1,
or the Linux implementation is much slower than the Windows one.
I will install Quartus 20.2 under Linux to see.