Forum Discussion

AEsqu
Contributor
5 years ago

Quartus 20.1 timing analysis takes time

The Quartus 20.1 timing analysis takes a long time after the fitter phase.

But when timing analysis is run from the GUI,

it reports timing much faster.

Why is it so slow the first time?

Can this be sped up?

See picture.

6 Replies

  • AEsqu
    Contributor

    After 20 minutes it was still running, so I stopped the flow and ran it in the GUI instead:


    History TAB of timing analyzer GUI:
    qsta_utility::auto_CRU "create_timing_netlist -snapshot final -model slow"

    Console TAB:
    The step below takes the vast majority of the time (after "Successfully loaded final database: elapsed time is 00:00:12."):

    *******************************************************************
    Running Quartus Prime Timing Analyzer
    *******************************************************************
    The Quartus Prime Shell supports all TCL commands in addition
    to Quartus Prime Tcl commands. All unrecognized commands are
    assumed to be external and are run using Tcl's "exec"
    command.
    - Type "exit" to exit.
    - Type "help" to view a list of Quartus Prime Tcl packages.
    - Type "help <package name>" to view a list of Tcl commands
    available for the specified Quartus Prime Tcl package.
    - Type "help -tcl" to get an overview on Quartus Prime Tcl usages.
    *******************************************************************
    project_open -force "synplify_synth_quartus_fit/Achilles_arria_X.qpf" -revision Achilles_arria_X
    qsta_utility::auto_CRU "create_timing_netlist -snapshot final -model slow"
    Automatically reading constraints and updating the timing netlist. To change this behavior, see Timer Analyzer Settings.
    Parallel compilation is enabled and will use up to 8 processors
    Loading final database.
    Loading "final" snapshot for partition "root_partition".
    Loading "final" snapshot for partition "auto_fab_0".
    Successfully loaded final database: elapsed time is 00:00:12.
    Core supply voltage operating condition is not set. Assuming a default value of '0.9V'.
    Low junction temperature is 0 degrees C
    High junction temperature is 100 degrees C

    After that, reading the SDC files and computing timing is faster:

    The Timing Analyzer is analyzing 36 combinational loops as latches. For more details, run the Check Timing command in the Timing Analyzer or view the "User-Specified and Inferred Latches" table in the Synthesis report.
    The Timing Analyzer found 73 latches that cannot be analyzed as synchronous elements. For more details, run the Check Timing command in the Timing Analyzer or view the "User-Specified and Inferred Latches" table in the Synthesis report.
    Reading the HDL-embedded SDC files elapsed 00:00:00.
    Reading SDC File: '../../sdc/Achilles_arria_X_project_quartus.sdc'
    Reading SDC File: '../../sdc/fpga.fdc'
    Clock uncertainty is not calculated until you update the timing netlist.
    Reading SDC files elapsed 00:00:08.
    ...
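
    The slow step can be reproduced interactively in a standalone quartus_sta Tcl shell (started with "quartus_sta -s"); the sketch below reuses the project/revision and SDC path from the log above, so adjust the names for your own design:

    ```tcl
    # Rebuild the post-fit timing netlist by hand, using the names from the log
    project_open -force "synplify_synth_quartus_fit/Achilles_arria_X.qpf" -revision Achilles_arria_X
    create_timing_netlist -snapshot final -model slow  ;# "final" snapshot, slow timing model
    read_sdc ../../sdc/Achilles_arria_X_project_quartus.sdc
    update_timing_netlist                              ;# clock uncertainty is computed here
    report_timing -setup -npaths 10                    ;# spot-check the worst setup paths
    ```

    Running it this way makes it easy to see which individual command eats the time.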

    • sstrell
      Super Contributor

      What kind of messages are you seeing in Quartus when this happens? And why does one of your SDC files have an extension of .fdc instead of .sdc? Perhaps you have not added the SDC files in the Quartus Timing Analyzer settings, so the compile-flow analysis gets stuck, while in the Timing Analyzer GUI you manually read in the correct file(s).
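
      If the files really were missing from the project, a .qsf fragment like this (a sketch, with the paths taken from your log) would register them for the full-compile STA step:

      ```tcl
      # Quartus reads SDC_FILE assignments from the .qsf in order
      set_global_assignment -name SDC_FILE ../../sdc/Achilles_arria_X_project_quartus.sdc
      set_global_assignment -name SDC_FILE ../../sdc/fpga.fdc
      ```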

      #iwork4intel

      • AEsqu
        Contributor

        It is .fdc because I use the same file with Synplify Pro for synthesis.

        The .fdc/.sdc files are listed in the .qsf, don't worry.

        I added a couple of false paths, and for some reason that speeds up the timing analyzer a lot

        (roughly 20x faster).

        Maybe when there is a lot of negative timing slack,

        the internal timing database of Quartus grows (RAM and/or disk space) and slows down exponentially?
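
        For reference, the false paths mentioned above would look something like this in the SDC (the clock and port names here are made up for illustration):

        ```tcl
        # Hypothetical false-path constraints; the analyzer stops timing these
        # paths, which shrinks the set of failing paths it has to enumerate
        set_false_path -from [get_clocks clk_a] -to [get_clocks clk_b]
        set_false_path -to [get_ports {status_led[*]}]
        ```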

        I have also allowed up to 16 cores to be used, which speeds things up as well.
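
        The core count can be pinned in the .qsf (the log above shows the default run used only 8 processors); 16 here matches the count mentioned above:

        ```tcl
        # .qsf assignment controlling how many CPUs parallel compilation may use
        set_global_assignment -name NUM_PARALLEL_PROCESSORS 16
        ```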

        Linux job stats after I closed the job:

        CPU time : 43799.84 sec.

        Max Memory : 19792 MB

        Average Memory : 4198.15 MB

        Total Requested Memory : 48000.00 MB

        Delta Memory : 28208.00 MB

        Max Processes : 21

        Max Threads : 92

        Run time : 76589 sec.

        Turnaround time : 76591 sec.