Stratix 10 : Critical Warning: DDR Timing requirements not met
I have two DDR4 interfaces ('DDR A' and 'DDR B') in my project, each driven by its own DDR4 EMIF controller for Stratix 10.
My problem is that neither of them meets timing; the compilation report gives this kind of message for both (the setup and hold values differ between them).
Info: Core: emif_fpga_b_emif_s10_0_altera_emif_arch_nd_191_y322eui - Instance: memory_ddr4x72_wrapper_b|u1|emif_s10_0|emif_s10_0
Info:                                                 setup    hold
Info: Address/Command (Fast 900mV 0C Model)         |  0.176    0.176
Info: Core (Fast 900mV 0C Model)                    |  0.762   -5.928
Info: Core Recovery/Removal (Fast 900mV 0C Model)   |  0.827    1.703
Info: DQS Gating (Fast 900mV 0C Model)              |  0.530    0.530
Info: Read Capture (Fast 900mV 0C Model)            |  0.036    0.036
Info: Write (Fast 900mV 0C Model)                   |  0.058    0.058
Info: Write Levelling (Fast 900mV 0C Model)         |  0.141    0.141
Critical Warning: DDR Timing requirements not met
1st question: how can I solve this?
My other problem is that when I run the Timing Analyzer GUI I see 29 failing paths, all of them in DDR A only. They are all -from: {memory_ddr4x72_wrapper_a|u0|emif_s10_0|emif_s10_0|ecc_core|core|ecc|internal_master_wr_data[xxx]}
and -to: {memory_ddr4x72_wrapper_a|u0|emif_s10_0|emif_s10_0|arch|arch_inst|io_tiles_wrap_inst|io_tiles_inst|tile_gen[xxx].lane_gen[xxx].lane_inst|lane_inst~phy_reg1}
with different values of xxx.
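For reference, this is a sketch of how I list those paths in the Timing Analyzer Tcl console (the -npaths value is arbitrary; report_timing and get_keepers are standard Quartus Timing Analyzer Tcl commands):

```tcl
# List the worst setup paths between the ECC write-data registers and the
# I/O tile lane PHY registers of DDR A, so the Data Delay column can be
# inspected directly.
report_timing -setup \
    -from [get_keepers {memory_ddr4x72_wrapper_a|u0|emif_s10_0|emif_s10_0|ecc_core|core|ecc|internal_master_wr_data[*]}] \
    -to   [get_keepers {memory_ddr4x72_wrapper_a|u0|emif_s10_0|emif_s10_0|arch|arch_inst|io_tiles_wrap_inst|io_tiles_inst|tile_gen[*].lane_gen[*].lane_inst|lane_inst~phy_reg1}] \
    -npaths 29 -detail path_only
```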
2nd question: why do I see failing paths only in DDR A, and not in both DDRs, when I report timing?
Does anyone have a solution? What can I do to meet timing within the EMIF IP core? Is the timing related to the board and package skew settings that I define in the IP?
Please note that I have already posted this message, but the thread was closed because I didn't reply fast enough.
An Intel employee told me to apply the following constraint:
if {![is_post_route]} {
    set_min_delay \
        -from [get_keepers "memory_ddr4x72_wrapper_a\|u0\|emif_s10_0\|emif_s10_0\|ecc_core\|core\|ecc\|internal_master_wr_data\[*\]*"] \
        -to   {memory_ddr4x72_wrapper_a|u0|emif_s10_0|emif_s10_0|arch|arch_inst|io_tiles_wrap_inst|io_tiles_inst|tile_gen[*].lane_gen[*].lane_inst|lane_inst~phy_reg1} \
        4.114
}
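Before applying it, I sanity-checked that the escaped wildcard patterns actually match something (a sketch, assuming the Quartus Timing Analyzer Tcl console; get_collection_size is part of the Quartus Tcl API):

```tcl
# Count how many keepers each pattern of the set_min_delay matches; a count
# of 0 would mean the escaping is wrong and the constraint is a no-op.
set n_from [get_collection_size [get_keepers "memory_ddr4x72_wrapper_a\|u0\|emif_s10_0\|emif_s10_0\|ecc_core\|core\|ecc\|internal_master_wr_data\[*\]*"]]
set n_to   [get_collection_size [get_keepers {memory_ddr4x72_wrapper_a|u0|emif_s10_0|emif_s10_0|arch|arch_inst|io_tiles_wrap_inst|io_tiles_inst|tile_gen[*].lane_gen[*].lane_inst|lane_inst~phy_reg1}]]
puts "-from matched $n_from keepers, -to matched $n_to keepers"
```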
The path with the worst negative slack had a Data Delay of 3.74 ns. I then added 10% to this value, as instructed: 3.74 * 1.1 = 4.114 ns.
But this made things worse: I went from 48 failing paths to more than 500.