Re: Q20.2: Critical Warning (19166): For IOPLL feedback delay chain setting was reduced from "X" to "0"

Hi,

Thanks for the reference. However, I'm still unclear on the best way to reduce the feedback path length: how is that length calculated, and what do I need to change in order to reduce it? We're connecting the PLL to the IP and to our user logic. Should we be using a different clock for our user logic? More specifically, what is the procedure for determining the length of the feedback path, and how does one reduce it?

Re: 100G Ethernet IP (Stratix 10) does not complete placement

No, this fully addresses the issue. Thanks!

Q20.2: Critical Warning (19166): For IOPLL feedback delay chain setting was reduced from "X" to "0"

Quartus Prime Version 20.2.0 Build 50 06/11/2020 SC Pro Edition
Copyright (C) 2020 Intel Corporation. All rights reserved.

Target FPGAs: "1SX280HU2F50E1VGAS", "1SX280HU1F50E2VG", "1SX280HU3F50E2VG"

When compiling a design in Quartus 20.2 that uses the JESD204B IP, I've created a PLL to provide the Link Clock that is fed to the JESD204B IP. However, when we specify the Link Clock PLL to operate in "Normal" mode instead of "Direct" mode, we now get some Critical Warnings. The documentation indicates that this link clock needs to be set to "Normal" to get deterministic latency and to support SYSREF sampling. This Link Clock is then distributed to four TX and four RX JESD204B IP instantiations. Since making this change, we're seeing a number of warnings of the form:

Critical Warning (19166): For IOPLL "jesd_pll[3].jesd_core_pll_inst_250|iopll_0|stratix10_altera_iopll_i|s10_iopll.fourteennm_pll", feedback delay chain setting was reduced from "4497" to "3843" to keep the IOPLL stable.
Critical Warning (19166): For IOPLL "jesd_pll[0].jesd_core_pll_inst_250|iopll_0|stratix10_altera_iopll_i|s10_iopll.fourteennm_pll", feedback delay chain setting was reduced from "3524" to "0" to keep the IOPLL stable.
Critical Warning (19166): For IOPLL "jesd_pll[2].jesd_core_pll_inst_250|iopll_0|stratix10_altera_iopll_i|s10_iopll.fourteennm_pll", feedback delay chain setting was reduced from "3808" to "65" to keep the IOPLL stable.
Critical Warning (19166): For IOPLL "jesd_pll[3].jesd_core_pll_inst_250|iopll_0|stratix10_altera_iopll_i|s10_iopll.fourteennm_pll", feedback delay chain setting was reduced from "5217" to "0" to keep the IOPLL stable.
Critical Warning (19166): For IOPLL "jesd_pll[1].jesd_core_pll_inst_250|iopll_0|stratix10_altera_iopll_i|s10_iopll.fourteennm_pll", feedback delay chain setting was reduced from "2790" to "125" to keep the IOPLL stable.
Critical Warning (19166): For IOPLL "jesd_pll[0].jesd_core_pll_inst_250|iopll_0|stratix10_altera_iopll_i|s10_iopll.fourteennm_pll", feedback delay chain setting was reduced from "3524" to "0" to keep the IOPLL stable.
Critical Warning (19166): For IOPLL "jesd_pll[2].jesd_core_pll_inst_250|iopll_0|stratix10_altera_iopll_i|s10_iopll.fourteennm_pll", feedback delay chain setting was reduced from "3808" to "65" to keep the IOPLL stable.
Critical Warning (19166): For IOPLL "jesd_pll[3].jesd_core_pll_inst_250|iopll_0|stratix10_altera_iopll_i|s10_iopll.fourteennm_pll", feedback delay chain setting was reduced from "5217" to "0" to keep the IOPLL stable.
Critical Warning (19166): For IOPLL "jesd_pll[1].jesd_core_pll_inst_250|iopll_0|stratix10_altera_iopll_i|s10_iopll.fourteennm_pll", feedback delay chain setting was reduced from "2790" to "125" to keep the IOPLL stable.
Critical Warning (19166): For IOPLL "jesd_pll[0].jesd_core_pll_inst_250|iopll_0|stratix10_altera_iopll_i|s10_iopll.fourteennm_pll", feedback delay chain setting was reduced from "3524" to "0" to keep the IOPLL stable.
Critical Warning (19166): For IOPLL "jesd_pll[2].jesd_core_pll_inst_250|iopll_0|stratix10_altera_iopll_i|s10_iopll.fourteennm_pll", feedback delay chain setting was reduced from "3758" to "10" to keep the IOPLL stable.
Critical Warning (19166): For IOPLL "jesd_pll[3].jesd_core_pll_inst_250|iopll_0|stratix10_altera_iopll_i|s10_iopll.fourteennm_pll", feedback delay chain setting was reduced from "5280" to "0" to keep the IOPLL stable.
Critical Warning (19166): For IOPLL "jesd_pll[1].jesd_core_pll_inst_250|iopll_0|stratix10_altera_iopll_i|s10_iopll.fourteennm_pll", feedback delay chain setting was reduced from "2689" to "16" to keep the IOPLL stable.

However, I can't find any documentation on what this warning means, or why it's happening. The relevant PLL instantiation block looks like this, and is inside a generate block:

jesd_core_pll_250 jesd_core_pll_inst_250 (
    .rst      (w_JesdCorePllReset      [genvar_JESD]), // input
    .refclk   (w_JesdDevClk            [genvar_JESD]), // 250MHz input
    .locked   (w_JesdCorePllLocked_250 [genvar_JESD]), // output
    .outclk_0 (w_LinkClk_250           [genvar_JESD])  // 250MHz output TX & RX 1GSPS
);

jesd_core_pll_300 jesd_core_pll_inst_300 (
    .rst      (w_JesdCorePllReset      [genvar_JESD]), // input
    .refclk   (w_JesdDevClk            [genvar_JESD]), // 250MHz input
    .locked   (w_JesdCorePllLocked_300 [genvar_JESD]), // output
    .outclk_0 (w_LinkClk_300           [genvar_JESD])  // 300MHz output JESD RX 3GSPS
);

jesd_core_pll_375 jesd_core_pll_inst_375 (
    .rst      (w_JesdCorePllReset      [genvar_JESD]), // input
    .refclk   (w_JesdDevClk            [genvar_JESD]), // 250MHz input
    .locked   (w_JesdCorePllLocked_375 [genvar_JESD]), // output
    .outclk_0 (w_LinkClk_375           [genvar_JESD])  // 375MHz output JESD TX 3GSPS
);

assign w_JesdLinkClk_RX[genvar_JESD] =
    ((JESD_L == 4) && (JESD_M == 2) && (JESD_F == 1) && (JESD_S == 1)) ? w_LinkClk_250[genvar_JESD] :
 /* ((JESD_L == 8) && (JESD_M == 2) && (JESD_F == 8) && (JESD_S == 20)) ? */
    w_LinkClk_300[genvar_JESD];

assign w_JesdLinkClk_TX[genvar_JESD] =
    ((JESD_L == 4) && (JESD_M == 2) && (JESD_F == 1) && (JESD_S == 1)) ? w_LinkClk_250[genvar_JESD] :
 /* ((JESD_L == 8) && (JESD_M == 2) && (JESD_F == 8) && (JESD_S == 20)) ? */
    w_LinkClk_375[genvar_JESD];

// For JESD_Rx
assign w_JesdCorePllLocked[genvar_JESD+4] =
    ((JESD_L == 4) && (JESD_M == 2) && (JESD_F == 1) && (JESD_S == 1)) ? w_JesdCorePllLocked_250[genvar_JESD] :
 /* ((JESD_L == 8) && (JESD_M == 2) && (JESD_F == 8) && (JESD_S == 20)) ? */
    w_JesdCorePllLocked_300[genvar_JESD];

// For JESD_Tx
assign w_JesdCorePllLocked[genvar_JESD+8] =
    ((JESD_L == 4) && (JESD_M == 2) && (JESD_F == 1) && (JESD_S == 1)) ? w_JesdCorePllLocked_250[genvar_JESD] :
 /* ((JESD_L == 8) && (JESD_M == 2) && (JESD_F == 8) && (JESD_S == 20)) ? */
    w_JesdCorePllLocked_375[genvar_JESD];

assign   w_JesdLinkClk_RX[genvar_JESD+4] = w_JesdLinkClk_RX[genvar_JESD];
// assign w_JesdLinkClk_TX[genvar_JESD+4] = w_JesdLinkClk_TX[genvar_JESD]; // UNCOMMENT WHEN 8 TX

The idea is that the PLLs we aren't going to use will be synthesized away, which is happening. But I have no idea what this warning means, or how to fix it.

Re: 100G Ethernet IP (Stratix 10) does not complete placement

Hi,

When running the identical design in Quartus 19 or 20, we would get the above errors; however, with Quartus 21.2 we have not seen any instance of them. Based on this, it looks like the error was specific to those versions of Quartus.

Re: 100G Ethernet IP (Stratix 10) does not complete placement

I apologize; you're right. That may be the best course of action. Another note: moving to Quartus 20.2 has resulted in this error being much less frequent. We don't think this is a routing congestion issue, as the design is heavily pipelined and we are nowhere near full utilization (<10% on most figures, including DSP).

Re: 100G Ethernet IP (Stratix 10) does not complete placement

I wanted to clarify the statistics above.
I previously indicated that it occurred 90% of the time - that was incorrect. Since upgrading to Quartus 20.2, we now see this error around 5-10% of the time. This is on the exact same build, just run multiple times.

Re: 100G Ethernet IP (Stratix 10) does not complete placement

Hello,

To answer your questions, and share some additional insight:

1) These errors are occurring with the Low Latency 100G soft IP, and
2) These errors ALSO occur with the Low Latency 40G soft IP.
3) They do not always occur. Right now, we get this error around 90% of the time. In some cases the exact same build will sometimes work, but re-running that exact same build (using our automated CI system, so there is no human intervention and the build environment is identical) causes the build to fail.
4) It doesn't seem to be related to complexity: when we try to compile the design with 100G only, we see this error, but we also see it when we take out our IP.
5) We know that our design routes, and the current build uses comparatively few resources.
6) These problems started occurring after we refactored the code - we suspect this changed what the inference engine produces - but the RTL viewer shows the code is mostly the same.
7) We have confirmed that this is an issue with Quartus 19.4 and Quartus 20.2.
8) Our latest build compiles if the IP is generated using Quartus 19.4, but the actual build is done by Quartus 20.2. If the entire build is carried out by Quartus 20.2, then we encounter the same issue.
The specific error message consists of a very large number of lines that look like this:

Info (170189): Fitter placement preparation operations beginning
Info (170191): Fitter placement operations beginning
Error (170077): Cannot place the following nodes
Error (170078): Cannot place node "auto_fab_0|alt_sld_fab_0|alt_sld_fab_0|memfabric_0|mm_interconnect_0|limiter_pipeline_001|gen_inst[0].core|data1[9]" of type Register cell with location constraints CUSTOM_REGION_X11_Y37_X282_Y432 from Promoted Clock Region File: /1SX280HU3F50E2VG__/qdb/_compiler/blah/_flat/19.4.0/partitioned/1/.temp/sld_fabrics/ipgen/alt_sld_fab_0/alt_sld_fab_0/altera_avalon_st_pipeline_stage_1920/synth/altera_avalon_st_pipeline_base.v Line: 66
Error (170079): Cannot place node "auto_fab_0|alt_sld_fab_0|alt_sld_fab_0|memfabric_0|mm_interconnect_0|rsp_mux|src_payload[2]~40xsyn" of type Combinational cell File: /scratch/fpga-q17/build-dir/fpga/build-tate/build__1SX280HU3F50E2VG__master__full40trx4__cyan-rtm2.2-ddr-430-g2e4f3f99__2021-09-01-152714296614376/qdb/_compiler/tate/_flat/19.4.0/partitioned/1/.temp/sld_fabrics/ipgen/alt_sld_fab_0/alt_sld_fab_0/altera_merlin_multiplexer_191/synth/alt_sld_fab_0_altera_merlin_multiplexer_191_ldk5wsi.sv Line: 152

The design includes DSP and Ethernet blocks, and the issue we are seeing generally happens when both are in play. If I compile a very, very simple design, it sometimes works; however, if I compile a design that includes the Ethernet, our DSP blocks, and additional user logic, I then get the errors above. We spent the day porting the code from Quartus 19.4 to 20.2, but our CI system still shows these kinds of errors.
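Since the same netlist sometimes places and sometimes fails, one experiment we could script in CI is sweeping the fitter seed so each attempt starts placement from a different point. This is only a sketch: the project name "tate" is taken from the build path above, and whether the SEED assignment is honored by this Quartus Prime Pro version is an assumption on my part.

```tcl
# Sketch for quartus_sh -t: retry placement with different fitter seeds.
# ASSUMPTION: the SEED global assignment is honored by this Pro version.
load_package project
foreach seed {1 2 3 4 5} {
    project_open tate
    set_global_assignment -name SEED $seed
    export_assignments        ;# write the new seed back to the .qsf
    project_close
    # qexec raises a Tcl error if the command exits non-zero
    if {![catch {qexec "quartus_fit tate"}]} {
        puts "Placement succeeded with seed $seed"
        break
    }
}
```

If the failure rate changes markedly with seed, that would at least support the theory that this is a placement-heuristic issue rather than a genuine constraint conflict.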
I've looked at the KB and forum posts, and it seems similar to the following:

https://www.intel.com/content/www/us/en/programmable/support/support-resources/knowledge-base/tools/2018/why-does-my-dsp-design-fail-during-fit-with-error-170079---canno.html

and

https://community.intel.com/t5/Intel-Quartus-Prime-Software/Seeking-a-solution-to-Quartus-Prime-19-2-fitter-error-170079/td-p/704677

The second post suggests that this is an erratum in Quartus itself. Can you confirm that there haven't been any similar issues in Quartus 20.2?

Best Regards,

100G Ethernet IP (Stratix 10) does not complete placement

Hi,

We're attempting to route a 100G interface with RS-FEC, using the Low Latency 100G IP. When we configure the IP without RS-FEC, the design places and routes; when we enable RS-FEC, placement consistently fails. We suspect the problem relates to our SDC file, but we're not sure how to modify it. The current design only includes the 100G interface, with minimal logic to support Ethernet control packets (logic which has been validated to work over a 40G interface). Are there additional constraints we should be looking at? I noticed that the user guide says that we may sometimes have to do floor planning - is that often the case?

[Solved] Low Latency 100G with RS-FEC fails placement on Stratix 10 H-tile

Hi,

We're attempting to route a 100G interface with RS-FEC. When we configure the IP without RS-FEC, the design places and routes; when we enable RS-FEC, placement consistently fails. We suspect the problem relates to our SDC file, but we're not sure how to modify it. The current design only includes the 100G interface, with minimal logic to support Ethernet control packets (logic which has been validated to work over a 40G interface). Are there additional constraints we should be looking at?
I noticed that the user guide says that we may sometimes have to do floor planning - is that often the case?

# Specify the number of 100G interfaces:
set x100g_count 4

#**************************************************************
# Time Information
#**************************************************************
set added_uncertainty_312mhz 0.48ns
set added_uncertainty_390mhz 0.424ns

#**************************************************************
# Create Clock
#**************************************************************
# 100G clocks
for { set i 0 } { $i < $x100g_count } { incr i } {
    set cg_ref_clk [get_ports cg_ref[$i]]
    create_clock -name cg_ref[$i] -period 1.551515152 $cg_ref_clk
    # Reset cg_ref_clk to 0 so that, if one iteration fails to assign the correct
    # value to cg_ref_clk, the next iteration won't create a duplicate clock
    set cg_ref_clk 0
}

set TRS_DIVIDED_OSC_CLK [get_clocks ALTERA_INSERTED_INTOSC_FOR_TRS|divided_osc_clk]

#
# Get clock info for 100G RX, TX and IOPLL clocks:
#
#set IOPLL_LOCKCLK [list]
set RX_CORECLK     [list]
set TX_CORECLK     [list]
set RX_RS_CORE_CLK [list]
set RX_RS_CORE_NCK [list]
set TX_RS_CORE_CLK [list]
set REF_CLK        [list]

set SYSTEM_TIME_CLK [get_clocks {fpga_ref_pll|iopll_0_outclk0}]

# This is the clock from the TRx125MCalClk pin, which is 100 MHz.
set STATUS_CLK [get_clocks {mgmt_clk_pll|iopll_0_outclk0}]

for { set i 0 } { $i < $x100g_count } { incr i } {
    set THIS_RX_RS_CORE_CLK [get_clocks gen_XG[$i].xg_wrapper|GEN_100G.av_top|alt_e100s10_0|WITH_FEC.fecpll|RXIOPLL_INST.fecrxpll|alt_e100s10ex_iopll_rx_outclk0]
    set THIS_RX_RS_CORE_NCK [get_clocks gen_XG[$i].xg_wrapper|GEN_100G.av_top|alt_e100s10_0|WITH_FEC.fecpll|RXIOPLL_INST.fecrxpll|alt_e100s10ex_iopll_rx_n_cnt_clk]
    set THIS_TX_RS_CORE_CLK [get_clocks gen_XG[$i].xg_wrapper|GEN_100G.av_top|alt_e100s10_0|WITH_FEC.fecpll|TXPLL_IN.TXFPLL_INST.tx_pll_gen.fectxpll|clkdiv_output_div1]
    set THIS_RX_CORECLK     [get_clocks gen_XG[$i].xg_wrapper|GEN_100G.av_top|alt_e100s10_0|xcvr|rx_clkout2|ch1]
    set THIS_TX_CORECLK     [get_clocks gen_XG[$i].xg_wrapper|GEN_100G.av_top|alt_e100s10_0|xcvr|tx_clkout2|ch1]
    set THIS_REF_CLK        [get_clocks cg_ref[$i]]

    lappend RX_RS_CORE_CLK [get_clock_info -name $THIS_RX_RS_CORE_CLK]
    lappend RX_RS_CORE_NCK [get_clock_info -name $THIS_RX_RS_CORE_NCK]
    lappend TX_RS_CORE_CLK [get_clock_info -name $THIS_TX_RS_CORE_CLK]
    lappend RX_CORECLK     [get_clock_info -name $THIS_RX_CORECLK]
    lappend TX_CORECLK     [get_clock_info -name $THIS_TX_CORECLK]
    lappend REF_CLK        [get_clock_info -name $THIS_REF_CLK]

    set_clock_groups -exclusive -group $THIS_TX_CORECLK -group $THIS_RX_CORECLK -group $THIS_TX_RS_CORE_CLK -group $THIS_RX_RS_CORE_CLK -group $THIS_RX_RS_CORE_NCK -group $TRS_DIVIDED_OSC_CLK -group $STATUS_CLK -group $REF_CLK
}

if {![is_post_route]} {
    set_clock_uncertainty $added_uncertainty_312mhz -add -from $RX_RS_CORE_CLK -to $RX_RS_CORE_CLK -setup
    set_clock_uncertainty $added_uncertainty_312mhz -add -from $TX_RS_CORE_CLK -to $TX_RS_CORE_CLK -setup
    set_clock_uncertainty $added_uncertainty_312mhz -add -from $RX_RS_CORE_NCK -to $RX_RS_CORE_NCK -setup
    set_clock_uncertainty $added_uncertainty_390mhz -add -from $RX_CORECLK -to $RX_CORECLK -setup
    set_clock_uncertainty $added_uncertainty_390mhz -add -from $TX_CORECLK -to $TX_CORECLK -setup
} else {
    # everywhere else
}

set_clock_groups -exclusive -group $TX_CORECLK -group $RX_CORECLK -group $TX_RS_CORE_CLK -group $RX_RS_CORE_CLK -group $RX_RS_CORE_NCK -group $TRS_DIVIDED_OSC_CLK -group $STATUS_CLK -group $REF_CLK

set_clock_groups -exclusive -group [lindex $TX_CORECLK 0] -group [lindex $TX_CORECLK 1] -group [lindex $TX_CORECLK 2] -group [lindex $TX_CORECLK 3]
set_clock_groups -exclusive -group [lindex $RX_CORECLK 0] -group [lindex $RX_CORECLK 1] -group [lindex $RX_CORECLK 2] -group [lindex $RX_CORECLK 3]
set_clock_groups -exclusive -group [lindex $TX_RS_CORE_CLK 0] -group [lindex $TX_RS_CORE_CLK 1] -group [lindex $TX_RS_CORE_CLK 2] -group [lindex $TX_RS_CORE_CLK 3]
set_clock_groups -exclusive -group [lindex $RX_RS_CORE_CLK 0] -group [lindex $RX_RS_CORE_CLK 1] -group [lindex $RX_RS_CORE_CLK 2] -group [lindex $RX_RS_CORE_CLK 3]
set_clock_groups -exclusive -group [lindex $RX_RS_CORE_NCK 0] -group [lindex $RX_RS_CORE_NCK 1] -group [lindex $RX_RS_CORE_NCK 2] -group [lindex $RX_RS_CORE_NCK 3]

derive_clock_uncertainty

# From "AV SoC Golden Hardware Reference Design"
set_input_delay -clock altera_reserved_tck -clock_fall 3 [get_ports altera_reserved_tdi]
set_input_delay -clock altera_reserved_tck -clock_fall 3 [get_ports altera_reserved_tms]
#set_input_delay -clock altera_reserved_tck -clock_fall 3 [get_ports altera_reserved_ntrst]

#**************************************************************
# Set Output Delay
#**************************************************************
set_output_delay -clock altera_reserved_tck 3 [get_ports altera_reserved_tdo]

#**************************************************************
# Set Clock Groups
#**************************************************************
#set_clock_groups -asynchronous -group $RX_CORECLK \
#                               -group $TX_CORECLK \
#                               -group $STATUS_CLK \
#                               -group $SYSTEM_TIME_CLK
#set_clock_groups -asynchronous -group $IOPLL_LOCKCLK \
#                               -group $STATUS_CLK

# Set false path from the PMA fifo flags' clock to clk_status
# (ex_100g_inst|ex_100g_inst|alt_s100|csr|eio_flags_csr[*])
for { set i 0 } { $i < $x100g_count } { incr i } {
    for { set chNum 0 } { $chNum < 4 } { incr chNum } {
        set RX_CLK [get_clocks gen_XG[$i].xg_wrapper|GEN_100G.av_top|alt_e100s10_0|xcvr|rx_pcs_x2_clk|ch$chNum]
        set TX_CLK [get_clocks gen_XG[$i].xg_wrapper|GEN_100G.av_top|alt_e100s10_0|xcvr|tx_pcs_x2_clk|ch$chNum]
        set_clock_groups -exclusive -group $RX_CLK -group $STATUS_CLK -group $TRS_DIVIDED_OSC_CLK
        set_clock_groups -exclusive -group $TX_CLK -group $STATUS_CLK -group $TRS_DIVIDED_OSC_CLK
    }
}

set_clock_groups -exclusive -group $RX_CORECLK -group [get_clocks {mgmt_clk_pll|iopll_0_refclk}]
set_clock_groups -exclusive -group $TX_CORECLK -group [get_clocks {mgmt_clk_pll|iopll_0_refclk}]
set_clock_groups -exclusive -group $RX_CORECLK -group [get_clocks "mgmt_clk_pll|iopll_0_refclk"]
set_clock_groups -exclusive -group $TX_CORECLK -group [get_clocks "mgmt_clk_pll|iopll_0_refclk"]
set_clock_groups -exclusive -group $STATUS_CLK -group [get_clocks {mgmt_clk_pll|iopll_0_refclk}]
set_clock_groups -exclusive -group [get_clocks {mgmt_clk_pll|iopll_0_refclk}] -group $STATUS_CLK

#**************************************************************
# Set False Path
#**************************************************************
set_false_path -to o_40GA_LED[*]
set_false_path -to o_40GB_LED[*]
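On the floor-planning question above: in case it helps the discussion, here is a minimal sketch of what a placement region for the 100G core could look like. Everything here is an assumption on my part, not something from the user guide: the instance path is guessed from our hierarchy, the coordinates are placeholders, and PLACE_REGION / RESERVE_PLACE_REGION are the assignment names I believe Quartus Prime Pro uses on Stratix 10.

```tcl
# Hypothetical .qsf floor plan: pin one 100G core to a rectangular region.
# Coordinates are placeholders - choose real ones in Chip Planner, near the
# transceiver bank the channels actually use.
set_instance_assignment -name PLACE_REGION "X200 Y300 X282 Y432" \
    -to "gen_XG[0].xg_wrapper|GEN_100G.av_top|alt_e100s10_0"
# Optionally keep unrelated logic out of the region so the RS-FEC logic
# isn't crowded by user logic:
set_instance_assignment -name RESERVE_PLACE_REGION ON \
    -to "gen_XG[0].xg_wrapper|GEN_100G.av_top|alt_e100s10_0"
```

If someone can confirm whether a region per interface (or only for the RS-FEC portion) is the recommended granularity, that would be very helpful.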