MichaelB · Occasional Contributor · Joined 5 years ago
Re: Plan error after upgrading Quartus Pro version 20.4 to 22.4

Hi Richard,

I'm very sorry for the delay; I was busy with several other tasks and had to postpone the work on the GPIO error. I have now been able to reproduce the error in a small test design. Could you have a look at the error I'm getting during the plan phase and tell me whether there is a way to resolve it?

Short summary: I'm trying to create 5 GPIOs in the same 3 V IO bank. Two of them are used as bidirectional IOs and three are output only. Quartus generates the 'quartus_inserted_3vio_obuf_oe_wirelut', which results in an error. In Quartus 20.4 this wasn't an issue, but with 22.4 I get the AIB_3VIO_OE error.

If you need any further information, do not hesitate to contact me.

Kind regards,
Michael

Re: Plan error after upgrading Quartus Pro version 20.4 to 22.4

Hi Richard,

thanks for your reply. Sadly, I cannot share the design in public. Should I open a ticket in IPS to share the design?

Regards,
Michael

Plan error after upgrading Quartus Pro version 20.4 to 22.4

Hi,

I just upgraded my project for a Stratix 10 MX FPGA from 20.4 to 22.4. With 20.4 the design compiled successfully, but after upgrading to 22.4 I see planning errors with the same RTL / IP set:

Error(22412): The design requires at least 3 elements of type AIB_3VIO_OE but the device has only 2.
Info(22415): AIB_3VIO_OE node(s) not associated with an IP require elements of this type:
Info(22414): Node: AIB_3VIO_OE AIB_3VIO_CELL~u_i2c_master|u_i2c_master_top|byte_ctrl|bit_ctrl|isda_oen.
Info(22414): Node: AIB_3VIO_OE AIB_3VIO_CELL~u_i2c_master|u_i2c_master_top|byte_ctrl|bit_ctrl|iscl_oen.
Info(22414): Node: AIB_3VIO_OE AIB_3VIO_CELL~u_ltc2226_clk_output_gpio_top|u_ltc2226_clk_output_gpio|ltc2226_clk_output_gpio|core|i_loop[0].altera_gpio_bit_i|output_buffer.obuf~quartus_inserted_3vio_obuf_oe_wirelut.
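The plan failure above is, in essence, a resource count: the two I2C output enables plus the OE wire-LUT that 22.4 inserts for the output-only GPIO add up to three AIB_3VIO_OE elements against two available. A minimal Python sketch of that accounting (node names are abbreviated from the log; the counting logic is illustrative, not Quartus internals):

```python
# Illustrative sketch of the plan-stage resource check behind Error(22412).
# Node names abbreviated from the Quartus log above; not Quartus internals.

oe_nodes = [
    "u_i2c_master|...|isda_oen",   # bidirectional SDA, needs an OE element
    "u_i2c_master|...|iscl_oen",   # bidirectional SCL, needs an OE element
    # Output-only GPIO: 22.4 inserts an OE wire-LUT, consuming a third element
    "u_ltc2226_...|output_buffer.obuf~quartus_inserted_3vio_obuf_oe_wirelut",
]
available = 2  # AIB_3VIO_OE elements the device provides in this bank

required = len(oe_nodes)
if required > available:
    print(f"Error(22412): requires {required} AIB_3VIO_OE, device has {available}")
```

This matches the observation that 20.4 passed: without the inserted wire-LUT, only the two bidirectional IOs consumed OE elements.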
The GPIO for the I2C master SDA/SCL is defined as bidirectional with open-drain and output enable selected. The LTC2226 clock output GPIO is configured as output only. The OEN path is implemented in 22.4, which was not the case with 20.4; this doesn't seem to be allowed and results in the error (all three GPIO IP files are attached).

- Could you help me resolve this problem?
- Do I need any additional qsf setting for this port that was not necessary in 20.4?

Best regards,
Michael

Re: Avalon to AXI implementation

Hi Pramod,

here is a screenshot of our current controller settings in the HBM2 controller IP core (Quartus 20.4 / IP version 19.6.1): HBM2 controller settings

Unfortunately, we did not test the example design. Furthermore, we wrote our own driver for the PCIe DMA access, which I'm not allowed to share due to confidential information. We also implemented our own converter (confidential, too) because the Intel AXI Bridge could not achieve the promised data rate. We had several discussion/debug sessions with Intel Premier support over weeks, but none of the approaches using Intel IP only were successful. We added a Signal Tap in the mm_interconnect (which is autogenerated by QSYS) and in the AXI Bridge IP core and saw incorrect AXI transactions that blocked and decreased the throughput tremendously. We confirmed this in our simulations, too.

Further, the data rate between PCIe DMA and HBM2 wasn't our main topic. We had some further AXI buses connected to the HBM2 which provide the data (there we had a high data-rate requirement). The readout of the data via PCIe DMA did not have a specific data-rate requirement. For the AXI connection to the AXI HBM2 we implemented our own fabric, too, instead of using the autogenerated mm_interconnect.

Kind regards,
Michael

Re: Avalon to AXI implementation

Hi Pramod,

ok, I would assume that should be resolved already. Could you share the configuration of the HBM2 controller IP file? Then I could compare it with our IP file.
Maybe there are some further settings necessary to enable burst support. We are using the PCIe AVMM (DMA) connected to the HBM2 (Stratix 10 MX). Do you use any module between PCIe and HBM2? We faced some issues with the AXI bridge in between and ended up connecting the AVMM of the PCIe directly to the HBM2 AXI ports.

Kind regards,
Michael

Re: Avalon to AXI implementation

Hi Pramod,

we used a similar architecture, with multiple masters connected to a single HBM2 channel plus PCIe DMA access. The HBM2 burst controller seems to have a bug in some specific Quartus versions (20.4 and lower). We succeeded with a solution provided by Intel that patches the IP files after IP generation (sadly, you won't be able to use the common tool flow anymore). With Quartus 21.1 the fix is included in the IP again: https://www.intel.com/content/www/us/en/support/programmable/articles/000086781.html

For us it was a long way to debug this; hopefully it will help you. FYI: we haven't switched to 21.1 yet, because with 20.4 we get no timing violations in our design, while with 21.1 the retiming process again does not work properly. With 21.1 we got tremendous timing violations, and we did not get clear information from the Intel support team about why this happens with a version upgrade. Let's see if this is fixed in later versions...

Kind regards,
Michael

Re: Avalon to AXI implementation

Hi Richard,

yes, I won't use outstanding transactions, since they are not supported by the EMIF core anyway. For the same reason I don't think it is very beneficial to replace a standard component with a custom one. Would you recommend using a CCB, or an autogenerated CCB between two Pipeline Bridges (mm_interconnect)? I would then configure it without any outstanding transactions and just define the burst size. Would this be a valid design for high throughput from an AXI master to DDR?
Again, this is my main reason for opening this thread: above 3 Gbps I get overflows on the AXI master side. The AXI master runs at 160 MHz @ 256 bit and I don't know why it overflows, since the DDR side runs at 200 MHz @ 512 bit; it doesn't make sense to me that I still get overflows. We faced such issues previously and verified in simulation that the Avalon <-> AXI mm_interconnect does not respond fast enough with a valid indication. Furthermore, we found that doing protocol conversion and CDC in one step (AXI 160 MHz @ 256 bit <-> Avalon 200 MHz @ 512 bit) is even worse in throughput than doing the protocol conversion first and then the CDC Avalon <-> Avalon only.

Here I really want to be sure to support a data rate > 10 Gbps, which should be possible with a burst size of 32 in a 160 MHz @ 256 bit domain. Would you recommend setting the maximum read/write outstanding transactions to 0 and just doing the connection with Avalon <-> AXI burst/bit-width conversion?

Kind regards,
Michael

Re: Avalon to AXI implementation

Hi Richard,

yes, I noticed this too: some signals are missing (EMIF core & Avalon CCB). Is there an option in the Avalon CCB & EMIF core to enable those? How can I create a custom component for a standard IP? Would this be a custom component instantiating the EMIF core? I tried to edit the interface of those IP cores in the component section in QSYS, but I cannot add further signals. I already read through the EMIF user guide (https://www.intel.com/content/dam/www/programmable/us/en/pdfs/literature/hb/stratix-10/ug-s10-emi.pdf) but could not find any setting to enable/disable pending write transactions.

Currently I am using the autogenerated interconnect from QSYS to convert 256-bit AXI (BL = 16) to 512-bit Avalon (BL = 8). I did not see any errors in QSYS, only the hint that an Avalon adapter will be inserted between AXI <-> Avalon. Are there any special settings of the mm_interconnect I have to configure for the bus conversion?
From the documentation of the Platform Designer I assumed this would be done by the mm_interconnect automatically. Do you have any reference design (QSYS) where an AXI <-> Avalon connection with bit-width conversion plus burst conversion is done? That would help me understand the settings on both sides and align them for the best throughput.

Kind regards,
Michael

Re: Avalon to AXI implementation

Hi Richard,

thanks for your reply! In the EMIF DDR4 & Avalon CCB settings I'd like to increase the number of pending writes (the CCB Avalon_M writes data to the Avalon_S of the DDR4):

AXI_M (write) -> Avalon CCB -> DDR4

AXI burst length = 16 (data width = 256) & Avalon burst length = 8 (data width = 512), based on my calculation: 16*256 = 8*512. Will the interconnect resolve the data-width conversion and align the bursts?

Screenshots of CCB & EMIF: EMIF DDR4 parameters, EMIF DDR4 AVM settings, Avalon CCB AVM settings, Avalon CCB parameters.

Let me know if you need further information!

Best regards,
Michael

Avalon to AXI implementation

Hi,

I'm currently considering implementing my own Avalon <-> AXI4 (MM) adapter instead of using the QSYS autogenerated adapter. We are using an AXI4 DMA that streams data into the DDR4. Because the DDR4 uses an Avalon interface, I already tried the autogenerated converter, which is too slow to support our data rates. Even with pending transactions set to 64 and burst size 16, we are not able to achieve a data rate > 3 Gbps (the AXI side has burst size 16, too). The DDR4 (1600 MHz) Avalon side runs at 200 MHz @ 512 bit and the DMA at 160 MHz @ 256 bit. Because the DMA on the 160 MHz side gets overflows, I am sure the transaction handling is the issue (I did not expect this, because the receiving side has a higher frequency and double the data width...).

We had the same issues for AXI DMA <-> AXI HBM2 as well. There we implemented our own AXI <-> AXI connection, which supports our required data rates much better than the autogenerated AXI converter (verified in simulation & on FPGA).
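As a sanity check on the numbers discussed in these posts, a small Python sketch comparing raw bus bandwidths on both sides with the observed ceiling (illustrative only: raw single-data-rate bandwidth, no protocol overhead or stall cycles):

```python
# Raw bus bandwidth on each side of the converter, per the figures above.
GBPS = 1e9

axi_dma    = 160e6 * 256  # DMA side:  160 MHz @ 256 bit
avalon_ddr = 200e6 * 512  # DDR4 side: 200 MHz @ 512 bit

# One AXI burst of 16 beats @ 256 bit carries the same bits as one
# Avalon burst of 8 beats @ 512 bit, so the burst conversion is lossless.
assert 16 * 256 == 8 * 512

print(f"AXI DMA raw:    {axi_dma / GBPS:.2f} Gbps")     # 40.96 Gbps
print(f"Avalon DDR raw: {avalon_ddr / GBPS:.2f} Gbps")  # 102.40 Gbps

# The observed ~3 Gbps ceiling is far below either raw figure, and the
# receiving side is wider and faster than the sender, so back-pressure
# from the converter, not raw bus capacity, would explain the overflows.
achieved = 3e9
assert achieved < axi_dma < avalon_ddr
print(f"Utilisation of DMA link: {achieved / axi_dma:.1%}")  # ~7.3%
```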
Regarding this we already had several debug sessions with Premier support; the final result was to NOT use the autogenerated adapter and write our own... Again, with DDR4 we are facing the same throughput limitations (AXI <-> Avalon is the bottleneck).

Could you give me advice on implementing this conversion? Are there any data sheets that describe the adapter autogenerated by QSYS? Furthermore, I saw that I can edit the maximum pending read transactions on the DDR4 EMIF core and on the Avalon Clock Crossing Bridges, but not the maximum pending write transactions. Is there a reason why I cannot edit these parameters in those IP cores?

Kind regards,
Michael
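One way to see why the maximum-pending-transactions setting recurs throughout this thread: with in-order completion, sustained throughput is bounded by the data in flight divided by the round-trip latency (Little's law). A rough Python sketch; the 1 µs round-trip latency and the burst parameters are illustrative assumptions, not measured values from these designs:

```python
# Little's-law upper bound on sustained throughput for a bursting master.
# All numbers are illustrative assumptions, not measurements.

def max_throughput_gbps(outstanding, burst_len, width_bits, latency_s):
    """Bound: bits in flight (outstanding bursts) per round-trip latency."""
    bits_in_flight = outstanding * burst_len * width_bits
    return bits_in_flight / latency_s / 1e9

# Example: burst length 16 at 256 bit, assumed 1 us round-trip latency.
for outstanding in (1, 4, 16, 64):
    gbps = max_throughput_gbps(outstanding, 16, 256, 1e-6)
    print(f"outstanding={outstanding:2d}: <= {gbps:.1f} Gbps")
```

Under these assumed numbers, a single outstanding burst caps the link around 4 Gbps regardless of raw bus width, which is one plausible mechanism for a ~3 Gbps plateau when the interconnect serialises transactions.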