PCIe Hard IP - Can 'valid' De-assert Between SOP and EOP During DMA Read Completion?
Product / IP: Intel PCIe Hard IP (Avalon-ST interface)
Device Family: Cyclone 10 GX
Reference Manual: https://docs.altera.com/r/docs/683647/18.0/arria-10-and-cyclone-10-gx-avalon-streaming-interface-for-pci-express-user-guide/datasheet

I am an FPGA design engineer verifying the application-layer logic that interfaces with Intel's PCIe Hard IP over the Avalon Streaming (Avalon-ST) interface. As part of the verification effort, I have developed an Avalon-ST Bus Functional Model (BFM) that mimics the RX-side behavior of the PCIe Hard IP, specifically how it presents TLP data (DMA Read Completions) to the downstream application logic.

During simulation, my BFM generates scenarios where the 'valid' signal is de-asserted between 'startofpacket' (SOP) and 'endofpacket' (EOP), i.e. mid-packet gaps or "bubbles" are introduced within a single TLP transfer. When this occurs, the application-layer logic does not handle it correctly, causing functional failures. Before fixing the application logic to handle this case, I need to confirm from Intel whether this behavior is actually possible on the real PCIe Hard IP hardware.

Question 1: On the RX Avalon-ST interface, when the PCIe Hard IP is acting as the SOURCE (driving TLP completion data toward user application logic), is it possible for the 'valid' signal to be de-asserted between SOP and EOP within the same TLP packet?
Question 2: If yes, under what conditions can this occur?
Question 3: Does the Intel PCIe Hard IP Reference Manual explicitly document this behavior anywhere? If so, could you point to the relevant section?

MCDMA IP D2H Queue Reset Failure during Channel Re-allocation
After successfully loading the binary and launching D2H (Device-to-Host) operations on my FPGA, the first channel is allocated successfully and transfers data without issue. However, when I try to allocate another channel, it fails with a "queue reset failed" error. Initially, during the initialization phase, all 512 channels were checked and appeared available.

Environment:
- Device Family: Agilex 7
- Quartus Version: 24.1
- Driver/Software: VFIO-based
- IP Config: MCDMA configured with 512 channels

Why is this happening, and how can it be rectified?

Agilex 7 R-Tile CXL IP: D2H write bandwidth does not scale with dual CAFU AXI-MM ports
Device: Agilex 7 I-Series AGI027
Software: Quartus Prime Pro 24.3
IP Core: CXL Type 2 IP

Issue Description:
We are attempting to increase CXL Device-to-Host (D2H) write bandwidth by utilizing both CAFU AXI-MM ports (port 0 and port 1) provided in the CXL Type 2 IP design example. However, our measurements show that enabling both AXI ports does not improve bandwidth as expected. For Non-cacheable writes, bandwidth remains unchanged when moving from one port to two ports. For Cacheable Owned writes, bandwidth decreases when using two ports. Please refer to the figures below for detailed results.

We are using the design example configured with two DCOH slices. To avoid potential DCOH contention, we've implemented address interleaving such that:
- AXI port 0 only accesses addresses corresponding to "even number × 64B"
- AXI port 1 only accesses addresses corresponding to "odd number × 64B"

Despite this, no bandwidth improvement is observed for either Non-cacheable or Cacheable Owned traffic. Additionally, the Non-cacheable bandwidth curve remains almost identical regardless of whether one or both AXI ports are used. This suggests that the exercised hardware path may contain a bottleneck or contention point within the CXL Type 2 IP (in either the soft or hard part). We would like to understand how to resolve this bandwidth limitation. If it cannot be improved, we would appreciate clarification on the underlying cause of this behavior. Thank you for your time and support.

Agilex-7 AXI MCDMA for PCIe hang
Hi! I'm working with the AGIB023R18A1E1VC device and having issues with the AXI Multi Channel DMA IP for PCI Express. Since I require a PCIe bridge, I configured the IP in MCDMA+BAS+BAM mode (PCIe Gen 4, 512-bit), generated an example design, and integrated the subsystem into my project. Although I do not use the MCDMA function itself, I rely heavily on the BAS and BAM functionality. The issue I'm seeing is that writing more than 448 bytes to the BAS causes the host to hang and subsequently reset. Notably, between the write transaction and the host reset, the FPGA internal logic is still able to write to the BAS, indicating no hang on the AXI bus. There are no issues with read transactions. We observe this issue on more than one card. We first ran into it in Quartus 25.1, and it is still present in 25.3.1. If you need captures from Signal Tap or any additional details, I can provide them. Thank you in advance! Mikhail.

Cyclone IV GX project failed migration from 20.1 to 23.1std
I am currently facing an issue with the conversion of an Intel IP while migrating from Quartus version 20.1 to version 23.1std. Despite following references {1}, {2}, and {3}, I have encountered the following error:

Error: altgx_internal: error renaming "C:/intelfpga_lite/23.1std/quartus/../ip/altera/altera_pcie/altera_pcie_avmm/lib/altpcie_serdes_3cgx_x1d_gen1_08p.v.new" to "C:/intelfpga_lite/23.1std/quartus/../ip/altera/altera_pcie/altera_pcie_avmm/lib/altpcie_serdes_3cgx_x1d_gen1_08p.v": best guess at reason: permission denied
    while executing
"file rename -force $temp $filename "
    (procedure "my_generation_callback" line 91)
    invoked from within
"my_generation_callback"
Error: altgx_internal: Generation callback did not provide a top level file (expected `add_file $output_dir/system_plus_pcie_altgx_internal.v|vhd|sv {SIMULATION SYNTHESIS}`)
Error: Generation stopped, 636 or more modules remaining
Error: qsys-generate failed with exit code 1: 3 Errors, 48 Warnings

I have attempted various solutions, but none seem to resolve the issue. Your insights and guidance on how to address this problem would be highly appreciated.

{1} https://community.intel.com/t5/Intel-Quartus-Prime-Software/Cyclone-IV-GX-project-failed-migration-from-18-1-to-20-1/m-p/1496532
{2} https://www.terasic.com.tw/wiki/Fix_Known_Issues_Quartus_QSYS_regenerating_error
{3} https://community.intel.com/t5/FPGA-Intellectual-Property/Using-Quartus-19-1-and-Platform-Designer-on-Windows-I-get-an/td-p/668852

PCIe Enumeration Failure for CXL IP
When attempting to validate the Agilex 7 R-Tile Compute Express Link (CXL) 1.1/2.0 IP (Type 2 and Type 3) using a CXL-compatible host server, the host server stalls while attempting to complete PCIe bus enumeration. The stall never resolves after boot, and access to the host is never granted. A depiction of the stall and its status code from the host server's perspective is provided as an attached PNG file titled "pcie_enumeration_stall".

Debugging Information:
- A PCIe Gen 5.0 reference design using the Altera R-Tile Avalon Streaming IP for PCI Express was used to validate that PCIe enumeration can complete fully without failure and that the host server can exchange data with the FPGA.
- While running the CXL example design, the Quartus System Console's Link Logger indicates that the LTSSM is in the "UP_L0" state before the PCIe bus enumeration stall. The state may sometimes change when attempting to "Refresh" the status during the stall, briefly entering recovery (UP_L0 -> REC_IDLE -> REC_RCVRCFG -> REC_RCSVLOCK -> REC_COMPLETE -> UP_L0). A depiction of the Link Logger when this occurs is provided as an attached PNG file titled "ltssm_link_logger".
- The Link Logger also indicates that the advertised and negotiated link speed and width are 32.0 GT/s and x16. A depiction of a CXL Type 3 System Console overview is provided as an attached PNG file titled "cxl_ip_systemconsole_overview".
- Instead of generating the example design, the pre-compiled binary offered by Altera for Type 2 and Type 3 CXL IP designs was used, and it resulted in exactly the same failures as above.
- The CXL.mem transaction registers (M2S and S2M) read 0x00, indicating that the host server never progresses far enough to begin sending/receiving transactions/requests.
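To quantify how often the link bounces through recovery, the state sequence observed in the Link Logger can be post-processed offline. This is a generic sketch over a hand-copied list of state names (the example list is illustrative, built from the transition quoted above); it is not a feature of the System Console itself:

```python
def count_recovery_excursions(states):
    """Count contiguous runs of REC_* states in a logged LTSSM sequence."""
    runs = 0
    in_rec = False
    for s in states:
        rec = s.startswith("REC_")
        if rec and not in_rec:
            runs += 1  # entering a new recovery excursion
        in_rec = rec
    return runs

# State names as observed in the Link Logger (illustrative log):
log = ["UP_L0", "REC_IDLE", "REC_RCVRCFG", "REC_RCSVLOCK",
       "REC_COMPLETE", "UP_L0", "UP_L0", "REC_IDLE", "UP_L0"]
print(count_recovery_excursions(log))  # -> 2
```

A healthy idle link should show zero or very few excursions; a count that grows steadily while the host is stalled points at an electrical or clocking issue rather than at the enumeration logic.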
- Between the PCIe build that functions and the CXL build that does not (stalls at enumeration), no server UEFI settings were changed. A CXL enable function was enabled for all tests.
- Several PCIe UEFI settings were changed in an attempt to resolve the enumeration stall, but none changed the outcome. Disabling CXL Compliance 2.0 and the HDM decoder registers also did not resolve the issue.
- The FPGA was powered and programmed before the server was launched.
- Two different CXL servers were tested and both showed the same behavior. The relevant PCIe and CXL settings from BIOS are provided as an attached PNG file titled "cxl_server_settings".
- The CXL REFCLK was tested as both COMMON and SRIS/SRNS. Each test changed SW3 to select the relevant onboard or connector-based clocks.

IP Settings: The CXL IP settings are uploaded as PNG files titled "cxl_ip_settings_N". The settings tested are the default provided settings as well as a version with a 300 MHz PLD clock (SRIS).

Hardware Details:
- The FPGA is connected to the host server via a PCIe Gen 5.0 x16 slot on Tile 14C.
- The FPGA is the Altera Agilex 7 FPGA I-Series Development Kit (Production, 2x R-Tile & 1x F-Tile, AGIB027R29A1E1VB).
- The DIMM provided with the development kit is slotted into DIMM Slot A.
- SW1 is set to 1000 (PCIe PRSNT x16). SW3 is set to 0110 for designs using the CXL/PCIe common clock and 0000 for designs using the CXL/PCIe onboard REFCLK (SRIS).

Software Details:
- Quartus Prime Pro Edition v25.1 was used to generate the designs.
- The R-Tile Altera FPGA IP for Compute Express Link (CXL) was generated with version 1.17.0.

FPGA Design: The FPGA design is the generated example design with the IP settings given above. A pre-compiled binary provided by Altera was also tested in place of a generated design.
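To avoid mixing up the board straps between the clocking experiments above, the SW3 values can be kept in one small table. The values are copied from this post (Agilex 7 I-Series devkit) and should be verified against the development kit user guide before being relied on:

```python
# SW3 strap per CXL/PCIe REFCLK architecture, as listed in this post.
SW3_FOR_CLOCKING = {
    "common": "0110",  # CXL/PCIe common REFCLK
    "sris":   "0000",  # onboard REFCLK (SRIS/SRNS)
}

def expected_sw3(mode):
    """Return the SW3 strap for a clocking mode, rejecting unknown modes."""
    try:
        return SW3_FOR_CLOCKING[mode]
    except KeyError:
        raise ValueError(f"unknown clocking mode: {mode!r}") from None

print(expected_sw3("common"))  # -> 0110
```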
Server details:
- SMC AS-1126HS-TN (CXL 2.0 via 4x PCIe Gen5 x16 slots)
- CPU: 2x AMD EPYC 9135 (CXL 2.0)
- RAM: 4x Micron 64GB @ 6000 MT/s
- UEFI: AMI 1.7a 10/30/2025

Attachments: The System Console debug register outputs are saved to CSV files attached to this post. These CSV files are taken from a CXL Type 3 reference design with the PLD REFCLK at 300 MHz (SRIS).

Questions:
- Can you provide guidance on how to obtain more information on the enumeration status other than the LTSSM register?
- Can you provide, as a reference, the UEFI/BIOS settings for PCIe/CXL that were used to test this IP?
- Could the configuration space registers (DVSEC/HDM) or the TLP handling implemented in the CXL example design RTL create this PCIe enumeration failure?
- Can you provide guidance on which debug/status registers the CXL IP provides that could be relevant to this issue?

Avalon-MM Cyclone V Hard IP for PCI Express Intel FPGA IP Soft Reset and Hard Reset
Dear Intel,

Based on the forum info and the datasheet, it is allowed to use a soft reset controller (SRC) rather than a hard reset controller (HRC). To do so, changing <parameter name="force_src" value="0" /> to <parameter name="force_src" value="1" /> should basically turn the HRC into an SRC. However, during actual system testing, the SRC design gets stuck with the driver loaded, while the HRC design does not.

Given the above background:
1: Is the SRC allowed for a Gen1 PCIe link?
2: How should the reset signal be properly driven? A Verilog example would be helpful.

Thanks,
Brian

Cyclone 10GX PCIe / Raspberry Pi
Hi,

We have a PCIe controller using the Cyclone 10 GX with the PCIe Hard IP. It works when connected to a Windows system, but it isn't detected when connected to a Raspberry Pi 5 or CM5 system. On the Pi, I see that the LTSSM transitions from Detect.Active to Polling.Active to Polling.Compliance and then to link down. I think this suggests that the Pi isn't detecting any device on the other end. I tried ruling out power-on sequencing by hooking up an external power supply to the PCIe card, but it didn't help. Any guidance would be much appreciated. Thanks!

Agilex 7 Platform Designer PIO addr width
I am using the PIO example design of the P-Tile AVST PCIe IP on Intel Agilex 7. The original design maps a 16 KB RAM to BAR0. Since my design requires a larger address space, I modified the PCIe IP's BAR configuration to increase the BAR0 size to 2 MB. However, I soon discovered that in the "PIO Design Example for PCI Express Gen4," the module converting AVST to AVMM only supports a 16 KB address space after conversion. I then replaced it with the "PCIe PIO" IP core, but found that it can only access a 1 MB region. What should I do?
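The sizes in the question line up with address-width limits: the AVST-to-AVMM converter's byte-address width must cover the whole BAR, and a 16 KB limit corresponds to 14 address bits while 1 MB corresponds to 20. A quick sketch of that relationship (this only computes the minimum width; whether a given converter exposes it as a parameter is a separate question for the IP documentation):

```python
def avmm_addr_width(bar_size_bytes):
    """Minimum byte-address width needed to span a power-of-two BAR."""
    assert bar_size_bytes & (bar_size_bytes - 1) == 0, "BAR sizes are powers of two"
    return bar_size_bytes.bit_length() - 1  # log2 of the BAR size

print(avmm_addr_width(16 * 1024))        # 16 KB BAR -> 14
print(avmm_addr_width(2 * 1024 * 1024))  # 2 MB BAR  -> 21
```

So a 2 MB BAR0 needs a 21-bit AVMM address path end to end; any component fixed at 14 or 20 bits will alias or truncate accesses above its limit.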