Recent Content
What Agilex® 7 FPGA configuration scheme should be used to ensure that the PCI Express* link active time of 120 ms is met?
Description To meet the PCIe* specification requirement of 120 ms, the PCIe* REFCLK must be running before the device is configured: specify the OSC_CLK_1 pin as 25 MHz, 100 MHz, or 125 MHz, and use the AS x4 fast mode configuration scheme with the AS_CLK clock set to 166 MHz.
Note: For PCIe designs that include Configuration via Protocol (CvP), Altera recommends using Micron* QSPI flash so the initial configuration firmware loads fast enough to meet the PCIe wake-up time for host enumeration. This is because the boot ROM reads the initial configuration firmware in x4 mode from Micron QSPI flash; from a non-Micron flash, it reads the firmware in x1 mode. If you need to use non-Micron QSPI flash in a PCIe design, Altera recommends asserting the PERST# signal low for a minimum of 200 ms from FPGA POR to ensure the PCIe endpoint enters the link training state before PERST# is deasserted. This should be considered for closed systems only.
Resolution This information has been scheduled for inclusion in the next release of the Agilex® PCI Express* IP user guides.
Related IP Cores: F-Tile Avalon® Streaming IP for PCI Express*, Multi Channel DMA FPGA IP for PCI Express*, P-Tile Avalon® Streaming IP for PCI Express*, R-Tile Avalon® Streaming IP for PCI Express*, AXI Streaming IP for PCI Express*, AXI Multichannel DMA IP for PCIe*
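The scheme above is normally captured in the project .qsf file. A minimal sketch follows; the assignment names and values are assumptions based on typical Agilex/Stratix 10 project files, so confirm the exact names in your Quartus release's Device and Pin Options dialog and the Agilex 7 Configuration User Guide before relying on them.

```tcl
# Hypothetical .qsf fragment for the scheme described above.
# Assignment names and values are ASSUMPTIONS; verify in Device and Pin
# Options (Assignments > Device) for your Quartus Prime Pro version.
set_global_assignment -name STRATIXV_CONFIGURATION_SCHEME "ACTIVE SERIAL X4"
set_global_assignment -name ACTIVE_SERIAL_CLOCK AS_FREQ_166MHZ
set_global_assignment -name DEVICE_INITIALIZATION_CLOCK OSC_CLK_1_125MHZ
```

Setting these through the Device and Pin Options GUI and then inspecting the generated .qsf is the safest way to get the exact assignment names for your release.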
How to tell Quartus my Arria 10 target system CLKUSR frequency is 100 MHz?
The Timing Analyzer reports CLKUSR at 125 MHz, but the target system has 100 MHz on this pin to accommodate AS configuration in combination with transceiver calibration, as per the manuals. There don't seem to be any consequences, as the project appears to run fine, but it is a bit unsettling.
Why are PCI Express RX TLP packets lost when using the AXI Streaming IP for PCI Express* at lower rates (Gen1/2/3)?
Description Due to a problem in version 25.1 and earlier of the AXI Streaming IP for PCI Express*, RX TLP packets can be lost if the IP is generated for Gen5 data rates but actually operates at lower rates (Gen1/2/3), and back-to-back small RX TLPs are received such that TLPs appear in each segment. The R-Tile Avalon® Streaming IP for PCI Express* is not affected by this problem.
Resolution No workaround exists in the 25.1 and earlier releases of the AXI Streaming IP for PCI Express*. The problem has been fixed starting with Quartus® Prime Pro Edition software version 25.1.1.
Agilex 7 Decoupling capacitor scaling factor
Hi! I have my power estimate from Quartus Prime based on our requirement, which is the board's power consumption. Now, to find the scaling factor, I need to know the maximum power per power rail. Where would I get this value? Is it related to the FPGA part (AGFB027R24C2I2V) or to the max current rating of my converters? Thanks, Vigneswaran
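For reference, the scaling-factor arithmetic itself is just a per-rail ratio: the rail's estimated power from the Quartus estimate divided by the maximum rail power the guideline decoupling network was sized for (the latter typically comes from the device's power delivery guidelines, not the converters; confirm for your part). A minimal sketch, with placeholder numbers rather than AGFB027R24C2I2V data:

```python
# Sketch of the per-rail decoupling-capacitor scaling factor: the ratio
# of the rail's estimated power draw to the maximum power the guideline
# capacitor network was sized for. Numbers below are placeholders.

def decap_scaling_factor(estimated_rail_watts: float, max_rail_watts: float) -> float:
    """Ratio used to scale the recommended decoupling capacitor counts."""
    if max_rail_watts <= 0:
        raise ValueError("max rail power must be positive")
    # Never scale above 1.0: the full guideline network is the upper bound.
    return min(1.0, estimated_rail_watts / max_rail_watts)

# Placeholder example: a rail estimated at 18 W against a guideline
# maximum of 60 W keeps roughly 30% of the recommended capacitors.
print(decap_scaling_factor(18.0, 60.0))  # 0.3
```

The clamp at 1.0 reflects that exceeding the guideline maximum is a sign to revisit the power estimate, not to add capacitors beyond the reference network.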
LLM Implementation on Agilex 5 E-Series 065B Modular Dev Kit
I am currently working on deploying Large Language Model (LLM) inference using the FPGA AI Suite on the Agilex 5 E-Series 065B Modular Development Kit. I have two clear and specific questions:
1. Is the Agilex 5 E-Series 065B officially supported for LLM / Transformer inference with the FPGA AI Suite?
2. Is the following workflow officially supported for LLM inference on this board?
Step 1: Export a pre-trained LLM from Hugging Face to OpenVINO IR format using optimum-intel.
Step 2: Generate the target FPGA architecture file using architecture_optimizer for Agilex 5.
Step 3: Compile the OpenVINO IR model for the FPGA using dla_compiler (Sequential flow) or the Spatial Compiler (Spatial flow).
Step 4: Integrate the generated FPGA AI Suite IP into a Quartus Prime project, generate the bitstream, and program it onto the Agilex 5 E-Series 065B board.
Step 5: Run inference using the FPGA AI Suite runtime (host application).
I understand this may not be a push-button process and could require significant modifications to the generated RTL, but is this workflow still considered a viable starting point for implementing LLM / Transformer inference on the Agilex 5 E-Series 065B? Thank you.
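The five steps can be captured as a dry-run plan. The script below only prints the intended command sequence; the tool names (optimum-intel's optimum-cli, architecture_optimizer, dla_compiler, quartus_sh) come from the post itself, but every flag and angle-bracket argument is a placeholder assumption, not a documented option, and will vary by FPGA AI Suite release.

```python
# Dry-run sketch of the workflow above: prints the planned command
# sequence without executing anything. All <angle-bracket> arguments are
# placeholders; exact flags are assumptions that vary by release.
PLAN = [
    "optimum-cli export openvino --model <hf-model-id> <ir_dir>",        # Step 1
    "architecture_optimizer <agilex5-target-and-search-options>",        # Step 2
    "dla_compiler <ir_dir>/openvino_model.xml <arch-file> <flow-opts>",  # Step 3
    "quartus_sh --flow compile <project> -c <revision>",                 # Step 4
    "<host-runtime-app> <compiled-model-and-device-args>",               # Step 5
]
for step, cmd in enumerate(PLAN, start=1):
    print(f"step {step}: {cmd}")
```

Walking the flow once in dry-run form like this makes it easy to diff against the documented invocations for whichever FPGA AI Suite release is actually installed.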
Need to delete the 3 rehosts (wrongly generated)
Hello, I just started learning to use Questa Sim last week, and I applied for a license on the SSLC site. But I chose the wrong license type three times, so I have no rehosts left (none of the three licenses worked with my PC). The information used: (deleted). According to the notices on the SSLC site, I do not have the rights to delete those rehosts myself. So, could you please help me delete them so that I can start over? Waiting for your help, and thank you very much! Best wishes
Transceiver Enhanced PCS Basic mode questions
Hello guys, we used Arria GX and Stratix IV GX devices before; now we are switching to the Cyclone 10 GX FPGA. I have several simple questions about the XCVR control ports/signals. Why can't I see the rx_control and tx_control ports when I turn "Enable simplified data interface" on? In Basic or Custom mode, 1 control bit corresponds to 8 parallel data bits. From the XCVR user guide, it seems that only the 2 LSBs of the control bus are used to indicate a data word versus a control word?
retiming issue
We are developing a project with an Agilex 7 FPGA using Quartus Prime Pro 25.3. We are now fixing timing by analyzing the retiming report, and we see the retiming restriction below. However, the register GEN_REG_INPUT.R_data[0][0] is directly driven by a Quartus Hyper-Register, and it has no power-up value and no sync reset. The retiming critical chain is as follows. Every element up to the last three shares the prefix x_blk_pe|x_pe|x_pe_top_no_hmc|x_pe_pkg_rst_f|GEN_REG_INPUT.R_data[0][0]; only the suffix is listed. Rows are given as "Path Info; [Register]; [Register ID]; Element":
Critical Chain Details
- Power-up Restriction; ALM Register; #1; (the register itself)
- Long Path (Critical); |q
- Long Path (Critical); ~LAB_RE_X221_Y195_N0_I99
- Long Path (Critical); ~R1_X221_Y195_N0_I10
- Long Path (Critical); Bypassed Hyper-Register; ~C4_X220_Y191_N0_I11
- Long Path (Critical); ~LOCAL_INTERCONNECT_X220_Y191_N0_I26
- Long Path (Critical); Bypassed Hyper-Register; ~BLOCK_INPUT_MUX_PASSTHROUGH_X220_Y191_N0_I35
- Long Path (Critical); ~LAB_RE_X220_Y191_N0_I39
- Long Path (Critical); ~xsyn|dataf
- Long Path (Critical); ~xsyn|combout
- Long Path (Critical); ~xsyn~cw_la_lab/lab_lut6outt[4]
- Long Path (Critical); ~xsyn~_LAB_RE_X220_Y191_N0_I138
- Long Path (Critical); ~xsyn~_C1_X220_Y190_N0_I17
- Long Path (Critical); ~xsyn~_R1_X220_Y190_N0_I16
- Long Path (Critical); Bypassed Hyper-Register; ~xsyn~_R12_X208_Y190_N0_I2
- Long Path (Critical); Bypassed Hyper-Register; ~xsyn~_R12_X196_Y190_N0_I2
- Long Path (Critical); Bypassed Hyper-Register; ~xsyn~_C8_X195_Y182_N0_I0
- Long Path (Critical); Bypassed Hyper-Register; ~xsyn~_C8_X195_Y174_N0_I0
- Long Path (Critical); Bypassed Hyper-Register; ~xsyn~_R12_X184_Y174_N0_I0
- Long Path (Critical); Bypassed Hyper-Register; ~xsyn~_R12_X172_Y174_N0_I0
- Long Path (Critical); Bypassed Hyper-Register; ~xsyn~_R12_X160_Y174_N0_I0
- Long Path (Critical); Bypassed Hyper-Register; ~xsyn~_R12_X148_Y174_N0_I0
- Long Path (Critical); Bypassed Hyper-Register; ~xsyn~_R12_X136_Y174_N0_I0
- Long Path (Critical); Bypassed Hyper-Register; ~xsyn~_C8_X135_Y166_N0_I0
- Long Path (Critical); Bypassed Hyper-Register; ~xsyn~_C8_X135_Y158_N0_I0
- Long Path (Critical); Bypassed Hyper-Register; ~xsyn~_C8_X135_Y150_N0_I0
- Long Path (Critical); Bypassed Hyper-Register; ~xsyn~_C8_X135_Y142_N0_I0
- Long Path (Critical); Bypassed Hyper-Register; ~xsyn~_C8_X135_Y134_N0_I0
- Long Path (Critical); Bypassed Hyper-Register; ~xsyn~_R12_X124_Y134_N0_I0
- Long Path (Critical); Bypassed Hyper-Register; ~xsyn~_C8_X123_Y126_N0_I0
- Long Path (Critical); Bypassed Hyper-Register; ~xsyn~_C8_X123_Y118_N0_I0
- Long Path (Critical); ~xsyn~_R0_X123_Y118_N0_I2
- Long Path (Critical); ~xsyn~_R2_X124_Y118_N0_I15
- Long Path (Critical); ~xsyn~_R1_X126_Y118_N0_I31
- Long Path (Critical); ~xsyn~_C1_X126_Y118_N0_I30
- Long Path (Critical); ~xsyn~_C1_X126_Y119_N0_I30
- Long Path (Critical); ~xsyn~_LOCAL_INTERCONNECT_X126_Y120_N0_I61
- Long Path (Critical); ~xsyn~_BLOCK_INPUT_MUX_PASSTHROUGH_X126_Y120_N0_I80
- Long Path (Critical); ~xsyn~_LAB_RE_X126_Y120_N0_I82
- Long Path (Critical); Bypassed Hyper-Register; x_blk_pe|x_pe|x_pe_top_no_hmc|x_pe_pkg|x_pe_pkg_pkt_pack|pack_data_padding_cross~xtophalf/xale2/xcw_ml_le_regctrl/xcw_la_le_regctrl_reg/sclr_out
- Long Path (Critical); x_blk_pe|x_pe|x_pe_top_no_hmc|x_pe_pkg|x_pe_pkg_pkt_pack|x_pack_data_shifter|bcnt[4]|sclr
- Long Path (Critical); ALM Register; #2; x_blk_pe|x_pe|x_pe_top_no_hmc|x_pe_pkg|x_pe_pkg_pkt_pack|x_pack_data_shifter|bcnt[4]
Why does Quartus report this retiming restriction?
Recent Blogs
Altera introduced three new Agilex® 7 M-Series FPGA package options, R31G, R47C, and R47D, to give customers more flexibility in balancing bandwidth, connectivity, and performance for AI, networking, video, embedded, and acceleration applications. The new options support DDR5-6400, up to 204.8 GB/s memory bandwidth, and expanded transceiver configurations, enabling designers to optimize systems for PCIe connectivity, maximum data throughput, or efficient right-sized scaling.
13 hours ago
This post demonstrates a 200G-4 Ethernet link running on Agilex 7 FPGAs using F-Tile transceivers. It walks through the full link bring-up process, including Auto-Negotiation and Link Training, followed by stable high-speed data transmission using 53.125G PAM4 lanes over QSFP-DD. The demo provides real-time visibility into link status, signal integrity, and error metrics, and evaluates performance across loopback configurations and varying cable lengths. The result highlights reliable link initialization, consistent throughput, and robust operation under practical system conditions.
13 days ago
Edge AI needs to meet stringent size, weight, and power requirements while also satisfying the deterministic latency behind the high levels of safety needed by physical AI systems that interact with humans and machines alike. Validating accuracy and flexibly placing highly optimized inference pipelines requires multiple approaches. Altera addresses this need with its latest release of FPGA AI Suite software (2026.1.1), which introduces a spatial compilation mode. This mode generates dedicated per-model RTL through which the AI inference input streams, lowering latency and improving safety across the physical AI chain: sense-think-act.
Release 2026.1.1 also unlocks further improvements:
- Scales the AI IP to handle more complex and larger sequential-mode inferencing (up to 500k ALMs) for overlay instances.
- Increases memory bandwidth by utilizing multiple memory interfaces.
- Makes it easier to explore design options with architecture optimizer support for two new modes: multi-lane (higher performance) and DDR-free (lower latency by keeping all memory use within the FPGA).
Reminder: Sequential mode is where a fixed overlay with precompiled microcode sequentially processes the inference model layers.
Other key highlights in software-based model evaluation include multi-core software emulation and RTL simulation for both IP types. Models targeting the FPGA's Arm Hard Processor System (HPS)-based host can now be compiled directly from an x86 machine via a Docker-based Arm emulator, with no physical Arm processor required. This further accelerates time to market with pre-silicon validation earlier in the design cycle.
Spatial IP Compiler: AI Inference for the Physical AI Era
Prior FPGA AI Suite releases compiled models into sequential IP: a configurable overlay architecture, analogous to a soft processor, where control logic orchestrates a parameterized datapath through microcode delivered via a configuration network. The overlay is flexible: one bitstream can run different models by loading new microcode and weights. This generality carries overhead; the microcode control layer, configuration decoding, and runtime scheduling all consume FPGA resources and add latency that a fixed-function design would not need.
The spatial compiler takes a different approach. Instead of programming a general-purpose overlay, it generates dedicated RTL in which individual model layers map to optimized library blocks and inter-layer connections become physical communication channels in the FPGA's logic fabric. There is no microcode and no overlay control layer. For suitable workloads, especially smaller networks, this yields higher throughput at lower power with deterministic per-layer latency. For example, an internal MLP benchmark (two fully connected layers with batch normalization, tanh on the hidden layer, linear on the output, 8,000 trainable parameters, 52 neurons in total) shows the advantage of spatial compilation: targeting spatial yields an FPGA IP using 6K ALMs operating at 3.09 million inferences per second, with 28x lower latency, versus 28K ALMs running at 0.11 million inferences per second when targeting sequential.
More new features:
- Weights can be embedded in the FPGA's logic fabric as part of the configuration bitstream via .mif initialization for lower-latency operation, or streamed to the IP at runtime if model switching is required.
- A hostless, DDR-free design example on Altera's Agilex® 5 E-Series Modular Development Kit demonstrates the full flow from compilation through JTAG-based inference on hardware, with bit-accurate simulation for pre-silicon validation.
Architecture Optimizer: Multi-Lane and DDR-Free Search
Two deployment modes that previously required manual configuration are now searchable by the optimizer. Multi-lane execution and DDR-free architectures (all weights stored in on-chip M20K FPGA memory blocks) can now be swept automatically alongside other parameters, eliminating manual architecture exploration for these modes.
Performance: 500k ALMs, Multi-External Memory, Burst Optimization
FPGA AI Suite IP is now validated to 500,000 Adaptive Logic Modules (ALMs), up from 225k, unlocking larger Agilex 7 and Stratix 10 devices for maximum-throughput overlay configurations. Multi-external-memory interface support lets a single FPGA AI Suite IP instance use two or more memory interfaces for higher aggregate DDR bandwidth. AXI burst size optimization improves effective throughput when multiple IP modules share memory, reducing latency and power with no RTL changes.
Emulation, Simulation, and Arm Cross-Compilation
Multi-core software emulation parallelizes the bit-accurate emulation kernel across CPU cores, making regression testing and quantization sweeps practical before hardware is available. The emulation model remains bit-exact with hardware output. RTL simulation now supports both sequential and spatial IP via Questa*-Altera FPGA Edition and VCS simulation software, enabling pre-silicon verification for both architecture types. A new --arm compiler flag enables Arm HPS model compilation from x86 via a Docker-based Arm emulator. This targets SoC deployments where subgraph layers execute on the Arm CPU; no physical Arm hardware or Yocto cross-compilation is required.
Try It Now
Once downloaded, no license or purchase is required for up to 100,000 consecutive inferences. Download the software or browse the handbook at FPGA AI Suite - AI Inference Development Platform | Altera
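The sequential-versus-spatial MLP benchmark described in this post is small enough to sketch directly: a minimal NumPy model with the same layer pattern (fully connected, inference-mode batch norm, tanh hidden layer, linear output). The layer sizes below are illustrative assumptions, not the benchmark's actual dimensions.

```python
# NumPy sketch of a network shaped like the MLP benchmark described
# above: FC -> batch norm (inference form) -> tanh -> FC (linear out).
# Layer sizes are illustrative, not the benchmark's actual dimensions.
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, w1, b1, gamma, beta, mean, var, w2, b2, eps=1e-5):
    h = x @ w1 + b1                                      # fully connected 1
    h = gamma * (h - mean) / np.sqrt(var + eps) + beta   # batch norm (inference)
    h = np.tanh(h)                                       # hidden activation
    return h @ w2 + b2                                   # linear output

d_in, d_hidden, d_out = 16, 48, 4                        # illustrative sizes
w1 = rng.standard_normal((d_in, d_hidden)); b1 = np.zeros(d_hidden)
w2 = rng.standard_normal((d_hidden, d_out)); b2 = np.zeros(d_out)
gamma = np.ones(d_hidden); beta = np.zeros(d_hidden)
mean = np.zeros(d_hidden); var = np.ones(d_hidden)

y = mlp(rng.standard_normal((8, d_in)), w1, b1, gamma, beta, mean, var, w2, b2)
print(y.shape)  # (8, 4)
```

A fixed feed-forward graph like this, with no data-dependent control flow, is exactly the kind of workload where each layer can become a dedicated pipeline stage in the fabric rather than a pass through a shared overlay datapath.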
13 days ago
This post explains how the definition of mid-range FPGAs has evolved from logic density to system-level capability. It highlights how Agilex 5 FPGAs address modern embedded and edge requirements by integrating compute, AI acceleration, memory, connectivity, and security into a single platform. The article also covers how Agilex 5 D-Series extends mid-range performance with higher logic density, increased bandwidth, and enhanced AI capabilities, enabling more complex and data-intensive workloads while maintaining efficiency and design simplicity.
14 days ago
This post explores a practical shift from a fixed-function ASSP to a programmable FPGA platform in response to evolving system requirements. As bandwidth demands, protocol diversity, and feature complexity increased, limitations in a 400G optical transport ASSP and uncertainty in vendor roadmap made continued reliance difficult. The team transitioned to an FPGA-based approach, enabling customization of protocols and features while aligning the system more closely with real usage needs. The article also highlights benefits such as design reuse, reduced hardware variants, simplified inventory management, and greater control over long-term system evolution.
14 days ago