Welcome to the Altera Community!

Your space to explore our products, ask questions, and join popular conversations. We're glad you're here!

Community Resources

Check out the support articles on personalizing your community account, contributing to the community, and providing community feedback directly to the admin team!

Recent Blogs

This post demonstrates a 200G-4 Ethernet link running on Agilex 7 FPGAs using F-Tile transceivers. It walks through the full link bring-up process, including Auto-Negotiation and Link Training, followed by stable high-speed data transmission using 53.125G PAM4 lanes over QSFP-DD. The demo provides real-time visibility into link status, signal integrity, and error metrics, and evaluates performance across loopback configurations and varying cable lengths. The result highlights reliable link initialization, consistent throughput, and robust operation under practical system conditions.
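As a quick sanity check on the rates quoted above, the per-lane and aggregate numbers follow directly from PAM4 signaling and the standard 200GBASE-R encoding overheads (256b/257b transcoding plus RS(544,514) FEC, per IEEE 802.3; the post itself does not spell these out). A minimal sketch of the arithmetic:

```python
# 200G Ethernet over 4 lanes: each lane runs PAM4 (2 bits per symbol)
# at 26.5625 GBd, giving the 53.125 Gb/s per-lane rate stated in the post.
LANES = 4
BAUD_PER_LANE_GBD = 26.5625      # PAM4 symbol rate per lane
BITS_PER_SYMBOL = 2              # PAM4 encodes 2 bits per symbol

lane_rate = BAUD_PER_LANE_GBD * BITS_PER_SYMBOL   # 53.125 Gb/s per lane
line_rate = LANES * lane_rate                      # 212.5 Gb/s serialized

# The serialized line rate is the 200G MAC rate plus 256b/257b transcoding
# and RS(544,514) FEC overhead:
mac_rate_gbps = 200.0
expected = mac_rate_gbps * (257 / 256) * (544 / 514)  # also 212.5 Gb/s
print(lane_rate, line_rate, round(expected, 3))
```

The overhead factors cancel out exactly: 200 × (257/256) × (544/514) = 212.5, which is why the four 53.125G lanes carry precisely a 200G payload.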
17 hours ago
Edge AI must meet stringent size, weight, and power requirements while also delivering the deterministic latency that safety-critical physical AI systems, which interact with humans and machines alike, demand. Validating accuracy and flexibly placing highly optimized inference pipelines requires multiple approaches. Altera addresses this need with the latest release of its FPGA AI Suite software (2026.1.1), which introduces a spatial compilation mode. This mode generates dedicated per-model RTL through which the AI inference input streams, lowering latency and improving safety across the physical AI chain: sense-think-act.

Release 2026.1.1 also unlocks further improvements:

- Scales the AI IP to handle larger, more complex sequential-mode inferencing (up to 500k ALMs) for overlay instances
- Increases memory bandwidth by utilizing multiple memory interfaces
- Makes it easier to explore design options with architecture optimizer support for two new modes: multi-lane (higher performance) and DDR-free (lower latency by keeping all memory use within the FPGA)

Reminder: sequential mode is where a fixed overlay with precompiled microcode processes the inference model layers in sequence.

Other key highlights in software-based model evaluation include multi-core software emulation and RTL simulation for both IP types. Models targeting the FPGA's Arm Hard Processor System (HPS)-based host can now be compiled directly from an x86 machine via a Docker-based Arm emulator, with no physical Arm processor required. This accelerates time to market by enabling pre-silicon validation earlier in the design cycle.

Spatial IP Compiler: AI Inference for the Physical AI Era

Prior FPGA AI Suite releases compiled models into sequential IP: a configurable overlay architecture, analogous to a soft processor, in which control logic orchestrates a parameterized datapath through microcode delivered via a configuration network.
The overlay is flexible: one bitstream can run different models by loading new microcode and weights. That generality carries overhead, however: the microcode control layer, configuration decoding, and runtime scheduling all consume FPGA resources and add latency that a fixed-function design would not need.

The spatial compiler takes a different approach. Instead of programming a general-purpose overlay, it generates dedicated RTL in which individual model layers map to optimized library blocks and inter-layer connections become physical communication channels in the FPGA's logic fabric. There is no microcode and no overlay control layer. For suitable workloads, especially smaller networks, this yields higher throughput at lower power with deterministic per-layer latency. For example, an internal MLP benchmark (two fully connected layers with batch normalization, tanh on the hidden layer, linear on the output, 8,000 trainable parameters, 52 neurons in total) shows the advantage of spatial compilation: the spatial IP uses 6K ALMs and runs at 3.09 million inferences per second with 28x lower latency, versus 28K ALMs and 0.11 million inferences per second for the sequential IP.

More new features include:

- Weights can be embedded in the FPGA's logic fabric as part of the configuration bitstream via .mif initialization for lower-latency operation, or streamed to the IP at runtime if model switching is required.
- A hostless, DDR-free design example on Altera's Agilex® 5 E-Series Modular Development Kit demonstrates the full flow from compilation through JTAG-based inference on hardware, with bit-accurate simulation for pre-silicon validation.

Architecture Optimizer: Multi-Lane and DDR-Free Search

Two deployment modes that previously required manual configuration are now searchable by the optimizer.
Multi-lane execution and DDR-free architectures (all weights stored in on-chip M20K FPGA memory blocks) can now be swept automatically alongside other parameters, eliminating manual architecture exploration for these modes.

Performance: 500k ALMs, Multi-External Memory, Burst Optimization

FPGA AI Suite IP is now validated up to 500,000 Adaptive Logic Modules (ALMs), up from 225k, unlocking larger Agilex 7 and Stratix 10 devices for maximum-throughput overlay configurations. Multi-external-memory interface support lets a single FPGA AI Suite IP instance use two or more memory interfaces for higher aggregate DDR bandwidth. AXI burst-size optimization improves effective throughput when multiple IP modules share memory, reducing latency and power with no RTL changes.

Emulation, Simulation, and Arm Cross-Compilation

Multi-core software emulation parallelizes the bit-accurate emulation kernel across CPU cores, making regression testing and quantization sweeps practical before hardware is available; the emulation model remains bit-exact with hardware output. RTL simulation now supports both sequential and spatial IP via Questa*-Altera FPGA Edition and VCS simulation software, enabling pre-silicon verification for both architecture types. A new --arm compiler flag enables Arm HPS model compilation from x86 via a Docker-based Arm emulator. This targets SoC deployments where subgraph layers execute on the Arm CPU, with no physical Arm hardware or Yocto cross-compilation required.

Try It Now

Once downloaded, no license or purchase is required for up to 100,000 consecutive inferences. Download the software or browse the handbook at FPGA AI Suite - AI Inference Development Platform | Altera
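As a rough illustration of the MLP benchmark described in this post (two fully connected layers with batch normalization, tanh on the hidden layer, linear output, roughly 8,000 trainable parameters, 52 neurons), here is a minimal NumPy sketch. The post does not give layer widths, so the 160-wide input and the 48/4 hidden/output split are assumptions chosen to match the stated totals:

```python
import numpy as np

# Hypothetical dimensions chosen to match the post's stated totals:
# 48 hidden + 4 output = 52 neurons; parameter count lands near 8,000.
D_IN, D_HID, D_OUT = 160, 48, 4

rng = np.random.default_rng(0)
W1 = rng.standard_normal((D_IN, D_HID)); b1 = np.zeros(D_HID)
gamma = np.ones(D_HID); beta = np.zeros(D_HID)   # batch-norm scale/shift
W2 = rng.standard_normal((D_HID, D_OUT)); b2 = np.zeros(D_OUT)

def forward(x, eps=1e-5):
    """x: (batch, D_IN) -> (batch, D_OUT)"""
    h = x @ W1 + b1                                               # fully connected layer 1
    h = gamma * (h - h.mean(0)) / np.sqrt(h.var(0) + eps) + beta  # batch normalization
    h = np.tanh(h)                                                # tanh on the hidden layer
    return h @ W2 + b2                                            # linear output layer

params = sum(p.size for p in (W1, b1, gamma, beta, W2, b2))
print(params, "trainable parameters;", D_HID + D_OUT, "neurons")
```

With these assumed dimensions the model has 8,020 trainable parameters across 52 neurons, in line with the benchmark description; networks of this size are exactly the "smaller workloads" where the spatial compiler's fixed-function RTL pays off.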
19 hours ago
This post explains how the definition of mid-range FPGAs has evolved from logic density to system-level capability. It highlights how Agilex 5 FPGAs address modern embedded and edge requirements by integrating compute, AI acceleration, memory, connectivity, and security into a single platform. The article also covers how Agilex 5 D-Series extends mid-range performance with higher logic density, increased bandwidth, and enhanced AI capabilities, enabling more complex and data-intensive workloads while maintaining efficiency and design simplicity.
1 day ago
This post explores a practical shift from a fixed-function ASSP to a programmable FPGA platform in response to evolving system requirements. As bandwidth demands, protocol diversity, and feature complexity increased, limitations in a 400G optical transport ASSP and uncertainty in the vendor's roadmap made continued reliance difficult. The team transitioned to an FPGA-based approach, enabling customization of protocols and features while aligning the system more closely with real usage needs. The article also highlights benefits such as design reuse, reduced hardware variants, simplified inventory management, and greater control over long-term system evolution.
1 day ago
This post demonstrates how F-Tile Dynamic Reconfiguration in Agilex 7 FPGAs enables real-time switching between 400G and 4×100G Ethernet without system downtime. It explains how predefined configuration profiles, system-level data path reconfiguration (MAC, PCS, FEC, PMA), and software control enable predictable, production-ready transitions. The article also highlights support for multi-rate Ethernet, protocol flexibility, and continuous traffic validation, showing how FPGA-based systems can adapt dynamically to changing network conditions.
9 days ago