Addressing the Greatest Memory and Compute Challenges with Intel® Agilex™ M-Series FPGAs
We are enabling our customers to achieve higher performance with better power efficiency across all end markets and applications with our flagship Intel® Agilex™ FPGA family. We continue to build that momentum with the Intel® Agilex™ M-Series family variant.

Build Better 5G NR Radios with Intel® Agilex™ FPGAs
Intel provides silicon technologies and solutions that address every element in 5G O-RAN architecture, including O-RUs (open radio units), O-DUs (open distributed units), O-CUs (open central units), and O-DRUs (open distributed unit and radio unit combos).

Ayar Labs and Intel demo FPGA with optical transceivers in DARPA PIPES project: 2 Tbps now, >100 Tbps is the goal
The US Defense Advanced Research Projects Agency (DARPA) has announced that researchers from Intel and Ayar Labs have demonstrated early progress toward improving chip connectivity using photonic interconnect under DARPA’s Photonics in the Package for Extreme Scalability (PIPES) program, which is exploring ways to expand the use of optical components to address the performance, efficiency, and distance constraints of copper interconnect.

The demonstration successfully supplemented an Intel® FPGA’s electrical I/O with optical signaling by substituting a TeraPHY optical I/O chiplet developed by Ayar Labs for the SerDes chiplet – which Intel calls a “tile” – used in the original version of the Intel FPGA. The team integrated the Ayar Labs TeraPHY chiplet with the Intel FPGA core die into a multi-chip module (MCM) with in-package optics.

This image shows the optically connected FPGA board developed by Intel and Ayar Labs. Source: Ayar Labs

The packaged device communicates over an 8-channel optical link at 2 Tbps through its integrated, in-package optics. According to a blog titled “Breaking Optical I/O milestones while adjusting to the ‘new normal’” just published by Mark Wade, President and CTO of Ayar Labs, the demo “achieved a few key milestones,” including a “512 Gbps transmitter with all 8 optical channels simultaneously locked and transmitting.”

However, 2 Tbps is not the ultimate goal of the project. Dr. Gordon Keeler, the DARPA program manager leading PIPES, stated: “A key goal of the [PIPES] program is to develop advanced ICs with photonic interfaces capable of driving >100 terabits per second (Tbps) I/O per package at energies below one picojoule per bit.”

The co-packaged TeraPHY chiplet communicates with the Intel FPGA die over an Advanced Interface Bus (AIB), a publicly available, open interface standard that enables silicon IP providers working under the PIPES program to create interoperable chiplets.
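The figures quoted above can be sanity-checked with a little arithmetic. The sketch below simply divides the quoted 2 Tbps aggregate evenly across the 8 channels (the per-channel rate is derived, not stated in the source) and works out the I/O power implied by the PIPES target of >100 Tbps at under one picojoule per bit:

```python
# Back-of-the-envelope check of the link figures quoted in the article.

demo_total_bps = 2e12          # 2 Tbps aggregate over the 8-channel optical link
channels = 8
per_channel_bps = demo_total_bps / channels
print(f"Evenly divided per-channel rate: {per_channel_bps / 1e9:.0f} Gbps")  # 250 Gbps

# PIPES target: >100 Tbps per package at energies below 1 pJ/bit.
target_bps = 100e12
energy_per_bit_j = 1e-12       # one picojoule per bit
io_power_w = target_bps * energy_per_bit_j
print(f"I/O power at the PIPES target: {io_power_w:.0f} W")                  # 100 W
```

In other words, even at the program's aggressive efficiency target, a full 100 Tbps package would spend on the order of 100 W on I/O alone, which is why the energy-per-bit goal matters as much as the raw bandwidth goal.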
(For more information about the AIB, see: Intel releases Royalty-Free, High-Performance AIB Interconnect Standard to Spur Industry’s Chiplet Adoption and Grow the Ecosystem, How Heterogeneous Device Design and Manufacturing Leads to Success, and Enabling Next-Generation Platforms Using Intel’s 3D System-in-Package Technology.)

The integrated optical MCM developed by this research team substantially improves interconnect reach, efficiency, and latency using optical I/O, and it enables high-speed data links over single-mode optical fibers connected directly to the MCM package. The Intel FPGA die used in this project was manufactured with an advanced Intel process technology and uses the same established AIB interconnect employed by Intel® Stratix® 10 and Intel® Agilex™ FPGAs, allowing the FPGA die to communicate with a wide variety of co-packaged I/O tiles.

Intel’s silicon and software portfolio empowers our customers’ intelligent services from the cloud to the edge.

Note: For a detailed technical discussion of the TeraPHY chiplet, see this paper presented by Ayar Labs and Intel at the Hot Chips 2019 conference.

Notices and Disclaimers

Intel’s compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.
Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information visit www.intel.com/benchmarks. Performance results are based on testing as of dates shown in configurations and may not reflect all publicly available updates. See backup for configuration details.

No product or component can be absolutely secure. Intel does not control or audit third-party data. You should consult other sources to evaluate accuracy. Your costs and results may vary. Intel technologies may require enabled hardware, software or service activation.

Statements in this document that refer to future plans or expectations are forward-looking statements. These statements are based on current expectations and involve many risks and uncertainties that could cause actual results to differ materially from those expressed or implied in such statements. For more information on the factors that could cause actual results to differ materially, see our most recent earnings release and SEC filings at www.intc.com.

© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.

BittWare announces IA-420F PCIe accelerator card and IA-220-U2 computational storage accelerator based on Intel® Agilex™ FPGAs and SoC FPGAs
BittWare has added two new accelerator products based on Intel® Agilex™ FPGAs and SoC FPGAs to the previously announced IA-840F Enterprise-Class FPGA Accelerator. (See “BittWare IA-840F FPGA Accelerator PCIe Card bristles with high-speed I/O, is based on an Intel® Agilex™ FPGA.”) The new IA-420F half-height, half-length, single-width PCIe card with multiple 100G network ports is designed for SmartNIC and computational storage applications, while the IA-220-U2 computational storage processor in a U.2 form factor is specifically designed to accommodate NVMe computational storage workloads.

BittWare provides all three FPGA-based accelerators with application reference designs and additional support for the Intel® oneAPI programming model, which gives hardware developers the ability to create domain-specific FPGA platforms and allows application developers to build cross-architecture, single-source compilation designs.

BittWare has added the IA-420F half-height, half-length, single-width PCIe card and the IA-220-U2 computational storage processor in a U.2 form factor to its existing line of workload acceleration products based on Intel® Agilex™ FPGAs and FPGA SoCs.

According to BittWare, the IA-420F features a PCIe Gen4 x16 host interface that provides customers with as much as 2X the bandwidth normally available for FPGA-augmented systems, making it a compelling resource for accelerating data analytics workloads, while the IA-220-U2 can serve as a deployment-friendly computational storage processor in a U.2 NVMe storage array rack.

For more information about these accelerators, please contact BittWare directly. For more information about the broad and growing line of Intel Agilex FPGA and SoC devices, click here, and be sure to read “Breakthrough FPGA News from Intel.”
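The “2X the bandwidth” claim for the card’s PCIe Gen4 x16 host interface follows directly from the per-lane signaling rates of the PCIe generations. A minimal sketch (illustrative; it accounts only for the 128b/130b line code, not higher-layer protocol overhead):

```python
# Rough per-direction PCIe payload bandwidth by generation.
# Illustrative figures: only the 128b/130b line encoding is modeled;
# TLP/DLLP protocol overhead would reduce these numbers further.

GT_PER_S = {3: 8.0, 4: 16.0, 5: 32.0}   # giga-transfers per second, per lane
ENCODING = 128 / 130                    # 128b/130b line code (Gen3 and later)

def gbytes_per_s(gen: int, lanes: int) -> float:
    """Approximate one-direction link bandwidth in GB/s."""
    return GT_PER_S[gen] * ENCODING * lanes / 8  # 8 bits per byte

print(f"Gen3 x16: {gbytes_per_s(3, 16):.1f} GB/s")   # ~15.8 GB/s
print(f"Gen4 x16: {gbytes_per_s(4, 16):.1f} GB/s")   # ~31.5 GB/s
```

Since Gen4 doubles the per-lane transfer rate while keeping the same encoding, a Gen4 x16 link delivers exactly twice the throughput of a Gen3 x16 link, which is the doubling the announcement cites.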
Arkville PCIe Gen4 Data Mover Using Intel® Agilex™ FPGAs Webinar: Tuesday 14 Dec 2021 @11:00 am EST
Hear from Jeff Milrod at BittWare, who will be introducing products supporting Intel® Agilex™ FPGAs and the use of data mover IP in a variety of markets. Tom Schulte from Intel will provide perspective on the Agilex product line, including future features such as PCIe Gen5 x16 support. The webinar will conclude with Shep Siegel from Atomic Rules, who will give a demo and explain the performance achieved with the Arkville data mover IP on Agilex FPGAs.

Upcoming Webinar: Computational Storage Acceleration Using Intel® Agilex™ FPGAs
Today’s computational workloads are larger, more complex, and more diverse than ever before. The explosion of applications such as high-performance computing (HPC), artificial intelligence (AI), machine vision, analytics, and other specialized tasks is driving the exponential growth of data. At the same time, the trend toward using virtualized servers, storage, and network connections means that workloads are growing in scale and complexity.

Traditionally, data is conveyed to a computational engine -- such as a central processing unit (CPU) -- for processing, but transporting the data takes time, consumes power, and is increasingly proving to be a bottleneck in the overall process. The solution is computational storage, also known as in-storage processing (ISP). The idea is that, instead of bringing the data to the computational engine, computational storage brings the computational engine to the data. In turn, this allows the data to be processed and analyzed where it is generated and stored.

To learn more about this concept, please join us for a webinar led by Sean Lundy from Eideticom, Craig Petrie from BittWare, and myself:

Computational Storage Using Intel® Agilex™ FPGAs: Bringing Acceleration Closer to Data

The webinar will take place on Thursday 11 November 2021, starting at 11:00 a.m. EST.

Sean will introduce Eideticom’s NoLoad computational storage technology for use in data center storage and compute applications. NoLoad technology provides CPU offload for applications, resulting in the dramatic acceleration of compute-plus-data-intensive tasks like storage workloads, databases, AI inferencing, and data analytics. NoLoad’s NVMe-compliant interface simplifies the deployment of computational offload by making it straightforward to deploy in servers of all types and across all major operating systems.

Craig will introduce BittWare’s new IA-220-U2 FPGA-based Computational Storage Processor (CSP) that supports Eideticom’s NoLoad technology as an option.
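The data-movement bottleneck described above is easy to quantify. The sketch below uses purely illustrative numbers (a hypothetical 10 TB dataset, an assumed ~8 GB/s host link, and a 1% query selectivity -- none of these figures come from the webinar) to show why moving only results, rather than raw data, pays off:

```python
# Illustrative only: bus time spent shipping a dataset to the CPU
# versus filtering it in storage and shipping just the results.
# All numbers are assumptions chosen for the sake of the example.

dataset_bytes = 10e12            # hypothetical 10 TB of stored data
host_link_bps = 8e9              # assumed ~8 GB/s usable host link (e.g. NVMe-class)
selectivity = 0.01               # the query only needs 1% of the data

# Conventional path: move everything to the host, then compute there.
move_all_s = dataset_bytes / host_link_bps

# Computational storage path: filter near the data, move only the results.
move_results_s = (dataset_bytes * selectivity) / host_link_bps

print(f"Ship everything:    {move_all_s:,.1f} s of bus time")
print(f"Filter in storage:  {move_results_s:,.1f} s of bus time")
```

Under these assumptions the in-storage path spends two orders of magnitude less time on the host link, and the saved transfers also translate directly into saved power -- the two costs the paragraph above calls out.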
The IA-220-U2 CSP, which is powered by an Intel Agilex F-Series FPGA with 1.4M logic elements (LEs), features PCIe Gen4 for twice the bandwidth offered by PCIe Gen3 solutions. This CSP works alongside traditional flash SSDs, providing accelerated computational storage services (CSS) by performing compute-intensive tasks, including compression and/or encryption. This allows users to build out their storage using standard SSDs instead of being locked into a single vendor’s storage solutions.

BittWare’s IA-220-U2 accelerates NVMe flash SSDs by sitting alongside them as another U.2 module. (Image source: BittWare)

We will also discuss the Intel® Agilex™ FPGAs that power BittWare’s new CSP. Built on Intel’s 10nm SuperFin technology, these devices leverage heterogeneous 3D system-in-package (SiP) technology. Agilex I-Series FPGAs and SoC FPGAs are optimized for bandwidth-intensive applications that require high-performance processor interfaces, such as PCIe Gen5 and Compute Express Link (CXL). Meanwhile, Agilex F-Series FPGAs and SoC FPGAs are optimized for applications in data center, networking, and edge computing. With transceiver support up to 58 Gbps, advanced DSP capabilities, and PCIe Gen4 x16, the Agilex F-Series FPGAs that power BittWare’s new CSP provide the customized connectivity and acceleration required by compute-plus-data-intensive, power-sensitive applications such as HPC, AI, machine vision, and analytics.

This webinar will be of interest to anyone involved with these highly complex applications and environments. We hope to see you there, so Register Now before all the good virtual seats are taken.