The Growing Need for Scalable Solutions in the Emulation and Prototyping of Complex Designs
As chips grow larger and more complex, with more interfaces and a greater need for early hardware/software integration, emulation and prototyping have become essential tools in the verification landscape.

Ayar Labs and Intel demo FPGA with optical transceivers in DARPA PIPES project: 2 Tbps now, >100 Tbps is the goal
The US Defense Advanced Research Projects Agency (DARPA) has announced that researchers from Intel and Ayar Labs have demonstrated early progress toward improving chip connectivity using photonic interconnect under DARPA’s Photonics in the Package for Extreme Scalability (PIPES) program, which is exploring ways to expand the use of optical components to address the performance, efficiency, and distance constraints of copper interconnect. The demonstration successfully supplemented an Intel® FPGA’s electrical I/O with optical signaling by substituting a TeraPHY optical I/O chiplet developed by Ayar Labs for the SerDes chiplet – which Intel calls a “tile” – used in the original version of the Intel FPGA. The team integrated the Ayar Labs TeraPHY chiplet with the Intel FPGA core die into a multi-chip module (MCM) with in-package optics.

This image shows the optically connected FPGA board developed by Intel and Ayar Labs. Source: Ayar Labs

The packaged device communicates over an 8-channel optical link at 2 Tbps through its integrated, in-package optics. According to a blog titled “Breaking Optical I/O milestones while adjusting to the ‘new normal’” just published by Mark Wade, President and CTO of Ayar Labs, the demo “achieved a few key milestones,” including a “512 Gbps transmitter with all 8 optical channels simultaneously locked and transmitting.” However, 2 Tbps is not the ultimate goal of the project. Dr. Gordon Keeler, the DARPA program manager leading PIPES, stated: “A key goal of the [PIPES] program is to develop advanced ICs with photonic interfaces capable of driving >100 terabits per second (Tbps) I/O per package at energies below one picojoule per bit.”

The co-packaged TeraPHY chiplet communicates with the Intel FPGA die over an Advanced Interface Bus (AIB), a publicly available, open interface standard that enables silicon IP providers working under the PIPES program to create interoperable chiplets.
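The PIPES target quoted above combines two numbers – more than 100 Tbps of I/O per package at under one picojoule per bit – which together bound the package’s I/O power budget. A quick, purely illustrative check of that arithmetic (the figures come from the program goal; the derivation is ours):

```python
# Energy-per-bit arithmetic behind the PIPES target quoted above:
# >100 Tbps of I/O per package at < 1 pJ/bit bounds the I/O power budget.
bitrate_bps = 100e12        # program target: 100 terabits per second
energy_per_bit_j = 1e-12    # program ceiling: one picojoule per bit
io_power_watts = bitrate_bps * energy_per_bit_j
print(io_power_watts)       # watts of I/O power at those limits (~100 W)
```

In other words, hitting the program’s bandwidth goal within roughly a 100 W I/O power envelope is exactly why sub-picojoule-per-bit signaling matters.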
(For more information about the AIB, see: Intel releases Royalty-Free, High-Performance AIB Interconnect Standard to Spur Industry’s Chiplet Adoption and Grow the Ecosystem, How Heterogeneous Device Design and Manufacturing Leads to Success, and Enabling Next-Generation Platforms Using Intel’s 3D System-in-Package Technology.)

The integrated optical MCM developed by this research team substantially improves interconnect reach, efficiency, and latency using optical I/O and enables high-speed data links over single-mode optical fibers connected directly to the MCM package. The Intel FPGA die used in this project was manufactured with an advanced Intel process technology and uses the same established AIB interconnect employed by Intel® Stratix® 10 and Intel® Agilex FPGAs to allow the FPGA die to communicate with a wide variety of co-packaged I/O tiles.

Intel’s silicon and software portfolio empowers our customers’ intelligent services from the cloud to the edge.

Note: For a detailed technical discussion of the TeraPHY chiplet, see this paper presented by Ayar Labs and Intel at the Hot Chips 2019 conference.

Notices and Disclaimers

Intel’s compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.
Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information visit www.intel.com/benchmarks. Performance results are based on testing as of dates shown in configurations and may not reflect all publicly available updates. See backup for configuration details. No product or component can be absolutely secure. Intel does not control or audit third-party data. You should consult other sources to evaluate accuracy. Your costs and results may vary. Intel technologies may require enabled hardware, software or service activation. Statements in this document that refer to future plans or expectations are forward-looking statements. These statements are based on current expectations and involve many risks and uncertainties that could cause actual results to differ materially from those expressed or implied in such statements. For more information on the factors that could cause actual results to differ materially, see our most recent earnings release and SEC filings at www.intc.com. © Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.

Silicom Showcases FPGA SmartNIC Acceleration Cards for O-RAN Apps at MWC Barcelona 2022
Silicom specifically designed the Silicom FPGA SmartNIC N6011 as a 5G O-RAN platform using open Intel® technologies. It is custom designed to meet the real-time processing needs of 5G DUs and can be configured for a wide range of network deployments.

S2C announces FPGA Prototyping System with 300M ASIC gate capacity based on four Intel® Stratix® 10 GX 10M FPGAs
When it comes to validating next-generation SoC designs in cutting-edge applications such as 5G, AI, and networking using an FPGA prototyping system, bigger is better. S2C’s new Quad 10M Prodigy Logic System embodies that concept by delivering a platform built around four Intel® Stratix® 10 GX 10M FPGAs. Each FPGA delivers 10.2 million logic elements (LEs), for a combined capacity of 40.8 million LEs, along with 1,012 Mbits of on-chip SRAM and 13,824 DSP blocks. S2C says that the system has a prototyping capacity equivalent to 300 million ASIC gates.

The S2C Quad 10M Prodigy Logic System is based on four Intel® Stratix® 10 GX 10M FPGAs

In addition, the Quad 10M Prodigy Logic System provides 4,608 high-performance I/O pins and 160 high-speed transceivers that can run at up to 16 Gbps. The high-performance I/O pins are used for inter-FPGA connections and are also available for use with S2C’s Prodigy daughter cards; there are currently more than 90 Prodigy daughter cards in S2C’s library.

S2C’s multitalented Prodigy Player Pro tool manages the Quad 10M Prodigy Logic System. It configures your design in the FPGA prototyping system using an automated compilation flow that assigns I/O pins, partitions your design across multiple FPGAs, and runs remote system management over a USB or Ethernet connection. It also supports multi-FPGA debugging with deep tracing for as many as 32K probes per FPGA.

S2C’s Quad 10M Prodigy Logic System is the company’s second FPGA prototyping system to be based on the Intel Stratix 10 GX 10M FPGA. The first such system, the expandable S10 10M Prodigy Logic System, was announced late last year. (See “S2C launches Prodigy Logic Systems for prototyping large ASIC designs based on the world’s highest capacity FPGA, the Intel® Stratix® 10 GX 10M.”) The S2C Quad 10M Prodigy Logic System is available for purchase now. For more information about S2C’s Quad 10M Prodigy Logic System, please contact S2C directly.
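The headline capacity figures above reduce to simple arithmetic. This illustrative snippet checks them; the LE counts and the 300-million-gate figure come from the article, while the gates-per-LE ratio is derived here and is not an S2C or Intel specification:

```python
# Quick check of the capacity figures quoted above. The LE counts and the
# 300M ASIC-gate figure come from the article; the gates-per-LE ratio is
# derived here for illustration only.
fpga_count = 4
les_per_fpga = 10_200_000            # Intel Stratix 10 GX 10M logic elements
total_les = fpga_count * les_per_fpga
print(total_les)                     # 40800000, i.e. 40.8 million LEs

asic_gate_capacity = 300_000_000     # S2C's quoted prototyping capacity
print(round(asic_gate_capacity / total_les, 1))  # ~7.4 ASIC gates per LE
```

The roughly 7:1 gates-to-LE ratio is consistent with the common rule of thumb that one FPGA logic element maps to several ASIC gates, which is why prototyping vendors quote capacity in gates rather than LEs.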
For more information about the Intel Stratix 10 GX 10M FPGA, see “Intel announces Intel® Stratix® 10 GX 10M FPGA, world’s highest capacity with 10.2 million logic elements. Targets ASIC Prototyping and Emulation Markets.”

Intel has just announced its first AI-optimized FPGA – the Intel® Stratix® 10 NX FPGA – to address the rapid increase in AI model complexity
FPGAs have been used for hardware customization for decades. That customization capability taps into the value proposition of FPGAs: pipelining for applications that require low batch size and low latency, plus flexible fabric and I/O functions to scale up and deliver entire systems. Intel’s silicon and software portfolio, which includes FPGAs, empowers our customers’ intelligent services from the cloud to the edge. Many Intel® FPGA customers have already started implementing AI accelerators using the hardware customization available through Intel FPGA technologies.

It starts with Intel’s vision for device-agnostic AI development, which allows developers to focus on building their solutions rather than on specific devices. Intel has been fitting FPGAs into that vision for a while, focusing on acceleration and specifically on the trend of increasing AI model size and complexity. AI model complexity continues to double every 3.5 months. That’s a factor of 10X per year. These AI models are used in applications such as Natural Language Processing (NLP), Fraud Detection, and Surveillance.

Intel has just announced its first AI-optimized FPGA – the Intel® Stratix® 10 NX FPGA – to address the rapid increase in AI model complexity. The Intel Stratix 10 NX FPGA embeds a new type of AI-optimized block called the AI Tensor Block, which delivers up to 15X more INT8 compute performance than today’s Stratix 10 MX. The INT8 data type is often used by AI inferencing algorithms. The AI Tensor Block is tuned for the common matrix-matrix or vector-matrix multiplications used by these AI algorithms, with capabilities designed to work efficiently for both small and large matrix sizes.
David Moore, Corporate Vice President and General Manager for the Intel Programmable Solutions Group, holds up an Intel Stratix 10 NX FPGA, the company’s first AI-optimized FPGA

The Intel Stratix 10 NX FPGA has several additional in-package features that support high-performance AI inferencing, including high-speed HBM2 memory and high-speed transceivers for fast networking. Note that Intel was able to develop the Intel Stratix 10 NX FPGA quickly thanks to its chiplet-based FPGA architecture strategy.

Intel partnered with Microsoft to develop the AI Tensor Block to help accelerate AI workloads in the data center. “As Microsoft designs our real-time multi-node AI solutions, we need flexible processing devices that deliver ASIC-level tensor performance, high memory and connectivity bandwidth, and extremely low latency. Intel® Stratix® 10 NX FPGAs meet Microsoft’s high bar for these requirements, and we are partnering with Intel to develop next-generation solutions to meet our hyperscale AI needs.” – Doug Burger, Technical Fellow, Microsoft Azure Hardware

Intel Stratix 10 NX FPGAs serve as multi-function AI accelerators for Intel® Xeon® processors. They specifically address applications that require hardware customization, low latency, and real-time capabilities. The AI Tensor Blocks in the Intel Stratix 10 NX FPGA deliver more compute throughput by implementing more multipliers and accumulators than the DSP block found in other Intel Stratix 10 devices: the AI Tensor Block contains 30 multipliers and 30 accumulators instead of the two multipliers and two accumulators in the DSP block. The multipliers in the AI Tensor Block are tuned for lower-precision numerical formats such as INT4, INT8, Block Floating Point 12, and Block Floating Point 16. These precisions are frequently used for AI inferencing workloads. The Intel Stratix 10 NX FPGA addresses today’s AI challenges.
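The multiplier and accumulator counts above are what drive the quoted “up to 15X” figure, and the block’s basic operation is a batch of dot products. The following sketch is purely illustrative; the real datapath, operand widths, and operand-feeding structure are not described in this article:

```python
def tensor_block_dot(activations, weights):
    # Model of the AI Tensor Block's arithmetic: three dot-product units,
    # each multiplying ten pairs of (INT8-ranged) operands and accumulating
    # the products. Shapes and types here are illustrative only.
    assert len(activations) == 3 and all(len(row) == 10 for row in activations)
    return [sum(a * w for a, w in zip(a_row, w_row))
            for a_row, w_row in zip(activations, weights)]

# The quoted "up to 15X" INT8 figure is ratio arithmetic:
# 30 multipliers + 30 accumulators per AI Tensor Block versus
# 2 multipliers + 2 accumulators in a standard Stratix 10 DSP block.
print((30 + 30) / (2 + 2))  # 15.0

acts = [[1] * 10, [2] * 10, [-1] * 10]
wts = [[3] * 10, [1] * 10, [5] * 10]
print(tensor_block_dot(acts, wts))  # [30, 20, -50]
```

Cascading many such blocks, as the devices do in hardware, extends the same dot-product primitive to the larger vector and matrix sizes AI inferencing needs.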
For example, NLP typically uses large AI models, and these models are growing larger. The need to detect, recognize, and understand the context of various languages, followed by translation to the target language, is a growing use case for language translation applications, which are one NLP workload. These expanded workload requirements drive model complexity, which in turn demands more compute cycles, more memory, and more networking bandwidth. The Intel Stratix 10 NX FPGA’s in-package HBM2 memory allows large AI models to be stored on chip. Estimates suggest that a Stratix 10 NX FPGA running a large AI model like BERT at batch size 1 delivers 2.3X better compute performance than an NVIDIA V100.

Fraud Detection is another application where Intel FPGAs enable real-time data processing in situations where every microsecond matters. Intel FPGAs’ ability to create custom hardware solutions with direct ingestion of data through their transceivers and deterministic, low-latency compute elements makes microsecond-class real-time performance possible. Typically, Fraud Detection employs LSTM (Long Short-Term Memory) AI models at batch size 1. Estimates suggest that the Intel Stratix 10 NX FPGA will deliver 34X better compute performance than an NVIDIA T4 GPU for LSTM models at batch size 1.

Finally, consider a video surveillance application. Intel FPGAs excel in video surveillance applications because their hardware customization ability allows implementation of custom processing and custom I/O protocols for direct data ingestion. For example, estimates suggest that the Intel Stratix 10 NX FPGA will provide 3.8X better compute performance than an NVIDIA T4 GPU for video surveillance using the ResNet50 model at batch size 1.

The Intel Stratix 10 NX extends the benefits of FPGA-based, high-performance hardware customization for AI inferencing through the introduction of the AI Tensor Block.
The Intel Stratix 10 NX FPGA delivers as much as 15X more compute performance for AI inferencing. This FPGA is Intel’s first AI-optimized FPGA and it will be available later this year. For more information about the Intel Stratix 10 NX FPGA, click here. Intel’s silicon and software portfolio empowers our customers’ intelligent services from the cloud to the edge.

Notices and Disclaimers

15X more INT8 compute performance than today’s Stratix 10 MX for AI workloads: When implementing INT8 computations using the standard Stratix 10 DSP Block, there are 2 multipliers and 2 accumulators used. On the other hand, when using the AI Tensor Block, you have 30 multipliers and 30 accumulators. Therefore 60/4 provides up to 15X more INT8 compute performance when comparing the AI Tensor Block with the standard Stratix 10 DSP block.

BERT 2.3X faster, LSTM 10X faster, ResNet50 3.8X faster: BERT batch 1 performance 2.3X faster than Nvidia V100 (DGX-1 server w/ 1x NVIDIA V100-SXM2-16GB | TensorRT 7.0 | Batch Size = 1 | 20.03-py3 | Precision: Mixed | Dataset: Sample Text); LSTM batch 1 performance 9.5X faster than Nvidia V100 (Internal server w/Intel® Xeon® CPU E5-2683 v3 and 1x NVIDIA V100-PCIE-16GB | TensorRT 7.0 | Batch Size = 1 | 20.01-py3 | Precision: FP16 | Dataset: Synthetic); ResNet50 batch 1 performance 3.8X faster than Nvidia V100 (DGX-1 server w/ 1x NVIDIA V100-SXM2-16GB | TensorRT 7.0 | Batch Size = 1 | 20.03-py3 | Precision: INT8 | Dataset: Synthetic). Estimated on Stratix 10 NX FPGA using -1 speed grade, tested in May 2020. Each end-to-end AI model includes all layers and computation as described in Nvidia’s published claims as of May 2020. Result is then compared against Nvidia’s published claims. Link for Nvidia: https://developer.nvidia.com/deep-learning-performance-training-inference. Results have been estimated or simulated using internal Intel analysis, architecture simulation, and modeling, and provided to you for informational purposes.
Any differences in your system hardware, software or configuration may affect your actual performance. Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies depending on system configuration. No product or component can be absolutely secure. Tests document performance of components on a particular test, in specific systems. Differences in hardware, software, or configuration will affect actual performance. For more complete information about performance and benchmark results, visit http://www.intel.com/benchmarks. Intel Advanced Vector Extensions (Intel AVX) provides higher throughput to certain processor operations. Due to varying processor power characteristics, utilizing AVX instructions may cause a) some parts to operate at less than the rated frequency and b) some parts with Intel® Turbo Boost Technology 2.0 to not achieve any or maximum turbo frequencies. Performance varies depending on hardware, software, and system configuration and you can learn more at http://www.intel.com/go/turbo. Cost reduction scenarios described are intended as examples of how a given Intel-based product, in the specified circumstances and configurations, may affect future costs and provide cost savings. Circumstances will vary. Intel does not guarantee any costs or cost reduction. Intel does not control or audit third-party benchmark data or the web sites referenced in this document. You should visit the referenced web site and confirm whether referenced data are accurate.

Nokia AirFrame Edge Server based on 2nd Gen Intel® Xeon® Scalable CPUs and the Intel® FPGA PAC N3000 suits edge and far-edge cloud RAN, MEC, and 5G deployments
Nokia AirFrame open edge servers feature an ultra-small footprint, so they fit well in many locations including existing base station facilities and far-edge sites. These compact servers are provisioned with a real-time, OPNFV-compatible OpenStack distribution that provides both low latency and high throughput for cloud RAN and other applications. The software runs atop integrated 2nd Generation Intel® Xeon® Scalable CPUs and the optional Intel® FPGA Programmable Acceleration Card (Intel FPGA PAC) N3000, which enhance the server’s capabilities with respect to artificial intelligence (AI) and machine learning (ML) workloads.

An optional Fronthaul Gateway module provides 5G/4G/CPRI connectivity to existing legacy radios and contains an L2/L3 switch and an Intel® Stratix® 10 FPGA, which provides high-performance L1 processing. The servers are available in OCP-accepted 2RU or 3RU chassis with as many as five server sled slots and dual redundant AC or DC power supplies. Nokia offers both 1U and 2U server sleds based on 2nd Generation Intel Xeon Scalable processors with connectivity through front server slots for high accessibility.

A new 4-minute Nokia video details the features and benefits of the Nokia AirFrame open edge server for use in a variety of deployments including Cloud RAN, Multi-access Edge Computing (MEC), and 5G. For more technical details about the Nokia AirFrame open edge server, please contact Nokia directly.

More details on the Intel® Stratix® 10 NX FPGA, the first AI-optimized Intel® FPGA, now available in a new White Paper
The increasing complexity of AI models and the explosive growth of AI model size are both rapidly outpacing innovations in compute resources and memory capacity available on a single device. AI model complexity now doubles every 3.5 months or about 10X per year, driving rapidly increasing demand in AI computing capability. Memory requirements for AI models are also rising due to an increasing number of parameters or weights in a model. The Intel® Stratix® 10 NX FPGA is Intel’s first AI-optimized FPGA, developed to enable customers to scale their designs with increasing AI complexity while continuing to deliver real-time results. The Intel Stratix 10 NX FPGA fabric includes a new type of AI-optimized tensor arithmetic block called the AI Tensor Block. These AI Tensor Blocks are tuned for the common matrix-matrix or vector-matrix multiplications used for AI computations and contain dense arrays of lower precision multipliers typically used for AI model arithmetic. The smaller multipliers in these AI Tensor Blocks can also be aggregated to construct larger-precision multipliers. The AI Tensor Block’s architecture contains three dot-product units, each of which has ten multipliers and ten accumulators for a total of 30 multipliers and 30 accumulators within each block. The AI Tensor Block multipliers’ base precisions are INT8 and INT4 along with shared exponent to support Block Floating Point 16 (Block FP16) and Block Floating Point 12 (Block FP12) numerical formats. Multiple AI Tensor Blocks can be cascaded together to support larger vector calculations. A new White Paper titled “Pushing AI Boundaries with Scalable Compute-Focused FPGAs” covers the new features and performance capabilities of the Intel Stratix 10 NX FPGAs. Click here to download the White Paper. 
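The shared-exponent formats described above can be illustrated with a small sketch. The quantizer below is a generic block-floating-point scheme, not Intel’s actual hardware algorithm; the mantissa width and rounding behavior are illustrative assumptions:

```python
def block_fp_quantize(values, mantissa_bits=8):
    # Generic block-floating-point quantizer: every value in the block is
    # stored as a small integer mantissa, and the whole block shares ONE
    # exponent, so multiplies reduce to integer arithmetic (as in the AI
    # Tensor Block's Block FP16/FP12 modes). Details here are illustrative.
    limit = 2 ** (mantissa_bits - 1) - 1        # e.g. 127 for 8-bit mantissas
    exponent = 0
    peak = max(abs(v) for v in values)
    while peak / (2 ** exponent) > limit:       # grow shared exponent to fit peak
        exponent += 1
    mantissas = [round(v / (2 ** exponent)) for v in values]
    return mantissas, exponent

def block_fp_dequantize(mantissas, exponent):
    return [m * (2 ** exponent) for m in mantissas]

mants, exp = block_fp_quantize([100.0, -250.0, 37.0, 510.0])
print(mants, exp)                        # small integers plus one shared exponent
print(block_fp_dequantize(mants, exp))   # approximate reconstruction
```

Because only the mantissas differ per element, a dot product over a block needs integer multipliers plus a single exponent adjustment at the end, which is what makes these formats attractive for dense multiplier arrays.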
If you’d like to see the Intel Stratix 10 NX FPGA in action, please check out the recent blog “WaveNet Neural Network runs on Intel® Stratix® 10 NX FPGA, synthesizes 256 16 kHz audio streams in real time.”

Semiconductor Engineering Magazine wonders whether Chiplet Technology is Good or Bad. Intel’s answer: Good, Very Good
Mark LaPedus, the Executive Editor for Manufacturing at SemiEngineering.com, recently published an article titled “The Good And Bad Of Chiplets.” From the Intel perspective, there’s only good. LaPedus writes: “With chiplets, the goal is to reduce product development times and costs by integrating pre-developed dies in an IC package. So a chipmaker may have a menu of modular dies, or chiplets, in a library. Chiplets could have different functions at various [manufacturing process] nodes.”

LaPedus’ article quotes Ramune Nagisetty, Director of Process and Product Integration at Intel, who discussed the use of chiplet technology at Intel: “We’re in the early stages. More and more products from Intel and our competitors are going to reflect this approach moving forward. Every major foundry has a technology roadmap of increasing the interconnect densities for both the 2.5D and 3D integration approaches. In the coming years, we will see it expand in 2.5D and 3D types of implementations. We will see it expand into logic and memory stacking and logic and logic stacking.”

As LaPedus notes in his article, “Intel and a few others have the technologies in place to develop these products.” In fact, Intel® FPGAs have incorporated chiplet technology for years. All members of the Intel® Stratix® 10 GX, SX, TX, and MX FPGA and SoC FPGA families and Intel® Agilex™ FPGA devices use chiplet technology to pair FPGA logic die with I/O tiles. (“Tile” is the name Intel uses for “chiplet.”) These I/O tiles connect to the FPGA die using a chiplet-to-chiplet (or die-to-die) interconnect standard developed by Intel called the Advanced Interface Bus (AIB) and Intel Embedded Multi-die Interconnect Bridge (EMIB) technology, which are both used for heterogeneous semiconductor manufacturing.

(Note: Early last year, Intel made the AIB standard available royalty-free to enable a broad ecosystem of chiplets, design methodologies, service providers, foundries, packaging, and system vendors.
DARPA has adopted the AIB standard for its Common Heterogeneous Integration and IP Reuse Strategies (CHIPS) program. For details, see “Intel releases royalty-free, high-performance AIB interconnect standard to spur industry’s chiplet adoption and grow the ecosystem.”) In addition to using chiplet manufacturing technology for the I/O functions, the Intel Stratix 10 MX FPGA family uses chiplet manufacturing and packaging technologies to attach high-speed, 3D HBM2 DRAM die stacks to the FPGA fabric die within the FPGA device package, placing several gigabytes of fast memory very close to the FPGA logic. Here, chiplet technology facilitates quick access to data stored in the adjacent HBM2 DRAM stacks. As LaPedus writes, “This is where chiplets fit in. A bigger chip can be broken into smaller pieces and mixed and matched as needed.” LaPedus’ article in Semiconductor Engineering also explains another fundamental reason for the move towards chiplet technology: “…no one technology can meet all needs.” For example, the multiple memory die in an HBM2 die stack are manufactured using a memory-optimized manufacturing process. Memory processes require fewer metal layers than the advanced CMOS processes used for making FPGAs, which reduces the manufacturing cost for the memory. Similarly, I/O functions that must drive long copper traces on circuit boards have functional requirements that diverge from the capabilities of the densest, fastest CMOS logic process available. Here again, the use of chiplet technology makes tremendous sense. Chiplet technology allows the appropriate pairing of function with semiconductor process technology on a case-by-case basis. Rapid product development is yet another important reason for using chiplet technology. 
The use of tiles (or chiplets) gives Intel the ability to reuse proven I/O silicon, which accelerates the product development process by eliminating the need to reimplement, retest, and validate functions that have already been tested and proven in silicon. For example, Intel Agilex devices utilize some of the same I/O tiles developed for and used to manufacture Intel Stratix 10 FPGAs. Intel continues to develop new tiles and to introduce new devices based on these new and existing tiles. Because it’s much easier to design, fabricate, and test a tile than an entire FPGA, AIB and tile technology give Intel an express lane for bringing more new products to market faster.

Note: For more information, see “Chiplets (tiles) create a high-speed path for getting new FPGAs to market.” You might also be interested in the Intel White Paper titled “Enabling Next-Generation Platforms Using Intel’s 3D System-in-Package Technology.”

Intel’s silicon and software portfolio empowers our customers’ intelligent services from the cloud to the edge.

BittWare’s 520NX Accelerator Card harnesses AI-optimized power of the Intel® Stratix® 10 NX FPGA
BittWare has just announced the 520NX AI Accelerator PCIe card based on the AI-optimized Intel® Stratix® 10 NX FPGA, which incorporates specialized AI Tensor Blocks with a theoretical peak computational speed of 143 INT8 TOPS and 8 Gbytes of in-package, stacked high-bandwidth memory (HBM2). In addition to the Intel Stratix 10 NX FPGA’s internal resources, the 520NX AI Accelerator card’s on-board resources include a PCIe Gen3 x16 host interface, four independently clocked QSFP28 card cages that support as many as four 100G optical transceiver modules, and two DIMM sockets that can accommodate as much as 256 Gbytes of memory.

The 520NX offers enterprise-class features and capabilities for application development and deployment including:

- HDL developer toolkit: API, PCIe drivers, application example designs, and diagnostic self-test
- Passive, active, or liquid cooling options
- Multiple OCuLink expansion ports for additional PCIe, storage, or network I/O

The BittWare 520NX AI Accelerator card based on the AI-optimized Intel Stratix 10 NX FPGA

The Intel Stratix 10 NX FPGA was introduced earlier this year. (See “Intel has just announced its first AI-optimized FPGA – the Intel® Stratix® 10 NX FPGA – to address the rapid increase in AI model complexity.”) More recently, the FPGA’s AI capabilities have been demonstrated by Myrtle.ai, running a WaveNet text-to-speech application that can synthesize 256 simultaneous streams of 16 kHz audio. (See “WaveNet Neural Network runs on Intel® Stratix® 10 NX FPGA, synthesizes 256 16 kHz audio streams in real time.”) The new BittWare 520NX AI Accelerator card makes it much easier to develop applications based on the Intel Stratix 10 NX FPGA by providing the FPGA on a proven, ready-to-integrate PCIe card. For more information about the 520NX AI Accelerator card, please contact BittWare directly.

Megh Computing demos advanced, scalable Video Analytics Solution portfolio at WWT’s Advanced Technology Center
Megh Computing’s Video Analytics Solution (VAS) portfolio implements a flexible and scalable video analytics pipeline consisting of the following elements:

- Video Ingestion
- Video Transformation
- Object Detection and Inference
- Video Analytics
- Visualization

Because Megh’s VAS is scalable, it can handle real-time video streams from a few to more than 150 video cameras. Because it’s flexible, you can use the VAS pipeline elements to construct a wide range of video analytics applications such as:

- Factory floor monitoring to ensure that unauthorized visitors and employees avoid hazardous or secure areas
- Industrial monitoring to ensure that production output is up to specifications
- Smart City highway monitoring to detect vehicle collisions and other public incidents
- Retail foot-traffic monitoring to aid in kiosk, endcap, and product positioning, and other merchandising activities
- Museum and gallery exhibit monitoring to ensure that safe distances are maintained between visitors and exhibits

Because the number of cameras can be scaled to well more than 100 when using the VAS portfolio, Megh clearly needed a foundational technology portfolio that would support the solution’s demanding video throughput, computing, and scalability requirements. Megh selected a broad, integrated Intel technology portfolio that includes the latest 3rd Generation Intel® Xeon® Scalable processors, Intel® Stratix® FPGAs, Intel® Core™ processors, and the Intel® Distribution of the OpenVINO™ toolkit. Megh also chose WWT’s Advanced Technology Center (ATC), a collaborative ecosystem for designing, building, educating, demonstrating, and deploying innovative technology products and integrated architectural solutions for WWT customers, partners, and employees, to demo the capabilities of the VAS.
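The pipeline elements above chain naturally as stages, each consuming the previous stage’s output. A minimal sketch of that structure, with the stage names taken from the article and everything else (the callables, the frame dictionary, the detection result) invented purely for illustration:

```python
# Stage names follow the article's VAS pipeline; every implementation
# detail below is a hypothetical stand-in for the real components.
def video_ingestion(frame):      return {"frame": frame}
def video_transformation(item):  item["normalized"] = True; return item
def detection_inference(item):   item["objects"] = ["person"]; return item
def video_analytics(item):       item["alert"] = "person" in item["objects"]; return item
def visualization(item):         return f"alert={item['alert']} objects={item['objects']}"

PIPELINE = [video_ingestion, video_transformation, detection_inference,
            video_analytics, visualization]

def process(frame):
    data = frame
    for stage in PIPELINE:       # each stage consumes the previous stage's output
        data = stage(data)
    return data

print(process("camera-7/frame-0042"))  # alert=True objects=['person']
```

Keeping the stages as independent, composable units is what lets such a pipeline scale out across cameras and map individual stages onto different accelerators (CPUs, FPGAs, or the OpenVINO toolkit’s inference runtimes).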
WWT built and is hosting a Megh VAS environment within its ATC that allows WWT and Megh customers to explore this solution in a variety of use cases, including specific customer environment needs and other requirements. For more information about the Megh VAS portfolio and the WWT ATC, check out the WWT blog here.

Notices and Disclaimers

Intel is committed to respecting human rights and avoiding complicity in human rights abuses. See Intel’s Global Human Rights Principles. Intel’s products and software are intended only to be used in applications that do not cause or contribute to a violation of an internationally recognized human right.