Megh Computing demos advanced, scalable Video Analytics Solution portfolio at WWT’s Advanced Technology Center
Megh Computing’s Video Analytics Solution (VAS) portfolio implements a flexible and scalable video analytics pipeline consisting of the following elements:

- Video Ingestion
- Video Transformation
- Object Detection and Inference
- Video Analytics
- Visualization

Because Megh’s VAS is scalable, it can handle real-time video streams from a few to more than 150 video cameras. Because it’s flexible, you can use the VAS pipeline elements to construct a wide range of video analytics applications, such as:

- Factory floor monitoring to ensure that unauthorized visitors and employees avoid hazardous or secure areas
- Industrial monitoring to ensure that production output is up to specifications
- Smart City highway monitoring to detect vehicle collisions and other public incidents
- Retail foot-traffic monitoring to aid in kiosk, endcap, and product positioning, and other merchandising activities
- Museum and gallery exhibit monitoring to ensure that safe distances are maintained between visitors and exhibits

(A minimal code sketch of the detection stage of such a pipeline appears at the end of this post.)

Because the VAS must scale to well more than 100 cameras, Megh needed a foundational technology portfolio that could support the solution’s demanding video throughput, computing, and scalability requirements. Megh selected a broad, integrated Intel technology portfolio that includes the latest 3rd Generation Intel® Xeon® Scalable processors, Intel® Stratix® FPGAs, Intel® Core™ processors, and the Intel® Distribution of OpenVINO™ toolkit.

To demo the capabilities of the VAS, Megh also chose WWT’s Advanced Technology Center (ATC), a collaborative ecosystem for designing, building, educating, demonstrating, and deploying innovative technology products and integrated architectural solutions for WWT customers, partners, and employees. WWT built and is hosting a Megh VAS environment within its ATC that allows WWT and Megh customers to explore this solution in a variety of use cases, including specific customer environment needs and other requirements.

For more information about the Megh VAS portfolio and the WWT ATC, check out the WWT blog here.

Notices and Disclaimers

Intel is committed to respecting human rights and avoiding complicity in human rights abuses. See Intel’s Global Human Rights Principles. Intel’s products and software are intended only to be used in applications that do not cause or contribute to a violation of an internationally recognized human right.

Intel does not control or audit third-party data. You should consult other sources to evaluate accuracy.

Intel technologies may require enabled hardware, software, or service activation. No product or component can be absolutely secure. Your costs and results may vary.

© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.
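To make the pipeline stages in the Megh post above concrete, here is a minimal, hypothetical sketch of the ingestion-to-inference path for a single camera stream, written against OpenCV for capture and the OpenVINO™ Runtime C++ API for detection. The camera URL, model file name, NHWC u8 input layout, SSD-style output format, and confidence threshold are illustrative assumptions, not details of Megh’s implementation.

```cpp
// Hypothetical single-stream ingestion -> inference sketch (not Megh's code).
// Assumes OpenCV for capture and the OpenVINO Runtime 2.x C++ API.
#include <opencv2/opencv.hpp>
#include <openvino/openvino.hpp>
#include <iostream>

int main() {
    // Video Ingestion: open a single camera stream (URL is a placeholder).
    cv::VideoCapture cap("rtsp://camera-01.example/stream");
    if (!cap.isOpened()) {
        std::cerr << "Failed to open stream\n";
        return 1;
    }

    // Object Detection and Inference: compile a detection model for the CPU.
    // Model name, NHWC u8 input, and SSD-style [1,1,N,7] output are assumptions.
    ov::Core core;
    ov::CompiledModel compiled =
        core.compile_model(core.read_model("person-detection.xml"), "CPU");
    ov::InferRequest request = compiled.create_infer_request();

    const size_t in_h = 320, in_w = 544;  // assumed model input resolution

    cv::Mat frame, resized;
    while (cap.read(frame)) {
        // Video Transformation: resize each frame to the model's input shape.
        cv::resize(frame, resized,
                   cv::Size(static_cast<int>(in_w), static_cast<int>(in_h)));

        ov::Tensor input(ov::element::u8, {1, in_h, in_w, 3}, resized.data);
        request.set_input_tensor(input);
        request.infer();

        // Video Analytics: count detections above a confidence threshold.
        ov::Tensor output = request.get_output_tensor();
        const float* det = output.data<float>();
        const size_t num = output.get_shape()[2];
        int people = 0;
        for (size_t i = 0; i < num; ++i) {
            if (det[i * 7 + 2] > 0.5f) ++people;  // field 2 = confidence
        }

        // Visualization/alerting would consume these results downstream.
        std::cout << "detections above threshold: " << people << "\n";
    }
    return 0;
}
```

In a deployment that scales toward 150+ streams, each stage would typically run as its own pipelined, independently scalable service, with the analytics and visualization stages consuming the detection results downstream.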
BittWare announces IA-420F PCIe accelerator card and IA-220-U2 computational storage accelerator based on Intel® Agilex™ FPGAs and SoC FPGAs

BittWare has added two new accelerator products based on Intel® Agilex™ FPGAs and SoC FPGAs to the previously announced IA-840F Enterprise-Class FPGA Accelerator. (See “BittWare IA-840F FPGA Accelerator PCIe Card bristles with high-speed I/O, is based on an Intel® Agilex™ FPGA.”) The new IA-420F, a half-height, half-length, single-width PCIe card with multiple 100G network ports, is designed for SmartNIC and computational storage applications, while the IA-220-U2, a computational storage processor in a U.2 form factor, is specifically designed to accommodate NVMe computational storage workloads. Both cards join BittWare’s existing line of workload acceleration products based on Intel Agilex FPGAs and FPGA SoCs.

BittWare provides all three FPGA-based accelerators with application reference designs and additional support for the Intel® oneAPI programming model, which gives hardware developers the ability to create domain-specific FPGA platforms and allows application developers to build cross-architecture, single-source compilation designs. (A brief single-source illustration appears at the end of this post.)

According to BittWare, the IA-420F features a PCIe Gen4 x16 host interface that provides customers with as much as 2X the bandwidth normally available for FPGA-augmented systems, making it a compelling resource for accelerating data analytics workloads, while the IA-220-U2 can serve as a deployment-friendly computational storage processor in a U.2 NVMe storage array rack.

For more information about these accelerators, please contact BittWare directly. For more information about the broad and growing line of Intel Agilex FPGA and SoC devices, click here, and be sure to read “Breakthrough FPGA News from Intel.”

Notices & Disclaimers

Intel technologies may require enabled hardware, software or service activation. No product or component can be absolutely secure. Intel does not control or audit third-party data. You should consult other sources to evaluate accuracy. Your costs and results may vary.

© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.
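As a rough illustration of the single-source, cross-architecture idea behind the oneAPI programming model mentioned in the BittWare post above, the SYCL/DPC++ sketch below runs one kernel on whatever device the runtime selects; the same source can be rebuilt for a CPU, a GPU, or an FPGA target. It is a generic example, not a BittWare reference design, and the device-selection choice is an assumption for illustration.

```cpp
// Generic single-source SYCL/DPC++ sketch: the same kernel source can target
// a CPU, GPU, or FPGA (or the FPGA emulator) by changing the queue's device.
// Illustrative only; not a BittWare reference design.
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
    // Use the default device; with the Intel FPGA add-on, an explicit FPGA
    // selector could be passed to the queue constructor instead.
    sycl::queue q;
    std::cout << "Running on: "
              << q.get_device().get_info<sycl::info::device::name>() << "\n";

    std::vector<float> a(1024, 1.0f), b(1024, 2.0f), c(1024, 0.0f);
    {
        sycl::buffer<float> bufA(a), bufB(b), bufC(c);
        q.submit([&](sycl::handler& h) {
            sycl::accessor A(bufA, h, sycl::read_only);
            sycl::accessor B(bufB, h, sycl::read_only);
            sycl::accessor C(bufC, h, sycl::write_only);
            // The same element-wise kernel compiles for any supported target.
            h.parallel_for(sycl::range<1>(1024),
                           [=](sycl::id<1> i) { C[i] = A[i] + B[i]; });
        });
    }  // buffers synchronize back to the host vectors here
    std::cout << "c[0] = " << c[0] << "\n";  // expect 3
    return 0;
}
```

Retargeting the kernel to a different accelerator is largely a matter of selecting a different device and recompiling, which is the point of the single-source model.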
Intel at MWC: Panel Discusses “How is the O-RU reference architecture driving 5G Radio Development?”

Telecom operators need to reduce development time while implementing new solutions that increase the performance and reliability of 5G networks in a cost-effective manner. Operators have already implemented open systems and virtualized functions in their network cores, and now they want to achieve the same benefits in their RANs, including Radio Units (RUs). The Open RU (O-RU) roadmap covers both traditional macro radios and Massive MIMO, and it addresses key challenges including overall design cost reduction and accelerated time-to-market without sacrificing system-level power and performance.

A 30-minute panel discussion about O-RU developments, titled “How is the O-RU reference architecture driving 5G Radio Development?” and part of the Intel® Network & Edge Panel Series, discusses the key digital, analog, RF technology, and business considerations that will enable radio ODMs, CMs, and System Integrators to offer flexible and scalable end-to-end RAN solutions.

Panel participants include:

- Tero Kola, VP, System Product Management, Mobile Networks Business Group, Nokia
- Francisco (Paco) Martin, Group Head of OPEN RAN, Vodafone
- Rajesh Srinivasa, SVP and GM, Radio Business Unit, Mavenir
- Mike Fitton, VP, Intel Programmable Solutions Network Business Division, Intel Corporation
- Nitin Sharma, GM, Wireless Communications, Analog Devices, Inc. (ADI)
- Moderator: Guy Daniels, Director of Content, TelecomTV

Interested? Click here to watch the panel discussion. Click here to see all of the panel discussions in the Intel® Network & Edge Panel Series. Click here to see all the Intel events at Mobile World Congress.

Notices & Disclaimers

Intel technologies may require enabled hardware, software or service activation. No product or component can be absolutely secure. Intel does not control or audit third-party data. You should consult other sources to evaluate accuracy. Your costs and results may vary.

© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.
Intel® Agilex™ FPGAs with high-performance Crypto Blocks target Infrastructure Processing Units (IPUs), SmartNICs, and 5G Networks

From the edge to the cloud, security challenges in the form of cyberattacks and data breaches loom ever larger as attacks on high-speed networks multiply. Massive amounts of data are at risk, and so are physical resources, including critical infrastructure. Cryptography and authentication represent potent countermeasures to these attacks. The latest members of the Intel® Agilex™ FPGA and SoC FPGA families (AGF023/AGF019 and AGI023/AGI019) now feature high-performance crypto blocks paired with MACsec soft IP to help mitigate the risks and limit the effects of these cyberattacks.

Here’s a diagram showing the major features of these new Intel Agilex FPGAs with high-performance crypto blocks:

A new White Paper titled “Intel® Agilex™ FPGAs target IPUs, SmartNICs, and 5G Networks” describes these Intel Agilex FPGAs with high-performance crypto blocks in more detail.

For more information about Infrastructure Processing Units (IPUs), see:

- "Infrastructure Processing Units (IPUs) intelligently manage and accelerate data center infrastructure"
- "Want to know more about how Infrastructure Processing Units (IPUs) are revolutionizing data center architecture? A 4-minute video provides clarity"

For more information about the broad and growing line of Intel Agilex FPGA and SoC devices, click here.

Notices & Disclaimers

Intel technologies may require enabled hardware, software or service activation. No product or component can be absolutely secure. Your costs and results may vary.

© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.
Principal Industry Analyst Patrick Moorhead Explores Infrastructure Processing Units (IPUs) on Forbes.com

Navin Shenoy, Executive VP and GM for the Data Platforms Group at Intel, presented a vision for Infrastructure Processing Units (IPUs) earlier this week in a keynote speech titled "Why You Have Been Thinking About the Future of the Data Center All Wrong" at The Six Five Summit, a strategy event presented by Patrick Moorhead’s company, Moor Insights & Strategy, and Futurum Research. That same day, Moorhead published a detailed article about Shenoy’s keynote and IPUs titled “Intel Announces The 'Infrastructure Processing Unit' At The Six Five Summit 2021” on Forbes.com.

In the article, Moorhead wrote: “So, what is an IPU? An IPU is designed to offload the main CPU in the datacenter and edge, which gives more predictable and efficient application performance and enables improved virtualization capabilities that CSPs and carriers are seeking.”

A bit later in the article, he wrote: “Micro-services are a way to have your cake and eat it too, as you can run them at the most efficient place but keep a sense of management through orchestration. That orchestration, as I will dig into later, creates challenges that an IPU can help solve.”

He discusses some alternatives and then asks: “But what makes Intel’s IPU different?” And he immediately answers his own question: “Intel knows the data center and says it’s the only IPU built in collaboration with hyperscale cloud partners. This would mean that Intel would be able to actively innovate and deliver a product that is already addressing real-world problems. Technically, if you equate its SmartNIC as IPU, Intel is already the IPU volume leader in the IPU market with Xeon-D, FPGA and Ethernet components.”

There’s more detail in the Forbes.com article. Click on the link above to read it in full.

For more information about Infrastructure Processing Units (IPUs), see “Infrastructure Processing Units (IPUs) intelligently manage and accelerate data center infrastructure” and “Want to know more about how Infrastructure Processing Units (IPUs) are revolutionizing data center architecture? A 4-minute video provides clarity.”

Notices & Disclaimers

Intel technologies may require enabled hardware, software or service activation. No product or component can be absolutely secure. Your costs and results may vary.

© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.
Want to know more about how Infrastructure Processing Units (IPUs) are revolutionizing data center architecture? A 4-minute video provides clarity

Yesterday, Intel unveiled its vision for infrastructure processing units (IPUs), programmable networking devices that free up CPU cycles by taking over and intelligently managing system-level infrastructure in data centers. (See “Infrastructure Processing Units (IPUs) intelligently manage and accelerate data center infrastructure.”) The company also announced that IPUs based on Intel® CPUs and Intel® FPGAs are already shipping from Intel and Intel Partners. You can find out more about IPUs from Intel and Intel Partners by clicking here.

If you’d like a deeper dive into IPUs and their application, Guido Appenzeller, chief technology officer with Intel's Data Platforms Group, has made a 4-minute video giving a concise and easily understood explanation of the IPU concept. You can watch the video here.

Notices & Disclaimers

Intel technologies may require enabled hardware, software or service activation. No product or component can be absolutely secure. Your costs and results may vary.

© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.
Infrastructure Processing Units (IPUs) intelligently manage and accelerate data center infrastructure

Today during the virtual Six Five Summit, Intel unveiled its vision for infrastructure processing units (IPUs), programmable networking devices designed to enable cloud and communication service providers (CSPs and CoSPs) to reduce overhead and free up CPU cycles by taking over and intelligently managing system-level infrastructure in data centers. IPUs help data centers better utilize resources by providing a secure, programmable, stable solution that helps to balance processing and storage resource demands.

Guido Appenzeller, Chief Technology Officer of the Intel Data Platforms Group, said: “The IPU is a new category of technologies and is one of the strategic pillars of our cloud strategy. It expands upon our SmartNIC capabilities and is designed to address the complexity and inefficiencies in the modern data center. At Intel, we are dedicated to creating solutions and innovating alongside our customer and partners — the IPU exemplifies this collaboration.”

IPUs can:

- Accelerate infrastructure functions – including storage virtualization, network virtualization and security – using dedicated protocol accelerators.
- Free up CPU cores by offloading storage and network virtualization functions previously performed by software running on CPUs.
- Improve data center utilization by allowing more flexible workload placement.
- Enable CSPs to customize infrastructure function deployments quickly, using software alone.

“As a result of Intel’s collaboration with a majority of hyperscalers, Intel is already the volume leader in the IPU market with our Xeon-D, FPGA and Ethernet components,” said Patty Kummrow, vice president in the Data Platforms Group and general manager of Ethernet Products Group at Intel.

Intel and Intel Partners will continue to roll out FPGA-based IPU platforms and dedicated ASICs that leverage the powerful software foundation that Intel has already developed.

For more information, see “Intel Unveils Infrastructure Processing Unit.” For more information about FPGA-based IPUs from Intel and Intel Partners, click here.

Notices & Disclaimers

Intel technologies may require enabled hardware, software or service activation. No product or component can be absolutely secure. Your costs and results may vary.

© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.
Learn to use Data Parallel C++ to accelerate MapReduce processing at this ISC oneAPI Dev Summit session on June 23

Many workloads have inherent data parallelism that can be leveraged to achieve optimal performance. However, it is challenging to design data parallel programs and map them to different hardware targets. Data Parallel C++ (DPC++) is an open alternative for cross-architecture development that aims to address this challenge.

A session titled “Word-Count with MapReduce on FPGA, A DPC++ Example” at the upcoming ISC oneAPI Dev Summit, a two-day live virtual conference, discusses the MapReduce distributed programming model for large datasets and how to accelerate MapReduce processing using FPGAs and DPC++. (A minimal DPC++ word-count sketch appears at the end of this post.) The tutorial will be presented by Dr. Yan Luo, a Professor in the Department of Electrical and Computer Engineering at the University of Massachusetts Lowell. Dr. Luo’s research spans computer architecture, machine learning, and data analytics. He teaches undergrad and graduate courses on topics such as embedded systems and heterogeneous computing.

To register for the oneAPI Dev Summit (June 22-23) and Dr. Luo’s DPC++ tutorial (June 23), click here.

For more information about DPC++, see:

- Intel’s ‘One API’ Project Delivers Unified Programming Model Across Diverse Architectures
- Springer and Intel publish new book on DPC++ parallel programming, and you can get a free PDF copy!
- Free Webinar: Accelerate FPGA Programming using Data Parallel C++ (DPC++) and the Intel® oneAPI Base Toolkit with the Intel® FPGA Add-on

Notices & Disclaimers

Intel technologies may require enabled hardware, software or service activation. No product or component can be absolutely secure. Your costs and results may vary.

© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.
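As a flavor of the technique the session covers, here is a minimal, hypothetical DPC++/SYCL sketch of word count: the "map" step marks word starts in parallel, and a SYCL reduction performs the "reduce" step. It illustrates the general MapReduce idea in DPC++ under these assumptions and is not material from Dr. Luo's tutorial; an FPGA build would additionally use the Intel FPGA add-on and an FPGA target.

```cpp
// Minimal DPC++/SYCL word-count sketch: a "map" over characters marks word
// starts, and a SYCL reduction "reduces" the marks to a total count.
// Illustrative only; not taken from the ISC oneAPI Dev Summit tutorial.
#include <sycl/sycl.hpp>
#include <algorithm>
#include <iostream>
#include <string>

int main() {
    const std::string text = "the quick brown fox jumps over the lazy dog";
    const size_t n = text.size();

    sycl::queue q;  // default device; assumes it supports shared USM
    char* data = sycl::malloc_shared<char>(n, q);
    unsigned* total = sycl::malloc_shared<unsigned>(1, q);
    std::copy(text.begin(), text.end(), data);
    *total = 0;

    q.parallel_for(
         sycl::range<1>(n),
         sycl::reduction(total, sycl::plus<unsigned>()),
         [=](sycl::item<1> it, auto& sum) {
             const size_t i = it.get_linear_id();
             // Map: a word starts at a non-space character that is either at
             // position 0 or preceded by a space. Reduce: sum the word starts.
             const bool start =
                 (data[i] != ' ') && (i == 0 || data[i - 1] == ' ');
             if (start) sum += 1u;
         })
        .wait();

    std::cout << "word count = " << *total << "\n";  // expect 9

    sycl::free(data, q);
    sycl::free(total, q);
    return 0;
}
```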