NASA’s Perseverance Mars Rover Mission Team relies on a 4K video collaboration system based on Crestron DM NVX AV-over-IP streaming media devices
Last month, NASA’s Perseverance rover landed successfully in Jezero Crater on Mars and has taken its first test drive. You may have seen video of the mission team at NASA Jet Propulsion Laboratory (JPL) standing up and cheering when news of the successful landing arrived here on Earth.

The Mars 2020 Perseverance Mission Team is the first at NASA JPL to use a new high-resolution, multi-mission AV collaboration system developed by Amplified Design LLC, which was completed in time for the Mars 2020 Perseverance Rover launch in July of 2020. This AV collaboration system spans 45 new collaboration areas across multiple floors at NASA JPL’s Space Flight Operations Facility in Pasadena, California.

The system is based on nearly 200 Crestron DM NVX streaming media devices that provide secure AV encoding and decoding for the transport of real-time 4K video streams over standard Gigabit Ethernet. The DM NVX backbone distributes video to both large and small work rooms as well as to mission support areas for anywhere-to-anywhere video sharing. Crestron developed the DM NVX streaming media devices using Intel® Arria® 10 SoC FPGAs.

Click here to see the Amplified Design case study describing the NASA JPL AV collaboration system. Click here for more information about Crestron DM NVX AV-over-IP streaming media devices. Click here for more information about Intel Arria 10 FPGAs and SoC FPGAs.

Notices & Disclaimers
Intel technologies may require enabled hardware, software or service activation. No product or component can be absolutely secure. Your costs and results may vary. Intel does not control or audit third-party data. You should consult other sources to evaluate accuracy.
© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.

Gold Release of Intel® oneAPI Toolkits arrives in December: One Programming Model for a Heterogeneous World of CPUs, GPUs, and FPGAs
This week at the Supercomputing 2020 (SC20) conference, Intel announced that the gold release of the Intel® oneAPI toolkits will become available next month. The oneAPI industry initiative is creating a unified and simplified cross-architecture programming model that delivers uncompromised performance without proprietary lock-in, while allowing you to integrate legacy code.

Intel oneAPI toolkits allow you to create code for CPUs and XPUs (the term “XPU” means “other processing units”) – such as GPUs based on the Intel® Xe architecture and Intel® FPGAs including Intel® Arria® and Intel® Stratix® FPGAs – within a unified programming environment. With oneAPI, you choose the best processing architecture for the specific problem you’re solving without needing to rewrite software.

Intel oneAPI toolkits take full advantage of cutting-edge hardware capabilities and instructions built into Intel® CPUs, including Intel® AVX-512 SIMD instruction extensions and Intel® DL Boost, along with features unique to Intel® XPUs. Built on long-standing and proven Intel developer tools, Intel oneAPI toolkits support familiar languages and software standards while providing full continuity with existing code.

The gold release of Intel oneAPI toolkits will start shipping in December. They will be available for free, to run locally and in the Intel® DevCloud. Commercial versions that include worldwide support from Intel technical consulting engineers will also be offered. For more information about this and other Intel SC20 announcements, click here.
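The announcement itself doesn’t include code, but a minimal DPC++ (SYCL) sketch illustrates the single-source idea: the same kernel can be dispatched to a CPU, a GPU, or an FPGA by changing only the device selector. The vector-add kernel, names, and sizes below are illustrative assumptions, not material from the Intel announcement.

```cpp
// Minimal single-source DPC++ (SYCL) vector add. Illustrative only: the kernel,
// names, and sizes are assumptions, not code from the Intel announcement.
#include <CL/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
  using namespace cl::sycl;
  constexpr size_t N = 1024;
  std::vector<float> a(N, 1.0f), b(N, 2.0f), c(N, 0.0f);

  // default_selector picks an available device; swapping in cpu_selector,
  // gpu_selector, or an FPGA selector retargets the same kernel.
  queue q{default_selector{}};
  std::cout << "Running on: "
            << q.get_device().get_info<info::device::name>() << "\n";

  {
    buffer<float, 1> bufA(a.data(), range<1>(N));
    buffer<float, 1> bufB(b.data(), range<1>(N));
    buffer<float, 1> bufC(c.data(), range<1>(N));

    q.submit([&](handler& h) {
      auto A = bufA.get_access<access::mode::read>(h);
      auto B = bufB.get_access<access::mode::read>(h);
      auto C = bufC.get_access<access::mode::write>(h);
      h.parallel_for<class vector_add>(
          range<1>(N), [=](id<1> i) { C[i] = A[i] + B[i]; });
    });
  }  // buffers go out of scope here, copying results back to the host vectors

  std::cout << "c[0] = " << c[0] << "\n";  // expect 3
  return 0;
}
```

In principle, retargeting this code means choosing a different device rather than rewriting the kernel, which is the point of the unified programming model described above.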
Notices & Disclaimers
Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex. Performance results are based on testing as of dates shown in configurations and may not reflect all publicly available updates. See backup for configuration details. No product or component can be absolutely secure. Your costs and results may vary. Intel technologies may require enabled hardware, software or service activation.
© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.

The Next Platform discusses the latest Intel Networking Innovations including new Intel® SmartNICs based on Intel® FPGAs

Last month, Intel introduced several FPGA-based networking innovations, including the Intel® FPGA SmartNIC C5000X platform architecture – designed to meet the needs of Cloud Service Providers. Also announced: the Inventec FPGA SmartNIC C5020X, based on the Intel FPGA SmartNIC C5000X platform, and the Silicom FPGA SmartNIC N5010, a hardware-programmable 4x100G FPGA SmartNIC that combines an Intel® Stratix® 10 DX FPGA with an Intel® Ethernet 800 series adapter. (See “Intel and partners announce high-performance SmartNICs that deliver programmable network acceleration for cloud data centers and communications infrastructure.”)

The Next Platform – an online publication dedicated to covering high-end computing at large enterprises, supercomputing centers, hyperscale data centers, and public clouds – discusses these and other network-related introductions from Intel in an article titled “Intel Networking: Not Just A Bag Of Parts,” written by the publication’s co-editor Timothy Prickett Morgan. The article features quotes from an interview with Hong Hou, Corporate Vice President and General Manager of the Connectivity Group at Intel, and it offers extensive analysis of the announced Intel networking innovations and offerings.

Early in the article, Prickett Morgan had this to say:

“Here is the reality: Companies will splurge on compute, by which we generally mean CPUs but increasingly GPUs and occasionally FPGAs, rather than memory or networking because they understand it better. Here is another fact that cannot be bargained with: The amount of data that is being shuttled around by networks in the datacenter is growing at 25 percent per year. But budgets cannot grow at that rate…

“…it absolutely is desirable to have Intel be in the networking business and bring what it knows about the datacenter to bear.”

Prickett Morgan then quotes from his interview with Hou:

“For Intel, we want to provide intelligence and programmability in a flexible network that can handle the complexity of the emerging workloads. Our vision is to optimize all of these technology assets to provide enabling solutions for our customers – we are not just a supplier of a bag of parts.”

Although the scope of these Intel offerings is quite wide and therefore hard to sum up in a few words, Prickett Morgan makes the effort. With respect to the FPGA-based introductions made last month, he writes:

“What is interesting here is that the new Intel SmartNICs are not actually made by Intel, but by Inventec and Silicom, the former being an increasingly important ODM for hyperscalers and cloud builders and the latter being a network interface supplier for the past two decades. These devices sit on a spectrum along with the pure FPGA acceleration card, called the Programmable Acceleration Card, which we talked about when it first debuted with the [Intel® Arria® 10 FPGA] in October 2017 and when it was updated in September 2018 with the [Intel] Stratix 10 FPGA.”

To read Timothy Prickett Morgan’s full article in The Next Platform, click here. For an associated editorial about these networking topics written by Hong Hou, see “Smart, Programmable Interconnects Unleash Compute Performance” in the Intel Newsroom. For more information about the October 15 Intel Networking event, click here.

Notices & Disclaimers
Intel technologies may require enabled hardware, software or service activation. No product or component can be absolutely secure. Your costs and results may vary. Intel does not control or audit third-party data.
You should consult other sources to evaluate accuracy.
© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.

Intel and University of Massachusetts Lowell pilot FPGA learning for students in the Intel® DevCloud. You can now use these resources too, for free
The COVID-19 pandemic has disrupted everyone’s lives in ways we could never have imagined. For example, the most basic methods of teaching and learning based on group settings are no longer easy or even possible in many locations. The challenge is to deliver high-quality instruction while dealing with the reality of group-learning restrictions. Like all educational institutions, universities are currently struggling to meet their students’ needs for remote instruction and hands-on experience without access to traditional on-campus environments or shared lab equipment.

The Intel Programmable Solutions Group’s Academic Program Team decided to help its university partners by developing a new remote lab framework for FPGA development based on the existing Intel® DevCloud service, which provides remote access to servers configured with 6th to 8th generation Intel® Core™ processors and Intel® FPGA Programmable Acceleration Cards (Intel® FPGA PACs) based on Intel® Stratix® 10 and Intel® Arria® 10 FPGAs. The Academic Program Team added a new remote Intel FPGA Dev Kit experience to these existing Intel DevCloud resources.

The Intel DevCloud also provides access to a wide variety of development tools, including:
- Intel® Quartus® Prime Pro Edition design software
- ModelSim
- Intel® FPGA SDK for OpenCL™ software technology
- Intel® Distribution of OpenVINO™ toolkit
- Acceleration Stack for Intel® Xeon® CPU with FPGAs
- Intel® C++ Compiler

In addition, the Intel DevCloud provides remote access to frameworks such as the TensorFlow machine learning library and the Data Plane Development Kit (DPDK), which consists of libraries used to accelerate packet processing workloads.

Last semester, the University of Massachusetts Lowell (UML) and Intel piloted this newly enriched teaching framework. Dr. Yan Luo, an associate professor at UML, said: “By using the FPGA Cloud framework for remote labs, it has eliminated staff support and procurement cost to maintain the local development environment, and given us access to the latest and diverse FPGA hardware and development tools with 24x7 availability of the dev environment to students. It has also improved student productivity, flattened the learning curve of new technologies, and lowered the barrier for innovation. This has been invaluable to address the challenges of COVID-19.”

Using the Intel DevCloud, graduate students studying heterogeneous computing can remotely access high-end servers based on Intel CPUs and Intel FPGA PACs to run lab exercises. Students in undergraduate labs now have access to Intel® Quartus® Prime Pro design software and can interact with Intel Dev Kits hosted remotely in the Intel DevCloud.

These upgraded FPGA capabilities in the Intel DevCloud are not restricted to university students and faculty. Anyone can now access these tools and learn to use them, for free. For more information, click here.

Notices & Disclaimers
Intel technologies may require enabled hardware, software or service activation. No product or component can be absolutely secure. Your costs and results may vary.
© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.

Reinventing Storage for Social Networks with Intel® Xeon® Gold CPUs, Intel® Optane™ persistent memory, Intel® SSDs, and the Intel® PAC with Intel® Arria® 10 GX FPGA
Social networking is hugely data intensive. Consequently, data storage consumes a significant part of the budget for VK, Russia’s largest social network. In 2019, VK had 97 million monthly active users.[1] Every day, users view 9 billion posts and 650 million videos, and they exchange 10 billion messages.[2] They tap the “like” button a billion times a day.[1] Over the course of a year, VK users upload hundreds of petabytes of new data, including photos and videos.[2]

VK has modernized its tiered storage using Intel® Xeon® Gold CPUs, Intel® Optane™ persistent memory, Intel® Optane™ SSDs, Intel® SSDs with non-volatile memory express (NVMe), and Intel® FPGA Programmable Acceleration Cards (Intel® PACs) with Intel® Arria® 10 GX FPGAs. As a result, VK expects to realize significant financial savings while improving performance. More specifically:

- Upgrading the server processor from the Intel Xeon Gold 6230 processor to the Intel Xeon Gold 6238R processor cut the compute cost by 40 percent and improved performance per watt by 72 percent,[2] according to VK.
- VK introduced Intel Optane persistent memory for the rating counter servers that support the newsfeed, migrating data away from more expensive DRAM.
- VK upgraded the storage for frequently accessed data in its content delivery network (CDN) to Intel SSDs with 3D NAND technology, and moved the most frequently used data to Intel Optane SSDs.
- VK is deploying Intel PACs with Intel Arria 10 GX FPGAs to reduce storage requirements and provide faster image conversion, using the FPGA-based, high-performance CTAccel image-processing accelerator to convert images on the fly from a stored, high-resolution master as they are requested by users. VK’s image storage strategy eliminates the need to store multiple versions of the same image at different resolutions and saves storage space.

VK reported that it was able to consolidate servers at a ratio of 2:1 using the new storage solution while supporting continued data growth with storage of up to 0.408 PB in 1U servers and reducing power and cooling costs.[2]

A new, detailed case study titled “Reinventing Storage for Social Networks with Intel® Technology“ is now available for download on the Intel Web site. Click on the link to review this new case study and to download the PDF.

Intel’s silicon and software portfolio empowers our customers’ intelligent services from the cloud to the edge.
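To make the storage-saving idea concrete, here is a rough CPU-side sketch of request-time image resizing: a single high-resolution master is stored, and smaller renditions are produced only when a client asks for them. It is purely illustrative; VK’s production pipeline performs this conversion on the FPGA card with the CTAccel accelerator, and the structure, names, and nearest-neighbor filter below are assumptions rather than anything from the case study.

```cpp
// Sketch of "resize on request": keep one high-resolution master and derive
// smaller renditions on demand instead of storing every size ahead of time.
// Illustration of the concept only -- not VK's or CTAccel's actual code.
#include <cstdint>
#include <vector>

struct Image {
  int width = 0;
  int height = 0;
  std::vector<uint8_t> rgb;  // 3 bytes per pixel, row-major
};

// Nearest-neighbor downscale of a stored master to the requested size.
Image render_for_request(const Image& master, int out_w, int out_h) {
  Image out;
  out.width = out_w;
  out.height = out_h;
  out.rgb.resize(static_cast<size_t>(out_w) * out_h * 3);
  for (int y = 0; y < out_h; ++y) {
    int sy = y * master.height / out_h;  // nearest source row
    for (int x = 0; x < out_w; ++x) {
      int sx = x * master.width / out_w;  // nearest source column
      const uint8_t* src =
          &master.rgb[(static_cast<size_t>(sy) * master.width + sx) * 3];
      uint8_t* dst = &out.rgb[(static_cast<size_t>(y) * out_w + x) * 3];
      dst[0] = src[0];
      dst[1] = src[1];
      dst[2] = src[2];
    }
  }
  return out;  // only the master is persisted; this rendition is transient
}
```

The storage saving comes from persisting only the master; the cost is the per-request conversion work, which is exactly the part VK offloads to the FPGA accelerator.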
Notices & Disclaimers
[1] Data from https://vk.com/about
[2] These results were reported to Intel by VK based on configurations that included the listed Intel components. Tests carried out by VK April-November 2019. Configurations:
- OLD CDN servers: 2 x Intel® Xeon® processor E5-2670 v2 or 2 x Intel® Xeon® processor E5-2680 v4, SATA SSDs, DRAM, 2 x 10 Gb/s Ethernet cards
- NEW CDN servers: 2 x Intel® Xeon® Gold 6238R processors or 2 x Intel® Xeon® Gold 6230, Intel® Optane™ SSD DC P4800X SSDs, 6 x Intel® SSD D5-P4320, DRAM, SATA SSD, 2 x 25 Gb/s Intel® Ethernet Adapter XXV710-DA2. Software: cache_api, nginx, Automated Certificate Management Environment (ACME).
- OLD rating counters servers: 2 x Intel® Xeon® processor E5-2680 v4, SATA SSD (boot device), DRAM, 2 x 10 Gb/s Ethernet cards
- NEW rating counters servers: 2 x Intel® Xeon® Gold 6238R processors or 2 x Intel® Xeon® Gold 6230, 12 x Intel® Optane™ persistent memory, DRAM, SATA SSDs (boot device), 2 x 25 Gb/s Intel® Ethernet Adapter XXV710-DA2. Software: customized version of Memcached.
- Microcode for Intel® Xeon® Gold 6230: 0x500002c; Microcode for Intel® Xeon® Gold 6238R: 0x0500002f
Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies depending on system configuration. No product or component can be absolutely secure. Check with your system manufacturer or retailer or learn more at www.intel.com.
Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information visit www.intel.com/benchmarks. Performance results may not reflect all publicly available security updates.
Optimization Notice: Intel’s compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice. Notice Revision #20110804.
Intel does not control or audit third-party data. You should consult other sources to evaluate accuracy. Your costs and results may vary. Intel disclaims all express and implied warranties, including without limitation, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement, as well as any warranty arising from course of performance, course of dealing, or usage in trade.
© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.

Analog Devices and Intel collaborate on O-RAN compliant 5G development platform, melding ADI’s RF and Intel® FPGA technologies
Today, Analog Devices, Inc. (ADI) announced that it is collaborating with Intel to create a flexible 5G radio platform. This new O-RAN compliant platform combines the advanced technologies of ADI’s RF transceivers with advanced digital front end (DFE) functionality and Intel® Arria® 10 FPGAs to create a new set of design tools that address 5G network design and scaling challenges. In short, this new platform will help 5G developers tackle their design challenges quickly and economically.

“This new radio platform reduces the overall cost of design and quickens our customers’ time to market without sacrificing system-level performance,” said Joe Barry, Vice President of the Wireless Communications Business Unit at ADI.

“We look forward to working with ADI to expedite hardware development by offering FPGA platforms that are flexible to meet changing requirements, are easy to use, and remove many of the complex barriers of RF and digital product development,” said CC Chong, Senior Director, Head of Wireless & Access, Programmable Solutions Group at Intel.

The O-RAN Alliance is committed to evolving radio access networks around the world, based on a foundation of virtualized network elements, white-box hardware, and standardized interfaces that fully embrace the principles of intelligence and openness.

For a related blog, see: “Arrow Electronics™ develops four complete RF reference platforms based on the Intel® Arria® 10 SoC FPGA and Analog Devices high-speed RF transceivers, ADCs, and DACs.”

Intel’s silicon and software portfolio empowers our customers’ intelligent services from the cloud to the edge.

Notices & Disclaimers
Intel does not control or audit third-party data. You should consult other sources to evaluate accuracy. Intel technologies may require enabled hardware, software or service activation. No product or component can be absolutely secure. Your costs and results may vary.
© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.

The iAbra PathWorks toolkit brings embedded AI inference and real-time video recognition to the edge with Intel® Arria® 10 FPGAs, Intel® Xeon® Gold CPUs, and Intel® Atom® processors
It’s not always easy to get data to the cloud. Multi-stream computer vision applications, for example, are extremely data intensive and can overwhelm even 5G networks. A company named iAbra has created tools that build neural networks that run on FPGAs in real time, so that inference can be carried out at the edge in small, light, low-power embedded devices rather than in the cloud. Using what-you-see-is-what-you-get (WYSIWYG) tools, iAbra’s PathWorks toolkit creates neural networks that run on an Intel® Atom® x7-E3950 processor and an Intel® Arria® 10 FPGA in the embedded platform. The tools themselves run on an Intel® Xeon® Gold 6148 CPU to create the neural networks.

From a live video stream, artificial intelligence (AI) can detect, for example, how many people are standing in a bus queue, which modes of transport people are using, and where there is flooding or road damage. In exceptional circumstances, AI can also alert emergency services if vehicles are driving against the traffic flow or if pedestrians have suddenly started running. Collecting reliable, real-time data from the streets and compressing it through AI inference makes it far easier to manage resources and to improve quality of life, productivity, and emergency response times in Smart Cities.

To be effective, these vision applications must process a huge amount of data in real time. A single HD stream generates 800 to 900 megabits of streaming video data per second. That’s per camera. Although broadband 5G networks deliver more bandwidth and can greatly increase the device density within geographic regions, broadly and densely distributed armadas of video cameras still risk overwhelming these networks. The solution to this bandwidth constraint is to incorporate real-time AI inference at the network edge so that only the processed, essential information is sent to the cloud. That sort of processing requires an embedded AI device that can withstand the harsh environments and resource constraints found on the edge.

iAbra has approached the problem of building AI inference into embedded devices by mimicking the human brain using FPGAs. Usually, image recognition solutions map problems to generic neural networks, such as ResNet. However, such networks are too big to fit into many FPGAs destined for embedded use. Instead, iAbra’s PathWorks toolkit constructs a new, unique neural network for each problem, which is tailored and highly optimized for the target FPGA architecture where it will run. In this case, the target architecture is an Intel Arria 10 FPGA.

“We believe the Intel Arria 10 FPGA is the most efficient part for this application today, based on our assessment of the performance per watt,” said iAbra’s CTO Greg Compton.

“The embedded platform also incorporates the latest generation Intel Atom processor, which provides a number of additional instructions for matrix processing over the previous generation. That makes it easier to do vector processing tasks. When we need to process the output from the neural network, we can do it faster with instructions that are better attuned to the application,” Compton explains.
He adds: “A lot of our customers are not from the embedded world. By using Intel Atom processors, we enable them to work within the tried and tested Intel® architecture stack they know.” Similarly, said Compton: “We chose the Intel Xeon Gold 6148 processor for the network creation step as much for economics as performance.”

iAbra developed this solution using OpenCL, a programming framework that makes FPGA programming more accessible by using a language similar to C, enabling code portability across different types of processing devices. iAbra also uses Intel® Quartus® Prime Software for FPGA design and development and the Intel® C++ Compiler to develop software. The company has incorporated Intel® Math Kernel Library (Intel® MKL), which provides optimized code for mathematical operations across a range of processing platforms.

Compton continues: “With Intel MKL, Intel provides highly optimized shortcuts to a lot of low-level optimizations that really help our programmer productivity. OpenCL is an intermediate language that enables us to go from the high level WYSIWYG world to the low-level transistor bitmap world of FPGAs. We need shortcuts like these to reduce the problem domains, otherwise developing software like ours would be too big a problem for any one organization to tackle.”

iAbra participates in the Intel FPGA Partner Program and Intel® AI Builders Program, which gives the company access to the Intel® AI DevCloud. “The Intel® AI DevCloud enables us to get cloud access to the very latest hardware, which may be difficult to get hold of, such as some highly specialized Intel® Stratix® 10 FPGA boards. It gives us a place where Intel customers can come and see our framework in a controlled environment, enabling them to try before they buy. It helped us with our outreach for a Smart Cities project recently. It’s been a huge help to have Intel’s support as we refine our solution, and develop our code using Intel’s frameworks and libraries. We’ve worked closely with the Intel engineers, including helping them to improve the OpenCL compiler by providing feedback as one of its advanced users,” Compton concludes.
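iAbra’s own kernels and host code aren’t published, so the sketch below only illustrates the general Intel FPGA SDK for OpenCL flow the paragraphs above refer to: a C-like kernel is compiled offline into an FPGA bitstream, and the host program loads that binary at run time. The kernel, file names, and buffer sizes are assumptions for illustration, not iAbra’s code.

```cpp
// Generic Intel FPGA SDK for OpenCL host flow (illustrative assumptions only).
// The kernel below would be compiled offline (e.g., "aoc conv1d.cl", producing
// conv1d.aocx) and loaded at run time with clCreateProgramWithBinary().
//
//   // conv1d.cl -- a simple 1-D convolution kernel in OpenCL C
//   __kernel void conv1d(__global const float* in, __global const float* w,
//                        __global float* out, int n, int taps) {
//     int i = get_global_id(0);
//     float acc = 0.0f;
//     for (int t = 0; t < taps; ++t) acc += in[i + t] * w[t];
//     out[i] = acc;
//   }
#include <CL/cl.h>
#include <cstdio>
#include <fstream>
#include <iterator>
#include <vector>

static std::vector<unsigned char> load_file(const char* path) {
  std::ifstream f(path, std::ios::binary);
  return {std::istreambuf_iterator<char>(f), std::istreambuf_iterator<char>()};
}

int main() {
  cl_platform_id platform;
  cl_device_id device;
  cl_int err;
  clGetPlatformIDs(1, &platform, nullptr);
  // FPGA accelerator boards enumerate as CL_DEVICE_TYPE_ACCELERATOR devices.
  clGetDeviceIDs(platform, CL_DEVICE_TYPE_ACCELERATOR, 1, &device, nullptr);

  cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, &err);
  cl_command_queue q = clCreateCommandQueue(ctx, device, 0, &err);

  // Load the offline-compiled FPGA bitstream and build the program from it.
  std::vector<unsigned char> bin = load_file("conv1d.aocx");
  const unsigned char* bin_ptr = bin.data();
  size_t bin_len = bin.size();
  cl_program prog = clCreateProgramWithBinary(ctx, 1, &device, &bin_len,
                                              &bin_ptr, nullptr, &err);
  clBuildProgram(prog, 1, &device, "", nullptr, nullptr);
  cl_kernel k = clCreateKernel(prog, "conv1d", &err);

  // Allocate device buffers, copy inputs, set arguments, and launch.
  const int n = 4096, taps = 16;
  std::vector<float> in(n + taps, 1.0f), w(taps, 0.5f), out(n, 0.0f);
  cl_mem dIn  = clCreateBuffer(ctx, CL_MEM_READ_ONLY,  in.size() * sizeof(float),  nullptr, &err);
  cl_mem dW   = clCreateBuffer(ctx, CL_MEM_READ_ONLY,  w.size() * sizeof(float),   nullptr, &err);
  cl_mem dOut = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, out.size() * sizeof(float), nullptr, &err);
  clEnqueueWriteBuffer(q, dIn, CL_TRUE, 0, in.size() * sizeof(float), in.data(), 0, nullptr, nullptr);
  clEnqueueWriteBuffer(q, dW,  CL_TRUE, 0, w.size() * sizeof(float),  w.data(),  0, nullptr, nullptr);
  clSetKernelArg(k, 0, sizeof(cl_mem), &dIn);
  clSetKernelArg(k, 1, sizeof(cl_mem), &dW);
  clSetKernelArg(k, 2, sizeof(cl_mem), &dOut);
  clSetKernelArg(k, 3, sizeof(int), &n);
  clSetKernelArg(k, 4, sizeof(int), &taps);
  size_t global = n;
  clEnqueueNDRangeKernel(q, k, 1, nullptr, &global, nullptr, 0, nullptr, nullptr);
  clEnqueueReadBuffer(q, dOut, CL_TRUE, 0, out.size() * sizeof(float), out.data(), 0, nullptr, nullptr);
  std::printf("out[0] = %f\n", out[0]);  // expect 8.0 with these fill values
  return 0;
}
```

The same host API also targets CPUs and GPUs, which is the portability point Compton makes above; only the offline bitstream compilation step is specific to the FPGA target.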
For more information about the iAbra PathWorks toolkit, please see the new Case Study titled “Bringing AI Inference to the Edge.”

Intel’s silicon and software portfolio empowers our customers’ intelligent services from the cloud to the edge.

Notices & Disclaimers
Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information visit www.intel.com/benchmarks. Performance results are based on testing as of dates shown in configurations and may not reflect all publicly available updates. See backup for configuration details. No product or component can be absolutely secure. Your costs and results may vary. Intel technologies may require enabled hardware, software or service activation.
© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.

VMware vSphere 6.7 now supports the Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA
According to this VMware blog published in January, VMware has expanded the array of hardware accelerators supported by its vSphere hypervisor to include the Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA (Intel® PAC with Intel® Arria® 10 GX FPGA). Why? The VMware blog explains: “The highly parallel architecture of FPGAs allows for higher performance, lower latency, lower power consumption, and higher throughput for computations. Typical workloads for FPGA devices include Network Function Virtualization (NFV), deep neural networks, digital signal processing, and cryptography workloads.”

Intel PAC with Intel Arria 10 GX FPGA is a PCIe-based FPGA accelerator card for data centers that offers both inline and lookaside acceleration. It provides the performance and versatility of FPGA acceleration and is one of several Intel platforms supported by the Acceleration Stack for Intel® Xeon® CPU with FPGAs.

The VMware blog continues: “To use the Intel® Arria® FPGA with VMware vSphere, version 6.7 Update 1 is required. DirectPath I/O is the only supported method today. This means that the hardware address of the PCIe device (the FPGA in this case) is directly exposed in the virtual machine.”

In addition to VMware vSphere, version 6.7 Update 1, you’ll need the VMware SR-IOV (single root input/output virtualization) driver for the Intel Xeon CPU and the Intel PAC with Intel Arria 10 GX FPGA, which is for VMware Hypervisor version ESXi 6.7 U2. You’ll find that SR-IOV driver and an associated toolkit here in the Intel Download Center.

VMware is a member of the Intel® FPGA Partner Program.

Intel’s silicon and software portfolio empowers our customers’ intelligent services from the cloud to the edge.

Notices and Disclaimers
Your costs and results may vary. Intel technologies may require enabled hardware, software or service activation. No product or component can be absolutely secure.
© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.

Toyota sees 2x to 3x gains in PostgreSQL/PostGIS performance using FPGA-enabled servers and Swarm64 DA
In two benchmarks run by Toyota, Swarm64 DA (Database Accelerator), used in conjunction with a server containing an Intel® Programmable Acceleration Card (PAC) with Intel® Arria® 10 GX FPGA, boosted open-source PostgreSQL/PostGIS performance by a factor of 3x and delivered more predictable query performance (with 35x less latency variance) under concurrent loads.

Swarm64 DA is an FPGA-accelerated PostgreSQL relational database used for enterprise data analytics. When Swarm64 is installed, it programs the FPGA with hundreds of processes that work in parallel to write, read, filter, compress, and decompress data contained within database tables. This massive added parallelism lowers the CPUs’ workloads and increases their throughput. The overall result is much better server performance using fewer servers, which reduces overall TCO.

The results of these two PostgreSQL/PostGIS benchmarks are documented in a new White Paper titled “Measuring the impact of FPGA acceleration on PostGIS concurrent query performance at Toyota,” which is now available on the Swarm64 Web site.

Tripling server performance means, for example, that a 5-node cluster of FPGA-equipped servers can handle the same PostgreSQL load as a 15-node cluster of servers equipped only with CPUs. This outcome has profound implications for capital equipment and operating costs in data centers. Put simply, FPGA-equipped servers with appropriate acceleration software can handle many more database workloads.

The benchmarks consisted of two scenarios:
- Test Scenario 1: Obstacle detection. Find the list of vehicles that have been within 2 km of a random location within the last 6 minutes. The scenario assesses the efficiency of FPGA acceleration for the geospatial-optimized functions of PostGIS.
- Test Scenario 2: Analytics scenario on the NYC Taxi Dataset, a 267 gigabyte table consisting of 1.1 billion rows of data. This data set is too large to be held in the servers’ RAM and therefore required SSD accesses.

In Test Scenario 1, the FPGA-equipped servers delivered more consistent query performance – with 35x less latency variance – and 3x faster performance over 3200 queries while scaling the number of concurrent users from 1 to 32. In Test Scenario 2, query response latency dropped by a factor of 2x to 3x using FPGA-accelerated servers, depending on the query.
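The white paper defines the exact schema and queries; the sketch below is only a plausible reconstruction of what a Scenario 1 lookup could look like, issued from a C++ client with libpq. The table name, column names, and coordinates are assumptions, and because Swarm64 DA accelerates PostgreSQL beneath the SQL layer, the same query would presumably run unchanged on the accelerated and unaccelerated clusters.

```cpp
// Hypothetical Scenario 1 query: vehicles seen within 2 km of a point in the
// last 6 minutes. Table and column names, coordinates, and connection string
// are illustrative assumptions, not taken from the Toyota/Swarm64 white paper.
#include <libpq-fe.h>
#include <cstdio>

int main() {
  PGconn* conn = PQconnectdb("host=localhost dbname=telemetry user=analyst");
  if (PQstatus(conn) != CONNECTION_OK) {
    std::fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
    return 1;
  }

  // ST_DWithin on geography types measures distance in meters (2000 m = 2 km).
  const char* sql =
      "SELECT vehicle_id "
      "FROM vehicle_positions "
      "WHERE recorded_at >= now() - interval '6 minutes' "
      "  AND ST_DWithin(position::geography, "
      "                 ST_SetSRID(ST_MakePoint($1::float8, $2::float8), 4326)::geography, "
      "                 2000)";

  const char* params[2] = {"139.6917", "35.6895"};  // lon, lat of the query point
  PGresult* res = PQexecParams(conn, sql, 2, nullptr, params, nullptr, nullptr, 0);

  if (PQresultStatus(res) == PGRES_TUPLES_OK) {
    for (int i = 0; i < PQntuples(res); ++i)
      std::printf("vehicle %s\n", PQgetvalue(res, i, 0));
  } else {
    std::fprintf(stderr, "query failed: %s", PQerrorMessage(conn));
  }

  PQclear(res);
  PQfinish(conn);
  return 0;
}
```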
To download a copy of the White Paper from Swarm64, click here. Note: Swarm64 is a member of the Intel® FPGA Partner Program.

Legal Notices and Disclaimers:
Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies depending on system configuration. No product or component can be absolutely secure. Check with your system manufacturer or retailer or learn more at intel.com.
Results have been estimated or simulated using internal Intel analysis, architecture simulation and modeling, and provided to you for informational purposes. Any differences in your system hardware, software or configuration may affect your actual performance.
Intel does not control or audit third-party data. You should review this content, consult other sources, and confirm whether referenced data are accurate.
Cost reduction scenarios described are intended as examples of how a given Intel-based product, in the specified circumstances and configurations, may affect future costs and provide cost savings. Circumstances will vary. Intel does not guarantee any costs or cost reduction.
© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Altera is a trademark of Intel Corporation or its subsidiaries. Cyclone is a trademark of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.

Convert from HDR to SDR video on the fly with an FPGA-based video-conversion solution from the b<>com Technology Research Institute
The human eye has an instantaneous dynamic range of approximately 10 to 14 stops. Standard dynamic range (SDR) video with 8 bits per subpixel and high dynamic range (HDR) video with 10 bits per subpixel have a dynamic range of eight and 10 stops per subpixel respectively, so it’s no wonder that consumers show a growing preference for HDR imagery. Consequently, 4K HDR video equipment sales volumes are booming.

However, a great many legacy SDR televisions, monitors, and phones are still in use, so broadcasters and other video content providers must support multiple video formats to accommodate these legacy video display devices. Often, this support requires storage of video content in multiple forms and at multiple display resolutions. Because video content consumes large amounts of storage capacity, the need to store several versions of the same video greatly increases storage costs.

Real-time video conversion from one format to another eliminates the need to store multiple versions of video files but requires enough computing horsepower to convert and transmit simultaneous HDR and SDR video streams. An on-the-fly video-conversion solution from the b<>com Technology Research Institute can deliver multiple, simultaneous video streams in both HDR and SDR formats using the Intel Acceleration Stack in conjunction with Intel® Xeon® CPUs and the Intel® Programmable Acceleration Card (PAC) with Intel® Arria® 10 FPGA.

The b<>com HDR/SDR video-conversion solution can convert recorded and live video from HDR to SDR video formats on the fly. The solution is now available as a verified acceleration function (AFU) within the Intel Acceleration Stack. The b<>com solution can:
- Convert HDR video into legacy SDR formats
- Mix live HDR and SDR video sources, which is useful at live events such as sporting events
- Allow simultaneous monitoring of multiple screen resolutions during video production

For more technical details, see the Solution Brief titled “Solving HDR to SDR Video Conversion Challenges.” For more information about other FPGA-based video solutions from Intel, click here.

Legal Notices and Disclaimers:
Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies depending on system configuration. No product or component can be absolutely secure. Check with your system manufacturer or retailer or learn more at intel.com.
Results have been estimated or simulated using internal Intel analysis, architecture simulation and modeling, and provided to you for informational purposes. Any differences in your system hardware, software or configuration may affect your actual performance.
Intel does not control or audit third-party data. You should review this content, consult other sources, and confirm whether referenced data are accurate.
Cost reduction scenarios described are intended as examples of how a given Intel-based product, in the specified circumstances and configurations, may affect future costs and provide cost savings. Circumstances will vary. Intel does not guarantee any costs or cost reduction.
© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Altera is a trademark of Intel Corporation or its subsidiaries. Cyclone is a trademark of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.