We Hope to See You at Intel® FPGA Technology Day 2021
Intel® FPGA Technology Day (IFTD) is a free, four-day event that will be hosted virtually across the globe in North America, China, Japan, EMEA, and Asia Pacific from December 6-9, 2021. The theme of IFTD 2021 is “Accelerating a Smart and Connected World.” This virtual event will showcase Intel® FPGAs, SmartNICs, and infrastructure processing units (IPUs) through webinars and demonstrations presented by Intel experts and partners. The sessions are designed to be of value to a wide range of audiences, including Technology Managers, Product Managers, Board Designers, and C-Level Executives. Attendees at this four-day event will learn how Intel’s solutions can solve the toughest design challenges and provide the flexibility to adapt to the needs of today’s rapidly evolving markets. A full schedule of Cloud, Networking, Embedded, and Product Technology sessions, each just 30 minutes long, will enable you to build the best agenda for your needs:

- Day 1 (December 6), TECHNOLOGY: FPGAs for a Dynamic Data-Centric World. Advances in cloud infrastructure, networking, and computing at the edge are accelerating, and flexibility is key to keeping pace with this transforming world. Learn about innovations developed and launched in 2021, along with new Intel FPGA technologies that address key market transitions.

- Day 2 (December 7), CLOUD AND ENTERPRISE: Data Center Acceleration. The cloud is changing. Disaggregation improves data center performance and scalability but requires new tools to keep things optimized. Intel FPGA smart infrastructure enables smarter applications that make the internet go fast!

- Day 3 (December 8), EMBEDDED: Transformation at the Edge. As performance and latency continue to dictate compute’s migration to the edge, Intel FPGAs provide the required workload consolidation and optimization with software-defined solutions enabled by a vast and growing partner ecosystem.

- Day 4 (December 9), NETWORKING: 5G – The Need for End-to-End Programmability. The evolution of 5G continues to push the performance-to-power envelope, requiring market leaders to adapt or be replaced. Solutions for 5G and beyond will require scalable and programmable portfolios to meet evolving standards and use cases.

To explore the detailed program, see the featured speakers, and register for the North America event, click here. Register in other regions below: EMEA, China, Japan, Asia Pacific.
Upcoming Webinar: Computational Storage Acceleration Using Intel® Agilex™ FPGAs

Today’s computational workloads are larger, more complex, and more diverse than ever before. The explosion of applications such as high-performance computing (HPC), artificial intelligence (AI), machine vision, analytics, and other specialized tasks is driving the exponential growth of data. At the same time, the trend toward virtualized servers, storage, and network connections means that workloads are growing in scale and complexity. Traditionally, data is conveyed to a computational engine, such as a central processing unit (CPU), for processing, but transporting the data takes time, consumes power, and is increasingly proving to be a bottleneck in the overall process.

The solution is computational storage, also known as in-storage processing (ISP). The idea here is that, instead of bringing the data to the computational engine, computational storage brings the computational engine to the data. In turn, this allows the data to be processed and analyzed where it is generated and stored.

To learn more about this concept, please join us for a webinar led by Sean Lundy from Eideticom, Craig Petrie from BittWare, and me: Computational Storage Using Intel® Agilex™ FPGAs: Bringing Acceleration Closer to Data. The webinar will take place on Thursday, 11 November 2021, starting at 11:00 a.m. EST.

Sean will introduce Eideticom’s NoLoad computational storage technology for use in data center storage and compute applications. NoLoad technology provides CPU offload for applications, resulting in the dramatic acceleration of compute- and data-intensive tasks like storage workloads, databases, AI inferencing, and data analytics. NoLoad’s NVMe-compliant interface simplifies the deployment of computational offload by making it straightforward to deploy in servers of all types and across all major operating systems.

Craig will introduce BittWare’s new IA-220-U2 FPGA-based Computational Storage Processor (CSP), which supports Eideticom’s NoLoad technology as an option. The IA-220-U2 CSP, which is powered by an Intel Agilex F-Series FPGA with 1.4M logic elements (LEs), features PCIe Gen 4 for twice the bandwidth offered by PCIe Gen 3 solutions. This CSP works alongside traditional flash SSDs, providing accelerated computational storage services (CSS) by performing compute-intensive tasks, including compression and/or encryption. This allows users to build out their storage using standard SSDs instead of being locked into a single vendor’s storage solutions. BittWare’s IA-220-U2 accelerates NVMe flash SSDs by sitting alongside them as another U.2 module. (Image source: BittWare)

We will also discuss the Intel Agilex FPGAs that power BittWare’s new CSP. Built on Intel’s 10nm SuperFin technology, these devices leverage heterogeneous 3D system-in-package (SiP) technology. Agilex I-Series FPGAs and SoC FPGAs are optimized for bandwidth-intensive applications that require high-performance processor interfaces, such as PCIe Gen 5 and Compute Express Link (CXL). Meanwhile, Agilex F-Series FPGAs and SoC FPGAs are optimized for applications in data center, networking, and edge computing. With transceiver support up to 58 Gbps, advanced DSP capabilities, and PCIe Gen 4 x16, the Agilex F-Series FPGAs that power BittWare’s new CSP provide the customized connectivity and acceleration required by compute- and data-intensive, power-sensitive applications such as HPC, AI, machine vision, and analytics.

This webinar will be of interest to anyone involved with these highly complex applications and environments.
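To see why moving compute to the data pays off, consider a toy back-of-the-envelope model (illustrative numbers only, not measurements of NoLoad or any other product): total time is transfer time plus processing time, and in-storage processing shrinks the transfer term by shipping only results over the host link.

```python
def move_then_compute(data_gb, link_gbps, host_gbps):
    """Classic path: ship all raw data over the host link, then process it."""
    return (data_gb * 8) / link_gbps + (data_gb * 8) / host_gbps

def compute_in_storage(data_gb, drive_gbps, reduction, link_gbps):
    """ISP path: process at the drive's internal bandwidth, ship only results."""
    return (data_gb * 8) / drive_gbps + (data_gb * reduction * 8) / link_gbps

# Scan 1 TB with a 16 Gb/s effective host link, 64 Gb/s of processing
# bandwidth, and a 10:1 data reduction performed at the drive.
print(f"host path: {move_then_compute(1000, 16, 64):.0f} s")       # ~625 s
print(f"ISP path:  {compute_in_storage(1000, 64, 0.1, 16):.0f} s") # ~175 s
```

The absolute numbers are made up; the point is structural: the classic path pays the full link cost for every byte, while the ISP path pays it only for the reduced results.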
We hope to see you there, so Register Now before all the good virtual seats are taken.

Megh Computing demos advanced, scalable Video Analytics Solution portfolio at WWT’s Advanced Technology Center
Megh Computing’s Video Analytics Solution (VAS) portfolio implements a flexible and scalable video analytics pipeline consisting of the following elements:

- Video ingestion
- Video transformation
- Object detection and inference
- Video analytics and visualization

Because Megh’s VAS is scalable, it can handle real-time video streams from a few to more than 150 video cameras. Because it’s flexible, you can use the VAS pipeline elements to construct a wide range of video analytics applications, such as:

- Factory floor monitoring to ensure that unauthorized visitors and employees avoid hazardous or secure areas
- Industrial monitoring to ensure that production output is up to specifications
- Smart City highway monitoring to detect vehicle collisions and other public incidents
- Retail foot-traffic monitoring to aid in kiosk, endcap, and product positioning, and other merchandising activities
- Museum and gallery exhibit monitoring to ensure that safe distances are maintained between visitors and exhibits

Because the number of cameras can scale to well over 100, Megh clearly needed a foundational technology portfolio that would support the solution’s demanding video throughput, computing, and scalability requirements. Megh selected a broad, integrated Intel technology portfolio that includes the latest 3rd Generation Intel® Xeon® Scalable processors, Intel® Stratix® FPGAs, Intel® Core™ processors, and the Intel® Distribution of OpenVINO™ toolkit.

Megh also chose to demonstrate the capabilities of the VAS at WWT’s Advanced Technology Center (ATC), a collaborative ecosystem for designing, building, educating, demonstrating, and deploying innovative technology products and integrated architectural solutions for WWT customers, partners, and employees. WWT built and is hosting a Megh VAS environment within its ATC that allows WWT and Megh customers to explore this solution in a variety of use cases, including specific customer environment needs and other requirements.

For more information about the Megh VAS portfolio and the WWT ATC, check out the WWT blog here.
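If you're wondering what those four pipeline stages look like in practice, here is a minimal single-camera sketch using OpenCV. It is purely illustrative, not Megh's implementation: the `analyze_stream` and `detect` names are hypothetical, `detect` is a stub standing in for the CPU- and FPGA-accelerated inference stage, and a production deployment fans this out across 150+ concurrent streams.

```python
import cv2

def analyze_stream(stream_url, detect):
    """Toy single-camera pipeline: ingest -> transform -> detect -> visualize.
    `detect` is a stub for the object-detection/inference stage; it should
    return an iterable of ((x, y, w, h), label) tuples."""
    cap = cv2.VideoCapture(stream_url)                 # 1. video ingestion
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        resized = cv2.resize(frame, (960, 540))        # 2. video transformation
        for (x, y, w, h), label in detect(resized):    # 3. detection and inference
            cv2.rectangle(resized, (x, y), (x + w, y + h), (0, 255, 0), 2)
            cv2.putText(resized, label, (x, y - 5),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
        cv2.imshow("analytics", resized)               # 4. analytics and visualization
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
```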
Notices and Disclaimers

Intel is committed to respecting human rights and avoiding complicity in human rights abuses. See Intel’s Global Human Rights Principles. Intel’s products and software are intended only to be used in applications that do not cause or contribute to a violation of an internationally recognized human right. Intel does not control or audit third-party data. You should consult other sources to evaluate accuracy. Intel technologies may require enabled hardware, software, or service activation. No product or component can be absolutely secure. Your costs and results may vary. © Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.

Arrow’s Tech Snacks Festival features three delicious presentations to help you develop motion-control, video, and high-performance FPGA applications

This year, Arrow’s 10-day virtual Tech Snacks Festival (June 7-18) will include more than forty snack-sized sessions led by industry experts on topics ranging from AI/ML to Industry 4.0 and the future of automotive electronics. During the festival, you’ll have the opportunity to get expert help from Intel presenters who will cover three hot application areas: motion control, video, and high-performance FPGA applications. These are 15-minute sessions, so you won’t need to reschedule your entire day to attend. Here are the details for the three Intel® FPGA Tech Snack presentations:

- Wednesday, June 9: Ben Jeppesen and Diwakar Bansal will discuss real-time, deterministic, single- and multi-axis motion control for industrial drives and robotics.
- Tuesday, June 15: Jean-Michel Vuillamy will discuss FPGA-based video processing, made easy.
- Thursday, June 17: Graham Baker will discuss the breakthrough performance of 10nm Intel® Agilex® FPGAs. (See “Breakthrough FPGA News from Intel” for more details.)

Each 15-minute session is directly followed by an optional 45-minute deep dive on the topic, including Q&A, for those looking for a more in-depth experience. In addition, there are a couple of hour-long panel discussions featuring Intel FPGA experts that you might want to attend:

- Tuesday, June 8: “AI is predicted to reach human levels by 2029. Where are we right now and how will this impact designs of the future?” Dr. Mark Jervis from the Intel Programmable Solutions Group is participating on this panel.
- Thursday, June 17: “Getting ready for 5G. Doing things better and faster and creating new services and products we can’t foresee yet.” Martin Wiseman from the Intel Programmable Solutions Group is participating on this panel.

For more information and to register, click here.

Notices and Disclaimers

Intel technologies may require enabled hardware, software or service activation. No product or component can be absolutely secure. Your costs and results may vary. © Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.
NASA’s Perseverance Mars Rover Mission Team relies on 4K video collaboration system based on Crestron DM NVX AV-over-IP streaming media devices

Last month, NASA’s Perseverance rover landed successfully in Jezero Crater on Mars and has since taken its first test drive. You may have seen video of the mission team at NASA Jet Propulsion Laboratory (JPL) standing up and cheering when news of the successful landing arrived here on Earth.

The Mars 2020 Perseverance Mission Team is the first at NASA JPL to use a new high-resolution, multi-mission AV collaboration system developed by Amplified Design LLC, which was completed in time for the Mars 2020 Perseverance rover launch in July 2020. This AV collaboration system spans 45 new collaboration areas across multiple floors at NASA JPL’s Space Flight Operations Facility in Pasadena, California. The system is based on nearly 200 Crestron DM NVX streaming media devices that provide secure, real-time AV encoding and decoding for transporting 4K video streams over standard Gigabit Ethernet. The DM NVX backbone distributes video to both the large and small work rooms as well as mission support areas for anywhere-to-anywhere video sharing. Crestron developed the DM NVX streaming media devices using Intel® Arria® 10 SoC FPGAs.

Click here to see the Amplified Design case study describing the NASA JPL AV collaboration system. Click here for more information about Crestron DM NVX AV-over-IP streaming media devices. Click here for more information about Intel Arria 10 FPGAs and SoC FPGAs.

Notices & Disclaimers

Intel technologies may require enabled hardware, software or service activation. No product or component can be absolutely secure. Your costs and results may vary. Intel does not control or audit third-party data. You should consult other sources to evaluate accuracy. © Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.
The iAbra PathWorks toolkit brings embedded AI inference, real-time video recognition to the edge with Intel® Arria® 10 FPGAs, Intel® Xeon® Gold CPUs, and Intel® Atom® processors

It’s not always easy to get data to the cloud. Multi-stream computer vision applications, for example, are extremely data intensive and can overwhelm even 5G networks. A company named iAbra has created tools that build neural networks that run on FPGAs in real time, so that inference can be carried out at the edge in small, light, low-power embedded devices rather than in the cloud. iAbra’s PathWorks toolkit provides what-you-see-is-what-you-get (WYSIWYG) tools that create neural networks to run on an Intel® Atom® x7-E3950 processor and an Intel® Arria® 10 FPGA in the embedded platform. The toolkit itself runs on an Intel® Xeon® Gold 6148 CPU to create the neural networks.

From a live video stream, artificial intelligence (AI) can detect, for example, how many people are standing in a bus queue, which modes of transport people are using, and where there is flooding or road damage. In exceptional circumstances, AI can also alert emergency services if vehicles are driving against the traffic flow or if pedestrians have suddenly started running. Collecting reliable, real-time data from the streets and compressing it through AI inference makes it far easier to manage resources and to improve quality of life, productivity, and emergency response times in Smart Cities.

To be effective, these vision applications must process a huge amount of data in real time. A single HD stream generates 800 to 900 megabits of streaming video data per second. That’s per camera. Although broadband 5G networks deliver more bandwidth and can greatly increase the device density within geographic regions, broadly and densely distributed armadas of video cameras still risk overwhelming these networks. The solution to this bandwidth constraint is to incorporate real-time AI inference at the network edge so that only the processed, essential information is sent to the cloud. That sort of processing requires an embedded AI device that can withstand the harsh environments and resource constraints found at the edge.

iAbra has approached the problem of building AI inference into embedded devices by mimicking the human brain using FPGAs. Usually, image recognition solutions map problems to generic neural networks, such as ResNet. However, such networks are too big to fit into many FPGAs destined for embedded use. Instead, iAbra’s PathWorks toolkit constructs a new, unique neural network for each problem, tailored and highly optimized for the target FPGA architecture where it will run. In this case, the target architecture is an Intel Arria 10 FPGA.

“We believe the Intel Arria 10 FPGA is the most efficient part for this application today, based on our assessment of the performance per watt,” said iAbra’s CTO Greg Compton. “The embedded platform also incorporates the latest generation Intel Atom processor, which provides a number of additional instructions for matrix processing over the previous generation. That makes it easier to do vector processing tasks. When we need to process the output from the neural network, we can do it faster with instructions that are better attuned to the application,” Compton explains. He adds: “A lot of our customers are not from the embedded world. By using Intel Atom processors, we enable them to work within the tried and tested Intel® architecture stack they know.” Similarly, said Compton: “We chose the Intel Xeon Gold 6148 processor for the network creation step as much for economics as performance.”

iAbra developed this solution using OpenCL, a programming framework that makes FPGA programming more accessible by using a language similar to C, enabling code portability across different types of processing devices. iAbra also uses Intel® Quartus® Prime Software for FPGA design and development and the Intel® C++ Compiler to develop software. The company has incorporated the Intel® Math Kernel Library (Intel® MKL), which provides optimized code for mathematical operations across a range of processing platforms. Compton continues: “With Intel MKL, Intel provides highly optimized shortcuts to a lot of low-level optimizations that really help our programmer productivity. OpenCL is an intermediate language that enables us to go from the high-level WYSIWYG world to the low-level transistor bitmap world of FPGAs. We need shortcuts like these to reduce the problem domains, otherwise developing software like ours would be too big a problem for any one organization to tackle.”
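For readers who haven't met OpenCL, the snippet below shows its flavor: a C-like kernel plus host code that moves buffers and launches the kernel (here via the PyOpenCL bindings). This is a generic illustration, not iAbra's code, and note that Intel's FPGA OpenCL flow normally compiles kernels ahead of time with an offline compiler rather than at runtime as shown here.

```python
import numpy as np
import pyopencl as cl

# A trivial OpenCL kernel: element-wise addition of two vectors.
KERNEL_SRC = """
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *c) {
    int i = get_global_id(0);
    c[i] = a[i] + b[i];
}
"""

ctx = cl.create_some_context()            # pick any available OpenCL device
queue = cl.CommandQueue(ctx)
a = np.random.rand(1024).astype(np.float32)
b = np.random.rand(1024).astype(np.float32)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
c_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

prog = cl.Program(ctx, KERNEL_SRC).build()        # runtime build (CPU/GPU flow)
prog.vadd(queue, a.shape, None, a_buf, b_buf, c_buf)

c = np.empty_like(a)
cl.enqueue_copy(queue, c, c_buf)                  # read the result back
assert np.allclose(c, a + b)
```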
iAbra participates in the Intel FPGA Partner Program and Intel® AI Builders Program, which gives the company access to the Intel® AI DevCloud. “The Intel® AI DevCloud enables us to get cloud access to the very latest hardware, which may be difficult to get hold of, such as some highly specialized Intel® Stratix® 10 FPGA boards. It gives us a place where Intel customers can come and see our framework in a controlled environment, enabling them to try before they buy. It helped us with our outreach for a Smart Cities project recently. It’s been a huge help to have Intel’s support as we refine our solution and develop our code using Intel’s frameworks and libraries. We’ve worked closely with the Intel engineers, including helping them to improve the OpenCL compiler by providing feedback as one of its advanced users,” Compton concludes.

For more information about the iAbra PathWorks toolkit, please see the new case study titled “Bringing AI Inference to the Edge.” Intel’s silicon and software portfolio empowers our customers’ intelligent services from the cloud to the edge.

Notices & Disclaimers

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information visit www.intel.com/benchmarks. Performance results are based on testing as of dates shown in configurations and may not reflect all publicly available updates. See backup for configuration details. No product or component can be absolutely secure. Your costs and results may vary. Intel technologies may require enabled hardware, software or service activation. © Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.
Accenture, the Sulubaaï Environmental Foundation, and Intel partner to create the CORaiL underwater vision system to help restore fragile coral reef ecosystems
Accenture has partnered with the Sulubaaï Environmental Foundation and Intel to develop an innovative solution for recreating and restoring coral reefs to their former health by installing special equipment, including underwater cameras, in the Pangatalan Island marine protected area in the Philippines. This system supports the Sulubaaï Environmental Foundation’s efforts to regrow and rebuild the island’s coral reef ecosystem. Coral reefs are one of the most diverse ecosystems on planet Earth, and they’re threatened by a host of challenges including overfishing, deteriorating water quality, and ocean pollution.

The project is called CORaiL, where the “ai” stands for “artificial intelligence.” The engineering solution combines mobile, digital, and deep learning technologies. By employing rapid-prototyping techniques, the engineering team quickly developed an edge computing solution to monitor the progress of the restoration efforts by observing, classifying, and measuring marine life activity in the coral reef. The CORaiL testbed was initially launched in May 2019.

Coral reef restoration efforts in the Pangatalan marine protected area involve the installation of concrete structures on the sea bottom. These structures serve as scaffolds that anchor new fragments of living coral. The CORaiL system studies these living coral fragments as they grow over time. As the coral reef grows, it creates a larger and larger haven for marine life. Consequently, the CORaiL system also monitors the presence of fish.

(Photo: An artificial concrete reef that supports unstable coral fragments underwater, deployed by Accenture, Intel, and the Sulubaaï Environmental Foundation in the coral reef surrounding Pangatalan Island in the Philippines. Photo credit: Accenture)

Smart underwater video cameras located near the concrete scaffolds employ Accenture’s Video Analytics Services Platform (VASP), which provides a toolset for rapidly building and deploying AI capabilities. Accenture’s VASP provides multiple powerful analytics and visualization tools to help analysts gain operational insight and make timely decisions from computer vision models. VASP is powered by multiple Intel technologies including Intel® Xeon® CPUs, Intel® FPGA Programmable Acceleration Cards (PACs), and the Intel® Neural Compute Stick 2 powered by the Intel® Movidius™ Myriad™ X Vision Processing Unit (VPU) and the Intel® Distribution of OpenVINO™ toolkit. The result is an effective, non-invasive platform for ongoing observation.

The underwater cameras detect and photograph fish as they swim past. Deep learning algorithms built into the cameras count and classify the fish. The processed data travels wirelessly to the ocean surface and on to onshore servers, where it’s analyzed and stored. The resulting data and reports allow the research team to make data-driven decisions about the restoration work. Accenture hopes to use this system for other marine applications, including fish migration studies and intrusion detection and monitoring for restricted underwater areas such as marine sanctuaries and fish farms.

For more information, see “Using Artificial Intelligence to Save Coral Reefs.” For more information about the Accenture VASP system, click here.
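As a concrete illustration of the counting-and-classifying step, here is a minimal sketch using the OpenVINO toolkit's classic Python API to run an SSD-style detection model on the Intel Neural Compute Stick 2 and tally detections above a confidence threshold. The model files (`fish_ssd.xml`/`.bin`) are hypothetical placeholders, and this is not CORaiL's production code.

```python
import cv2
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="fish_ssd.xml", weights="fish_ssd.bin")  # hypothetical model
input_blob = next(iter(net.input_info))
output_blob = next(iter(net.outputs))
exec_net = ie.load_network(network=net, device_name="MYRIAD")  # NCS2 / Myriad X VPU

frame = cv2.imread("reef_frame.jpg")
_, _, h, w = net.input_info[input_blob].input_data.shape
img = cv2.resize(frame, (w, h)).transpose(2, 0, 1)[np.newaxis, ...]  # HWC -> NCHW

# SSD-style output: [1, 1, N, 7] rows of
# [image_id, class_id, confidence, x_min, y_min, x_max, y_max]
detections = exec_net.infer({input_blob: img})[output_blob]
fish = [d for d in detections[0][0] if d[2] > 0.5]
print(f"fish detected in frame: {len(fish)}")
```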
Notices and Disclaimers

Intel is committed to respecting human rights and avoiding complicity in human rights abuses. See Intel’s Global Human Rights Principles. Intel’s products and software are intended only to be used in applications that do not cause or contribute to a violation of an internationally recognized human right. Intel does not control or audit third-party data. You should consult other sources to evaluate accuracy. Intel technologies may require enabled hardware, software, or service activation. No product or component can be absolutely secure. Your costs and results may vary. © Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.
SUB2r customizable video camera gets its video pipeline programmability and configurability from an FPGA powered by four Intel® Enpirion® Power Systems on Chip (PowerSoC) modules

SUB2r, a self-funded startup, makes a video camera for storytellers and gamers who want to create compelling video content. The company had a problem: visible current noise was marring the images coming from its eponymous video camera. The SUB2r camera is really a video computing platform with a configurable, customizable, upgradeable, programmable, open-architecture imaging pipeline implemented in an FPGA. The camera derives its operating power from the attached USB 3.0 cable, which limits the camera’s power supply input to 5 volts at 3 amps. The camera’s original, internal voltage down-converter design was injecting a significant amount of noise into the circuitry, which resulted in a noisy image.

SUB2r called in an expert power-conversion team from Intel for help. The team helped SUB2r redesign the camera’s on-board voltage regulation using four Intel® Enpirion® Power System on a Chip (PowerSoC) devices: the EN5319QI, EN5329QI, EN5339QI, and EN6340QI PowerSoCs. These four Intel Enpirion PowerSoC modules generate the four on-board power supply rails required by the FPGA:

- 1.0 volts at 3 amps
- 1.5 volts at 3 amps
- 1.8 volts at 1 amp
- 2.5 volts at 2 amps

After the power supply was redesigned using the Intel Enpirion PowerSoCs, visible current noise in the camera’s video output stream became undetectable. As a bonus, the current drawn over the USB 3.0 power/data cable dropped from 3 amps to 1.6 amps (at the 5-volt input, that’s a drop from 15 watts to 8 watts of input power), and the passively cooled camera’s internal operating temperature dropped from 58°C to 41°C thanks to the improved power supply efficiency. The reduced internal operating temperature should improve camera reliability, as it would in any electronic system – like yours.

The design team at SUB2r was so impressed by the overall result that they made a short “thank you” video for the Intel Enpirion team that helped with the redesign. You’ll find that video here. If you are facing tough power-conversion challenges, think about the results that SUB2r achieved and then consider giving the Intel Enpirion team a call. They’re here to help.

Legal Notices and Disclaimers:

Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies depending on system configuration. No product or component can be absolutely secure. Check with your system manufacturer or retailer or learn more at intel.com. Results have been estimated or simulated using internal Intel analysis, architecture simulation and modeling, and provided to you for informational purposes. Any differences in your system hardware, software or configuration may affect your actual performance. Intel does not control or audit third-party data. You should review this content, consult other sources, and confirm whether referenced data are accurate. Cost reduction scenarios described are intended as examples of how a given Intel-based product, in the specified circumstances and configurations, may affect future costs and provide cost savings. Circumstances will vary. Intel does not guarantee any costs or cost reduction. © Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Altera is a trademark of Intel Corporation or its subsidiaries. Cyclone is a trademark of Intel Corporation or its subsidiaries.
Intel and Enpirion are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.

Convert from HDR to SDR video on the fly with an FPGA-based video-conversion solution from the b<>com Technology Research Institute
The human eye has an instantaneous dynamic range of approximately 10 to 14 stops. Standard dynamic range (SDR) video with 8 bits per subpixel and high dynamic range (HDR) video with 10 bits per subpixel deliver roughly eight and ten stops per subpixel, respectively, so it’s no wonder that consumers show a growing preference for HDR imagery. Consequently, 4K HDR video equipment sales volumes are booming. However, many legacy SDR televisions, monitors, and phones are still in use, so broadcasters and other video content providers must support multiple video formats to accommodate these legacy display devices. Often, this support requires storing video content in multiple formats and at multiple display resolutions. Because video content consumes large amounts of storage capacity, the need to store several versions of the same video greatly increases storage costs.

Real-time video conversion from one format to another eliminates the need to store multiple versions of video files but requires enough computing horsepower to convert and transmit simultaneous HDR and SDR video streams. An on-the-fly video-conversion solution from the b<>com Technology Research Institute can deliver multiple, simultaneous video streams in both HDR and SDR formats using the Intel Acceleration Stack in conjunction with Intel® Xeon® CPUs and the Intel® Programmable Acceleration Card (PAC) with Intel® Arria® 10 FPGA. The b<>com solution converts recorded and live video from HDR to SDR formats on the fly and is now available as a verified accelerator functional unit (AFU) within the Intel Acceleration Stack. The b<>com solution can:

- Convert HDR video into legacy SDR formats
- Mix live HDR and SDR video sources, which is useful at live events such as sporting events
- Allow simultaneous monitoring of multiple screen resolutions during video production

For more technical details, see the Solution Brief titled “Solving HDR to SDR Video Conversion Challenges.” For more information about other FPGA-based video solutions from Intel, click here.

Legal Notices and Disclaimers:

Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies depending on system configuration. No product or component can be absolutely secure. Check with your system manufacturer or retailer or learn more at intel.com. Results have been estimated or simulated using internal Intel analysis, architecture simulation and modeling, and provided to you for informational purposes. Any differences in your system hardware, software or configuration may affect your actual performance. Intel does not control or audit third-party data. You should review this content, consult other sources, and confirm whether referenced data are accurate. Cost reduction scenarios described are intended as examples of how a given Intel-based product, in the specified circumstances and configurations, may affect future costs and provide cost savings. Circumstances will vary. Intel does not guarantee any costs or cost reduction. © Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Altera is a trademark of Intel Corporation or its subsidiaries. Cyclone is a trademark of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.
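As a postscript for the curious: the heart of any HDR-to-SDR converter is a tone-mapping step that compresses a wide luminance range into the narrower SDR range. The sketch below shows a deliberately simplified global operator in NumPy. It is illustrative only, not b<>com's algorithm: a production converter also handles the PQ/HLG transfer functions, BT.2020-to-BT.709 gamut mapping, and per-scene adaptation, all omitted here.

```python
import numpy as np

def hdr10_to_sdr(hdr_code_values: np.ndarray, key: float = 0.5) -> np.ndarray:
    """Toy HDR->SDR conversion: normalize 10-bit code values, apply a global
    Reinhard-style tone curve, gamma-encode, and quantize to 8 bits."""
    x = hdr_code_values.astype(np.float32) / 1023.0   # 10-bit codes -> [0, 1]
    linear = x ** 2.4                                 # stand-in EOTF (not real PQ)
    scaled = key * linear / max(float(linear.mean()), 1e-6)
    mapped = scaled / (1.0 + scaled)                  # compress highlights smoothly
    gamma = mapped ** (1.0 / 2.2)                     # SDR gamma encoding
    return np.clip(np.round(gamma * 255.0), 0, 255).astype(np.uint8)

# Example: convert one 4K frame of 10-bit data.
frame_hdr = np.random.randint(0, 1024, size=(2160, 3840, 3), dtype=np.uint16)
frame_sdr = hdr10_to_sdr(frame_hdr)
print(frame_sdr.dtype, frame_sdr.shape)  # uint8 (2160, 3840, 3)
```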