Semiconductor Engineering Magazine wonders whether Chiplet Technology is Good or Bad. Intel’s answer: Good, Very Good
Mark LaPedus, the Executive Editor for Manufacturing at SemiEngineering.com, recently published an article titled “The Good And Bad Of Chiplets.” From the Intel perspective, there’s only good. LaPedus writes: “With chiplets, the goal is to reduce product development times and costs by integrating pre-developed dies in an IC package. So a chipmaker may have a menu of modular dies, or chiplets, in a library. Chiplets could have different functions at various [manufacturing process] nodes.”

LaPedus’ article quotes Ramune Nagisetty, Director of Process and Product Integration at Intel, who discussed the use of chiplet technology at Intel: “We’re in the early stages. More and more products from Intel and our competitors are going to reflect this approach moving forward. Every major foundry has a technology roadmap of increasing the interconnect densities for both the 2.5D and 3D integration approaches. In the coming years, we will see it expand in 2.5D and 3D types of implementations. We will see it expand into logic and memory stacking and logic and logic stacking.”

As LaPedus notes in his article, “Intel and a few others have the technologies in place to develop these products.” In fact, Intel® FPGAs have incorporated chiplet technology for years. All members of the Intel® Stratix® 10 GX, SX, TX, and MX FPGA and SoC FPGA families, as well as Intel® Agilex™ FPGA devices, use chiplet technology to pair FPGA logic die with I/O tiles. (“Tile” is the name Intel uses for “chiplet.”) These I/O tiles connect to the FPGA die using the Advanced Interface Bus (AIB), a chiplet-to-chiplet (or die-to-die) interconnect standard developed by Intel, in combination with Intel Embedded Multi-die Interconnect Bridge (EMIB) technology; both are used for heterogeneous semiconductor manufacturing.

(Note: Early last year, Intel made the AIB standard available royalty-free to enable a broad ecosystem of chiplets, design methodologies, service providers, foundries, packaging, and system vendors.
DARPA has adopted the AIB standard for its Common Heterogeneous Integration and IP Reuse Strategies (CHIPS) program. For details, see “Intel releases royalty-free, high-performance AIB interconnect standard to spur industry’s chiplet adoption and grow the ecosystem.”)

In addition to using chiplet manufacturing technology for the I/O functions, the Intel Stratix 10 MX FPGA family uses chiplet manufacturing and packaging technologies to attach high-speed, 3D HBM2 DRAM die stacks to the FPGA fabric die within the FPGA device package, placing several gigabytes of fast memory very close to the FPGA logic. Here, chiplet technology facilitates quick access to data stored in the adjacent HBM2 DRAM stacks. As LaPedus writes, “This is where chiplets fit in. A bigger chip can be broken into smaller pieces and mixed and matched as needed.”

LaPedus’ article in Semiconductor Engineering also explains another fundamental reason for the move towards chiplet technology: “…no one technology can meet all needs.” For example, the multiple memory die in an HBM2 die stack are manufactured using a memory-optimized manufacturing process. Memory processes require fewer metal layers than the advanced CMOS processes used for making FPGAs, which reduces the manufacturing cost for the memory. Similarly, I/O functions that must drive long copper traces on circuit boards have functional requirements that diverge from the capabilities of the densest, fastest CMOS logic process available. Here again, the use of chiplet technology makes tremendous sense. Chiplet technology allows the appropriate pairing of function with semiconductor process technology on a case-by-case basis.

Rapid product development is yet another important reason for using chiplet technology.
The use of tiles (or chiplets) gives Intel the ability to reuse proven I/O silicon, which accelerates the product development process by eliminating the need to reimplement, retest, and revalidate functions that have already been tested and proven in silicon. For example, Intel Agilex devices utilize some of the same I/O tiles developed for and used to manufacture Intel Stratix 10 FPGAs. Intel continues to develop new tiles and to introduce new devices based on these new and existing tiles. Because it’s much easier to design, fabricate, and test a tile than an entire FPGA, AIB and tile technology give Intel an express lane for bringing more new products to market faster.

Note: For more information, see “Chiplets (tiles) create a high-speed path for getting new FPGAs to market.” You might also be interested in the Intel White Paper titled “Enabling Next-Generation Platforms Using Intel’s 3D System-in-Package Technology.”

Intel’s silicon and software portfolio empowers our customers’ intelligent services from the cloud to the edge.

Notices & Disclaimers

Intel technologies may require enabled hardware, software or service activation. No product or component can be absolutely secure. Your costs and results may vary. © Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.

The Three Levels of Heterogeneity: Devices, Systems, and Software
By Jose Alvarez, Senior Director, Intel CTO Office, PSG

The following blog is adapted from a keynote speech that Jose Alvarez presented at the recent The Next FPGA Platform event, held in San Jose, California.

There are three levels of heterogeneous integration. It’s a simple taxonomy. First, there’s heterogeneous integration at the chip level – the device level. Second, there’s heterogeneous integration at the system level. And third, there’s heterogeneity at the software level. Heterogeneity at all three levels leads to system reconfigurability.

The First Level: Chip Heterogeneity

Chip-level heterogeneity is heterogeneous integration inside of the device package and is closely tied to the concept of chiplets. We're building much more complex systems, much larger systems, and building larger systems with big, monolithic semiconductors is difficult. The yields for large die are not as good as for smaller die, or for chiplets. It's far more practical, and more economical, to build these systems with smaller components. From a system perspective, we can make better semiconductor design decisions using chiplets because we don't have to redesign every chiplet from one semiconductor process node to the next. Some functions work perfectly well in their existing form. There’s no reason to redesign these functions when a newer technology node comes online.

Heterogeneous integration is already in production. It’s a very important technology, and Intel is committed to a chiplet-based design strategy. For example, Intel® Stratix® 10 FPGAs and Intel® Agilex™ FPGAs are based on heterogeneous integration, and these devices are in production now. In fact, Intel Stratix 10 FPGAs have been in volume production for years. Chiplet-based IC design and manufacturing permit Intel to build systems with silicon-proven functions including high-speed serial transceivers, memory interfaces, Ethernet and PCIe ports, et cetera.
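The yield economics behind this argument can be made concrete with a toy calculation. The sketch below uses a simple Poisson yield model; the die sizes and defect density are illustrative assumptions, not Intel data, and assembly yield is ignored:

```python
import math

def die_yield(area_mm2, defect_density):
    """Poisson yield model: probability a die of the given area has zero defects."""
    return math.exp(-area_mm2 * defect_density)

D0 = 0.001  # assumed defect density: 0.001 defects/mm^2 (0.1 per cm^2)

# One monolithic 800 mm^2 die vs. four 200 mm^2 chiplets.
y_mono = die_yield(800, D0)   # ~0.45: more than half the big dies are scrapped
y_chip = die_yield(200, D0)   # ~0.82: most of the small dies survive

# Silicon area consumed per good product. Defective chiplets are discarded
# individually, before assembly, so good silicon is not thrown away with them.
mm2_per_good_mono = 800 / y_mono       # ~1781 mm^2 per good monolithic unit
mm2_per_good_chip = 4 * 200 / y_chip   # ~977 mm^2 per good four-chiplet set

print(f"monolithic: {mm2_per_good_mono:.0f} mm^2 of silicon per good unit")
print(f"chiplets:   {mm2_per_good_chip:.0f} mm^2 of silicon per good unit")
```

Note that under this model four perfect chiplets have exactly the same combined yield as one large die; the saving comes from scrapping bad dies one small piece at a time rather than discarding an entire large die for a single defect.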
Chiplet-based designs also permit Intel to develop targeted architectures for different workloads and bring them to market more quickly. For these reasons, Intel is actively encouraging the development of an industry ecosystem based on chiplets. We do that in several ways. For example:

- Intel developed Embedded Multi-die Interconnect Bridge (EMIB) technology, an embedded bridge used to interconnect chiplets through a standardized interconnect.
- Intel developed the Advanced Interface Bus (AIB), a PHY that Intel released as an open-source, royalty-free, high-performance chiplet interconnect.
- Intel recently joined the CHIPS (Common Hardware for Interfaces, Processors and Systems) Alliance, a collaborative industry organization dedicated to encouraging chiplet-based development.

Coincidentally, Gordon Moore, one of our founders, published a paper in 1965 titled “Cramming more components onto integrated circuits.” It was a very short paper, only four pages including pictures, and it became very famous. The second page of this famous paper contains a statement that became known as Moore’s Law: “The complexity for minimum component costs has increased at a rate of roughly a factor of two per year. Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years.”

Moore’s statement predicts the exponential increase in semiconductor technology that’s now lasted, not for 10 years, but for more than 50 years! The third page of Moore’s paper goes on to mention that, just possibly, it might be better to build larger systems using smaller components integrated into a single package: “It may prove to be more economical to build large systems out of smaller functions, which are separately packaged and interconnected.
The availability of large functions, combined with functional design and construction, should allow the manufacturer of large systems to design and construct a considerable variety of equipment both rapidly and economically.” Even back in 1965, Gordon Moore knew that chip-level, heterogeneous integration would be a way to move forward. This is what Intel is doing today: using advanced packaging to bring all of the company’s technologies to bear in one IC package.

The Second Level: System Heterogeneity

The second heterogeneous integration level is at the system level. We live in a data-centric world today. There's data everywhere. Intel is driving a lot of innovation at the system level to handle this deluge of data. A lot needs to be done with this data: move it, store it, process it. Workloads associated with these tasks require many solutions, and Intel develops and makes a wealth of devices to perform these tasks – CPUs, GPUs, ASICs, FPGAs – that we use to build heterogeneous systems.

These varied workloads require different processing architectures. Scalar workloads run well on CPUs. Vector workloads run well on GPUs. Matrix workloads, including AI and machine learning, often run best on workload-specific ASICs. Finally, spatial workloads are best run on an FPGA. So it is important to have all of these heterogeneous architectures available to provide the right architecture for specific workloads in the data center. Bringing CPUs, GPUs, FPGAs, and specialized accelerators together allows Intel and its customers to solve problems intelligently and efficiently.

The Third Level: Software Heterogeneity

The third type of heterogeneous integration is at the software level. This one’s hard. Intel’s approach is called the oneAPI initiative, a cross-industry, open, standards-based unified programming model that addresses the fundamental way that we build software today, which is akin to cooking.
In the kitchen, you don’t ask chefs whether they have a specific way of “building” food. They have many, many ways of using tools, selecting ingredients, and preparing food to create an infinite variety of meals. Similarly, I think that we'll continue to use a multitude of programming and description languages in the future. What developers hold dear is having a single, unified development environment. That’s what Intel is striving for with the oneAPI initiative. That’s the vision. And this vision addresses the four workload types mentioned earlier: scalar, vector, matrix, and spatial. The oneAPI initiative provides a level of abstraction so that, in principle, a software developer can develop code in one layer and then deploy that code to the many processing architectures mentioned above.

Today, it's just a start. Intel announced the open-source oneAPI initiative a few weeks ago along with a beta-level product called the Intel® oneAPI Toolkits. We expect that developing Intel oneAPI Toolkits will be a long road, and we definitely understand the journey we’re making. Today, we have Data Parallel C++ and libraries for the Intel oneAPI Toolkits. Data Parallel C++ incorporates SYCL from the Khronos Group and supports data parallelism and heterogeneous programming. Data Parallel C++ allows developers to write code for heterogeneous processors using a “single-source” style based on familiar C++ constructs.

The Three Heterogeneous Levels Together

At Intel, we know that these three levels of heterogeneity are very important for the industry. That’s why we focus at the chip level on advanced packaging technologies, at the system level on multiple processing architectures, and at the software level with the oneAPI initiative and the Intel oneAPI unified programming environment and Data Parallel C++ programming language.
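To make the “single-source” idea concrete, here is a deliberately simplified Python analogy (not the oneAPI or DPC++ API): the kernel is written once, and the decision of how and where it executes is made separately, which is the essence of the model DPC++ applies to CPUs, GPUs, and FPGAs. All names below are invented for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

# A "kernel" defined once, independent of where it runs.
def saxpy(i, a, x, y):
    return a * x[i] + y[i]

# Two interchangeable execution strategies, standing in for device backends.
def run_serial(kernel, n, *args):
    return [kernel(i, *args) for i in range(n)]

def run_threaded(kernel, n, *args):
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda i: kernel(i, *args), range(n)))

x = [1.0, 2.0, 3.0]
y = [10.0, 20.0, 30.0]
serial = run_serial(saxpy, 3, 2.0, x, y)
threaded = run_threaded(saxpy, 3, 2.0, x, y)
print(serial)  # [12.0, 24.0, 36.0]
assert serial == threaded  # same kernel, same result, different "device"
```

In real Data Parallel C++, the kernel is C++ code submitted to a SYCL queue and the runtime targets the selected device; the point here is only the separation between the kernel definition and the choice of execution target.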
Intel sees a semiconductor continuum where nascent markets – for example, machine learning, AI, and 5G – require flexibility in terms of rapidly changing interfaces and workloads. FPGAs play a role in the early stages of these markets because of their extreme flexibility. As these markets grow, companies developing systems for these markets often develop custom ASICs. Intel serves these markets with Intel® eASIC® structured ASICs and full custom ASICs that deliver reduced power and better performance. The Intel development flow permits a smooth progression from FPGAs into pin-compatible Intel eASIC devices and ultimately into ASICs as markets mature and production volumes grow.

Intel eASIC devices work well in the data center as well, where multiple applications with specific workloads require acceleration. An accelerator design implemented with an FPGA can become a chiplet based on Intel eASIC technology. That chiplet can be faster and use less power than the FPGA, and it can be integrated into a package with other devices using AIB or some other interconnect method.

For more information on Intel oneAPI Toolkits and Data Parallel C++, see “Intel announces open oneAPI initiative and development beta release with Data Parallel C++ language for programming CPUs, GPUs, FPGAs, and other accelerators” and “Want a longer, more detailed explanation of the oneAPI unified programming model? Here’s a 30-minute video.”

Legal Notices and Disclaimers:

Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies depending on system configuration. No product or component can be absolutely secure. Check with your system manufacturer or retailer or learn more at intel.com. Results have been estimated or simulated using internal Intel analysis, architecture simulation and modeling, and provided to you for informational purposes.
Any differences in your system hardware, software or configuration may affect your actual performance. Intel does not control or audit third-party data. You should review this content, consult other sources, and confirm whether referenced data are accurate. Cost reduction scenarios described are intended as examples of how a given Intel-based product, in the specified circumstances and configurations, may affect future costs and provide cost savings. Circumstances will vary. Intel does not guarantee any costs or cost reduction. © Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Altera, Cyclone, and Enpirion are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.

World-Changing Technology to Enrich People’s Lives: Architecture Day 2020, the Six Key Intel Technology Pillars, and Intel® FPGAs
Earlier this month during Architecture Day 2020, Intel’s Chief Architect Raja Koduri and a dozen Intel fellows and architects discussed numerous advanced technologies that the company has developed and is developing so that it can continue to deliver solutions for our customers’ greatest challenges. These many advanced technologies comprise the six key pillars of innovation that collectively embody the focus of the company’s engineering development work:

- Process and Packaging
- XPU Architecture
- Memory
- Interconnect
- Security
- Software

All six of these pillars are essential to creating the solutions that fuel Intel’s purpose: to create world-changing technology that enriches the lives of every person on earth.

Perhaps the most broadly discussed of these six pillars is semiconductor process technology. That technology pillar has certainly been an Intel crown jewel for more than half a century. During Architecture Day 2020, Ruth Brain, Intel Fellow and Director of Interconnect Technology and Integration at Intel, discussed the company’s new 10nm SuperFin technology, which represents the largest single intranode process technology enhancement in the company’s history.
Intel’s 10nm SuperFin process technology delivers performance improvements comparable to a full-node transition by incorporating several innovations, including:

- An improved finFET gate process that results in higher electron mobility and faster transistors
- Self-aligned quad patterning, which nearly doubles the scaling density of the critical M0 and M1 layers in the metal stack
- Cobalt local interconnects, which halve via resistance and reduce electromigration by a factor of 5X to 10X
- COAG (contact over active gate) design, which further reduces cell size and improves transistor density by relocating the finFET’s gate contact from its previous location next to the transistor to a spot directly over the transistor’s gate, thus reducing the amount of real estate consumed by each transistor

These and other innovations in the Intel 10nm SuperFin process technology demonstrate that feature size is not the only significant parameter when it comes to improving performance. In a nearly prescient article titled “No More Nanometers,” published only a couple of weeks before Architecture Day 2020, EEJournal’s Kevin Morris describes the essence of the Intel six-pillar strategy when he wrote: “For the previous three decades, there was little reason to optimize … anything, really. After all, why spend a huge amount of energy doing a 15-20% improvement that would simply be obliterated by the 2x bounty of the next Moore’s Law node, and another 2x two years after that? We simply focused on building the functionality we needed and relied on lithography progress to make it faster, cheaper, and more power efficient. Now, however, we engineers couldn’t just ‘phone it in’ anymore.
We had to come up with new and novel ways to improve performance and reduce power consumption, rather than relying on the penumbra of Moore’s Law to get us past the finish line in our system designs.”

In a follow-on EEJournal article titled “Intel – Flourish or Flounder,” which also appeared before Architecture Day 2020, Morris wrote: “The big gains in process technology have been from innovations such as finFETs and other advances that don’t relate to geometry shrinks.” This sentence succinctly describes the approach taken in the development of Intel’s 10nm SuperFin process technology. (Note: Morris’ opinions as expressed in his second article firmly fall on the “flourish” side of its headline’s implied question.)

Press and analyst reaction to the Architecture Day 2020 discussion of the 10nm SuperFin process improvements has been very positive. Here are a few quotes:

“Intel's improvements in the 10nm process and architecture are so significant that the company is claiming a nearly 20% performance improvement over 14nm. With 14nm, Intel made small incremental performance improvements of roughly 4-5% per refresh (+++), which amounted to about 20% across four different CPU architectures. Intel is achieving this with a single step rather than four, which is what makes this transition to 10nm much more significant than many realize.” – Patrick Moorhead, Forbes

“Pursuing more cost-effective methods of interconnection and aggregation is how we’ll drive down the cost of mounting memory closer to the CPU and improving overall performance characteristics.
The work Intel is talking about on the interconnect front is critical to long-term performance improvements and better power efficiency.” – Joel Hruska, ExtremeTech

“Intel's advanced packaging technologies will allow it to mix and match IP and process nodes from other vendors into the same heterogeneous packages, yielding time to market advantages.” – Paul Alcorn, Tom’s Hardware

“In terms of the wrap-around process, with the support of architecture and technology, Intel's 10nm is far more powerful than imagined, and the final judgment standard still needs to be based on the performance of the whole set.” – Fu Bin, 21ic Electronic Network

However, semiconductor process technology constitutes only part of the first Intel technology pillar. Packaging technology constitutes the other part – a co-equal part – of the pillar. There are big performance, power, and cost gains to be achieved from packaging improvements. That’s why the next Architecture Day 2020 talk – presented by Ramune Nagisetty, Senior Principal Engineer and Director of Product and Process Integration at Intel – discussed several of the many packaging innovations that Intel has developed, including:

- EMIB (Embedded Multi-die Interconnect Bridge) and AIB (Intel's Advanced Interface Bus), which are used to combine several semiconductor die in one package using 2.5D packaging technology
- Foveros technology, which extends chip packaging into the 3rd dimension
- Hybrid bonding, which further improves 3D chip packaging
- Co-EMIB, which blends 2D, 2.5D, and 3D packaging techniques to enable the creation of a larger-than-reticle-sized base with high-density connections among companion die and stacked-die complexes

Intel has used the EMIB and AIB packaging technologies for heterogeneous integration in FPGA development for several years.
The Intel® Stratix® 10 and Intel® Agilex™ FPGA families employ heterogeneous packaging as a foundation technology to combine FPGA base die with a variety of advanced I/O die to add features such as 116 Gbps SerDes transceivers, PCIe Gen5 ports, and UPI and CXL coherent processor attach ports. Heterogeneous integration is also a key technology used in the creation of the Intel Stratix 10 MX FPGA, which adds high-speed HBM2 memory die stacks to the mix. Intel’s advanced packaging technology makes it possible to create broad, diverse product families and bring them to market quickly.

For example, heterogeneous integration accelerated the development and recent introduction of the newest member of the Intel Stratix 10 FPGA family – the Intel Stratix 10 NX FPGA – which swaps out the existing FPGA die in the Intel Stratix 10 MX FPGA for a newly designed FPGA die to produce an AI-optimized FPGA. The new FPGA die replaces existing DSP blocks with AI Tensor blocks tailored for the computations needed in AI workloads. The result was a 15X performance boost for these workloads. Heterogeneous integration allowed Intel to bring the Intel Stratix 10 NX FPGA to market more quickly than might have been possible otherwise and opened a new market segment for Intel.

Intel FPGAs are certainly not the only products in Intel’s silicon portfolio to benefit from the company’s advanced packaging technologies. For example, Intel® Core™ Processors with Intel Hybrid Technology, code-named “Lakefield,” were announced in June. These new members of the Intel Core processor family are the industry’s first products to incorporate Intel’s Foveros 3D stacking technology and a hybrid computing architecture. During Architecture Day 2020, Ravi Kuppuswamy, Corporate Vice President and General Manager of Custom Logic Engineering, disclosed that Intel’s next-generation FPGAs would employ both Foveros and Co-EMIB packaging technologies.
Kuppuswamy also disclosed a new SerDes test chip, which currently operates at 112 Gbps in NRZ mode and 224 Gbps in PAM4 mode. This new SerDes represents precisely the sort of semiconductor chiplet technology that requires advanced packaging technologies such as Foveros and Co-EMIB to become practical.

The critical importance of advanced packaging and heterogeneous integration is not lost on the press and analysts. Here are some observations from various publications, based on the information presented during Architecture Day 2020:

“Intel's advanced packaging technologies will allow it to mix and match IP and process nodes from other vendors into the same heterogeneous packages, yielding time to market advantages. The standardized AIB (Advanced Interface Bus) interface is the key that unlocks that level of cooperation and integration between so many disparate partners. Intel has worked to further this once-proprietary standard by contributing it to the open-source CHIPS alliance without requiring royalties or licensing, thus allowing other companies to develop chiplets that are compatible with both Intel and others' chiplets.” – Paul Alcorn, Tom’s Hardware

“Related to the future of design and packaging in the industry, which I see as chiplets from many companies coming together in a 3D package, I feel confident Intel is extremely competent, maybe even the current lead. This is a long-term strategy and industry shift… Intel's commitments to improving packaging and IO are among some of the most impressive ways that I see them navigating this new competitive environment and should yield some very interesting products down the road…” – Patrick Moorhead, Forbes

“Intel detailed that the EMIB roadmap, with its AIB or Advanced Interface Bus architecture, will scale to a much denser 36 micron bump density and up to 6.4Gbps wire data rate with AIB 2.0.
In addition, Intel has made an AIB Generator open source and available on GitHub, to help enable ecosystem partners to develop on the technology as well. Intel calls this “2.5D” packaging technology…” – Dave Altavilla and Marco Chiappetta, HotHardware

“What I will say is that if this is truly Intel’s direction, then defining this strategy and building the chiplet ecosystem could be, by far, the most important job at the company.” – Patrick Kennedy, Serve the Home

Process and packaging technologies are not all that’s needed to reach the sort of goals that Intel aims to achieve in the quest to move, process, and store everything. Computing workloads have been changing and expanding as the world finds new ways to harness computing technology. That’s why the XPU architecture concept is the second of Intel’s technology pillars. Intel has characterized four broad computing architectures for the XPU architectural pillar:

- Scalar: as characterized by the many Intel CPU and processor families
- Vector: as characterized by the Intel® Xe family of GPUs (extensively discussed during Architecture Day 2020)
- Matrix: as characterized by the Intel® Habana® Gaudi® and Goya™ AI processor families
- Spatial: as characterized by Intel FPGA families

All of these processing architectures are important in the quest to turn mountains of raw data into usable information, and all exist within the Intel semiconductor portfolio. To convert these mountains of data into useful, actionable information, future software programmers need a unified programming environment, architecture, and languages that natively support these four basic computing architectures. That’s why another of the Architecture Day 2020 presentations was devoted to oneAPI, which is Intel’s company-wide effort to develop the unified set of tools needed to deploy applications and solutions across these four computing architectures.
We’ll give EEJournal’s Kevin Morris the last word here, from his article titled “No More Nanometers”: “…to really evaluate a semiconductor technology platform, we have to look well beyond the number of transistors we can cram on a monolithic piece of silicon. We need to look at all the elements that define system-level performance and capability and account for all of those. Beyond the usual performance, power, and area of monolithic silicon, we have packaging technology that allows us to stack more (and more varied) die in a single package, interconnect technology that improves the bandwidth between system elements, architectural and structural improvements to semiconductors that are not related to density, new materials that improve the speed and power efficiency – the list goes on and on.”

If you’d like to watch the Architecture Day 2020 presentations for yourself, the full video presentation lasting nearly three hours is available here in the Intel Newsroom, and the 233-page slide deck used during the presentation is available as a PDF here.

Intel’s silicon and software portfolio empowers our customers’ intelligent services from the cloud to the edge.

Notices and Disclaimers

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information visit www.intel.com/benchmarks. Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors.
These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice. Performance results are based on testing as of dates shown in configurations and may not reflect all publicly available updates. For testing details and system configurations, please contact your Intel representative. No product or component can be absolutely secure. Results that are based on pre-production systems and components as well as results that have been estimated or simulated using an Intel Reference Platform (an internal example new system), internal Intel analysis or architecture simulation or modeling are provided to you for informational purposes only. Results may vary based on future changes to any systems, components, specifications, or configurations. Intel technologies may require enabled hardware, software or service activation. Intel contributes to the development of benchmarks by participating in, sponsoring, and/or contributing technical support to various benchmarking groups, including the BenchmarkXPRT Development Community administered by Principled Technologies. Statements in this presentation that refer to future plans and expectations are forward-looking statements that involve a number of risks and uncertainties. 
Words such as “anticipates,” “expects,” “intends,” “goals,” “plans,” “believes,” “seeks,” “estimates,” “continues,” “may,” “will,” “would,” “should,” “could,” and variations of such words and similar expressions are intended to identify such forward-looking statements. Statements that refer to or are based on estimates, forecasts, projections, uncertain events or assumptions, including statements relating to future products and technology and the expected availability and benefits of such products and technology, market opportunity, and anticipated trends in our businesses or the markets relevant to them, also identify forward-looking statements. Such statements are based on management's current expectations and involve many risks and uncertainties that could cause actual results to differ materially from those expressed or implied in these forward-looking statements. Important factors that could cause actual results to differ materially from the company's expectations are set forth in Intel's earnings release dated July 23, 2020, which is included as an exhibit to Intel’s Form 8-K furnished to the SEC on such date, and Intel's SEC filings, including the company's most recent reports on Forms 10-K and 10-Q. Copies of Intel's Form 10-K, 10-Q and 8-K reports may be obtained by visiting our Investor Relations website at www.intc.com or the SEC's website at www.sec.gov. Intel does not undertake, and expressly disclaims any duty, to update any statement made in this presentation, whether as a result of new information, new developments or otherwise, except to the extent that disclosure may be required by law. Intel does not control or audit third-party data. You should consult other sources to evaluate accuracy. © Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. 
Other names and brands may be claimed as the property of others.

How Heterogeneous Device Design and Manufacturing Leads to Success
By Patrick Dorsey, Vice President Product Marketing, FPGA and Power Products, Intel

The following guest blog is based on remarks by Patrick Dorsey during a panel discussion on “FPGA Hardware Innovations” at the recent The Next FPGA Platform event, held in San Jose, California on January 22, 2020.

One of the looming challenges for FPGA hardware design is answering this question: What do you integrate and what do you give up when you integrate? Flexibility matters. FPGA vendors are in the flexibility business. The question matters not only within the chip but also at the system level. Intel offers many choices to designers of heterogeneous hardware systems. Designers can choose among multiple Intel® Xeon® CPUs. Likewise, they have a choice of several Intel® FPGAs. There’s a lot of value in these choices.

Monolithic and heterogeneous integration both increase value. Integration in both forms shrinks the overall form factor and improves power consumption. Performance can improve as well, and interfaces can be better optimized. Heterogeneous integration technology enables even more choice. It’s an option at multiple levels within the system. For example, there are a lot of things that we can do with packaging technology, with the interfaces that we have, and with development tools. The options are there. Many things are happening between semiconductor vendors and customers that we can’t talk about publicly. For example, there’s a lot of integration that happens with ASICs and with structured ASICs – that’s the Intel® eASIC product. New heterogeneous devices can be manufactured in a customized fashion that could never happen before.

Heterogeneous Devices are Here Now

Heterogeneous device design and manufacturing is here now. For example, today’s Intel® Agilex™ FPGAs and Intel® Stratix® 10 FPGAs are already heterogeneous FPGAs. We’ve incorporated HBM2 stacked-die DRAM into both types of FPGA.
We’ve put the high-speed serial transceivers for these Intel FPGAs on tiles, or chiplets, so that the Ethernet and PCIe protocols are decoupled from the design of the FPGA fabric. The FPGA fabric benefits from moving to new process nodes, while the interfaces do not need to move with it. Heterogeneous design allows Intel to innovate in the FPGA fabric and the I/O interfaces separately, and across nodes. We can use multiple nodes to build very complex devices optimally.

Another facet is the US government’s involvement with heterogeneous design and manufacturing in these early days. Several DARPA programs are driving research into standalone IP that can sit next to the FPGA. This IP can become chiplets. These chiplets can implement specialized functions in the areas of AI or analog-to-digital converters, for example. The DARPA work is all public. It’s called the CHIPS (Common Heterogeneous Integration and IP Reuse Strategies) program.

The chiplets concept has advanced very quickly for FPGAs. It’s in volume manufacturing now, as discussed above. However, there are challenges. The first challenge is TDP, thermal design power. The more functions you put into the package, the harder it is to get the heat out. So it’s primarily a thermal problem, but it’s also a business-model problem, which makes it a commercial problem. As soon as you have two or more companies putting their IP into the same device, the business model gets a little more complicated. For example, how do you go to market with these mixed-vendor heterogeneous devices? These are some of the challenges for heterogeneous device design and manufacture, but despite these challenges, it’s happening now.

It Starts with a Great FPGA

In terms of FPGA innovation for heterogeneous device design, it starts with having a great FPGA. All FPGA vendors are trying to build the best FPGA fabric, and there are innovations in the ways that the FPGA fabric works. For example, how does the FPGA’s fundamental logic and compute architecture work with memory?
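The TDP challenge discussed above comes down to a budget: the per-chiplet power draws must sum to less than what the package can dissipate. A minimal back-of-envelope sketch, using entirely hypothetical numbers (not Intel specifications), might look like this:

```python
# Hypothetical back-of-envelope check: do the per-chiplet power draws
# fit under the package's thermal design power (TDP)?
# All figures below are illustrative, NOT Intel specifications.

def tdp_headroom(package_tdp_w, chiplet_power_w):
    """Return remaining thermal headroom in watts (negative = over budget)."""
    return package_tdp_w - sum(chiplet_power_w.values())

# Example multi-die package: FPGA fabric plus transceiver tiles and HBM
chiplets = {
    "fpga_fabric": 95.0,   # main FPGA logic die
    "xcvr_tile_0": 12.5,   # transceiver tile (PCIe)
    "xcvr_tile_1": 12.5,   # transceiver tile (Ethernet)
    "hbm2_stack": 18.0,    # stacked DRAM
}

headroom = tdp_headroom(150.0, chiplets)
print(f"Headroom: {headroom:.1f} W")  # → Headroom: 12.0 W
```

Adding one more high-power tile to the same package flips the headroom negative, which is exactly the point in the text: every function integrated into the package spends thermal budget.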
One of an FPGA’s key advantages is its on-chip memory, which is especially important these days for AI and machine learning (ML). For performance reasons, you really want the AI/ML model to fit into the memory in the FPGA package.

The next area of innovation is the interconnect, whether it’s the heterogeneous interconnect or what happens in the package to solve TDP and power problems. There are things that the US government is doing around the Intel Advanced Interface Bus (AIB), which is an interface that Intel already uses in the Intel Agilex FPGAs and Intel Stratix FPGAs. Intel released the AIB specification into the open-source arena last year, and industry organizations like the CHIPS Alliance are pushing the AIB specification as an open industry standard to allow better die-to-die connectivity and to jumpstart the chiplet ecosystem.

This may sound funny coming from Intel, but it’s not about the semiconductor process alone. Don’t misunderstand that statement. Process technology is extremely important, and Intel continues to drive the Moore’s Law process technology cadence. So being on the latest process node is important, and I think you’ll see FPGAs continuing to lead the charge when it comes to process technology. However, the choices made in the design of a new device really depend on the problems you’re trying to solve. You only need to take some functions to the most advanced process node. Some functions, especially I/O functions, can’t take advantage of the most advanced process node. Analog functions and memory can’t use the most advanced process nodes. This is where heterogeneous device design and manufacturing shines. Chiplets and heterogeneous devices enable choices. You can choose to advance only the functions that need to go faster while reusing designs that already meet performance and power requirements.
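The earlier point about fitting the AI/ML model into in-package memory is a simple sizing calculation: parameter count times bits per parameter against available memory. A minimal sketch, with hypothetical model and memory figures (not taken from any Intel datasheet):

```python
# Hypothetical sizing check: does a quantized ML model fit in the
# FPGA package's on-chip/in-package memory? Illustrative numbers only.

def model_fits(num_params, bits_per_param, mem_mib):
    """Return (fits, model_size_mib) for a given memory budget in MiB."""
    model_mib = num_params * bits_per_param / 8 / (1024 ** 2)
    return model_mib <= mem_mib, model_mib

# e.g. a 5M-parameter model quantized to INT8 vs 30 MiB of on-chip RAM
fits, size_mib = model_fits(5_000_000, 8, 30)
print(f"{size_mib:.2f} MiB, fits: {fits}")  # → 4.77 MiB, fits: True
```

The same arithmetic also shows why quantization matters here: dropping from FP32 to INT8 cuts the model footprint by 4x, which can be the difference between staying in fast on-chip memory and spilling to external DRAM.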
Heterogeneous design allows you to make different design tradeoffs – in terms of static power, for example – and power has become critical in the design of every new device. At Intel, we’re constantly trying to balance power and performance. Process nodes matter, and a mix of process nodes gives you the most design choices by allowing an easy mix of custom IP, standard IP, flexible FPGA IP, and interconnect IP. Heterogeneous design and manufacturing allow Intel to decide how to best mix these technologies to create new semiconductor products. The world’s getting more complicated, but Intel has more ways to solve these problems than ever before.

Legal Notices and Disclaimers: Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies depending on system configuration. No product or component can be absolutely secure. Check with your system manufacturer or retailer or learn more at intel.com. Results have been estimated or simulated using internal Intel analysis, architecture simulation and modeling, and provided to you for informational purposes. Any differences in your system hardware, software or configuration may affect your actual performance. Intel does not control or audit third-party data. You should review this content, consult other sources, and confirm whether referenced data are accurate. Cost reduction scenarios described are intended as examples of how a given Intel-based product, in the specified circumstances and configurations, may affect future costs and provide cost savings. Circumstances will vary. Intel does not guarantee any costs or cost reduction. © Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Altera is a trademark of Intel Corporation or its subsidiaries. Cyclone is a trademark of Intel Corporation or its subsidiaries.
Other names and brands may be claimed as the property of others.

Want a preview of the Intel keynote at The Next FPGA Platform Event? The Next Platform just published one
Next week, The Next FPGA Platform event takes place in downtown San Jose. This day-long event includes several talks and panels devoted specifically to datacenter topics, with an emphasis on FPGA acceleration. Several speakers and panelists from Intel will be there, and Timothy Prickett Morgan, co-editor and co-founder of The Next Platform, has just published an article titled “Covering All The Compute Bases In A Heterogeneous Datacenter” that previews the event. In particular, he quotes Jose Alvarez, Senior Director in the CTO Office for the Programmable Solutions Group at Intel, who will be giving one of the keynotes at the event. Alvarez told The Next Platform: “One architecture doesn’t fit everything, and there are a lot of different workloads. There are scalar, vector, matrix, and spatial architectures and those are very uniquely positioned to solve multiple problems. For scalar, you may use a CPU. For vector processing, you may use a GPU. For matrix, you may use a dedicated ASIC as we do for machine learning and artificial intelligence. And for spatial, you might use an FPGA.”

The article goes on to quote several other things that Alvarez told The Next Platform about various technologies and architectures that Intel is using to continue the company’s close, half-century-long alliance with Moore’s Law. If you want to read what Alvarez told The Next Platform, then click over to Morgan’s article using the link above. Better yet, plan on attending The Next FPGA Platform event in person on January 22 in San Jose. (See “FPGA Acceleration in the Datacenter: See and Hear High-Level Experts from Intel and FPGA Partners at The Next FPGA Platform Day-Long, Live Event – January 22 in San Jose” or click here for the event information page with a full agenda.) Best of all, just click here to register for the event.
Other names and brands may be claimed as the property of others.