AI Chipsets for Edge Forecast Report - 2021 Analysis & Data
The surge in artificial intelligence (AI) adoption in the past decade has been closely linked with developments in hardware. The development of general-purpose computing on graphics processing units (GPUs) enabled this trend to begin with, and the trend itself then drove the development of very powerful server GPUs and their integration into high performance computing (HPC) systems, largely focused on cloud and data center deployments. With the spread of AI applications into mobile and embedded markets, a demand emerged for hardware that could perform neural network inference at the edge, in the context of power-, price-, and area (PPA)-constrained systems-on-chip (SoC).
Desktop or server-style GPUs were largely irrelevant here, as they tend to be large, expensive, and very demanding in terms of both power draw and cooling. Even in a high-end smartphone, the SoC itself typically contributes around $70 to the bill of materials, and battery life is a key selling point. Discrete GPUs for PCs, by contrast, often cost hundreds to thousands of dollars and require their own onboard cooling fans. As a result, there was an increasing demand for neural network acceleration on smartphone SoCs, a demand that GPU technology could meet only in part.
At the same time, the increasingly advanced SoCs developed for the enormous smartphone market began to penetrate other markets, such as robotics, automotive, industrial automation, and UAVs. These devices are general-purpose computing platforms optimized for constrained, embedded form factors. As such, using them makes a lot of sense for developers in these sectors: they make it possible to deliver much more functionality in software or by training machine-learning models, and therefore to attain a greater degree of mass customization and to operate a faster development cycle. Here, too, there was a demand for AI hardware acceleration beyond the server GPU.
The smartphone silicon vendors have benefitted from the enormous scale of their core business to become arguably the technology leaders of the semiconductor industry, something demonstrated by Apple's decision in 2020 to switch the MacBook (and probably also Mac) product lines to the M1 SoC, a device developed from the ARM-based SoCs Apple designed for its iPhones. TSMC, which fabricates most of the smartphone SoC vendors' products on contract, is clearly in the lead on process node sizes, while the SoCs themselves are beginning to outperform the lower end of the discrete GPU market on AI benchmarks such as MLPerf Mobile or the one maintained by ETH Zürich.
Omdia's fundamental thesis on AI hardware is that we are facing a second transition. After the move from CPU to GP-GPU computing for AI workloads, Omdia expects a further transition from GPUs to other chip types, notably application-specific integrated circuits (ASICs) designed expressly for machine learning, and more broadly, to a much bigger share of custom silicon. This transition is much more pronounced in the edge market, where PPA constraints are more binding, where the key technologies originally arose, and where innovation in robotics, autonomous systems, 5G local-use mobile networks, and much else is at its fastest.
This report analyzes the technical and market aspects of these chipsets in edge devices and outlines their advantages and limitations. It provides forecasts for these categories through 2026 and a summary of market drivers and challenges.
This report focuses on the set of device categories that Omdia believes will be the primary drivers of volume shipments and revenue: automotive, consumer and enterprise robots, drones, head-mounted displays (HMDs), smart speakers, mobile phones, PCs/tablets, security cameras, machine vision, and edge servers. The report also details the applications that will run on AI edge devices, which in turn drive hardware requirements. Several software platforms are also emerging to help embedded application developers leverage AI in their applications.
This report does not cover the AI edge software stack in detail but does recognize its importance in driving the development of AI edge processors; the AI edge software stack is expected to be covered in a separate report planned for 2Q21. This report also does not cover the vendor ecosystem or vendor profiles, which will be covered in an Omdia Market Radar report planned for 4Q21.
This report uses a wide range of Omdia data products to establish baseline information and, where possible, to benchmark AI attach rates and chipset type ratios. It draws on the exhaustive chipset estimates from the “Smartphone Model Market Tracker” for a detailed, chip-by-chip review of AI accelerator content; the “System on Chip (SoC) Market Tracker” to backtest market sizing estimates; the “AI Edge Appliances for Healthcare” report and its underlying survey; and the “New Compute Ecosystem: From Cloud to Edge Report - 2021” survey for edge appliance estimates.