Cover
Market Research Report
Product Code
1400759

Automotive Voice Industry Analysis (2023-2024)

Automotive Voice Industry Report, 2023-2024

Publication Date: | Publisher: ResearchInChina | English, 180 Pages | Delivery: within 1-2 business days


The automotive voice interaction market is characterized by the following:

1. In the OEM market, 46 brands install automotive voice as a standard configuration in 2023.

From 2019 to the first nine months of 2023, both the installations and the installation rate of automotive voice kept rising. In the first three quarters of 2023, nearly 12 million vehicles were pre-installed with automotive voice, with an installation rate of nearly 80%.

In 2023, 46 passenger car brands, including AITO, Avatr, HiPhi, Rising Auto, ZEEKR, Voyah, Li Auto, Lynk & Co, Tank, NIO, and Xpeng, boast an automotive voice installation rate of 100%. In 2023, over 20 million vehicles are equipped with automotive voice, with an installation rate higher than 80%.

2. Automakers' self-development of voice functions drives the reshaping of the voice supply chain.

OEMs' differentiated demand for intelligent automotive voice capabilities and their preference for independent development enable Tier 2 vendors in the conventional voice supply chain to cooperate directly with OEMs. The boundaries between the upstream, midstream and downstream of the industry chain are blurring. For example, automakers such as GWM, ZEEKR and Wuling work directly with AISpeech to raise the installation rate and intelligence level of their intelligent voice functions.

As industry chain relationships change, the competitive landscape of automotive voice changes accordingly. By installations from January to September 2023, AISpeech, which supports more than 150 models from over 30 automakers, ranked third.

3. The see-and-speak function becomes a standard configuration, and advanced functions such as parallel instruction, cross-sound-zone inheritance, offline voice, and out-of-vehicle voice are available on cars.

Previous analysis showed that the see-and-speak function was only available from some emerging carmakers and leading Chinese independent brands, the longest continuous conversation duration was only 90 seconds, and dual-sound-zone recognition was still the mainstream solution.

In 2023, see-and-speak has become a standard configuration in emerging carmakers' flagship models, enabling continuous dialogue of up to 120 seconds. Xpeng Motor has also introduced the "Full-time Dialogue at Driver's Seat" function (when enabled, the driver can see and speak while looking at the center console screen, without needing to wake up the on-screen content). Meanwhile, four-sound-zone recognition has become the new mainstream solution, and Li Auto and Xpeng Motor have also introduced six-sound-zone recognition solutions.

In addition, more advanced voice functions became available on cars in 2023.

4. Voice interaction is the first stop for foundation models to get on vehicles in intelligent cockpit scenarios.

With the boom of ChatGPT, related foundation model technology is rapidly extending from the AI sector into other industries. In 2023, the adoption of foundation models in the automotive industry accelerates, and quite a few automakers are exploring opportunities to implement foundation models in intelligent cockpit, intelligent driving and other scenarios.

In intelligent cockpit scenarios, voice interaction is the first way for foundation models to be integrated into vehicles. In February 2023, Baidu released ERNIE Bot, a Chinese counterpart of ChatGPT, and brands such as GWM, Geely and Voyah followed. In April 2023, Alibaba disclosed that the AliOS intelligent vehicle operating system had been connected to the Tongyi Qianwen foundation model for testing and would later be applied in IM Motors vehicles. In Huawei HarmonyOS 4.0, the intelligent assistant Xiaoyi was connected to the Pangu model for the first time, mainly to improve intelligent interaction, scenario arrangement, language understanding, productivity, and personalized services.

This report analyzes the global and Chinese automotive voice markets and industry, covering the technology landscape, the basic market structure, OEMs' development and application of automotive voice functions, and major automotive voice providers, including their profiles, key technologies, and business strategies.

Table of Contents

1 Overview of Automotive Voice Industry

  • Overview of Automotive Voice
  • Application Scenarios of Automotive Voice
  • Automotive Voice Technologies
  • Automotive Voice Interaction Architecture
  • Automotive Voice Common Interaction Functions
  • Automotive Voice Development Factors
  • Development History of Automotive Voice
  • Automotive Voice Industry Chain Evolution
  • Automotive Voice Industry Chain
  • Market Size Forecast (2023-2026)
  • Voice Providers: Market Rankings
  • Other Voice Technologies

2 Automotive Voice Applications for OEMs

  • Voice Function Comparison of OEMs
  • Summary of Voice Development Models by OEMs
  • OTA Voice Functions of OEMs
  • Xpeng Motor
    • Automotive Voice-enabled Benchmark Models
    • Automotive Voice Functions
    • Voice Technology
    • Self-developed Voice Architecture
    • Self-developed Voice Basic Capabilities
    • Automotive Voice Partners
  • Li Auto
    • Automotive Voice-enabled Benchmark Models
    • Automotive Voice Skills
    • Vehicle Control Functions
    • Self-developed Voice Technology
    • Foundation Model
    • Cockpit Interaction Planning
    • Automotive Voice Partners
  • NIO
  • AITO
  • Aion
  • Rising Auto
  • Jiyue
  • ZEEKR
  • IM Motor
  • Denza
  • Leap Motor
  • Neta Auto
  • Geely
  • GWM
  • Changan
  • Chery

3 Automotive Voice Providers

  • Summary of Automotive Voice Providers: Market Position, Technical Competitiveness, Foundation Model Layout
  • iFLYTEK
  • Cerence
  • AISpeech
  • Unisound
  • txzing.com
  • VW-Mobvoi
  • Mobvoi
  • Pachira
  • Tencent
  • Baidu
  • Alibaba
  • Huawei
  • Volcano Engine
  • Microsoft
  • VoiceAI

4 Automotive Voice Industry Chain

  • Platform Integration: PATEO
  • Platform Integration: Tinnove
  • Voice Processing Engine: SinoVoice
  • Voice Processing Engine: Megatronix
  • Data Collection / Annotation: Haitian Ruisheng
  • Data Collection / Annotation: Testin
  • Data Collection / Annotation: DataBaker
  • Corpus: Magic Data
  • Chip: Horizon
  • Chip: ShensiliCon
  • Chip: Chipintelli
  • Voice Chip: Rockchip
  • Voice Chip: WUQi Micro
  • Voice Chip: LAPIS Semiconductor

5 Development Trends of Automotive Voice

Product Code: LMM020

The automotive voice interaction market is characterized by the following:

1. In the OEM market, 46 brands install automotive voice as a standard configuration in 2023.

From 2019 to the first nine months of 2023, automotive voice saw rising installations and a rising installation rate. In the first three quarters of 2023, nearly 12 million vehicles were pre-installed with automotive voice, with an installation rate of nearly 80%.

In 2023, there are 46 passenger car brands boasting an automotive voice installation rate of 100%, including AITO, Avatr, HiPhi, Rising Auto, ZEEKR, Voyah, Li Auto, Lynk & Co, Tank, NIO, and Xpeng. In 2023, over 20 million vehicles are equipped with automotive voice, with an installation rate higher than 80%.

2. Automakers' self-development of voice facilitates the reshaping of the voice supply chain.

OEMs' differentiated demand for intelligent automotive voice and their preference for independent development enable Tier 2 vendors in the conventional voice supply chain to cooperate directly with OEMs. Boundaries between the upstream, midstream and downstream of the industry chain tend to blur. For example, the direct cooperation of automakers like GWM, ZEEKR and Wuling with AISpeech improves the installation rate and intelligence level of their intelligent voice functions.

The change in industry chain relationships makes the automotive voice competitive pattern change accordingly. By installations from January to September 2023, AISpeech, which supported more than 150 models from over 30 automakers, ranked third.

3. See-and-speak function becomes a standard configuration, and advanced functions such as parallel instruction, cross-sound-zone inheritance, offline voice, and out-of-vehicle voice are available on cars.

In ResearchInChina's China Automotive Voice Industry Report, 2021-2022, "see-and-speak" was only installed by some emerging carmakers and leading Chinese independent brands, the longest continuous conversation duration was only 90 seconds, and dual-sound-zone recognition was still the mainstream solution.

In 2023, "see-and-speak" has become a standard configuration in emerging carmakers' flagship models, with up to 120-second continuous dialogue. Xpeng Motor has also introduced the "Full-time Dialogue at Driver's Seat" function (when turned on, it allows the driver to see and speak when looking at the center console screen, without needing to wake up the content on the screen). Meanwhile, four-sound-zone recognition has become a new mainstream solution, and Li Auto and Xpeng Motor also introduced six-sound-zone recognition solutions.

In addition, more advanced voice functions became available on cars in 2023.

Parallel instruction: support up to 10 actions in one instruction;

Cross-sound-zone inheritance: available on models of Xpeng, ZEEKR, and Li Auto (cross-sound-zone inheritance: when a person finishes an instruction, if other passengers want to continue, they can trigger this function by saying "I want too").

Offline instruction: more controllable content. Jiyue 01 supports all-zone, full offline voice. In offline state, Jiyue 01 still enables extremely fast interaction with occupants.

Out-of-vehicle voice: in Changan Nevo A07, this function allows for voice control of the trunk, windows, music, air conditioning, pull-out/in, and other functions; in Jiyue 01, it allows for voice control of the car/parking, air conditioning, audio, lights, windows, doors, tailgate, and charging cover.
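Functions such as parallel instruction and cross-sound-zone inheritance amount to multi-intent parsing plus a per-zone dialogue context that another sound zone can pick up. The Python sketch below is a rough, hypothetical illustration of those two mechanisms only; the keyword table, intent names, and "I want too" trigger are simplified stand-ins, not any OEM's or supplier's real implementation.

```python
# Hypothetical sketch: parallel instruction + cross-sound-zone inheritance.
# Not any vendor's real API; intent names and parsing rules are invented.

from dataclasses import dataclass

MAX_PARALLEL_ACTIONS = 10  # the report cites "up to 10 actions in one instruction"

@dataclass
class Intent:
    action: str   # e.g. "open_window", "play_music"
    zone: str     # sound zone that issued it, e.g. "driver", "rear_left"

# Toy keyword-to-action table standing in for a real NLU model.
KEYWORD_TO_ACTION = {
    "open the window": "open_window",
    "turn on the ac": "ac_on",
    "play some music": "play_music",
    "navigate home": "nav_home",
}

def parse_parallel_instruction(utterance: str, zone: str) -> list:
    """Split one utterance into several intents ("parallel instruction")."""
    clauses = [c.strip() for c in utterance.lower().split(" and ") if c.strip()]
    intents = []
    for clause in clauses[:MAX_PARALLEL_ACTIONS]:
        action = KEYWORD_TO_ACTION.get(clause)
        if action:
            intents.append(Intent(action=action, zone=zone))
    return intents

class DialogueManager:
    """Keeps the last intent so another sound zone can inherit it."""
    def __init__(self):
        self.last_intent = None  # most recent Intent, shared across zones

    def handle(self, utterance: str, zone: str) -> list:
        if utterance.lower().strip() == "i want too" and self.last_intent:
            # Cross-sound-zone inheritance: repeat the previous action for this zone.
            return [Intent(action=self.last_intent.action, zone=zone)]
        intents = parse_parallel_instruction(utterance, zone)
        if intents:
            self.last_intent = intents[-1]
        return intents

if __name__ == "__main__":
    dm = DialogueManager()
    print(dm.handle("Open the window and play some music", zone="driver"))
    print(dm.handle("I want too", zone="rear_left"))  # inherits "play_music"
```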

4. Voice interaction is the first stop for foundation models to get on vehicles in intelligent cockpit scenarios.

The boom of ChatGPT has allowed related foundation model technology to rapidly extend from the AI sector into other industries. In 2023, foundation models gain pace in the automotive industry, and quite a few automakers are exploring opportunities to implement foundation models in intelligent cockpit, intelligent driving and other scenarios.

In intelligent cockpit scenarios, voice interaction is the first stop for foundation models to get on vehicles. In February 2023, Baidu released ERNIE Bot, a Chinese counterpart of ChatGPT, and brands like GWM, Geely, and Voyah followed; in April 2023, Alibaba disclosed that the AliOS intelligent vehicle operating system had been connected to the Tongyi Qianwen foundation model for testing and would later be applied by IM Motors; in August 2023, in Huawei HarmonyOS 4.0, the intelligent assistant Xiaoyi was connected to the Pangu model for the first time, mainly to improve its capabilities in intelligent interaction, scenario arrangement, language understanding, productivity and personalized service.

Besides conventional Internet companies, voice providers that are important foundation model players, such as iFLYTEK, AISpeech and Unisound, have also launched related products.

iFLYTEK Spark cognitive foundation model has six core capabilities: penetrative understanding of multi-round dialogues, knowledge application, empathic chat & dialogue, self-guided reply in multi-round dialogues, file-based rapid learning of new knowledge, and evolution based on correction opinions of massive users;

AISpeech DFM-2 is an industry language foundation model with generalized intelligence. In the field of in-vehicle interaction, AISpeech integrates Lyra automotive voice assistant with DFM-2, which significantly improves capabilities in planning, creation, knowledge, intervention, plug-in, multi-level semantic dialogue, and documentation, and supports multi-modal, multi-intent, multi-sound-zone, and all-scenario multi-round continuous dialogues.
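The integrations described above (ERNIE Bot, Tongyi Qianwen, Pangu, Spark, DFM-2) share a common pattern: the in-vehicle assistant keeps a deterministic command path for vehicle control and hands open-ended dialogue to a foundation model. The Python sketch below is a minimal, hypothetical illustration of that routing pattern only; the `llm_chat` placeholder and the command table are invented and do not represent any vendor's actual API.

```python
# Hypothetical sketch of a cockpit voice assistant handing off to a foundation
# model: vehicle-control commands stay on a local rule/NLU path, while
# open-ended questions are forwarded to an LLM backend.
# `llm_chat` is a placeholder, not ERNIE Bot's, Tongyi Qianwen's, or Pangu's API.

LOCAL_COMMANDS = {
    "open the sunroof": "cmd_sunroof_open",
    "turn on seat heating": "cmd_seat_heat_on",
    "close the windows": "cmd_windows_close",
}

def llm_chat(prompt: str) -> str:
    """Placeholder for a call to a cloud foundation-model service."""
    return f"[LLM answer to: {prompt!r}]"

def route_utterance(utterance: str) -> str:
    key = utterance.lower().strip()
    if key in LOCAL_COMMANDS:
        # Deterministic, low-latency path: execute a predefined vehicle command.
        return f"executing {LOCAL_COMMANDS[key]}"
    # Everything else goes to the foundation model for free-form dialogue.
    return llm_chat(utterance)

if __name__ == "__main__":
    print(route_utterance("Open the sunroof"))
    print(route_utterance("Recommend a scenic route for a weekend trip"))
```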

Table of Contents

1 Overview of Automotive Voice Industry

  • 1.1 Overview of Automotive Voice
  • 1.2 Application Scenarios of Automotive Voice
  • 1.3 Automotive Voice Technologies
  • 1.4 Automotive Voice Interaction Architecture
  • 1.5 Automotive Voice Common Interaction Functions
  • 1.6 Automotive Voice Development Factors
  • 1.7 Development History of Automotive Voice
  • 1.8 Automotive Voice Industry Chain Evolution
  • 1.9 Automotive Voice Industry Chain
  • 1.10 Market Size Forecast (2023-2026)
  • 1.11 Voice Providers Market Rankings
  • 1.12 Other Voice Technologies

2 Automotive Voice Applications for OEMs

  • 2.1 Voice Function Comparison of OEMs
  • 2.2 Summary of Voice Development Models by OEMs
  • 2.3 OTA Voice Functions of OEMs
  • 2.4 Xpeng Motor
    • 2.4.1 Automotive Voice-enabled Benchmark Models
    • 2.4.2 Automotive Voice Functions
    • 2.4.3 Voice Technology
    • 2.4.4 Self-developed Voice Architecture
    • 2.4.5 Self-developed Voice Basic Capabilities
    • 2.4.6 Automotive Voice Partners
  • 2.5 Li Auto
    • 2.5.1 Automotive Voice-enabled Benchmark Models
    • 2.5.2 Automotive Voice Skills
    • 2.5.3 Vehicle Control Functions
    • 2.5.4 Self-developed Voice Technology
    • 2.5.5 Foundation Model
    • 2.5.6 Cockpit Interaction Planning
    • 2.5.7 Automotive Voice Partners
  • 2.6 NIO
  • 2.7 AITO
  • 2.8 Aion
  • 2.9 Rising Auto
  • 2.10 Jiyue
  • 2.11 ZEEKR
  • 2.12 IM Motor
  • 2.13 Denza
  • 2.14 Leap Motor
  • 2.15 Neta Auto
  • 2.16 Geely
  • 2.17 GWM
  • 2.18 Changan
  • 2.19 Chery

3 Automotive Voice Providers

  • 3.1 Summary of Automotive Voice Providers: Market Position & Technical Competitiveness & Foundation Model Layout
  • 3.2 iFLYTEK
    • 3.2.1 Profile
    • 3.2.2 Intelligent Vehicle Business Performance
    • 3.2.3 Intelligent Vehicle Core Technology
    • 3.2.4 Voice Interaction Full Link Technology
    • 3.2.5 Automotive Interaction Development Plan
    • 3.2.6 Text-To-Speech (TTS) Technology
    • 3.2.7 Interaction Model
    • 3.2.8 Application of Interaction Foundation Model in Intelligent Cockpit
    • 3.2.9 Cockpit OS Enhanced by Foundation Models
    • 3.2.10 Knowledge Graph of iFLYTEK Interaction Foundation Model
    • 3.2.11 Interaction Foundation Model Core Capabilities
    • 3.2.12 Interaction Foundation Model Enabling Automotive Human-Machine Interaction
    • 3.2.13 Accumulation in Cognitive Intelligent Foundation Model Technology
    • 3.2.14 "1+N" System
    • 3.2.15 Multilingual Interaction System
    • 3.2.16 Support for Automotive Minor Languages
    • 3.2.17 Open Platform Voice Technology Support
    • 3.2.18 Out-of-vehicle Voice Interaction System
  • 3.3 Cerence
    • 3.3.1 Automotive Voice Recognition Hardware Framework
    • 3.3.2 Vehicle-Cloud Integration Solution
    • 3.3.3 Core Technology
    • 3.3.4 ARK Main Content
    • 3.3.5 SSE
    • 3.3.6 Drive
    • 3.3.7 Automotive Voice Interaction + AI Solution
    • 3.3.8 Co-Pilot
    • 3.3.9 Biometrics
    • 3.3.10 ICC
    • 3.3.11 Out-of-vehicle Voice Interaction
    • 3.3.12 TTS
    • 3.3.13 Other Voice Solutions
    • 3.3.14 Product Development Roadmap (2023~)
  • 3.4 AISpeech
    • 3.4.1 Profile
    • 3.4.2 Voice and Language Key Technologies
    • 3.4.3 "Cloud + Chip" Integration Strategy
    • 3.4.4 Customized Development Platform for All-link Intelligent Dialogue System: DUI
    • 3.4.5 Industry Language Model: DFM
    • 3.4.6 Intelligent Telematics Solutions
    • 3.4.7 Automotive Voice Assistant
    • 3.4.8 Intelligent Cockpit Products
    • 3.4.9 Cooperation Model Cases
  • 3.5 Unisound
    • 3.5.1 Intelligent Automotive Solutions
    • 3.5.2 Foundation Model
    • 3.5.3 Voice Technology Capabilities
    • 3.5.4 TTS
    • 3.5.5 Automotive Voice Solution Business Models
    • 3.5.6 Core Technology
    • 3.5.7 Automotive Voice Chip
    • 3.5.8 Automotive Voice Solution Supporting
  • 3.6 txzing.com
  • 3.7 VW-Mobvoi
  • 3.8 Mobvoi
  • 3.9 Pachira
  • 3.10 Tencent
  • 3.11 Baidu
    • 3.11.1 Core Voice Technology
    • 3.11.2 Voice Chip
    • 3.11.3 DuerOS
    • 3.11.4 DuerOS Empowered by Foundation Models
    • 3.11.5 ERNIE Foundation Model Enabled Cockpit Voice Interaction
    • 3.11.6 ERNIE Foundation Model Analysis
  • 3.12 Alibaba
  • 3.13 Huawei
  • 3.14 Volcano Engine
  • 3.15 Microsoft
  • 3.16 VoiceAI

4 Automotive Voice Industry Chain

  • 4.1 Platform Integration: PATEO
  • 4.2 Platform Integration: Tinnove
  • 4.3 Voice Processing Engine: SinoVoice
  • 4.4 Voice Processing Engine: Megatronix
    • 4.4.1 Product Layout
    • 4.4.2 Automotive Voice SmartMega® VOS Module
    • 4.4.3 Automotive Voice Customized and Cooperation Modes
    • 4.4.4 Implemented Model Cases
  • 4.5 Data Collection / Annotation: Haitian Ruisheng
    • 4.5.1 Voice Business
    • 4.5.2 Structure of Training Dataset
    • 4.5.3 Speech Services: Data Collection Services
    • 4.5.4 Speech Services: Data Annotation Services
  • 4.6 Data Collection / Annotation: Testin
  • 4.7 Data Collection / Annotation: DataBaker
  • 4.8 Corpus: Magic Data
  • 4.9 Chip: Horizon
  • 4.10 Chip: ShensiliCon
  • 4.11 Chip: Chipintelli
  • 4.12 Voice Chip: Rockchip
  • 4.13 Voice Chip: WUQi Micro
  • 4.14 Voice Chip: LAPIS Semiconductor

5 Development Trends of Automotive Voice

  • 5.1 Trend 1
  • 5.2 Trend 2
  • 5.3 Trend 3
  • 5.4 Trend 4
  • 5.5 Trend 5
  • 5.6 Trend 6
  • 5.7 Trend 7
  • 5.8 Trend 8