Industry News


Source: JUNWELL | Published: 2025-04-16

Market Size Forecast

According to a Deloitte report, the global AI chip market is expected to exceed $150 billion in 2025 and grow to $400 billion by 2027. Other institutions (such as analyses circulated on Sohu) put the 2025 market at roughly $91.96 billion, with an average annual growth rate of 25.6%–33%. The discrepancy likely stems from differences in statistical scope (for example, whether edge-device chips are counted).
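The gap between the two forecasts can be checked with simple compound-growth arithmetic. A minimal sketch, assuming plain compound annual growth (the dollar figures are the report's; the projection formula and variable names here are our own illustration):

```python
def project(base_usd_bn: float, cagr: float, years: int) -> float:
    """Compound a base-year market size (in $B) forward by `years` at `cagr`."""
    return base_usd_bn * (1 + cagr) ** years

# Deloitte-style figures: $150B (2025) -> $400B (2027) implies an
# annual growth rate of (400/150)**0.5 - 1, roughly 63% per year.
implied_cagr = (400 / 150) ** 0.5 - 1

# The alternative estimate: $91.96B in 2025, growing at 25.6%-33% per year.
low = project(91.96, 0.256, 2)   # 2027 low-end projection, ~$145B
high = project(91.96, 0.33, 2)   # 2027 high-end projection, ~$163B

print(f"Implied Deloitte CAGR: {implied_cagr:.1%}")
print(f"2027 range (alternative estimate): ${low:.1f}B - ${high:.1f}B")
```

Even at its upper bound, the lower-base estimate reaches only about $163B by 2027, far below Deloitte's $400B, which is consistent with the two forecasts counting different things.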


1. Technical roadmap and market positioning

GPU

Representative manufacturers: Nvidia, AMD, Biren Technology

Characteristics: Strong general-purpose capability and a mature software ecosystem.

ASIC

Representative manufacturers: Google (TPU), Cambricon

Characteristics: High efficiency in dedicated scenarios.

Neuromorphic (brain-like) chips

Representative manufacturer: IBM (TrueNorth)

Characteristics: Low power consumption, but the ecosystem is still immature.

Edge AI

Representative manufacturers: Horizon Robotics, Hailo

Characteristics: Low power consumption and high energy efficiency.

2. American companies

NVIDIA

A100/H100 GPU: Based on Ampere/Hopper architecture, supports large-scale AI training and inference, suitable for data centers and supercomputing.

Jetson series (e.g., Jetson AGX Orin): low-power AI chips for edge computing and robotics.

Technical features: the CUDA software ecosystem is a major advantage, with strong compatibility and wide adoption in deep learning.

AMD

Representative product: Instinct MI300 series: billed as the first heterogeneous CPU+GPU chip designed specifically for generative AI and high-performance computing.

Positioning: Challenging Nvidia's dominant position in the data center market.

Intel

Habana Gaudi/Gaudi2: AI-training accelerators (ASICs) positioned against Nvidia's A100.

Movidius VPU: a vision-processing chip for edge devices such as drones and security cameras.

Google

Representative product: TPU v4: a dedicated ASIC powering Google Cloud AI services, optimized for large-scale matrix operations.

Cerebras Systems

Representative product: Wafer Scale Engine (WSE-3): an ultra-large chip built from an entire silicon wafer, specializing in large-model training with enormous on-chip compute.

Groq

Representative product: LPU (Language Processing Unit): a low-latency inference chip optimized specifically for generative AI (e.g., LLMs).

3. Chinese enterprises

Huawei (HiSilicon)

Representative products: Ascend 910/310: based on the Da Vinci architecture, supporting full-scenario AI (cloud, edge, and device), with up to 256 TFLOPS of compute.

Cambricon

Representative product: Siyuan MLU590: 7 nm process, supporting thousand-card cluster training, positioned against Nvidia's A100.

Horizon Robotics

Representative product: Journey series (e.g., J5): a BPU architecture for autonomous driving, delivering 128 TOPS of compute.

Biren Technology

Representative product: BR100 series: a 7 nm general-purpose GPU whose claimed compute surpasses the NVIDIA A100, focused on the data-center market.

Iluvatar CoreX

Representative product: Big Island (BI) series: a general-purpose GPU compatible with the CUDA ecosystem, supporting AI training and graphics rendering.

Moore Threads

Representative product: MTT S series: a domestically produced fully functional GPU that supports AI acceleration and graphics rendering.

MetaX (Muxi)

Xisi N series: AI inference

Xiyun C series: large-model training

Xicai G series: graphics rendering

Enflame Technology (Suiyuan)

Yunsui T1x/T2x: training

Yunsui i1x/i2x: inference

Architecture: self-developed GCU-CARA architecture

Other news: DeepSeek is reportedly accelerating its in-house AI chip development and recruiting chip-design talent.

4. Other emerging players

Tesla

Representative product: Dojo D1 chip: supports training on autonomous-driving video data.

Meta (Facebook)

R&D direction: MTIA (Meta Training & Inference Accelerator): optimized for recommendation systems.

Amazon

Representative products: Inferentia/Trainium: self-developed chips deployed through AWS to reduce cloud-service costs.