
Azure Cobalt 200: First Arm CSS V3 CPU

2025-11-29 11:38:19 · Mr. Ming

Arm has confirmed that Microsoft's newly launched Azure Cobalt 200 is the world’s first processor built on Arm Neoverse CSS V3 and specifically designed for cloud and AI infrastructure. The announcement marks a major step in how modern data centers are being reshaped by artificial intelligence, moving away from one-size-fits-all systems toward deeply customized platforms built for specific workloads.

As AI becomes part of everything from web services to big data analytics and large-model inference, data centers are no longer just collections of compute nodes. They are evolving into tightly integrated systems that must deliver high throughput, low latency, and strong energy efficiency at the same time. To meet these demands, the industry is rethinking compute architecture from the ground up, optimizing at every level for performance, scalability, and power efficiency.

Microsoft officially introduced Azure Cobalt 200 on November 19 as its next-generation Arm-based CPU for cloud-native workloads. Manufactured using TSMC's advanced 3nm process, the chip represents Microsoft's strategy of refining its entire cloud stack, from silicon through to software. It is fully compatible with existing Azure workloads running on Cobalt CPUs, delivers up to 50% higher performance than the previous Cobalt 100, and is designed to integrate tightly with Microsoft's latest security, networking, and storage technologies.

The first Cobalt 200 systems are already operating in Microsoft's data centers, with broader deployment and availability planned for 2026. One of the most important technical highlights is its use of Arm Neoverse CSS V3, Arm's latest performance-focused compute architecture. According to Arm, this makes Cobalt 200 the first publicly announced chip to adopt the platform, positioning it at the forefront of next-generation cloud silicon.

Each Cobalt 200 system-on-chip integrates 132 cores, with 3MB of L2 cache per core and a massive 192MB shared L3 cache. This configuration is designed to handle large-scale cloud workloads with high efficiency, especially when paired with dedicated accelerators. Microsoft describes this as the foundation of its “AI data center” vision, where general-purpose CPUs work closely with specialized hardware, supported by customized networking, storage, and security offload engines to improve performance in training, tuning, and deployment of AI models.
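As a rough illustration of what those figures add up to, here is a minimal Python sketch. The per-core L2 and shared L3 sizes are the ones quoted above; the optional sysfs read assumes a Linux guest that exposes the standard cache-topology files, which is a general Linux convention rather than anything documented for Cobalt 200 specifically.

```python
# Back-of-the-envelope cache totals for a Cobalt 200 SoC, using the figures
# quoted above (132 cores, 3 MB of L2 per core, 192 MB of shared L3).
# Illustrative arithmetic only, not vendor documentation.
CORES = 132
L2_PER_CORE_MB = 3
L3_SHARED_MB = 192

total_l2_mb = CORES * L2_PER_CORE_MB          # 396 MB of private L2 across the SoC
total_cache_mb = total_l2_mb + L3_SHARED_MB   # 588 MB of on-die L2 + L3 combined
print(f"Aggregate L2: {total_l2_mb} MB, L2 + L3: {total_cache_mb} MB")

# On a Linux guest, the cache hierarchy the VM actually sees can be read from
# sysfs (standard Linux paths; availability inside an Azure VM is an assumption):
from pathlib import Path

cache_root = Path("/sys/devices/system/cpu/cpu0/cache")
for idx in sorted(cache_root.glob("index*")):
    level = (idx / "level").read_text().strip()
    ctype = (idx / "type").read_text().strip()
    size = (idx / "size").read_text().strip()
    print(f"L{level} {ctype}: {size}")
```

Note that the topology visible inside a VM reflects the vCPU slice the hypervisor exposes, not necessarily the full 132-core SoC.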

One of the most distinctive features of Cobalt 200 is its per-core dynamic voltage and frequency scaling. Instead of all cores running at the same speed, each of the 132 cores can operate at its own optimal performance level depending on workload demands. This fine-grained control helps maximize efficiency, ensuring that power is used where it matters most without wasting energy elsewhere.
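Per-core DVFS is a hardware and firmware feature, but its effect can be observed from software. The sketch below is a minimal example, assuming a Linux guest whose cpufreq driver exposes per-CPU scaling_cur_freq files; whether a Cobalt 200 VM surfaces these is an assumption on my part, not something stated in the announcement.

```python
# Sample per-core operating frequencies via the standard Linux cpufreq sysfs
# interface. Assumes the guest kernel exposes scaling_cur_freq per CPU; on
# some cloud VMs these files are absent or report a fixed nominal value.
from pathlib import Path

def read_core_freqs_khz():
    """Return {cpu_name: frequency_in_kHz} for every CPU that exposes cpufreq."""
    freqs = {}
    cpu_dirs = Path("/sys/devices/system/cpu").glob("cpu[0-9]*")
    for cpu_dir in sorted(cpu_dirs, key=lambda p: int(p.name[3:])):
        f = cpu_dir / "cpufreq" / "scaling_cur_freq"
        if f.exists():
            freqs[cpu_dir.name] = int(f.read_text().strip())
    return freqs

if __name__ == "__main__":
    freqs = read_core_freqs_khz()
    if not freqs:
        print("cpufreq is not exposed in this environment")
    else:
        for cpu, khz in freqs.items():
            print(f"{cpu}: {khz / 1000:.0f} MHz")
```

Sampling this repeatedly under a mixed workload would show different cores settling at different frequencies, which is exactly the behavior per-core DVFS is meant to enable.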

With Azure Cobalt 200, Microsoft and Arm are signaling a clear direction for the future of cloud computing: purpose-built processors, tightly integrated systems, and architectures shaped from the start for AI-driven workloads.
