According to recent reports, AI company DeepSeek has delayed the launch of its much-anticipated R2 model because of chip-related setbacks. R2 was originally being trained on Huawei's Ascend platform, but system instability, limited software and hardware support, and slower inter-chip communication disrupted progress.
To keep development moving, DeepSeek switched to NVIDIA chips for the training phase while continuing to use Huawei hardware for inference. This adjustment pushed the release date from the planned May launch to an as-yet-unconfirmed date.
Huawei has since dispatched an engineering team to DeepSeek's offices to help optimize the Ascend platform for R2's development. Despite the ongoing collaboration, DeepSeek founder Liang Wenfeng has reportedly voiced dissatisfaction with the pace and pledged to boost R&D investment to finish the model within weeks.
Another major bottleneck has been longer-than-expected data labeling work, a critical step in AI training that has stretched the overall timeline.
The delay highlights the real-world challenges of integrating domestic chips into high-performance AI workflows. While Chinese regulators have been pressing local tech firms to justify purchases of NVIDIA's H20 processors in an effort to promote domestic alternatives, industry consensus holds that local chips still trail NVIDIA by one to two generations in training capability.
Berkeley AI researcher Ritwik Gupta noted that "model homogenization is real—developers can quickly switch to competitors like Alibaba's Qwen3," underscoring the narrow market window DeepSeek now faces.