China’s DeepSeek says its hit AI model cost just $294,000 to train

Chinese AI developer DeepSeek said it spent just $294,000 to train its R1 model, far less than the sums reported by its U.S. rivals. The figure appeared in a peer-reviewed article published in the journal Nature on September 18, 2025, and is likely to reignite debate over China's place in the global AI race.

The Hangzhou-based company, which has kept a low public profile since its January 2025 announcement of lower-cost AI systems, said the R1 model was trained on 512 Nvidia H800 chips over 80 hours. The article, co-authored by DeepSeek founder Liang Wenfeng, also disclosed for the first time that the company used Nvidia A100 GPUs in the preparatory stages of development.

The disclosure comes amid ongoing scrutiny from U.S. companies and officials over DeepSeek's access to advanced AI chips, particularly after the United States imposed export controls on high-performance chips to China in October 2022. Despite those restrictions, DeepSeek has attracted top talent in China, partly because it operates an A100 supercomputing cluster, a rarity among domestic firms.

The company's cost-effective approach to AI development has already shaken global markets, prompting investors to reassess the dominance of established AI leaders such as Nvidia.