
According to a joint statement, Amazon plans to invest an additional $5 billion in Anthropic, with the potential for a further $20 billion investment in the future, strengthening the strategic partnership between the two companies in the rapidly expanding artificial intelligence (AI) industry.
Anthropic, the developer of the Claude AI chatbot and coding tools, also stated that it intends to invest more than $100 billion over the next decade in computing infrastructure, including cloud services and AI chips provided by Amazon Web Services (AWS). This long-term commitment highlights the scale of compute demand required to train and deploy next-generation AI models.
Amazon was already one of Anthropic’s largest backers, with prior investments totaling approximately $8 billion. Through this expanded partnership, Amazon strengthens the position of its cloud business by gaining deeper integration with leading foundation models, while also expanding adoption of its in-house AI accelerators, including the AWS Trainium chip series. In turn, Anthropic gains access to AWS’s global enterprise customer ecosystem; the two companies report that over 100,000 customers already run Claude models on AWS infrastructure.
Founded in 2021 by former OpenAI researchers, Anthropic is widely viewed as a leading contender in the generative AI space and is frequently cited as a likely candidate for an initial public offering (IPO) in the near future. The company has been actively scaling its commercial offerings to offset the significant costs of advanced AI model development. In February, Anthropic completed a funding round that valued the company at approximately $38 billion, and later private investment offers reportedly valued it at more than $80 billion.
The latest agreement underscores Anthropic’s ongoing effort to secure the large-scale compute capacity required to train its next-generation Claude models. Like other AI developers, the company has entered multiple infrastructure partnerships to ensure access to advanced semiconductor resources and high-performance computing systems. Recently, Anthropic announced a collaboration with Broadcom to develop custom chips based on Google’s Tensor Processing Unit (TPU) architecture, which competes with Amazon’s Trainium platform. Together, these partnerships are expected to provide Anthropic with approximately 3.5 gigawatts of computing capacity, while earlier agreements include plans to procure up to one million custom AI chips from Alphabet’s ecosystem.
Under the new announcement, Amazon will further supply Anthropic with a combination of general-purpose processors and AI accelerator chips, targeting a total computing capacity of around 5 gigawatts. This expansion reflects the increasing demand for high-performance semiconductor infrastructure driven by large-scale AI model training and inference workloads.
Despite its rapid product growth, including strong adoption of tools such as Claude Code, Anthropic has also faced regulatory and legal challenges in the United States related to AI safety frameworks. These disputes have resulted in ongoing litigation, and the company has argued that such issues could affect its business operations.
Amazon emphasized that it remains a minority shareholder in Anthropic and does not hold a board seat or governance control. Future investment commitments, according to the company, will be contingent on the achievement of specific commercial milestones, reflecting a performance-linked approach to continued collaboration in the AI infrastructure and semiconductor-driven computing ecosystem.