OpenAI Partners with Broadcom for 10-Gigawatt AI Chip Deployment
OpenAI has entered a multiyear collaboration with Broadcom to co-develop custom AI chips and networking equipment. Together, the companies plan to deploy 10 gigawatts of AI accelerators, designed by OpenAI and implemented with Broadcom’s networking solutions. Installation of these systems is scheduled to begin in the second half of 2026 and be completed by the end of 2029.
The new infrastructure, using Broadcom’s Ethernet and connectivity technology, will be deployed across OpenAI and partner data centers to meet the rising global demand for AI services. By designing its own accelerators, OpenAI aims to integrate lessons learned from building cutting-edge models directly into hardware, unlocking higher levels of AI performance and efficiency.
OpenAI CEO Sam Altman said, “Partnering with Broadcom is a critical step in building the infrastructure needed to unlock AI’s potential and deliver real benefits for people and businesses. Developing our own accelerators complements the broader ecosystem of partners expanding AI capacity for humanity.”
Broadcom President and CEO Hock Tan added, “We are excited to co-develop and deploy 10 gigawatts of next-generation accelerators and network systems, helping pave the way for the future of AI.”
This announcement comes as OpenAI continues to expand its compute partnerships. Last month, Nvidia committed up to $100 billion to support OpenAI’s AI data center infrastructure, including 10 gigawatts of deployment using its upcoming Vera Rubin chips. Additionally, OpenAI recently signed a multiyear agreement with AMD to deploy six gigawatts of AMD GPUs, starting with an initial one-gigawatt installation of AMD Instinct MI450 GPUs in the second half of 2026.
With over 800 million weekly active users, OpenAI is rapidly scaling its hardware ecosystem to support next-generation AI capabilities and meet increasing global demand.