OpenAI CEO Sam Altman shares photos of ‘gigantic’ Stargate AI project: ‘Easy to throw around numbers, but…’



Sam Altman, CEO of ChatGPT-maker OpenAI, has said that the company has entered into an agreement with Oracle to develop an additional 4.5 gigawatts (GW) of “Stargate” data centre capacity in the US. According to the AI company, this investment aims to create new jobs, accelerate reindustrialisation, and solidify US leadership in artificial intelligence (AI). “We have signed a deal for an additional 4.5 gigawatts of capacity with oracle as part of stargate. easy to throw around numbers, but this is a _gigantic_ infrastructure project. some progress photos from abilene:” he said in a post on X.

What is the Stargate AI project and what has changed

Combined with the existing Stargate I site in Abilene, Texas, the new collaboration with Oracle brings OpenAI’s total Stargate data centre capacity under development to more than 5 GW, which will power over 2 million AI chips.

The deal advances OpenAI’s commitment, announced in January, to invest $500 billion in 10 GW of AI infrastructure in the US over the next four years. OpenAI now expects to exceed that initial commitment, citing strong momentum with partners including Oracle and SoftBank. Altman also said the company is “planning to significantly expand the ambitions of Stargate past the $500 billion commitment we announced in January.”

The expansion is projected to account for a substantial share of the hundreds of thousands of jobs Stargate is expected to create in the coming years. OpenAI estimates that building, developing, and operating the additional data centre capacity will generate more than 100,000 construction and operations jobs nationwide.

Construction of the initial Stargate I facility in Abilene is underway, with parts of the site already operational. Oracle began delivering the first Nvidia GB200 racks last month, allowing OpenAI to begin early training and inference workloads.
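For a rough sense of scale, the figures quoted above imply a power budget of around 2.5 kW per chip and roughly $50 billion of investment per gigawatt. The short Python sketch below works through that back-of-the-envelope arithmetic; the derived per-chip and per-gigawatt values are illustrative estimates based only on the article’s numbers, not figures disclosed by OpenAI or Oracle.

```python
# Back-of-the-envelope arithmetic from the figures in the article:
# 5 GW of capacity, "more than 2 million AI chips", and a $500 billion
# commitment covering 10 GW. Derived values are illustrative estimates only.

total_capacity_gw = 5          # Stargate capacity under development
chips = 2_000_000              # AI chips that capacity is said to power
commitment_usd = 500e9         # January investment commitment
committed_capacity_gw = 10     # capacity tied to that commitment

watts_per_chip = total_capacity_gw * 1e9 / chips
usd_per_gw = commitment_usd / committed_capacity_gw

print(f"Implied power budget per chip: ~{watts_per_chip / 1e3:.1f} kW")   # ~2.5 kW
print(f"Implied spend per gigawatt: ~${usd_per_gw / 1e9:.0f} billion")    # ~$50 billion
```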




