The week kicks off with another big deal in AI. OpenAI commits $38 billion over seven years for immediate access to AWS infrastructure and “hundreds of thousands” of Nvidia GPUs for training and inference.
 
Just six weeks ago, Nvidia announced an investment of up to $100 billion in OpenAI.
 
Now OpenAI is using that backing to secure access to … Nvidia chips.
 
Is this a shell game, moving money from one pocket to another? The kind of hubris we’ve seen in past bubbles?
 
In this case, it looks more like a feature of the AI infrastructure boom, not a bug.
 
It's coordination across big tech, which accelerates the infrastructure build. And that only widens the competitive moats.
 
For OpenAI, which inherently risks losing ground to model commoditization, these deals provide the competitive advantage to widen its lead in scaling global AI usage. And the cloud companies now have an incentive to see OpenAI stay relevant.
 
Jensen Huang has said OpenAI will be the next multi-trillion-dollar company. That looks more plausible after the events of the past six weeks.
 
These deals do indeed scratch the backs of all parties, but they also increase the probability that America wins the race for global AI leadership, by coordinating capital, compute capacity and AI deployment.
 
All of this said, the acceleration puts even more demand on the electricity required to run it all.
 
On the one hand, the alignment and multi-year commitments are catalysts to get things moving on energy capacity. On the other hand, in the short term, they tighten energy supply even further. Just last week, Satya Nadella admitted on a podcast that Azure has GPUs "sitting in inventory" that it can't plug in, because it doesn't have the power.