Mira Murati’s startup, Thinking Machines Lab (TML), has signed a multi-billion-dollar agreement with Google Cloud, TechCrunch has exclusively learned. The deal significantly expands TML’s access to Google’s artificial intelligence infrastructure, including systems powered by Nvidia’s new GB300 graphics processing units (GPUs). The alliance underscores the intensifying competition among cloud providers to secure partnerships with frontier AI labs, while powering TML’s push to automate the creation of advanced AI models.
The agreement, reportedly valued in the single-digit billions, gives Thinking Machines Lab access to Google’s most advanced AI systems, built on Nvidia’s new GB300 chips. Beyond raw compute, the deal covers a suite of infrastructure services designed to support large-scale model training and deployment, including the data storage, networking, and AI-optimized software stacks that sophisticated model development depends on.
The investment reflects a broader industry trend in which cloud giants are vying to become the foundational infrastructure for the next generation of artificial intelligence. The partnership is particularly notable because it positions TML as one of the first Google Cloud customers to use its GB300-powered systems, which Google says offer a twofold improvement in training and serving speed over previous-generation GPUs. That kind of performance leap matters for firms at the cutting edge of AI research and development, where computational bottlenecks often dictate the pace of innovation.
Thinking Machines Lab: A Rapid Ascent in Frontier AI
Thinking Machines Lab has been one of the most closely watched startups in AI since Mira Murati founded it in February 2025. Murati, formerly OpenAI’s Chief Technology Officer, left the company amid a period of intense industry scrutiny and rapid technological change. Her decision to launch TML immediately drew attention, fueled by her prominent role in the development of OpenAI’s models, including ChatGPT and DALL-E.
The startup’s rise has been rapid. Just five months after its founding, in July 2025, TML raised $2 billion in a seed round at a $12 billion valuation, an extraordinary figure for a seed-stage company that signaled deep investor confidence in Murati’s vision and TML’s potential in the nascent frontier AI market. Despite its profile and substantial backing, Thinking Machines Lab has kept its core operations and technology largely under wraps, adding to the intrigue around its progress.
In October 2025, TML unveiled its first product, Tinker, a tool designed to automate the creation of custom frontier AI models. Tinker addresses a real need in the AI ecosystem: letting organizations develop specialized AI systems tailored to their own requirements rather than relying solely on general-purpose large language models. As the Google Cloud deal reveals, Tinker leans heavily on reinforcement learning workloads, a computationally intensive training approach behind recent breakthroughs at leading labs such as DeepMind and Murati’s former employer, OpenAI, where it underpins agents capable of complex decision-making and problem-solving.
While this agreement marks Thinking Machines Lab’s first significant partnership with a major cloud services provider, it is not its initial foray into strategic alliances. Earlier in 2025, TML had already secured a partnership with chipmaking behemoth Nvidia, a deal that notably included an investment from the GPU giant. This earlier collaboration with Nvidia underscored TML’s commitment to leveraging cutting-edge hardware from the outset. The non-exclusive nature of the Google Cloud deal also suggests TML’s pragmatic approach, indicating that the startup may opt to utilize multiple cloud providers over time to ensure resilience, optimize costs, and access specialized services, reflecting a growing trend among enterprise AI developers.
The High-Stakes Cloud AI Race and Google’s Aggressive Strategy
Google Cloud’s strategic investment in Thinking Machines Lab is a clear manifestation of its aggressive campaign to solidify its position in the fiercely competitive cloud AI market. In an era where AI development is increasingly reliant on vast computational resources, securing partnerships with innovative, fast-growing frontier labs like TML is paramount. Google Cloud is actively seeking to bundle its extensive cloud offerings – ranging from scalable storage solutions and its robust Kubernetes engine for container orchestration to Spanner, its globally distributed database product – with its specialized AI infrastructure to create a comprehensive and compelling ecosystem for AI developers.
This deal with TML is part of a series of high-profile agreements Google has recently struck. Earlier this month, AI research powerhouse Anthropic, a direct competitor to OpenAI, signed a substantial agreement with Google and Broadcom. That deal focused on securing multiple gigawatts of tensor processing unit (TPU) capacity. TPUs are Google’s custom-designed AI chips, specifically engineered for machine learning workloads, offering an alternative to Nvidia’s dominant GPUs. This dual strategy of offering both proprietary TPUs and access to Nvidia’s latest GPUs allows Google to cater to a broader spectrum of AI developers with varying hardware preferences and computational needs.
However, the competition for AI compute resources and developer partnerships is exceptionally fierce. Just this week, Anthropic further diversified its infrastructure by signing a separate agreement with Amazon Web Services (AWS) to secure up to 5 gigawatts of capacity for training and deploying its flagship Claude AI models. This rapid succession of multi-gigawatt deals underscores the astronomical computational demands of current and future AI models, transforming AI infrastructure into a strategic battleground for cloud providers. The global market for cloud AI infrastructure is projected to reach hundreds of billions of dollars within the next few years, making these early-stage partnerships critical for long-term market dominance.
Mira Murati’s Vision and Background
Mira Murati’s move from a pivotal leadership role at OpenAI to founding Thinking Machines Lab has drawn considerable interest within the AI community. As OpenAI’s Chief Technology Officer, she helped guide the technical development and strategic direction of some of the most advanced AI models in existence, navigating complex ethical questions and rapid productization challenges. Her experience at the forefront of generative AI gives TML an unusually deep foundation of expertise and insight into the demands and future trajectory of the field.
Her decision to create a company focused on "automating the creation of custom frontier AI models" speaks to a specific vision: moving beyond generic large models to empower organizations with tailored, highly efficient AI solutions. This approach suggests a focus on practical application and industrial deployment, where customization and efficiency are often paramount. The secretive nature of TML, coupled with its colossal valuation, hints at potentially disruptive technologies or methodologies that promise significant advancements in how AI is developed and utilized.
Technological Underpinnings: Reinforcement Learning and Advanced Hardware
The Google Cloud deal offers a glimpse into the technological core of Thinking Machines Lab, explicitly mentioning support for its reinforcement learning (RL) workloads. Reinforcement learning is a machine learning paradigm in which an agent learns to take actions in an environment so as to maximize cumulative reward. Unlike supervised learning, which relies on labeled datasets, or unsupervised learning, which finds patterns in unlabeled data, RL agents learn through trial and error, receiving feedback, in the form of rewards or penalties, for their actions.
This approach has been pivotal in achieving superhuman performance in complex tasks, from mastering board games like Go and chess (DeepMind’s AlphaGo) to controlling robotic systems and developing advanced autonomous agents. However, RL is notoriously computationally expensive. Training an RL agent often requires millions or even billions of interactions with its environment, simulating complex scenarios, and iterating through countless policy updates. This iterative process demands immense parallel processing capabilities and vast memory, making high-performance GPUs, particularly those optimized for AI workloads, indispensable.
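The trial-and-error loop described above can be sketched with a minimal tabular Q-learning example. The toy corridor environment and hyperparameters below are illustrative assumptions for exposition only, not anything specific to TML’s actual workloads:

```python
import random

# Minimal tabular Q-learning sketch: a 1-D corridor where the agent starts
# at position 0 and receives a reward only upon reaching the goal state.
# Environment and hyperparameters are hypothetical, chosen for illustration.

N_STATES = 5          # positions 0..4; position 4 is the goal
ACTIONS = (-1, +1)    # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Q-table: estimated cumulative reward for each (state, action) pair
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment dynamics: move, clip to the corridor, reward at the goal."""
    next_state = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward

def greedy(state):
    """Pick the highest-valued action, breaking ties at random."""
    best = max(Q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

random.seed(0)
for _ in range(500):                      # episodes: repeated trials
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy: mostly exploit, occasionally explore at random
        action = random.choice(ACTIONS) if random.random() < EPSILON else greedy(state)
        next_state, reward = step(state, action)
        # Temporal-difference update toward reward + discounted future value
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

# The learned greedy policy should step right toward the goal from every state
policy = {s: greedy(s) for s in range(N_STATES - 1)}
print(policy)  # {0: 1, 1: 1, 2: 1, 3: 1}
```

At frontier scale, the same loop runs with neural-network policies over millions or billions of simulated interactions rather than a five-entry table, which is precisely the parallel workload that makes dedicated GPU capacity so valuable.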
Nvidia’s GB300 chips, which TML will be among the first to access via Google Cloud, represent the cutting edge of this hardware evolution. The GB300 Grace Blackwell Superchip combines powerful GPUs with Nvidia’s Grace CPU, designed for extreme performance in AI and high-performance computing. Its architecture is specifically tailored to accelerate both training and inference for next-generation AI models, offering significant advantages in processing large datasets and complex algorithms inherent in reinforcement learning. The 2X improvement in speed cited by Google is a critical factor for TML, enabling faster experimentation, quicker model iteration, and ultimately, a faster path to deploying advanced AI solutions.
Market Impact and Future Outlook
This multi-billion-dollar deal has profound implications for all parties involved and the broader AI ecosystem. For Google Cloud, it represents a significant victory in the ongoing "AI compute arms race." By securing an early partnership with a high-profile, well-funded frontier AI lab founded by a figure like Mira Murati, Google strengthens its credibility as a preferred infrastructure provider for leading-edge AI development. It also provides Google with valuable insights into the future computational needs and architectural preferences of advanced AI, allowing it to fine-tune its offerings.
For Thinking Machines Lab, the agreement provides the necessary computational horsepower to scale its ambitions. Access to such vast and advanced infrastructure is not merely an operational necessity; it is a strategic differentiator. It enables TML to accelerate the development of Tinker, push the boundaries of custom frontier AI models, and potentially establish a leadership position in a rapidly evolving market. The ability to iterate quickly and deploy sophisticated RL-based models will be crucial for TML to maintain its competitive edge and justify its high valuation.
More broadly, the deal underscores several key trends in the AI industry:
- The Unprecedented Demand for Compute: The insatiable hunger for computational power to train and deploy increasingly complex AI models shows no signs of abating. This drives massive capital expenditure by cloud providers and chip manufacturers.
- Strategic Importance of Cloud Partnerships: AI startups, even well-funded ones, often cannot afford or efficiently manage their own data centers at the required scale. Partnering with cloud giants becomes a strategic imperative.
- The Rise of Custom AI: The focus on "custom frontier AI models" suggests a maturing market where general-purpose models are complemented by highly specialized, fine-tuned, and purpose-built AI solutions.
- Nvidia’s Continued Dominance: Despite efforts by cloud providers to develop custom chips, Nvidia’s GPUs remain the gold standard for high-performance AI, cementing its indispensable role in the ecosystem.
In conclusion, the multi-billion-dollar partnership between Mira Murati’s Thinking Machines Lab and Google Cloud illustrates both the intensity of competition for AI infrastructure and the scale of investment required to advance frontier AI. It equips one of the industry’s most intriguing new ventures with the tools to pursue its ambitious vision, while reinforcing Google Cloud’s position at the forefront of the AI buildout. As the landscape continues to evolve at breakneck speed, such alliances will shape the trajectory of technological innovation and market leadership for years to come.