Thinking Machines’ first product coming in months, will include open source component


Mira Murati, founder of AI startup Thinking Machines and former chief technology officer of OpenAI, today announced a new round of $2 billion in venture funding, and stated that her company’s first product will launch in the coming months and will include a “significant open source component…useful for researchers and startups developing custom models.”

The news is exciting for all those awaiting Murati’s new venture since she exited OpenAI in September 2024 as part of a wave of high-profile researcher and leadership departures. It also comes at an opportune time, given her former employer OpenAI’s recent announcement that its own forthcoming open source frontier AI model (still unnamed) would be delayed.

As Murati wrote on X:

“Thinking Machines Lab exists to empower humanity through advancing collaborative general intelligence.

We’re building multimodal AI that works with how you naturally interact with the world – through conversation, through sight, through the messy way we collaborate. We’re excited that in the next couple months we’ll be able to share our first product, which will include a significant open source component and be useful for researchers and startups developing custom models. Soon, we’ll also share our best science to help the research community better understand frontier AI systems.

To accelerate our progress, we’re happy to confirm that we’ve raised $2B led by a16z with participation from NVIDIA, Accel, ServiceNow, CISCO, AMD, Jane Street and more who share our mission.

We’re always looking for extraordinary talent that learns by doing, turning research into useful things. We believe AI should serve as an extension of individual agency and, in the spirit of freedom, be distributed as widely and equitably as possible. We hope this vision resonates with those who share our commitment to advancing the field. If so, join us. https://thinkingmachines.paperform.co”

Broad excitement from Thinking Machines team

Other Thinking Machines employees have echoed the excitement around the product and infrastructure progress.


Alexander Kirillov described it on X as “the most ambitious multimodal AI program in the world,” noting rapid progress over the past six months.

Horace He, another engineer at the company, highlighted their early work on scalable, efficient tooling for AI researchers. “We’re building some of the best research infra around,” he posted. “Research infra is about jointly optimizing researcher and GPU efficiency, and it’s been a joy to work on this with the other great folk here.”

Investor Sarah Wang of a16z similarly shared her enthusiasm about the team’s pedigree and potential. “Thrilled to back Mira Murati and the world-class team behind ~ every major recent AI research and product breakthrough,” she wrote. “RL (PPO, TRPO, GAE), reasoning, multimodal, Character, and of course ChatGPT! No one is better positioned to advance the frontier.”

More on Thinking Machines

According to Murati, the company aims to deliver systems that are not only technically capable but also adaptable, safe, and broadly accessible. Their approach emphasizes open science, including public releases of model specs, technical papers, and best practices, along with safety measures such as red-teaming and post-deployment monitoring.

As VentureBeat previously reported, Thinking Machines emerged after Murati’s departure from OpenAI in late 2024. The company is now one of several new entrants aiming to reframe how advanced AI tools are developed and distributed.

A well-timed announcement following OpenAI’s delay of its own open source foundation model

The announcement comes amid increased attention on open-access AI, following OpenAI’s decision to delay the release of its long-awaited open-weight model.

The planned release, originally scheduled for this week, was postponed recently by CEO and co-founder Sam Altman, who cited the need for additional safety testing and further review of high-risk areas.

As Altman wrote on X:

“we planned to launch our open-weight model next week.

we are delaying it; we need time to run additional safety tests and review high-risk areas. we are not yet sure how long it will take us.

while we trust the community will build great things with this model, once weights are out, they can’t be pulled back. this is new for us and we want to get it right. sorry to be the bearer of bad news; we are working super hard!”

Altman acknowledged the irreversible nature of releasing model weights and emphasized the importance of getting it right, without providing a new timeline.

First announced publicly by Altman in March, the model was billed as OpenAI’s most open release since GPT-2 back in 2019 — long before the November 2022 release of ChatGPT powered by GPT-3.

Since then, OpenAI has focused on releasing increasingly powerful foundation large language models (LLMs), but has kept them proprietary, accessible only through its ChatGPT interface (with limited interactions for free-tier users), its other paid applications such as Sora and Codex, and its platform application programming interface (API). That shift angered many of its early open source supporters, including former funder and co-founder turned AI rival Elon Musk, who now leads xAI.

Yet the January 2025 launch of the powerful open source DeepSeek R1 by Chinese firm DeepSeek (an offshoot of High-Flyer Capital Management) upended the AI model market. The model immediately rocketed up the charts of most-used AI models and app downloads, offering for free the advanced reasoning capabilities previously confined to proprietary models, along with complete customizability and fine-tuning, plus the ability to run locally without web servers for those concerned about privacy.

Other major AI providers, including Google, were subsequently motivated to release similarly powerful open source AI models in hopes of bringing users into their ecosystems, and OpenAI appears to have felt the competitive heat as well, moving to develop its own open source rival.

Altman has described the upcoming new OpenAI open source release as a model with reasoning capabilities and emphasized its use as a foundation for developers to build and fine-tune their own systems. OpenAI has also hosted feedback sessions with developers in San Francisco, Europe, and Asia-Pacific to gather input, signaling that the model was still undergoing refinement.

Other OpenAI employees, including Aidan Clark, reiterated on X that the model is strong in capability but must meet a high safety bar.

This pause, combined with the lack of technical detail and clear dates, suggests the initiative remains in a cautious, internally focused phase.

The delay has left an opening in the developer ecosystem, and Thinking Machines now appears poised to step into it with clearer timing and a public commitment to openness. With OpenAI’s open-weight model in limbo, Thinking Machines’ decision to announce a timeline and include an open source component could reshape developer attention, staking out a position at the competitive frontier of AI while addressing developer demand for transparent, customizable tools.
