
Thinking Machines, the AI startup founded earlier this year by former OpenAI CTO Mira Murati, has launched its first product: Tinker, an API designed to make fine-tuning large language models (LLMs) both powerful and accessible.
Now in private beta, Tinker gives developers and researchers direct control over their training pipelines while offloading the heavy lifting of compute and infrastructure management.
As Murati wrote in a post on the social network X: "Tinker brings frontier tools to researchers, offering clean abstractions for writing experiments and training pipelines while handling the complexity of distributed training. It enables novel research, custom models, and solid baselines."
Tinker's launch is the first public milestone for Thinking Machines, which raised $2 billion earlier this year from a16z, Nvidia, Accel, and others.
The company's goal is to support more open and customizable AI development, a mission that appears to resonate with both independent researchers and institutions frustrated by the opaque tooling around today's proprietary models.
A developer-focused training API
Tinker is not another drag-and-drop interface or black-box tuning service. Instead, it offers a lower-level but user-friendly API, giving researchers granular control over loss functions, training loops, and data workflows, all in standard Python code.
The underlying training workloads run on Thinking Machines' managed infrastructure, enabling fast distributed runs without the usual GPU orchestration headaches.
At its core, Tinker offers:
- Low-level primitives like forward_backward and sample, enabling users to build custom fine-tuning or RL algorithms (see the sketch below)
- Support for both small and large open-weight models, including mixture-of-experts architectures such as Qwen3-235B-A22B
- Built-in LoRA-based tuning, allowing multiple training jobs to share compute pools and improving cost efficiency
- An open-source companion library called the Tinker Cookbook, which includes reference implementations of post-training methods
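To make those primitives concrete, here is a minimal sketch of what a custom training loop on Tinker might look like. Only forward_backward and sample are named above; the package layout, client classes, method names such as optim_step, and all argument names shown here are illustrative assumptions, not the documented API.

```python
# Hypothetical sketch of a custom fine-tuning loop on Tinker's primitives.
# Only forward_backward and sample come from the article; everything else
# (the tinker package layout, ServiceClient, create_lora_training_client,
# optim_step, and all argument names) is an illustrative assumption.
import tinker

client = tinker.ServiceClient()
# LoRA-based tuning is what lets multiple jobs share a compute pool.
training = client.create_lora_training_client(
    base_model="Qwen/Qwen3-235B-A22B",  # an open-weight MoE model the article names
)

# Toy data; Tinker only sees the batches you send it, so the pipeline is yours.
batches = [{"prompt": "2+2=", "completion": "4"}]
eval_prompts = ["2+3="]  # placeholder prompts for sampling

for batch in batches:
    # forward_backward runs the forward pass and backpropagation remotely;
    # because the loop is yours, you can swap in custom loss or RL logic.
    training.forward_backward(batch, loss_fn="cross_entropy")
    training.optim_step()  # assumed: apply the accumulated gradient update

# sample generates from the current weights, e.g., to evaluate a checkpoint
# or score rollouts inside a custom RL loop.
outputs = training.sample(prompts=eval_prompts, max_tokens=256)
```

The design intent, as the comment below suggests, is that the distributed heavy lifting hides behind forward_backward and sample, while the loop, the losses, and the data stay in the researcher's hands.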
As UC Berkeley computer science PhD student Tyler Griggs wrote on X after testing the API: "Many RL fine-tuning services are enterprise-oriented and don't let you swap in your own training logic. With Tinker, you can ignore the compute and 'tinker' with envs, algs, and data."
Real-world use cases across institutions
Before its public debut, Tinker was already in use at several research labs. Early adopters include teams at Princeton, Stanford, UC Berkeley, and Redwood Research, each applying the API to distinct model-training problems:
Princeton's Gödel team fine-tuned LLMs for formal theorem proving. Using Tinker and LoRA with only 20% of the data, they matched the performance of full-parameter SFT models such as Goedel-Prover V2. Their Tinker-trained model reached 88.1% pass@32 on the MiniF2F benchmark, and 90.4% with self-correction, beating larger closed models.
Stanford's Rotskoff Lab used Tinker to train chemical reasoning models. With reinforcement learning on top of LLaMA 70B, IUPAC-to-formula conversion accuracy jumped from 15% to 50%, a leap researchers described as previously out of reach without major infrastructure support.
Berkeley's SkyRL group ran custom multi-agent reinforcement learning loops, involving async off-policy training and multi-turn tool use, thanks to Tinker's flexibility.
Redwood Research used Tinker to RL-train Qwen3-32B on long-context AI control tasks. Researcher Eric Gan shared that without Tinker he likely would not have pursued the project, noting that scaling multi-node training had always been a barrier.
These examples demonstrate Tinker's range: it supports both classic supervised fine-tuning and highly experimental RL pipelines across large-scale domains.
Community endorsements from the AI research world
Tinker's announcement drew an immediate response from the AI research community.
Former OpenAI co-founder and former Tesla AI head Andrej Karpathy (now head of AI-native school Eureka Labs) praised Tinker's design tradeoffs, writing on X: "Compared to the more common and existing paradigm of 'upload your data, we'll post-train your LLM,' this is, in my opinion, a more clever place to slice up the complexity of post-training."
He added that Tinker lets users retain ~90% of algorithmic control while removing ~90% of the infrastructure pain.
John Schulman, co-founder of OpenAI and now chief scientist and co-founder of Thinking Machines, described Tinker on X as "the infrastructure I've always wanted," adding a quote attributed to the late British philosopher and mathematician Alfred North Whitehead: "Civilization advances by extending the number of important operations which we can perform without thinking of them."
Others noted how clean the API was to use and how easily it handled RL-specific scenarios such as parallel evaluations and checkpoint sampling.
Philipp Moritz and Robert Nishihara, co-founders of Anyscale and creators of the widely used open-source AI scaling framework Ray, highlighted the opportunity to connect Tinker with broader distributed computing frameworks.
Free to start, pay-as-you-go pricing coming soon
Tinker is currently available in private beta, with waitlist sign-up open for developers and research teams. During the beta, the platform is free to use. A usage-based pricing model will be introduced in the coming weeks.
Organizations interested in deeper integration or dedicated support are invited to inquire through the company's website.
Thinking Machines' background and the OpenAI exodus
Thinking Machines was founded by Mira Murati, who served as OpenAI's CTO until her departure in September 2024. Her exit followed a period of organizational instability at OpenAI and a string of high-profile researcher departures, particularly from OpenAI's superalignment team, which has since been disbanded.
Murati announced her new company's vision in early 2025, emphasizing three pillars:
- Helping people adapt AI systems to their specific needs
- Building strong foundations for capable and safe AI
- Fostering open science through public releases of models, code, and research
In July, Murati confirmed that the company had raised $2 billion, positioning Thinking Machines as one of the best-funded independent AI startups. Investors cited the team's experience in foundational developments like ChatGPT, PPO, TRPO, PyTorch, and OpenAI Gym.
The company distinguishes itself by focusing on multimodal AI systems that collaborate with users through natural communication, rather than aiming for fully autonomous agents. Its infrastructure and research efforts are geared toward supporting high-quality, adaptable models while maintaining strict safety standards.
Since then, the company has also published several research papers and open-source techniques that anyone in the machine learning and AI community can use freely.
This emphasis on openness, infrastructure quality, and researcher support sets Thinking Machines apart, even as the open-source AI market has become intensely competitive, with companies including OpenAI, Anthropic, Google, and Meta fielding powerful models.
As companies compete for developer mindshare, Thinking Machines is signaling that it is ready to meet demand with product, technical clarity, and public documentation.