Inside OpenAI’s big bet on science

by SkillAiNest

“It’s actually a desirable place to be,” says Weil. “If you say enough wrong things, then someone stumbles upon a grain of truth, and then another person seizes on it and says, ‘Oh, yeah, that’s not quite right, but what if?’ and we slowly find our way through the woods.”

This is Weil’s basic vision for OpenAI for Science. GPT-5 is good, but it’s not an oracle. The value of this technology, he says, lies in pointing people in new directions, not in providing definitive answers.

In fact, one of the things OpenAI is looking at now is dialing down GPT-5’s confidence when it responds. Instead of saying “Here is the answer,” it can tell scientists: “Here is something to consider.”

“It’s actually something we’ve been spending a lot of time on,” Weil says. “Trying to make sure there’s some kind of cognitive humility in that model.”

Watching the watchers

Another thing OpenAI is looking at is how to use GPT-5 to fact-check GPT-5. It is often the case that if you feed one of GPT-5’s responses back into the model, it will critique it and highlight errors.

“You can look at the model as its own critic,” Weil says. “Then you can have a workflow where the model is thinking, and then its output goes into another model, and if that model finds things it can improve on, it sends it back to the original model and says, ‘Hey, wait a minute—this part wasn’t right, but this part was interesting. Keep it up.’ It’s almost like two agents working together, and you only see the output once the critic has passed it.”
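The workflow Weil describes can be sketched as a simple generate-critique loop. This is a minimal illustration, not OpenAI’s implementation: the `generate` and `critique` functions below are stand-in stubs (simple string rules) where real systems would call a language model, and all function names are hypothetical.

```python
# Sketch of a generator-critic loop: the critic inspects each draft,
# and only critic-approved output is returned to the user.
# Both model calls are stubs, not real GPT-5 API calls.

def generate(prompt, feedback=None):
    """Stand-in for the generating model; revises its draft when given feedback."""
    draft = f"Draft answer to: {prompt}"
    if feedback:
        draft += f" (revised to address: {feedback})"
    return draft

def critique(draft):
    """Stand-in for the critic model; returns a flaw to fix, or None to pass."""
    if "revised" not in draft:
        return "missing supporting evidence"
    return None

def answer_with_critic(prompt, max_rounds=3):
    """Alternate generator and critic until the critic passes the draft."""
    feedback = None
    draft = ""
    for _ in range(max_rounds):
        draft = generate(prompt, feedback)
        feedback = critique(draft)
        if feedback is None:
            return draft  # the user only ever sees approved output
    return draft  # fall back to the last draft if rounds run out
```

In a real system both roles could be played by the same underlying model with different instructions, which is the “model as its own critic” idea in the quote above.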

What Weil is describing sounds a lot like what Google DeepMind did with AlphaEvolve, a tool that wrapped the firm’s LLM, Gemini, in a broader system that filtered good responses from bad ones and fed them back to make the model’s output better. Google DeepMind has used AlphaEvolve to solve a number of real-world problems.

OpenAI faces stiff competition from rival firms whose own LLMs can do much of what it claims for its models. If that is the case, why should scientists use GPT-5 instead of Gemini or Anthropic’s Claude, models that are themselves improving every year? After all, OpenAI for Science may be as much an effort to plant a flag in new territory as anything else. The real innovations are yet to come.

“I think 2026 will be to science what 2025 was to software engineering,” says Weil. “At the start of 2025, if you were using AI to write most of your code, you were an early adopter. Whereas 12 months from now, if you’re not using AI to write most of your code, you’re probably lagging behind. Now we’re seeing the same early glimmers for science as we did for code.”

He continues: “I think in a year, if you’re a scientist and you’re not using AI heavily, you’re going to miss an opportunity to increase the quality and speed of your thinking.”
