Google DeepMind Makes AI History With Gold Medal Win at the World's Toughest Math Competition

by SkillAiNest



Google DeepMind announced Monday that an advanced version of its Gemini artificial intelligence has officially achieved gold medal-level performance at the International Mathematical Olympiad, solving five of the six exceptionally difficult problems and becoming the first AI system to receive an official gold-level designation from competition organizers.

The victory advances the field of AI reasoning and strengthens Google's position in the escalating contest among tech giants to build the next generation of artificial intelligence. More importantly, it shows that AI can now tackle complex mathematical problems using natural language understanding rather than requiring specialized programming languages.

"Official results are in: Gemini achieved gold medal-level performance at the International Mathematical Olympiad!" Demis Hassabis, Google DeepMind's CEO, wrote Monday morning on the social media platform X. "An advanced version was able to solve 5 out of 6 problems. Incredible progress."

The International Mathematical Olympiad, held annually since 1959, is considered the world's most prestigious mathematics competition for pre-university students. Each participating country sends six elite young mathematicians to solve six exceptionally challenging problems spanning algebra, combinatorics, geometry, and number theory. Only about 8% of human participants typically receive gold medals.


How Google DeepMind's Gemini Deep Think cracked math's hardest problems

Google's latest achievement surpasses its 2024 performance, when the company's combined AlphaProof and AlphaGeometry systems earned a silver medal by solving four of the six problems. That earlier approach required human experts to first translate the natural-language problems into domain-specific programming languages and then to interpret the AI's mathematical output.

This year's breakthrough came through Gemini Deep Think, an enhanced reasoning system that employs what researchers call "parallel thinking." Unlike traditional AI models that follow a single chain of reasoning, Deep Think explores several possible solutions simultaneously before arriving at a final answer.
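To make the idea of "parallel thinking" concrete, here is a minimal, purely illustrative Python sketch of exploring several candidate solutions at once and keeping the best-scoring one. The function names, the scoring logic, and the thread-based parallelism are hypothetical stand-ins for what a real reasoning system would do; DeepMind has not published Deep Think's implementation.

```python
import concurrent.futures
import random

# Hypothetical stand-ins: a real system would call a reasoning model here.
def propose_solution(problem: str, seed: int) -> str:
    """Draft one candidate line of reasoning for the problem (toy stub)."""
    rng = random.Random(seed)
    return f"candidate #{seed} for '{problem}' (quality={rng.random():.2f})"

def score_solution(candidate: str) -> float:
    """Judge a candidate, e.g. via a verifier or self-check (toy stub)."""
    return float(candidate.rsplit("quality=", 1)[1].rstrip(")"))

def parallel_think(problem: str, num_paths: int = 8) -> str:
    """Explore several reasoning paths at once, then keep the best one."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=num_paths) as pool:
        candidates = list(pool.map(lambda s: propose_solution(problem, s),
                                   range(num_paths)))
    return max(candidates, key=score_solution)

if __name__ == "__main__":
    print(parallel_think("an IMO-style number theory problem"))
```

The point of the sketch is only the control flow: generate several independent candidates, score each, and return the strongest, rather than committing to a single chain of reasoning.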

"Our model operated end-to-end in natural language, producing rigorous mathematical proofs directly from the official problem descriptions," Hassabis explained in a follow-up post on X, emphasizing that the system completed its work within the standard 4.5-hour time limit.

The model earned 35 of a possible 42 points, comfortably above the gold medal threshold. According to IMO President Prof. Dr. Gregor Dolinar, the solutions were "astonishing in many respects," and competition graders found them to be clear, precise, and mostly easy to follow.

OpenAI faces backlash for ignoring official competition rules

The announcement comes amid rising tension in the AI industry over competitive practices and transparency. Google DeepMind's measured approach to releasing its results has drawn praise from the AI community, particularly in contrast to rival OpenAI.

"We didn't announce on Friday because we respected the IMO Board's original request that all AI labs share their results only once the official results had been verified by independent experts and the students had rightly received the recognition they deserved," Hassabis wrote, in an apparent reference to OpenAI's earlier announcement of its own Olympiad performance.

Social media users were quick to note the distinction. "OpenAI ignored the IMO's request. No shame. No class. Straight dishonor," wrote one user, adding that Google DeepMind had "acted with integrity."

The criticism stems from OpenAI's decision to announce its Olympiad results without taking part in the official IMO evaluation process. Instead, OpenAI had a panel of former IMO participants grade its AI's performance, an approach some in the community viewed as lacking credibility.

One critic wrote that OpenAI is "currently the worst company on the planet," while others suggested the company needed to "take things seriously" and "be more reliable."

Inside the training methods behind Gemini's mathematical mastery

Google DeepMind's success builds on novel training techniques that go beyond traditional approaches. The team used advanced reinforcement learning methods designed for multi-step reasoning, problem-solving, and theorem proving. The model was also given access to a curated set of high-quality mathematical solutions and received specific guidance on how to approach IMO-style problems.

The technical achievement impressed AI researchers, who noted its broader implications. "It's not just solving the math... it's understanding problems described in language and applying logic to novel situations," wrote AI researcher Elvis Saravia. "This isn't rote memorization; it's emergent reasoning."

Ethan Mollick, a professor at the Wharton School who studies AI, emphasized the significance of using a general-purpose model rather than specialized tools. He pointed to "growing evidence of LLMs' ability to generalize to novel problems," highlighting how this differs from previous approaches that required specialized math software.

The model showed particularly impressive reasoning on one problem where many human competitors applied graduate-level mathematical concepts. According to DeepMind researcher Junehyuk Jung, Gemini "made a brilliant observation and used only elementary number theory" to build a self-contained proof, arriving at a more elegant solution than many human participants.

What Google DeepMind's victory means for the $200 billion AI race

The breakthrough comes at a critical moment for the AI industry, as companies race to demonstrate advanced reasoning capabilities. The achievement has immediate practical implications: Google plans to make a version of this Deep Think model available first to mathematicians for testing, and then to Google AI Ultra subscribers, who pay $250 a month for access to the company's most advanced AI models.

The moment has also highlighted the intense competition among major AI laboratories. While Google celebrated its methodical, officially certified approach, OpenAI's premature announcement reflects a broader tension over transparency and credibility in AI development.

The competitive dynamic extends beyond mathematical reasoning. In recent weeks, several AI companies have announced breakthrough capabilities, though not all have been well received. Elon Musk's xAI recently launched Grok 4, which the company claimed was "the smartest AI in the world," though leaderboard scores showed it trailing models from Google and OpenAI. Grok has also drawn criticism over controversial features, including a sexually explicit AI companion, and over episodes in which it produced antisemitic material.

The rise of AI that thinks like humans, and its real-world consequences

The Mathematical Olympiad victory carries implications well beyond bragging rights. Gemini's performance shows that AI systems can now match human-level reasoning on complex tasks requiring creativity, abstract thinking, and the ability to synthesize insights across multiple domains.

"This is a significant advance over last year's breakthrough result," the DeepMind team noted in its technical announcement. The progression from requiring specialized formal languages to operating entirely in natural language suggests that AI systems are becoming more intuitive and accessible.

For businesses, the development signals that AI may soon handle complex analytical problems across industries without requiring specialized programming or domain expertise. The ability to reason through difficult challenges in everyday language could democratize sophisticated analytical capabilities across organizations.

However, questions remain about whether these reasoning capabilities will translate effectively to real-world challenges. The Mathematical Olympiad offers well-defined problems with clear success criteria, a far cry from the ambiguous, multifaceted decisions that characterize most business and scientific endeavors.

Google DeepMind plans to return next year "in search of a perfect score." The company believes AI systems that combine natural-language fluency with rigorous reasoning "will become invaluable tools for mathematicians, scientists, engineers and researchers, helping us advance human knowledge on the path to AGI."

But perhaps the most telling detail came from the competition itself: when facing the contest's hardest problem, Gemini started from a faulty hypothesis and never recovered. Only five human students solved that problem correctly. In the end, it seems, even gold medal-winning AI still has something to learn from teenage mathematicians.
