- Sam Altman says humanity is "close to building digital superintelligence"
- Intelligent robots that can make other robots "aren't that far off"
- He foresees "whole classes of jobs going away", but says capabilities will grow so fast that "we will all get better things"
In a lengthy blog post, OpenAI CEO Sam Altman has set out his vision for the future, revealing how he believes artificial general intelligence (AGI) is now inevitable and will change the world.
In what can be read as an attempt to explain why we don't yet have AGI, Altman stresses that AI progress is a gentle curve rather than a sudden acceleration, but that we are now "past the event horizon" and that, when we look back in a few decades, the gradual changes will amount to something enormous.
Altman writes, "From a relativistic perspective, the singularity happens bit by bit, and the merge happens slowly. We are climbing the long arc of exponential technological progress."
But even on a more conservative timeline, Altman believes we are on the path to AGI, and he predicts three ways it will shape the future.
1. Robotics
Of special interest to Altman is the role robotics is going to play in the future:
"2025 has seen the arrival of agents that can do real cognitive work; writing computer code will never be the same. 2026 will likely see the arrival of systems that can figure out novel insights. 2027 may see the arrival of robots that can do tasks in the real world."
To do real work in the world, as Altman imagines it, robots will need to be humanoid, because our world is, after all, designed to be used by humans.
Altman writes, "…robots that can build other robots… aren't that far off. If we have to make the first million humanoid robots the old-fashioned way, but then they can operate the entire supply chain (mining and refining minerals, driving trucks, running factories, etc.) to build more robots, chip fabrication facilities, data centers, and so on."
2. Job losses but also opportunities
Altman says society will have to adapt to AI, on the one hand because of job losses, but also because of growing opportunities:
"The rate of technological progress will keep accelerating, and it will continue to be the case that people are capable of adapting to almost anything. Whole classes of jobs will go away, but on the other hand the world will be getting so much richer so quickly that we'll be able to seriously entertain new policy ideas we never could before."
Altman, it seems, balances the changing jobs landscape against the new opportunities superintelligence will bring: "…maybe we will go from solving high-energy physics one year to beginning space colonization the next year; or from a major materials science breakthrough one year…"
3. AGI will be cheaper and widely available
In Altman's vision of the future, superintelligence will be cheap and widely available. Explaining the best way forward, Altman suggests we first solve the "alignment problem", which involves getting "…AI systems to learn and act towards what we collectively really want over the long term".
"Then (we need to) focus on making superintelligence cheap, widely available, and not too concentrated with any person, company, or country… Giving users a lot of freedom, within broad bounds society has to decide on, seems very important. The sooner the world can start a conversation about what these broad bounds are, the better."
It's not inevitable
Reading Altman's blog, you come away with a sense of inevitability about his prediction that humanity is marching towards AGI. It is as if he has seen the future, and his vision leaves no room for doubt. But is he right?
Altman's vision stands in stark contrast to a recent Apple research paper, which argues that we are further from achieving AGI than many AI advocates would like.
"The Illusion of Thinking", a new research paper from Apple, states, "Despite their sophisticated self-reflection mechanisms learned through reinforcement learning, these models fail to develop generalizable problem-solving capabilities for planning tasks, with performance collapsing to zero beyond a certain complexity threshold."
The research was conducted on leading reasoning models, such as OpenAI's o1/o3 models and Claude 3.7 Sonnet Thinking.
"Particularly concerning is the counterintuitive reduction in reasoning effort as problems approach critical complexity, suggesting an inherent compute-scaling limit in LRMs," the paper says.
Altman, by contrast, believes that "intelligence too cheap to meter is well within grasp. This may sound crazy to say, but if we told you back in 2020 we were going to be where we are today, it probably sounded more crazy than our current predictions about 2030."
As with all predictions about the future, we will find out soon enough whether Altman is right.