What happens when a human and a conversational AI expert have an honest, no-nonsense conversation about the future of AI? At Kore.ai's Re:imagine 2025 event, Taryn Hawkins of Optica Labs and Kore.ai's own Kobus Greyling did exactly that: cutting through the noise to surface the real challenges, risks, and opportunities AI poses for work and society.
The conversation was not about shiny demos or sales pitches. Instead, they focused on what really matters: how AI will change jobs, the importance of keeping humans involved, the ethical risks for business, and the often-overlooked complexities of language and culture in AI systems.
"If an AI agent took your job yesterday…"
Early in the discussion, Taryn was asked what she would say in her farewell email if an AI agent had replaced her role yesterday. Her answer was equal parts funny and thought-provoking:
"Best wishes, Godspeed… but also, hello from Hawaii!"
Humor aside, the response pointed to a critical insight: AI is not just about taking people's places. It is forcing everyone to consider how and where a human adds unique value. The future of work is not dystopian obsolescence. It is redefinition and adaptation.
Keeping humans in the loop
Looking ahead to 2030, Taryn emphasized a non-negotiable priority for leaders: humans in the loop.
"Hyperautomation sounds exciting, but we need to step back and revisit the processes we have already automated. Did we get this right? Do humans still need to provide oversight, checks, and balances?"
This drives home a common but often neglected truth: automation is not just about doing less work. It is about working better, with intention, empathy, and accountability.
Ethical dangers beyond the office
Kobus extended the lens from the operational to the geopolitical:
"I want to borrow an idea: the concept of nation-states using AI models as Trojan horses. AI is not just a business resource. It is fast becoming a weapon in geopolitical disputes. That is a real and present threat."
Taryn stressed the need to balance innovation with responsibility:
"This is not about slowing innovation. It is about ensuring that safety, ethics, and strong corporate guardrails come first. Regulation, policy, and ethical frameworks cannot come as an afterthought."
Together, they highlighted that AI's impact is not confined to the enterprise.
The global challenge of language, culture, and AI
One of the most eye-opening moments of the conversation focused on language.
"More than 7,500 languages are spoken globally," Taryn noted. "Most AI models are trained on English, Japanese, Chinese, or another dominant language. What happens to the rest?"
She illustrated the challenge with a simple English example:
"If you ask for a 'hamper', most Americans think of a laundry basket. But in my country, it's a basket full of cheese, wine, and picnic treats."
The story highlights how, without cultural context, AI's assumptions can create misunderstandings, and that risk compounds at scale as AI systems roll out globally.
Empathy that is more than skin deep
Both agreed that AI interfaces should be empathetic. But Kobus sounded a note of caution:
"Just because an AI interface appears empathetic does not mean the underlying system really is. There can still be unseen threats beneath the surface."
Designing truly empathetic AI requires more than a user-friendly interface. It calls for human oversight, ethical rigor, and constant vigilance.
Why this conversation matters
Re:imagine 2025 was not just about showing what's next in AI. It was a platform for the honest conversations that matter, among experts who believe AI's future is about much more than technology. It is about leadership, ethics, and thoughtfulness.
The dialogue between Taryn and Kobus reminds us that as AI reshapes work and society, it is our shared responsibility to build it with care, wisdom, and humanity.
Stay tuned for more stories from Re:imagine 2025.