As businesses struggle to deploy AI agents in critical applications, a more practical model is emerging, one that positions humans as a strategic safeguard against AI failure.
One example is the Mixus platform, which uses a "colleague-in-the-loop" approach to make AI agents reliable enough for mission-critical work.
The approach is a response to growing evidence that fully autonomous agents are a high-stakes gamble.
The high cost of unchecked AI
The problem of AI hallucinations has become a tangible risk as companies push toward agentic applications. In a recent incident, the AI-powered code editor Cursor saw its support bot invent a fake policy restricting subscriptions, triggering a wave of public customer cancellations.
Likewise, the fintech company Klarna famously reversed course on replacing customer service agents with AI after admitting the move had led to lower quality. In a more alarming case, New York City's AI-powered business chatbot advised entrepreneurs to engage in illegal practices, highlighting the compliance risks of unmonitored agents.
These incidents are symptoms of a larger capability gap. According to a May 2025 Salesforce research paper, today's leading agents succeed only 58% of the time on single-step tasks, and only 35% of the time on multi-step ones, highlighting a significant gap between "current LLM capabilities and the multifaceted demands of real-world enterprise scenarios."
The colleague-in-the-loop model
To close this gap, a new approach focuses on structured human oversight. "An AI agent should act on your direction and on your behalf," Mixus co-founder Elad Katz told VentureBeat. "But without built-in organizational oversight, fully autonomous agents often create more problems than they solve."
This philosophy underlies Mixus's colleague-in-the-loop model, which embeds human verification directly into automated workflows. For example, a large retailer might receive weekly reports from thousands of stores containing critical operational data (e.g., sales volumes, labor hours, payment requests to headquarters). Human analysts would otherwise spend hours manually reviewing the data and making decisions based on heuristics. With Mixus, an AI agent automates the heavy lifting, analyzing complex patterns and flagging anomalies such as unusually high payment requests or productivity outliers.

For high-stakes decisions, such as large payment authorizations or policy violations, the workflow pauses: the agent stops at any checkpoint a human user has designated as "high risk" and requires human approval before proceeding. This division of labor between AI and humans is built into the agent-creation process itself.
"This approach means a human only gets involved when their expertise actually adds value, usually the 5-10% of decisions that could have significant impact, while the remaining 90-95% of routine work flows through automatically," Katz said. "You get the speed of full automation for standard operations, but human oversight kicks in precisely when context, judgment, and accountability matter most."
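The routing logic Katz describes can be sketched in a few lines. The thresholds, field names, and escalation rules below are illustrative assumptions for the retailer example above, not Mixus's actual implementation:

```python
# Minimal sketch of colleague-in-the-loop routing: auto-approve routine
# items, escalate the small high-risk slice to a human reviewer.
# All thresholds and field names are hypothetical.

HIGH_RISK_PAYMENT = 10_000  # assumed dollar threshold for escalation

def route(item: dict) -> str:
    """Return 'auto' for routine work, 'human' for high-stakes items."""
    if item.get("type") == "payment" and item.get("amount", 0) >= HIGH_RISK_PAYMENT:
        return "human"   # e.g., an unusually high payment request
    if item.get("policy_violation"):
        return "human"   # flagged compliance issue
    return "auto"        # the ~90-95% that flows through automatically

items = [
    {"type": "payment", "amount": 500},
    {"type": "payment", "amount": 25_000},
    {"type": "report", "policy_violation": True},
]
decisions = [route(i) for i in items]  # → ["auto", "human", "human"]
```

In this toy version only two of the three items reach a human; everything else proceeds at full automation speed.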
In a demo the Mixus team showed VentureBeat, building an agent is an intuitive process that can be done with plain-text instructions. To build a fact-checking agent for reporters, for example, co-founder Shai Magzimof simply described the multi-step process in natural language and instructed the platform to embed human verification steps at specific thresholds, such as when a claim is high-risk.
One of the platform's core strengths is its integration with tools such as Google Drive, email, and Slack, which lets enterprise users bring their own data sources into workflows and interact with agents directly from their communication platforms, without switching between applications.
The platform's integration capabilities extend further to meet specific enterprise requirements. Mixus supports the Model Context Protocol (MCP), which lets businesses connect agents to their bespoke tools and APIs, avoiding the need to reinvent the wheel for existing internal systems. Combined with integrations for other enterprise software such as Jira and Salesforce, this allows agents to perform complex, cross-platform tasks, such as checking open engineering tickets and reporting their status to a manager on Slack.
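The cross-platform task described above can be sketched as a short pipeline. The helper functions `fetch_open_tickets` and `post_to_slack` are illustrative stubs standing in for MCP-connected Jira and Slack tools; they are not part of any real API:

```python
# Hypothetical sketch: gather open engineering tickets, then report
# a summary to a manager's Slack channel. Both helpers are stubs
# standing in for tools an agent would reach via MCP.

def fetch_open_tickets() -> list[dict]:
    # A real deployment would call a Jira tool exposed through MCP.
    return [
        {"key": "ENG-101", "summary": "Fix login timeout"},
        {"key": "ENG-102", "summary": "Upgrade payment SDK"},
    ]

def post_to_slack(channel: str, text: str) -> str:
    # Stand-in for a Slack tool; here we just return the rendered message.
    return f"[{channel}] {text}"

tickets = fetch_open_tickets()
summary = f"{len(tickets)} open engineering tickets: " + ", ".join(
    t["key"] for t in tickets
)
message = post_to_slack("#eng-status", summary)
```

The point is the shape, not the stubs: the agent composes tool calls across systems, and any step could be marked high-risk to require a human sign-off first.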
Human oversight as a strategic multiplier
The enterprise AI space is currently undergoing a reality check as companies move from experiments to production. A consensus among many industry leaders is that humans in the loop are a practical necessity for agents to perform reliably.
Mixus's model takes a contrarian view of the economics of scaling. The company predicts that by 2030, agent deployments may grow 1,000x and each human overseer will become dramatically more efficient as AI agents grow more reliable. But the total need for human oversight will still increase.
"Each human overseer manages more and more AI work over time, but you still need more total oversight as AI deployment expands across your organization," Katz said.

This means human expertise will evolve rather than disappear. Instead of being replaced by AI, experts will be promoted into roles where they orchestrate fleets of AI agents and handle the high-stakes decisions flagged for their review.
In this framework, building a strong human-oversight function becomes a competitive advantage, allowing companies to deploy AI both more aggressively and more safely than their rivals.
"The companies that master this multiplication will dominate their industries, while those chasing full automation will struggle with reliability, compliance, and trust," Katz said.