
Jove Freitas is GM and VP of Engineering for AI and Automation at Pegridy.
As AI becomes more widely used in large organizations, leaders are increasingly looking for the next development that will deliver a bigger ROI. The latest wave of this ongoing trend is the adoption of AI agents. However, as with any new technology, organizations need to ensure that they adopt AI agents in a responsible manner that facilitates both speed and security.
More than half of organizations have already deployed AI agents to some degree, with more expected to follow in the next two years. But many early adopters are now reevaluating their approach. Four in 10 tech leaders regret not establishing a strong governance foundation from the start, suggesting they adopted AI quickly but left room to improve the policies, rules and best practices designed to ensure responsible, ethical and legal development and use of AI.
As the adoption of AI accelerates, organizations must strike the right balance between their risk exposure and the safeguards they implement to ensure that the use of AI is secure.
Where do AI agents pose potential threats?
There are three primary areas to consider for safe AI adoption.
The first is shadow AI: employees using unauthorized AI tools without express permission, bypassing approved tools and processes. Organizations should create the processes necessary for experimentation and innovation so that employees can introduce more efficient ways of working with AI through sanctioned channels. Although shadow AI has been around as long as AI tools themselves, the autonomy of an AI agent makes it easy for unvetted tools to operate outside approved scope, which can introduce new security vulnerabilities.
Second, organizations must close gaps in AI ownership and accountability so they are prepared when incidents occur. The power of AI agents lies in their autonomy. However, if agents act in unpredictable ways, teams must be able to determine who is responsible for resolving any issues.
A third risk arises when AI agents lack transparency in the actions they take. AI agents are goal-oriented, but how they accomplish their goals may not be clear. AI agents should have explainable logic underlying their actions so that engineers can trace and, if necessary, reverse actions that may cause problems in existing systems.
While none of these risks should delay adoption, understanding them will help organizations better ensure their security.
Three guidelines for responsible AI agent adoption
Once organizations have identified the risks posed by AI agents, they must implement guidelines and safeguards to ensure safe use. By following these three steps, organizations can reduce these risks.
1: Make human supervision the default
Agentic AI continues to evolve at a rapid pace. However, human supervision is still needed when AI agents are given the ability to act, make decisions, and pursue goals that could affect key systems. A human should be in the loop by default, especially for business-critical use cases and systems. Teams using AI must understand the steps an agent can take and where they may need to intervene. Start conservatively and, over time, increase the level of agency given to AI agents.
Together, operations teams, engineers and security professionals must understand the role AI agents play in monitoring workflows. Each agent should be assigned a specific human owner for clearly defined oversight and accountability. Organizations must also allow a human to flag or override an AI agent’s behavior when an action has a negative consequence.
When considering tasks for AI agents, organizations should understand that, while traditional automation is good at handling repetitive, rule-based processes with data inputs, AI agents can handle more complex tasks and adapt to new information in a more autonomous manner. This makes them an attractive solution for all kinds of tasks. But as AI agents are deployed, organizations must control what actions the agents can take, especially in the early stages of a project. As such, teams working with AI agents should have approval pathways in place for high-impact initiatives to ensure that the agent’s scope does not exceed anticipated use cases, thereby reducing risk to the wider system.
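To make the idea of an approval pathway concrete, here is a minimal sketch of a human-in-the-loop gate that blocks high-impact agent actions until the assigned human owner signs off. The names used (AgentAction, requires_approval, request_human_approval) are illustrative assumptions, not part of any specific agent framework.

```python
# Minimal sketch of a human-in-the-loop approval gate for agent actions.
# All names here are illustrative assumptions, not a specific platform's API.
from dataclasses import dataclass


@dataclass
class AgentAction:
    name: str      # e.g. "update_customer_record"
    impact: str    # "low" or "high"
    payload: dict  # arguments the agent wants to execute with


def requires_approval(action: AgentAction) -> bool:
    """High-impact actions always go through a human; low-impact ones run directly."""
    return action.impact == "high"


def request_human_approval(action: AgentAction, owner: str) -> bool:
    """Placeholder for routing the action to its human owner (ticket, chat prompt, etc.)."""
    print(f"[approval needed] {owner} must review '{action.name}' with {action.payload}")
    return False  # default deny until the owner explicitly approves


def execute(action: AgentAction, owner: str) -> None:
    if requires_approval(action) and not request_human_approval(action, owner):
        print(f"Blocked '{action.name}': awaiting approval from {owner}")
        return
    print(f"Executing '{action.name}'")  # the agent's actual tool call would run here


execute(AgentAction("summarize_ticket", "low", {"ticket_id": 42}), owner="ops-lead")
execute(AgentAction("refund_customer", "high", {"amount": 500}), owner="ops-lead")
```

The key design choice is that the gate defaults to "deny": an unreviewed high-impact action is blocked rather than executed, which keeps the agent's scope from quietly exceeding its anticipated use cases.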
2: Bake in security
The introduction of new tools should not expose a system to fresh security threats.
Organizations should consider agent platforms that comply with high security standards and are validated by enterprise-grade certifications such as SOC 2, FedRAMP or equivalent. Moreover, AI agents should not be allowed free rein in an organization’s systems. At a minimum, the AI agent’s permission and security scope should be tied to its owner’s scope, and no tool available to the agent should grant permissions beyond that scope. Limiting an AI agent’s system access based on its role also makes deployment easier. Keeping complete logs of every action taken by an AI agent helps engineers understand what happened in the event of an incident and track down the problem.
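As a rough illustration of tying an agent's permissions to its owner's scope and logging every action, here is a short sketch. The structures (OWNER_SCOPES, log_action, call_tool) are assumptions made up for this example, not features of any particular agent platform.

```python
# Minimal sketch: an agent inherits its human owner's permission scope,
# every attempted tool call is checked against that scope, and every
# attempt is logged for later incident review. Names are illustrative.
import json
from datetime import datetime, timezone

# The agent's permission scope is tied to its owner's scope.
OWNER_SCOPES = {"ops-lead": {"read:tickets", "write:tickets"}}


def allowed(owner: str, permission: str) -> bool:
    return permission in OWNER_SCOPES.get(owner, set())


def log_action(agent: str, owner: str, tool: str, permission: str, granted: bool) -> None:
    """Append a complete record of every attempted action."""
    entry = {
        "time": datetime.now(timezone.utc).isoformat(),
        "agent": agent, "owner": owner, "tool": tool,
        "permission": permission, "granted": granted,
    }
    print(json.dumps(entry))  # in practice, write to an append-only audit store


def call_tool(agent: str, owner: str, tool: str, permission: str) -> None:
    granted = allowed(owner, permission)
    log_action(agent, owner, tool, permission, granted)
    if not granted:
        raise PermissionError(f"{agent} may not use {tool}: missing '{permission}'")
    # ... the tool itself would run here ...


call_tool("triage-agent", "ops-lead", "ticket_reader", "read:tickets")   # granted and logged
# call_tool("triage-agent", "ops-lead", "db_admin", "delete:database")   # would be blocked and logged
```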
3: Make the output explainable
Using AI in an organization should never be a black box. The reasoning behind any action should be explained so that any engineer assessing it can understand the context the agent used to make the decision and can access the traces that led to those actions.
Inputs and results for each action should be logged and accessible. This gives organizations a better overview of the logic behind the AI agent’s actions when anything goes wrong.
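One way to keep an agent's reasoning out of the black box is to record a structured trace per action: the inputs it acted on, the context it consulted, and the result it produced. The sketch below, with made-up field names and a hypothetical record_trace helper, shows the general shape under those assumptions.

```python
# Minimal sketch of recording the inputs, context, and result of each agent
# action so engineers can reconstruct why a decision was made.
# The structure and field names are illustrative assumptions.
import json
import uuid
from datetime import datetime, timezone


def record_trace(agent: str, goal: str, inputs: dict, context: list[str], result: str) -> dict:
    """Build one explainable trace entry per action: what the agent saw and what it did."""
    return {
        "trace_id": str(uuid.uuid4()),
        "time": datetime.now(timezone.utc).isoformat(),
        "agent": agent,
        "goal": goal,
        "inputs": inputs,     # the data the agent acted on
        "context": context,   # documents / signals used to reach the decision
        "result": result,     # the output or action taken
    }


trace = record_trace(
    agent="billing-agent",
    goal="resolve overdue invoice",
    inputs={"invoice_id": "INV-1009", "days_overdue": 14},
    context=["payment history", "dunning policy v3"],
    result="sent first reminder email",
)
print(json.dumps(trace, indent=2))  # stored traces let engineers audit or reverse actions later
```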
Security underpins the success of AI agents
AI agents present a huge opportunity for organizations to accelerate and improve their existing processes. However, if they do not prioritize security and strong governance, they may expose themselves to new threats.
As AI agents become more common, organizations need to ensure they have systems in place to understand how agents perform and to take action when they create problems.