
The race to deploy agentic AI is on. In the enterprise, systems that can plan, execute, and collaborate across business applications promise unprecedented productivity. But in the rush to automate, one critical component is being overlooked: scalable security. We are building a workforce of digital employees without providing them a secure way to log in, access data, and do their jobs without creating catastrophic risk.
The fundamental problem is that traditional identity and access management (IAM), designed for humans, breaks down at agentic scale. Controls like static credentials, long-lived passwords, and one-time approvals are useless when non-human identities can outnumber humans ten to one. To harness the power of agentic AI, identity must evolve from a simple login gatekeeper into a dynamic control plane for your entire AI operation.
“The fastest path to responsible AI is to avoid real data. Use artificial data to prove value, then earn the right to touch the real thing.” (Sean Kunungu, keynote speaker and innovation strategist; best-selling author of The Brave)
Why your human-centered IAM is a sitting duck
Agentic AI doesn’t just use software; it behaves like a user. It authenticates to systems, assumes roles, and calls APIs. If you treat these agents as mere features of an application, you invite hidden privilege creep and irreversible actions. A single over-privileged agent can destroy data or trigger faulty business processes at machine speed, with no one the wiser until it’s too late.
The static nature of legacy IAM is a primary weakness. You cannot pre-define a fixed role for an agent whose tasks and required data access may change daily. The only way to keep access decisions valid is to move policy enforcement from a one-time grant to continuous, runtime evaluation.
Prove value before production data
Kunungu’s guidance offers a practical on-ramp. Start with synthetic or masked datasets to validate agent workflows, scopes, and guardrails. Once your policies, logs, and break-glass paths are proven in this sandbox, you can graduate agents to real data with confidence and clear audit evidence.
Building an identity-centric operating model for AI
Securing this new workforce requires a change in mindset. Every AI agent should be considered a first-class citizen in your identity ecosystem.
First, each agent needs a unique, authenticated identity. This is not just a technical identifier: it must be tied to a human owner, a specific business use case, and a Software Bill of Materials (SBOM). The era of shared service accounts is over; they are tantamount to handing out master keys to an anonymous crowd.
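As a minimal sketch of what such an identity record could contain (the class and field names here are illustrative, not drawn from any particular IAM product):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class AgentIdentity:
    """A first-class identity record for one AI agent workload.

    Every field beyond agent_id exists to answer an auditor's first
    questions: who owns this agent, why does it exist, and what is it
    built from?
    """
    agent_id: str            # unique, never shared across workloads
    human_owner: str         # an accountable person, not a team alias
    business_use_case: str   # the specific job this agent performs
    sbom_uri: str            # pointer to the agent's Software Bill of Materials
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Example: no shared service accounts; each workload gets its own record.
billing_agent = AgentIdentity(
    agent_id="agent-billing-recon-001",
    human_owner="j.doe@example.com",
    business_use_case="Reconcile invoices against purchase orders",
    sbom_uri="https://sboms.example.com/agent-billing-recon-001.json",
)
```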
Second, replace set-and-forget roles with session-based, risk-aware permissions. Access should be granted just in time, scoped to the immediate task and the minimum necessary dataset, then automatically revoked when the job is complete. Think of it as giving an agent a key to a single room for a single visit, not a master key to the entire building.
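A toy illustration of that pattern, assuming a simple in-memory grant store rather than a real token service (all function names here are hypothetical):

```python
import secrets
from datetime import datetime, timedelta, timezone

# Hypothetical in-memory grant store; a production system would use a
# secrets manager or dedicated token service.
ACTIVE_GRANTS = {}

def grant_just_in_time(agent_id: str, task: str, scopes: list[str],
                       ttl_minutes: int = 15) -> str:
    """Mint a short-lived, task-scoped credential for one agent.

    The token carries only the scopes needed for the immediate task and
    expires on its own; revocation on task completion is explicit.
    """
    token = secrets.token_urlsafe(32)
    ACTIVE_GRANTS[token] = {
        "agent_id": agent_id,
        "task": task,
        "scopes": frozenset(scopes),
        "expires_at": datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes),
    }
    return token

def is_allowed(token: str, scope: str) -> bool:
    grant = ACTIVE_GRANTS.get(token)
    if grant is None or datetime.now(timezone.utc) >= grant["expires_at"]:
        ACTIVE_GRANTS.pop(token, None)  # expired grants are removed, never reused
        return False
    return scope in grant["scopes"]

def revoke_on_completion(token: str) -> None:
    """Called when the task finishes: the access disappears with the job."""
    ACTIVE_GRANTS.pop(token, None)
```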
The three pillars of a scalable agent security architecture
Continuously evaluated, context-aware permissions. Authorization can no longer be a simple yes or no at the door; it should be an ongoing conversation. The system should evaluate context in real time. Is the agent’s identity verified? Is it requesting data relevant to its declared purpose? Does this access occur during its normal operational window? This dynamic evaluation enables both security and speed.
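One way to picture that ongoing conversation is a runtime policy check that weighs several context signals at once. The sketch below is a simplified illustration, with a hypothetical purpose-to-dataset map standing in for a real policy store:

```python
from datetime import datetime, timezone

# Hypothetical purpose-to-dataset map; real deployments would load this
# from a policy store rather than hard-code it.
PURPOSE_DATASETS = {
    "customer_support": {"tickets", "order_status"},
    "financial_analysis": {"ledger", "invoices"},
}

def authorize(request: dict) -> bool:
    """Evaluate an access request in context at runtime, not at grant time.

    The three checks mirror the questions above: is the identity verified,
    does the requested data match the declared purpose, and is the call
    inside the agent's normal operating window?
    """
    if not request.get("identity_verified", False):
        return False

    # Purpose binding: the dataset must be one this purpose may touch.
    allowed = PURPOSE_DATASETS.get(request["declared_purpose"], set())
    if request["dataset"] not in allowed:
        return False

    # Temporal context: outside the window, deny and flag for review.
    hour = datetime.now(timezone.utc).hour
    start, end = request.get("operational_window_utc", (0, 24))
    return start <= hour < end
```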
Purpose-bound data access at the edge. The last line of defense is the data layer itself. By embedding policy enforcement directly in the data query engine, you can enforce row-level and column-level security based on the agent’s declared intent. A customer service agent’s query against data reserved for financial analysis should be automatically blocked. Purpose binding ensures that data is used as intended, not simply accessed by an authorized identity.
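As a rough sketch of purpose binding at the query layer, assuming a simplified query representation and a hypothetical column policy table:

```python
# Hypothetical policy: support agents see order rows but never card numbers.
COLUMN_POLICY = {
    ("customer_support", "orders"): {"order_id", "status", "ship_date"},
    ("financial_analysis", "orders"): {"order_id", "amount", "currency"},
}

def enforce_purpose(agent_purpose: str, query: dict) -> dict:
    """Apply row- and column-level policy at the query engine.

    `query` is a simplified dict ({"table": ..., "columns": [...]}).
    Columns outside the purpose's allow-list are stripped, and a table
    the purpose may not read at all is rejected outright, regardless of
    the agent's other entitlements.
    """
    policy = COLUMN_POLICY.get((agent_purpose, query["table"]))
    if policy is None:
        raise PermissionError(
            f"purpose {agent_purpose!r} may not query table {query['table']!r}"
        )
    return {
        "table": query["table"],
        "columns": [c for c in query["columns"] if c in policy],
    }

# A support agent asking for payment details gets them silently dropped:
safe = enforce_purpose("customer_support",
                       {"table": "orders", "columns": ["order_id", "card_number"]})
# safe == {"table": "orders", "columns": ["order_id"]}
```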
Tamper-evident audit trails. In a world of autonomous action, auditability is non-negotiable. Every access decision, data query, and API call should be diligently logged, recording who, what, where, and why. Chain the logs so they are tamper-evident and actionable for auditors or incident responders, providing a clear narrative of each agent’s activities.
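A common way to make logs tamper-evident is a hash chain, where each entry commits to the one before it, so editing or deleting any record breaks every hash after it. A minimal sketch using only Python's standard library:

```python
import hashlib
import json

def append_entry(log: list, who: str, what: str, where: str, why: str) -> None:
    """Append a hash-chained audit record to the log."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"who": who, "what": what, "where": where, "why": why, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "hash": digest})

def verify_chain(log: list) -> bool:
    """Recompute every hash; any tampering shows up as a mismatch."""
    prev = "0" * 64
    for entry in log:
        body = {k: entry[k] for k in ("who", "what", "where", "why", "prev")}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or recomputed != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

audit_log = []
append_entry(audit_log, who="agent-billing-recon-001", what="read",
             where="invoices", why="monthly reconciliation")
assert verify_chain(audit_log)
```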
A practical roadmap to getting started
Start with an identity inventory. Catalog all non-human identities and service accounts. You will likely find accounts that are over-shared and over-provisioned. Begin issuing unique identities for each agent workload.
Pilot a just-in-time access platform. Implement a tool that delivers short-lived, scoped credentials for a specific project. This proves the concept and demonstrates the operational benefits.
Mandate short-lived credentials. Issue tokens that expire in minutes, not months. Find and remove static API keys and secrets from code and configuration.
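As a starting point, even a naive scan can surface obvious embedded secrets. The patterns below are illustrative only; a production effort would use a dedicated secret scanner:

```python
import re
from pathlib import Path

# Naive patterns for common static-credential shapes.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS-style access key IDs
    re.compile(r"(?i)(api[_-]?key|secret)\s*[:=]\s*['\"][^'\"]{16,}['\"]"),
]

def scan_for_static_secrets(root: str) -> list:
    """Return (file, line_number) pairs that look like embedded secrets."""
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.suffix not in {".py", ".yaml", ".yml",
                                                     ".json", ".cfg", ".env"}:
            continue
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if any(p.search(line) for p in SECRET_PATTERNS):
                hits.append((str(path), lineno))
    return hits
```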
Build a synthetic data sandbox. Validate agent workflows, scopes, guardrails, and policies on synthetic or masked data first. Promote agents to real data only after they pass your access-control, logging, and egress-policy checks.
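One simple masking approach is deterministic pseudonymization, which keeps joins and deduplication logic working across tables while keeping real identifiers out of the sandbox. A sketch, not a full anonymization strategy:

```python
import hashlib

def mask_record(record: dict, sensitive_fields: set) -> dict:
    """Deterministically pseudonymize sensitive fields for sandbox use."""
    return {
        key: hashlib.sha256(str(value).encode()).hexdigest()[:12]
        if key in sensitive_fields else value
        for key, value in record.items()
    }

# order_total survives for workflow validation; identifiers become pseudonyms.
masked = mask_record(
    {"customer_id": "C-1009", "email": "a@b.com", "order_total": 42.50},
    sensitive_fields={"customer_id", "email"},
)
```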
Run agent incident-response tabletop drills. Rehearse responses to leaked credentials, prompt injection, or rogue tool use. Prove you can revoke access, rotate credentials, and isolate an agent in minutes.
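A tabletop script might walk through containment in code, reusing the in-memory grant store from the just-in-time sketch above; rotate_credentials and quarantine are hypothetical stand-ins for calls into your secrets manager and network controls:

```python
def rotate_credentials(agent_id: str) -> None:
    print(f"rotating all secrets ever held by {agent_id}")

def quarantine(agent_id: str) -> None:
    print(f"cutting {agent_id} off from tools and network egress")

def contain_agent(agent_id: str, active_grants: dict) -> None:
    """The drill should prove each step completes in minutes, not days."""
    for token, grant in list(active_grants.items()):
        if grant["agent_id"] == agent_id:
            active_grants.pop(token)   # 1. revoke live access immediately
    rotate_credentials(agent_id)       # 2. invalidate anything it ever held
    quarantine(agent_id)               # 3. isolate the agent from its tools
```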
The bottom line
You can’t manage an agentic, AI-powered future with human-era identity tools. The organizations that win will recognize identity as the central nervous system of AI operations. Make identity the control plane, push authorization to runtime, bind data access to purpose, and prove value on synthetic data before touching the real thing. Do that, and you can scale to a million agents without scaling your breach risk.
Michelle Beckner is a NASA Information Systems Security Officer (ISSO).