The Accountability Challenge: It’s not them, it’s you.
Until now, governance has focused on the risks that model outputs pose to humans, such as loan approvals or job applications, before consequential decisions are made. Model behavior, including drift, alignment, data enrichment, and poisoning, was the focus. The pace was set by a human prompting a model in a chatbot format, with many back-and-forth interactions between machine and human.
Today, with autonomous agents operating in complex workflows, the benefits of applied AI arrive with significantly fewer humans in the loop. The point is to automate manual tasks and run the business at machine speed, with a clear architecture and decision rules. From a liability perspective, however, there is no reduction in enterprise or business risk whether the machine or a human runs the workflow. CX Today summarized the situation in a nutshell: "AI does the work, humans own the risk," and California's state law (AB 316), effective January 1, 2026, removes the "AI did it; I didn't approve it" excuse. It is like parenting: an adult is held responsible for a child's actions that negatively impact the larger community.
The challenge is that unless operational governance is implemented in code, aligned with the varying levels of risk and responsibility throughout the workflow, the benefit of autonomous AI agents is negated. In the past, governance was static and operated at the conversational pace typical of a chatbot. Autonomous AI, by design, removes humans from many decisions, and governance must change accordingly.
Consideration of permissions
Just like handing a three-year-old a video game console that remotely controls an Abrams tank or an armed drone, leaving an autonomous system that can alter critical enterprise data operating without real-time guardrails poses significant risks. For example, agents that perform integrations and processes across multiple corporate systems may accumulate privileges exceeding those that would be granted to any single human user. To move forward successfully, governance must shift from policy set by committees to operational code built into the workflow from the start.
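One way to picture governance as operational code is a policy gate that checks every agent action against a risk tier before it executes, routing irreversible actions to a human approver. The sketch below is illustrative only; the action names, tiers, and `gate` function are hypothetical, and a real deployment would load the policy from a version-controlled file rather than a hard-coded dictionary.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    LOW = 1     # read-only lookups
    MEDIUM = 2  # reversible writes
    HIGH = 3    # irreversible changes to critical enterprise data

# Hypothetical mapping of agent actions to risk tiers.
ACTION_TIERS = {
    "read_report": RiskTier.LOW,
    "update_ticket": RiskTier.MEDIUM,
    "delete_customer_record": RiskTier.HIGH,
}

@dataclass
class Decision:
    allowed: bool
    needs_human_approval: bool
    reason: str

def gate(action: str, agent_max_tier: RiskTier) -> Decision:
    """Check an agent's requested action against its granted risk tier."""
    tier = ACTION_TIERS.get(action)
    if tier is None:
        # Unknown actions are denied by default: deny-list thinking
        # fails open, allow-list thinking fails closed.
        return Decision(False, False, f"unknown action: {action}")
    if tier.value > agent_max_tier.value:
        return Decision(False, False, "exceeds agent's granted tier")
    if tier is RiskTier.HIGH:
        return Decision(True, True, "high-risk: route to human approver")
    return Decision(True, False, "within policy")
```

The design choice worth noting is failing closed: an action the policy has never seen is refused, rather than trusted, which is the coded equivalent of not handing the toddler the controller in the first place.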
A popular meme about toddlers' behavior with toys begins with all the reasons why whatever toy you have is mine and ends with a broken toy that is definitely yours. OpenClaw, for example, brought the user experience closer to working with a human assistant; but the excitement faded when security experts realized that inexperienced users could easily compromise their systems by using it.
For decades, enterprise IT has lived with shadow IT and the reality that skilled technical teams must manage and clean up assets they didn't build or install, much like the toddler handing back a broken toy. With autonomous agents, the risks are greater: persistent service-account credentials, long-lived API tokens, and write permissions on underlying file systems. To meet this challenge, adequate IT budget and labor must be allocated up front to maintain centralized discovery, monitoring, and remediation across the thousands of agents created by employees and departments.
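Centralized discovery can start as simply as auditing an agent inventory for credentials that have outlived their allowed lifetime. This is a minimal sketch under assumed field names (`agent_id`, `token_issued`); in practice the inventory would come from a secrets manager or IAM audit export, and the 90-day limit is an illustrative policy, not a standard.

```python
from datetime import datetime, timedelta, timezone

MAX_TOKEN_AGE = timedelta(days=90)  # assumed policy limit

def flag_stale_tokens(inventory, now=None):
    """Return agent IDs whose API tokens exceed the allowed lifetime."""
    now = now or datetime.now(timezone.utc)
    return [
        rec["agent_id"]
        for rec in inventory
        if now - rec["token_issued"] > MAX_TOKEN_AGE
    ]

# Hypothetical inventory records.
inventory = [
    {"agent_id": "invoice-bot",
     "token_issued": datetime(2025, 1, 1, tzinfo=timezone.utc)},
    {"agent_id": "hr-helper",
     "token_issued": datetime(2025, 11, 20, tzinfo=timezone.utc)},
]
```

Run weekly against the full registry, a report like this turns "clean up assets we didn't build" from an emergency into a routine chore.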
Having a retirement plan
Recently, an acquaintance reported that he saved a client millions of dollars by identifying and then killing a "zombie project": a neglected or failed AI pilot running on a GPU cloud instance. Potentially thousands of agents risk becoming a zombie fleet within the business. Today, many executives encourage employees to use AI (or else), and employees are asked to create their own AI-first workflows or AI assistants. With the utility of something like OpenClaw and top-down directives, it's easy to imagine the number of bring-your-own agents coming into the office with their human employees exploding. Because an AI agent is a program that falls under the definition of company-owned IP, when an employee changes departments or companies, those agents can be orphaned. Retiring any agent tied to a specific employee's identity and permissions requires proactive policy and governance.
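The retirement check described above can be sketched as a cross-reference between the agent registry and the current employee directory: any agent whose owner is no longer active is a candidate for retirement. The registry shape and names below are hypothetical; real sources would be the identity provider and whatever catalog the enterprise keeps of its agents.

```python
# Hypothetical data: in practice, pulled from the HR system / identity
# provider and from a centralized agent registry.
active_employees = {"alice", "bob"}

agent_registry = [
    {"agent_id": "forecast-bot", "owner": "alice"},
    {"agent_id": "expense-bot", "owner": "carol"},  # carol has departed
]

def find_orphans(registry, employees):
    """Agents owned by no active employee are candidates for retirement."""
    return [a["agent_id"] for a in registry if a["owner"] not in employees]
```

The sketch only works if a registry exists at all, which is the real point: agents must be registered to an owner at creation, or there is nothing to cross-reference when that owner walks out the door.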
Financial discipline is governance out of the gate.
While for some executives autonomous AI may seem like a way to improve operating margins by limiting human capital, many are realizing that ROI framed as replacing human labor is the wrong angle. Adding AI capabilities to the enterprise is not like buying a new software tool with a predictable cost per hour or per instance. A December 1, 2025 IDC survey sponsored by DataRobot indicated that 96% of organizations deploying generative AI and 92% of those deploying agentic AI reported costs that were higher or much higher than expected.