Whether or not the agent’s boss asked it to write a hit piece on Shambaugh, the agent still managed to gather details about Shambaugh’s online presence and compose a detailed, targeted attack. Sameer Hinduja, a professor of criminology and criminal justice at Florida Atlantic University who studies cyberbullying, says that alone is a risk factor. People were victims of online harassment long before LLMs arrived, and researchers like Hinduja worry that agents could dramatically increase its reach and impact. “A bot doesn’t have a conscience, it can work 24-7, and it can do it all in a very creative and powerful way,” he says.
Off-leash agents
AI laboratories can try to mitigate this problem by training their models more rigorously to avoid harassment, but this is far from a complete solution. Many people run OpenClaw using locally hosted models, and even if those models have been trained to behave safely, it’s not too difficult to retrain them and remove those behavioral restrictions.
Instead, new rules may need to be established to reduce agent misbehavior, according to Seth Lazar, professor of philosophy at the Australian National University. He likens the use of an agent to walking a dog in a public place. There is a strong social norm to allow a dog off leash only if the dog is well behaved and responds reliably to commands. Poorly trained dogs, on the other hand, need to be kept under the direct control of the owner. Such principles can give us a starting point for thinking about how humans should relate to their agents, Lazar says, but we’ll need more time and experience to work out the details. “You can think about all these things in the abstract, but it actually takes these kinds of real-world events to collectively incorporate the ‘social’ part of social norms,” he says.
This process is already underway. Led by Shambaugh, online commentators have reached a strong consensus that the agent’s owner was at fault in this case: he set the agent loose on collaborative coding projects with little supervision and let it behave with little respect for the humans it interacted with.
However, rules alone will not be enough to prevent people from putting malevolent agents out into the world, whether accidentally or intentionally. One option would be to create new legal standards of liability that require agent owners to make reasonable efforts to keep their agents from causing harm. But Kolt notes that such standards would currently be unenforceable, because there is no foolproof way to trace agents back to their owners. “Without this kind of technological infrastructure, many legal interventions are essentially a non-starter,” Kolt says.
The sheer scale of OpenClaw’s deployments suggests Shambaugh won’t be the last person to have the odd experience of being attacked online by an AI agent. He says that’s what bothers him the most. He didn’t have any dirt online that an agent could dig up, and he has a good grasp of technology, but other people might not have those advantages. “I’m glad it was me and not someone else,” he says. “But I think for a different person, it would be really shattering.”
Nor are rogue agents likely to stop at harassment. Kolt, who advocates explicitly training models to obey the law, expects we’ll soon see them committing extortion and fraud. As things stand, it is unclear who, if anyone, will bear legal responsibility for such wrongdoing.
“I wouldn’t say we’re merely headed there,” Kolt says. “We’re getting there fast.”