The implication, spurred by new demonstrations of humanoid robots clearing dishes or assembling cars, is that replicating human limbs with single-purpose robot arms is an outdated approach to automation. The new approach is to mimic the way humans think, learn, and adapt while working. The problem is that the lack of transparency about the human labor involved in training and operating such robots leads the public to misunderstand what robots can actually do, and to miss the new forms of work emerging around them.
Consider how, in the AI era, robots often learn from humans who show them how to do things. The large-scale creation of this data is now underway, and some of the scenarios are Black Mirror-esque. Rest of World reported, for example, that a worker in Shanghai recently spent a week wearing a virtual-reality headset and an exoskeleton, opening and closing a microwave door over and over to train the robot that accompanies him. In North America, the robotics company Figure appears to be planning something similar: in September, it announced a partnership with the investment firm Brookfield, which manages 100,000 residential units, to capture "massive amounts" of real-world data "across multiple home environments." (Figure did not respond to questions about the effort.)
Just as our words became the training data for large language models, our movements are now poised to follow the same path. Except that this future could leave humans with an even worse deal, and it has already begun. Roboticist Aaron Prather told me about recent work with a delivery company whose workers wore movement-tracking sensors while moving boxes; the collected data will be used to train robots. The effort to create humanoids will likely require manual laborers to act as large-scale data collectors. "It's going to be weird," Prather says. "No doubt about it."
Or consider teleoperation. Although the endgame in robotics is a machine that can complete a task on its own, robotics companies employ people to operate their robots remotely. Neo, a $20,000 humanoid robot from the startup 1X, is set to ship to homes this year, but the company's founder, Bernt Øivind Børnich, told me recently that he isn't committing to any set level of autonomy. If a robot gets stuck, or if a customer wants it to do a difficult task, a teleoperator at the company's headquarters in Palo Alto, Calif., will pilot it, watching through its cameras as it irons clothes or unloads the dishwasher.
This isn't inherently harmful: the customer's consent is obtained before the robot switches to teleoperation mode. But privacy as we know it won't exist in a world where teleoperators are doing your housework through robots. And if domestic humanoids are not truly autonomous, teleoperation is best understood as a form of remote wage labor, one that recreates the dynamics of gig work while, for the first time, allowing physical work to be performed from wherever labor is cheap.
We have been down similar roads before. Performing "AI-powered" content moderation on social media platforms, or collecting training data for AI companies, often requires workers in low-wage countries to view disturbing content. And despite claims that AI will soon train on its own outputs and learn by itself, even the best models need a lot of human feedback to work as intended.
These human workforces don't mean AI is just vapor. But as long as they remain hidden, the public will keep overestimating the true capabilities of the machines.
That's great for investors and for hype, but it has consequences for everyone else. For example, when Tesla marketed its driver-assistance software as "Autopilot," it inflated public expectations of what the system could do safely, a misrepresentation that a Miami jury recently found contributed to a crash that killed a 22-year-old woman. (Tesla was ordered to pay $240 million in damages.)
The same will be true for humanoid robots. If Huang is right, and physical AI is coming to our workplaces, homes, and public spaces, then how we describe and evaluate such technology matters. Yet robotics companies remain as vague about training and teleoperation as AI firms are about their training data. If this doesn't change, we risk mistaking hidden human labor for machine intelligence, and ascribing far more autonomy to these machines than actually exists.