That’s because, until the last several decades, people weren’t creating massive clouds of data that opened up new possibilities for surveillance. The Fourth Amendment, which protects against unreasonable searches and seizures, was written when gathering information meant entering people’s homes.
Subsequent laws, such as the Foreign Intelligence Surveillance Act of 1978 and the Electronic Communications Privacy Act of 1986, were passed when surveillance meant wiretapping phone calls and intercepting e-mails. Most of the laws governing surveillance were on the books before the Internet took off. People weren’t building extensive online data trails, and the government didn’t have sophisticated tools to analyze the data.
Now we do, and AI supercharges the kind of monitoring that can be done. “What AI can do is it can take a lot of information, none of which is self-aware, and therefore none of which is self-regulating, and it can give government a lot of powers that government didn’t have before,” says Rosenstein.
AI can combine individual pieces of information to find patterns, make inferences, and build detailed profiles of people. And as long as the government collects the information legally, it can do whatever it wants with it, including feeding it to AI systems. “The law doesn’t match technological reality,” says Rosenstein.
Although surveillance can raise serious privacy concerns, the Pentagon has legitimate national security interests in collecting and analyzing data about Americans. “In order to collect information about Americans, it has to be for a very specific subset of missions,” says Lorraine Voss, a former military intelligence officer at the Pentagon.
For example, a counterintelligence mission may need information about an American who is working for a foreign country or conspiring to engage in international terrorist activity. But intelligence collection that starts out targeted can sometimes expand to sweep in more data. “That kind of combination makes people uncomfortable,” Voss says.
Fair use
OpenAI says its contract now includes language stating that the company’s AI system “shall not be used knowingly for domestic surveillance of US persons and citizens” in accordance with applicable laws. The amendment clarifies that it “prohibits the intentional tracking, monitoring or surveillance of US persons or citizens, including through the purchase or use of commercially obtained personal or identifiable information.”
But the added language may not do much to limit the provision that the Pentagon can use the company’s AI systems for all lawful purposes, which could include collecting and analyzing sensitive personal information. “OpenAI can say whatever it wants in its contract … but the Pentagon will use the technology for what it sees as legitimate,” says Jessica Tliepman, a law professor at George Washington University Law School. That may include domestic monitoring. “Most of the time, companies won’t be able to stop the Pentagon from doing anything,” she says.