There has also been an explosion of political deepfakes. The Trump administration, for example, has regularly produced and shared AI-generated images and videos. Not all of them are intended to appear genuine, but many are designed to influence public opinion and humiliate the people they depict.
In January, meanwhile, Texas Attorney General Ken Paxton shared a video in which Sen. John Cornyn, his opponent in the Republican primary for the U.S. Senate seat, is seen dancing with Rep. Jasmine Crockett, a contender for the Democratic nomination. But that never happened, a fact the ad did not clearly disclose.
Proposed solutions include having big AI firms add new technical safeguards and detection methods, encouraging users to take more security measures, and enacting new legislation or applying existing regulatory frameworks, such as copyright law, to the problem.
But they all have limitations. Technical safeguards can be ignored; bad actors, for example, can easily access open-source models built without protections. Getting people to change their behavior, such as watermarking their photos or posting less personal information online, is simply unrealistic. And stronger regulations require enforcement. While President Trump has signed legislation criminalizing deepfake porn, his administration continues to post other types of malicious deepfakes. In late January, for example, the White House shared an altered portrait of a Minneapolis civil rights attorney that darkened her skin and changed her facial expression from calm to exaggerated crying.
The problem could soon get worse. The United States has high-stakes midterm elections later this year, and the federal agencies that traditionally focused on the integrity of election-related information have been weakened. Similarly, many of the outside research groups dedicated to fact-checking and countering election misinformation have also been weakened.