Hany Farid, a UC Berkeley professor who specializes in digital forensics but was not involved in Microsoft’s research, says that if the industry adopts the company’s blueprint, it will be significantly harder to deceive the public with manipulated content. Sophisticated individuals or governments could work to circumvent such safeguards, he says, but the new standard could still weed out a significant portion of misleading content.
“I don’t think it solves the problem, but I think it makes a pretty big difference,” he says.
Still, there are reasons to see Microsoft’s approach as an example of somewhat simplistic techno-optimism. There is growing evidence that people are influenced by AI-generated content even when they know it is false. And in a recent study of pro-Russian AI-generated videos about the war in Ukraine, comments pointing out that the videos were created with AI received far less engagement than comments that took them to be genuine.
“Are there people who, no matter what you tell them, will believe what they believe?” Farid asks. “Yes.” But, he adds, “there is a large majority of Americans and citizens around the world who I think want to know the truth.”
That desire has not gone entirely unanswered by tech companies. Google began adding watermarks to content generated by its AI tools in 2023, which Farid says has helped in his research. Some platforms use C2PA, a standard Microsoft helped launch in 2021. But the full suite of changes Microsoft proposes, powerful as it may be, could remain mere suggestions if it threatens the business models of AI companies or social media platforms.