Their findings are the latest in a growing body of research demonstrating the persuasive power of LLMs. The authors warn that the results show how AI tools can craft sophisticated, convincing arguments if they have even minimal information about the humans they are interacting with. The research has been published in the journal Nature Human Behaviour.
“Policymakers and online platforms should seriously consider the threat of coordinated AI-based disinformation campaigns, as we have clearly reached the technological level where it is possible to create a network of LLM-based automated accounts able to strategically nudge public opinion,” says Riccardo Gallotti, a co-author of the study.
“These bots could be used to spread disinformation,” he says, “and this kind of diffused influence would be very hard to debunk in real time.”
The researchers recruited 900 people based in the United States and had them provide personal information such as their gender, age, race, education level, employment status, and political affiliation.
Participants were then matched with either another human or GPT-4 and asked to debate one of 30 randomly assigned topics, such as whether the United States should ban fossil fuels or whether students should wear school uniforms, for 10 minutes. Each participant was instructed to argue either for or against the topic, and in some cases they were given personal information about their opponent so they could better tailor their argument. At the end, participants reported how much they agreed with the proposition and whether they thought they had been debating a human or an AI.