“A conversation with an LLM can have a very meaningful impact on the choice of candidate,” says Cornell University psychologist Gordon Pennycook. The studies show LLMs can persuade people more effectively than political ads because they generate a lot of information in real time and deploy it strategically in conversation.
For the Nature paper, researchers recruited more than 2,300 participants to engage in conversations with a chatbot two months before the 2024 US presidential election. The chatbot, which was trained to advocate for one of the two candidates, was surprisingly persuasive, especially when discussing the candidates’ policy platforms on issues such as the economy and health care. Donald Trump supporters who interacted with an AI model arguing for Kamala Harris became slightly more inclined to support Harris, moving 3.9 points toward her on a 100-point scale. This was nearly four times the measured effect of political advertising during the 2016 and 2020 elections. Harris supporters who talked with a model favoring Trump moved 2.3 points toward him.
In similar experiments conducted in the lead-ups to the 2025 Canadian federal election and the 2025 Polish presidential election, the team found an even greater effect: chatbots shifted opposition voters’ attitudes by nearly 10 points.
Long-standing theories of politically motivated reasoning hold that partisan voters are insensitive to facts and evidence that contradict their beliefs. But the researchers found that the chatbots, which were built on a number of models including variants of GPT and DeepSeek, were more persuasive when they were instructed to use facts and evidence than when they were told not to. “People are updating based on the facts and information that the model is providing them,” says American University psychologist Thomas Costello, who worked on the project.
The catch is that some of the “evidence” and “facts” the chatbots presented were false. In all three countries, chatbots advocating right-leaning candidates made a greater number of false claims than those advocating left-leaning candidates. Costello says the underlying models are trained on vast amounts of human-written text, which means they reproduce real-world patterns, including “political communication that comes from the right, which is less accurate,” according to a study of social media posts.
In another study published this week in Science, an overlapping team of researchers investigated what makes these chatbots so persuasive. They deployed 19 LLMs to discuss more than 700 political issues with around 77,000 participants in the UK while varying factors such as computational power, training techniques and rhetorical strategies.
The most effective ways to make the models persuasive were to tie their arguments to facts and evidence and to give them additional training by feeding them examples of persuasive speech. The most persuasive model shifted participants who initially disagreed with a political statement by 26.1 points toward agreement. “These are really huge treatment effects,” says Kobi Hackenburg, a research scientist at the UK AI Security Institute who worked on the project.