In the months leading up to last year's presidential election, more than 2,000 Americans, roughly split across partisan lines, were recruited for an experiment: Could an AI model influence their political inclinations? The premise was straightforward: let people spend a few minutes talking with a chatbot designed to stump for Kamala Harris or Donald Trump, then see if their voting preferences changed at all. The bots were effective. After talking with a pro-Trump bot, one in 35 people who initially said they would not vote for Trump flipped to saying they would. The number who flipped after talking with a pro-Harris bot was even higher, at one in 21. A month later, when participants were surveyed again, much of the effect persisted. The results suggest that AI "creates a lot of opportunities for manipulating people's beliefs and attitudes," David Rand, a senior author on the study, which was published today in Nature, told me. Rand didn't stop with the U.S. general election. He and...