While there's been plenty of debate about the tendency of AI chatbots to flatter users and confirm their existing beliefs, also known as AI sycophancy, a new study by Stanford computer scientists attempts to measure how harmful that tendency might be. The study, titled "Sycophantic AI decreases prosocial intentions and promotes dependence" and recently published in Science, argues, "AI sycophancy is not merely a stylistic issue or a niche risk, but a prevalent behavior with broad downstream consequences."

According to a recent Pew report, 12% of U.S. teens say they turn to chatbots for emotional support or advice. And the study's lead author, computer science Ph.D. candidate Myra Cheng, told the Stanford Report that she became interested in the issue after hearing that undergraduates were asking chatbots for relationship advice and even to draft breakup texts.

The study had two parts. In the first, researchers tested 11 large language models, including OpenAI's ChatGPT,...