AI chatbot responds to emotional cuing

Research Square (2023)

Abstract
Emotion has long been considered to distinguish humans from Artificial Intelligence (AI). Previously, AI's ability to interpret and express emotions was seen as mere text interpretation. In humans, emotions coordinate a suite of behavioral actions, e.g., becoming risk averse under negative emotion or generous under positive emotion. We therefore investigated whether AI chatbots show a similar coordination of behavior with emotional cues. We treated AI chatbots like human participants, prompting them with scenarios that prime positive emotions, negative emotions, or no emotions. Multiple OpenAI ChatGPT Plus accounts then answered questions on investment decisions and prosocial tendencies. We found that ChatGPT-4 bots primed with positive emotions, negative emotions, and no emotions exhibited different risk-taking and prosocial actions. These effects were weaker among ChatGPT-3.5 bots. The ability to coordinate responses with emotional cues may have grown stronger as large language models evolved. This highlights the potential of influencing AI using emotion and suggests that complex AI possesses a capacity necessary for "having" emotion.