Evaluating the Experience of LGBTQ+ People Using Large Language Model Based Chatbots for Mental Health Support
CoRR (2024)
Abstract
LGBTQ+ individuals are increasingly turning to chatbots powered by large
language models (LLMs) to meet their mental health needs. However, little
research has explored whether these chatbots can adequately and safely provide
tailored support for this demographic. We interviewed 18 LGBTQ+ and 13
non-LGBTQ+ participants about their experiences with LLM-based chatbots for
mental health needs. LGBTQ+ participants relied on these chatbots for mental
health support, likely due to an absence of support in real life. Notably,
while LLMs offer prompt support, they frequently fall short in grasping the
nuances of LGBTQ+-specific challenges. Although fine-tuning LLMs to address
LGBTQ+ needs is a step in the right direction, it is not a panacea. The
deeper issue is entrenched in societal discrimination. Consequently, we call on
future researchers and designers to look beyond mere technical refinements and
advocate for holistic strategies that confront and counteract the societal
biases burdening the LGBTQ+ community.