Validity of chatbot use for mental health assessment: experimental study

Bibliographic Details
Main Authors: Schick, Anita (Author), Feine, Jasper (Author), Morana, Stefan (Author), Maedche, Alexander (Author), Reininghaus, Ulrich (Author)
Format: Article (Journal)
Language: English
Published: 31.10.2022
In: JMIR mHealth and uHealth
Year: 2022, Volume: 10, Issue: 10, Pages: 1-16
ISSN: 2291-5222
DOI: 10.2196/28082
Online Access: Publisher, free of charge, full text: https://doi.org/10.2196/28082
Publisher, free of charge, full text: https://mhealth.jmir.org/2022/10/e28082
Description
Summary:
Background: Mental disorders in adolescence and young adulthood are major public health concerns. Digital tools such as text-based conversational agents (ie, chatbots) are a promising technology for facilitating mental health assessment. However, the human-like interaction style of chatbots may induce potential biases, such as socially desirable responding (SDR), and may require further effort to complete assessments.
Objective: This study aimed to investigate the convergent and discriminant validity of chatbots for mental health assessments, the effect of assessment mode on SDR, and the effort required by participants for assessments using chatbots compared with established modes.
Methods: In a counterbalanced within-subject design, we assessed 2 different constructs—psychological distress (Kessler Psychological Distress Scale and Brief Symptom Inventory-18) and problematic alcohol use (Alcohol Use Disorders Identification Test-3)—in 3 modes (chatbot, paper-and-pencil, and web-based), and examined convergent and discriminant validity. In addition, we investigated the effect of mode on SDR, controlling for perceived sensitivity of items and individuals’ tendency to respond in a socially desirable way, and we also assessed the perceived social presence of modes. Including a between-subject condition, we further investigated whether SDR is increased in chatbot assessments when applied in a self-report setting versus when human interaction may be expected. Finally, the effort (ie, complexity, difficulty, burden, and time) required to complete the assessments was investigated.
Results: A total of 146 young adults (mean age 24, SD 6.42 years; n=67, 45.9% female) were recruited from a research panel for laboratory experiments. The results revealed high positive correlations (all P<.001) of measures of the same construct across different modes, indicating the convergent validity of chatbot assessments. Furthermore, there were no correlations between the distinct constructs, indicating discriminant validity. Moreover, there were no differences in SDR between modes and whether human interaction was expected, although the perceived social presence of the chatbot mode was higher than that of the established modes (P<.001). Finally, greater effort (all P<.05) and more time (P<.001) were needed to complete chatbot assessments than the established modes.
Conclusions: Our findings suggest that chatbots may yield valid results. Furthermore, an understanding of chatbot design trade-offs in terms of potential strengths (ie, increased social presence) and limitations (ie, increased effort) when assessing mental health was established.
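The validity logic described in the abstract can be illustrated in miniature: convergent validity is shown by high correlations between the same construct measured in different modes, discriminant validity by near-zero correlations between distinct constructs. The sketch below uses entirely synthetic data (not the study's data); the variable names referencing the scales and the noise levels are assumptions for illustration only.

```python
# Illustrative sketch with synthetic data, NOT the study's dataset:
# same-construct scores across two modes share a latent trait (convergent),
# while a distinct construct is drawn independently (discriminant).
import random
import statistics

random.seed(0)

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

n = 146  # sample size reported in the abstract

# Latent traits for the two distinct constructs
latent_distress = [random.gauss(0, 1) for _ in range(n)]
latent_alcohol = [random.gauss(0, 1) for _ in range(n)]

# Same construct (distress) assessed in two modes: scores = trait + mode noise
distress_chatbot = [t + random.gauss(0, 0.3) for t in latent_distress]
distress_paper = [t + random.gauss(0, 0.3) for t in latent_distress]
# Distinct construct (alcohol use) assessed in the chatbot mode
alcohol_chatbot = [t + random.gauss(0, 0.3) for t in latent_alcohol]

convergent = pearson(distress_chatbot, distress_paper)   # expected: high
discriminant = pearson(distress_chatbot, alcohol_chatbot)  # expected: near zero
print(f"convergent r = {convergent:.2f}, discriminant r = {discriminant:.2f}")
```

With the noise level chosen here, the cross-mode correlation comes out high while the cross-construct correlation hovers near zero, mirroring the pattern the abstract reports.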
Item Description: Viewed on 24.09.2024
Physical Description: Online Resource