Research Alert
Background: Recent years have seen an immense surge in the creation and use of chatbots as social and mental health companions. These tools, which aim to provide empathic responses and deliver personalized support, are often presented as offering immense potential. However, it is also essential that we understand the risks of their deployment, including potential adverse impacts on the mental health of users, particularly those most at risk.

Objective: This study aims to assess the ethical and pragmatic clinical implications of using chatbots that claim to aid mental health. While several studies within human-computer interaction and related fields have examined users’ perceptions of such systems, few have engaged mental health professionals in a critical analysis of these systems’ conduct as mental health support tools.

Methods: This study engaged 8 interdisciplinary mental health professionals (spanning psychology, psychotherapy, social care, and crisis volunteer work) in a mixed methods, hands-on analysis of 2 popular mental health–related chatbots’ data handling, interface design, and responses. This analysis was carried out through profession-specific tasks with each chatbot, eliciting participants’ perceptions through both the Trust in Automation scale and semistructured interviews. The chatbots’ implications for mental health support were then evaluated through thematic analysis and a 2-tailed, paired t test.

Results: Qualitative analysis revealed emphatic initial impressions among mental health professionals that the chatbot responses were likely to produce harm, exhibited a generic mode of care, and risked user dependence and manipulation, given the central role of trust in the therapeutic relationship. Trust in Automation scale scores, while exhibiting no statistically significant difference between the chatbots (t6=–0.76; P=.48), indicated medium to low trust in each chatbot. These findings highlight that the design and development of artificial intelligence (AI)–driven mental health–related solutions must be undertaken with utmost caution. The mental health professionals in this study collectively resisted these chatbots and made clear that AI-driven chatbots used for mental health by at-risk users invite several specific potential harms.

Conclusions: Through this work, we contribute insights into the mental health professional perspective on the design of chatbots used for mental health and underscore the necessity of ongoing critical assessment and iterative refinement to maximize the benefits and minimize the risks associated with integrating AI into mental health support.