Summary: A recent study suggests that ChatGPT’s responses to psychotherapy scenarios are frequently rated higher than those written by human professionals. Researchers found that participants struggled to differentiate between AI-generated and therapist-written responses to couples therapy vignettes. ChatGPT’s responses were usually longer and contained more nouns and adjectives, providing greater contextualization.
This additional detail may have helped raise scores on fundamental counseling principles. The study raises ethical and practical questions about how AI might be integrated into mental health care while highlighting its potential role in therapeutic interventions. The researchers stress the need for experts to be involved in AI development to ensure responsible oversight.
Essential Information
- Higher Scores: ChatGPT’s responses were rated higher on key counseling principles.
- Indistinguishable Responses: Participants struggled to tell AI-generated responses from human-written ones.
- Possible Integration: Findings suggest AI may play a role in future therapeutic interventions.
Source: PLOS
When comparing messages written by psychotherapists to those written by ChatGPT, the latter are usually rated higher, according to a study published February 12, 2025, in the open-access journal PLOS Mental Health by H. Dorian Hatch of The Ohio State University, co-founder of Hatch Data and Mental Health, and colleagues.
Given some of the advantages of working with generative artificial intelligence (AI), the question of whether machines could serve as therapists has attracted growing attention.
Recent research suggests that AI can write empathically and that the generated content is highly regarded by both mental health professionals and volunteer service users, so much so that it is frequently preferred over content written by professionals. This builds on earlier research showing that humans may struggle to tell the difference between responses written by machines and those written by people.
Hatch and colleagues’ new study, which included over 800 participants, found that when presented with 18 couples therapy vignettes, people could rarely tell whether responses were written by ChatGPT or by therapists.
This finding echoes Alan Turing’s prediction that people would be unable to distinguish between responses written by machines and those written by people. In addition, ChatGPT’s responses were generally rated higher on the core guiding principles of counseling.
Further analysis revealed that ChatGPT’s responses were typically longer than those provided by the therapists. Even after controlling for length, ChatGPT continued to use more nouns and adjectives than the therapists.
Considering that nouns can be used to describe people, places, and things, and adjectives can be used to provide more context, this could mean that ChatGPT contextualizes more heavily than the therapists.
This more in-depth contextualization may have led respondents to rate the ChatGPT responses higher on the common elements of therapy (components considered necessary for achieving desired outcomes).
These findings, in the authors’ view, may offer a first glimpse into ChatGPT’s potential to enhance psychotherapeutic processes. This work could help develop new methods of testing and creating psychotherapeutic interventions.
The authors urge mental health professionals to expand their technical literacy in order to ensure that AI models are being carefully trained and supervised by responsible professionals, thereby improving access to care, given the growing evidence that generative AI can be useful in clinical settings.
The authors add that since the development of ELIZA nearly sixty years ago, researchers have debated whether AI could function as a therapist. Their findings suggest the answer may be “yes,” despite many significant questions still lingering.
They hope that, before the AI train leaves the station, this work will inspire both the public and mental health practitioners to ask important questions about the justifiability and value of integrating AI into mental health treatment.
About this research in the field of AI and psychotherapy
Author: Charlotte Bhaskar
Source: PLOS
Contact: Charlotte Bhaskar – PLOS
Image: The image is credited to Neuroscience News
Original Research: Open access.
“When ELIZA meets therapists: A Turing test for the heart and mind” by H. Dorian Hatch et al. PLOS Mental Health
Abstract
When ELIZA meets therapists: A Turing test for the heart and mind
“Can machines be therapists?” is a question receiving increased attention given the relative ease of working with generative artificial intelligence.
Although recent (and decades-old) research has shown that humans struggle to distinguish responses from machines and humans, recent findings suggest that artificial intelligence can write empathically and that the generated content is rated highly by mental health professionals, often outperforming content written by professionals.
In a preregistered competition in which therapists (N = 13) and ChatGPT responded to therapeutic vignettes about couples therapy, we assessed whether a) a panel of participants could determine which responses were generated by ChatGPT and which were written by therapists, b) the generated responses or the therapist-written responses were more in line with key therapy principles, and c) there were linguistic differences between conditions.
We found that a) ChatGPT responses were rarely distinguished from responses written by a therapist, b) ChatGPT responses were generally rated higher on key psychotherapy principles, and c) there were differences in language patterns between ChatGPT and therapists.
We then examined why ChatGPT’s responses were rated higher than the therapists’ responses, finding that part-of-speech usage and response sentiment may account for these differences.
This may be an early indication that ChatGPT has the potential to improve psychotherapeutic processes. We anticipate that this work will lead to the development of new methods of testing and creating psychotherapeutic interventions.
Further, we discuss limitations (including the lack of therapeutic context) and how continued research in this area may improve the efficacy of psychotherapeutic interventions, allowing such interventions to be placed in the hands of individuals who need them most.