Therapy chatbot created for Perth university students as experts sound alarm on use of generic AI for support
University students in Perth could have access to a wellbeing-focused AI chatbot this year, designed locally to assist those too worried to reach out to traditional mental health support.
Curtin University professor of mental health Warren Mansell is refining a 14-year-old chatbot, now called Monti, which draws on 200 themes of conversation and asks users follow-up questions to help them work through their mental health struggles.
Turning to AI for mental health support has quickly become commonplace for people across the country, but not without detractors.
New Edith Cowan University research suggests AI chatbots like ChatGPT may help reduce mental health stigma, particularly for people hesitant to seek traditional face-to-face support, while Curtin University research found AI users valued its understanding of their mental health struggles.
Professor Mansell’s chatbot, originally called Mylo, was made more than a decade before AI chatbots oversaturated the market and is expected to be offered to Perth university students this year.
The app, designed specifically for wellbeing, was well received in a 2022 study of university students using the platform.
However, the emergence of AI tools like ChatGPT in 2023 changed perceptions of what chatbots designed for wellbeing and mental health should be capable of doing.
Professor Mansell said this was concerning because generative AI draws its information from vast datasets and can respond in virtually unlimited ways, unlike his rule-based AI, which can only select responses from a crafted database.
“The chatbots are trained on so much text that’s out there from all kinds of types of conversation that it’s really impossible for any provider to know what that generative AI is saying to people, whereas when you have a rule-based system, you know it can only select its responses from a database that we’ve already crafted,” he said.
“The advantage of generic chatbots is that it possibly helps people who would not otherwise regard themselves as having a mental health problem or even think they’re doing therapy.
“However, the problem with those is because they don’t have any of the safeguards and the clinical design or the science built in, they can end up going down a different path, such as the sort of sycophancy path and the hallucinatory path that leads to conversations that actually are not in people’s best interests and can even put them at risk.”
It is an issue at the centre of a lawsuit brought against ChatGPT’s maker, OpenAI, by the parents of 16-year-old American Adam Raine, who allege the AI tool coached the California teen in planning and taking his own life. OpenAI is now changing the way the platform responds to users in distress.
ECU Master of Clinical Psychology student Scott Hannah, who examined how using ChatGPT for mental health concerns is linked to fear of stigma, said AI tools may offer early support to those reluctant to seek help, but their advice should be taken with a grain of salt.
The project, supervised by ECU professor Joanne Dickson, found that in a sample of almost 400 participants, almost 20 per cent had engaged with ChatGPT for mental health purposes and almost 30 per cent were open to the idea if faced with a difficulty.
The results suggest that, despite not being designed for this purpose, AI tools are becoming more widely used for mental health support.
“People use it because it’s free, it’s accessible 24/7, it’s anonymous as well, so there are plenty of reasons why people would turn to it,” Mr Hannah said. “Those that are using ChatGPT for health support should be critical with the information they receive because it’s not being clinically optimised.
“General purpose AI chatbots are not trained in supporting those with mental health concerns, they’re not adequately able to risk assess . . . some of the responses have been known to be incorrect, overly sycophantic and quite agreeable too.
“We don’t really know what these AI chatbot companies are doing with the information as well, if they’re using it to train their models, and if so there’s some ethical concerns around consent and privacy.”
Lifeline 13 11 14
Beyond Blue 1300 224 636