What's next?
The story of AI in the treatment of mental illness has only just begun – and further fields of application are on the horizon. AI models can also support therapists in planning and evaluating therapies: in a project at the University of Basel, for example, artificial intelligence analyses video sessions and calculates the probability of a patient dropping out of therapy.
A spectacular advance could come from combining AI with virtual or augmented reality. AI could, for example, create a customised, interactive virtual training environment in which patients practise coping with everyday situations or confronting their fears.
The limits
So, are the capacity problems in psychotherapy about to be solved? Caution is advised – when it comes to mental illness, expectations of AI should not be set too high. It has limitations that weigh heavily in psychotherapy. AI does not really understand what people say, nor is it capable of genuine empathy. In essence, it proceeds statistically: ‘In the learning data, people with these language peculiarities had depression in 77% of cases.’ Or: ‘If a person does not respond to therapeutic suggestion A, then a comparison of their medical history with other medical histories suggests a high probability that they will respond to suggestion B.’
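To make visible how little ‘understanding’ is involved, here is a deliberately simplified sketch in Python of the kind of frequency lookup described above. The feature names and counts are invented for illustration – only the 77% figure echoes the example quoted here – and no real clinical model is this crude; the sketch merely exposes the statistical principle.

    # Illustrative only: the linguistic features and counts below are
    # invented; the 77% figure echoes the example quoted in the text.
    TRAINING_STATS = {
        # feature: (cases with a depression diagnosis, all cases showing the feature)
        "frequent_first_person_pronouns": (77, 100),
        "absolutist_vocabulary": (64, 100),
    }

    def estimated_probability(feature):
        # The model's 'judgement' is nothing more than a relative
        # frequency observed in the learning data.
        with_diagnosis, total = TRAINING_STATS[feature]
        return with_diagnosis / total

    print(estimated_probability("frequent_first_person_pronouns"))  # 0.77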
However, mental illnesses manifest differently in each person. They are dynamic, change in appearance, and are marked by both progress and setbacks. Mere statistics do not do them justice. The empathetic understanding of a therapist is therefore needed to interpret AI findings and put them into perspective.
Using chatbots without the support of a professionally qualified person is a stopgap. The AI bot cannot replace the therapist – but it can help people who can't find a therapist or who are hesitant to seek treatment.
Incidentally, in a British study, ChatGPT-4 was tasked with assessing, on the basis of short texts, whether their authors were suicidal. Experienced psychotherapists were consulted on the same texts, and the AI came to the same conclusions as the humans. However, since an AI does not reveal how it arrives at its assessments, human observers cannot judge how reliable the model’s judgements actually are. There is much to suggest that, in the long term, AI in the treatment of mental illness will not go beyond the role of an assistant, merely supporting human decisions.
Text: Thorsten Kleinschmidt