Claudia Martinez is a Research Manager at Reform, an independent public services think tank. She recently authored a paper on the potential of data-driven technology in mental healthcare.
Recent years have seen increased interest in the use of artificial intelligence and other data-driven technologies to improve healthcare. In mental health care, the potential is significant. Technology promises to bridge gaps in access to services, improve the quality of care, and deliver personalised treatments. However, these promises will go unfulfilled if such technologies fail to take account of what patients need.
A few months ago, I sat with a group of clinicians and frontline practitioners to discuss the findings of Reform's recent research on data-driven mental health. There was a sentiment that technology is already transforming the face of mental health services, evidenced by the rise of mental health apps and growing numbers of people accessing online therapies. For the most part, technology was seen as a positive force that will help improve future service provision and make life easier for patients and clinicians. However, a significant risk is that too much focus is placed on the technology itself, while patient engagement is left as an afterthought. Poorly designed or "unfit for purpose" technology could add to clinicians' workloads and prevent patients from accessing high-quality care.
Data-driven technologies offer an opportunity to redesign services in ways that are truly person-centred. For instance, the stigma attached to mental illness deters people from accessing care. Men, young people, minority ethnic groups and people in the military are disproportionately affected. Technology could help improve access to services for these groups. Research shows that many patients feel more comfortable disclosing personal details to an automated agent or “chatbot” than to a clinician.
In the UK, trials of SUbot, a digitally enabled therapy bot developed to address male suicide, showed that participants were five times more likely to say what was on their mind when talking to an unmonitored machine. The NHS has started trialling its own mental health chatbot to help GPs and mental health professionals triage patients more effectively. The bot seeks to reduce the time patients spend filling out paper-based questionnaires at their GP, and the platform allows depression and anxiety self-assessments to be administered remotely. This not only enhances people's experience of care services but could also help drive up engagement among those less likely to seek help. The bot incorporates a library of language associated with mental health crises, such as suicidal ideation. Although still rudimentary, bots employing natural language processing could flag warning signs and triggers, enabling timely intervention.
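The crisis-language library described above can be sketched in miniature. The phrase list and simple substring matching below are illustrative assumptions only, not the NHS bot's actual implementation, which would rely on far richer natural language processing:

```python
# Minimal sketch of flagging crisis language in a chat message.
# The phrase list here is a small illustrative sample; a production
# system would use a clinically validated lexicon and NLP models.

CRISIS_PHRASES = [
    "want to die",
    "kill myself",
    "no reason to live",
    "end it all",
]

def flag_crisis_language(message: str) -> bool:
    """Return True if the message contains any known crisis phrase."""
    text = message.lower()
    return any(phrase in text for phrase in CRISIS_PHRASES)

print(flag_crisis_language("Some days I feel there is no reason to live."))  # True
print(flag_crisis_language("I had a rough week at work."))                   # False
```

Even a crude filter like this shows why such flags must feed into human review: substring matching cannot distinguish context, which is precisely where natural language processing would add value.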
Another recent and exciting development is “digital phenotyping”. This involves harnessing data gathered from people's personal devices - such as smartphones and wearables - to learn about their behaviours and overall state of health. Research is ongoing to understand how new sources of data, including the number of hours someone spends looking at their phone, their geographical information, and the way they “click and tap”, can be turned into valuable clinical information. In the US, small-scale clinical trials of the Beiwe app with patients with schizophrenia show that “digital markers” could help recognise those at higher risk of relapse and enable intervention before their symptoms worsen.
For instance, motor disorders such as altered or decreased movement are known markers for schizophrenia, and key to understanding the progression of the disease. Patients might also experience involuntary tremors or muscle twitching as a side effect of some antipsychotic medications. Apps like Beiwe use GPS and accelerometer data to recognise the way a person walks or holds their phone and identify any abnormal movements. Moreover, data regarding someone’s call history and text messaging activity can serve as a proxy measure for social engagement. The app does not listen to the content of a call or text message but anonymously collects highly granular data such as the time a call was placed, its length, or the number of characters in a text message. Researchers at Boston University are taking this one step further by developing an app that uses the smartphone’s microphone to record random snippets of people’s conversations and to monitor ambient noise. Indicators such as tone of voice or the length of a call could help pinpoint the cues associated with social isolation, or evaluate which therapies and interventions are most effective in encouraging participation.
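The idea of turning anonymised call metadata into a proxy for social engagement can be sketched as follows. The record format, field names, and the choice of mean daily call length as the metric are assumptions for illustration, not Beiwe's actual schema or analysis:

```python
# Illustrative sketch: deriving a crude "social engagement" signal from
# anonymised call metadata (time placed, duration), in the spirit of
# digital phenotyping. No call content is touched, only metadata.

from dataclasses import dataclass
from statistics import mean

@dataclass
class CallRecord:
    day: str         # calendar day the call was placed, e.g. "2024-03-01"
    duration_s: int  # call length in seconds

def daily_engagement(records: list[CallRecord]) -> dict[str, float]:
    """Average call length per day, one crude proxy for social contact."""
    by_day: dict[str, list[int]] = {}
    for record in records:
        by_day.setdefault(record.day, []).append(record.duration_s)
    return {day: mean(durations) for day, durations in by_day.items()}

calls = [
    CallRecord("2024-03-01", 120),
    CallRecord("2024-03-01", 60),
    CallRecord("2024-03-02", 30),
]
print(daily_engagement(calls))
```

A sustained drop in such a metric is the kind of “digital marker” researchers hope could prompt earlier intervention, though any real deployment would combine many signals and require clinical validation.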
Despite their potential, there are still unanswered questions regarding the impact these systems could have on people’s privacy and the security of their personal data.
Clinicians and mental health practitioners have also voiced concerns that, for some patients, digital technologies could end up doing more harm than good. For people with schizophrenia, for example, “phenotyping” could create the feeling of being constantly watched and exacerbate symptoms of anxiety and paranoia. Technology developers must therefore go further in engaging patients and clinicians in the design and deployment of these tools, and conduct thorough risk assessments before introducing them. Importantly, this must be accompanied by an understanding that digital technologies will not work for everyone. We already know that some tools will only be appropriate for people with early-stage psychosis rather than those experiencing severe symptoms. Many people are also not digitally savvy, or may choose not to adopt these innovations, making it imperative that we continue developing alternative ways for them to access services.
The future of data-driven mental healthcare is an exciting one. If implemented ethically and responsibly, chatbot technologies and “digital phenotyping” could greatly improve our understanding of the causes of mental ill health, giving patients the ability to better manage their health. However, this will remain rhetoric rather than reality if we fail to put patients at the centre.