This month, internet giant Google gained positive press by launching a ‘depression test’ that appears to US users who search for ‘depression’ on a mobile device. Appearing as the first result beneath any sponsored content, this ‘Knowledge Graph’ asks users if they would like to “check if you’re clinically depressed”, administers a PHQ-9 screening questionnaire and then routes them to a result, advice and links to the National Suicide Prevention Lifeline and the National Alliance on Mental Illness.

"Being able to recognise people who may be experiencing depression is not a neutral tool." 

The Knowledge Graph reassures the user: “No individual data linking you to your answers will be used by Google without your consent. Some anonymised data may be used in aggregate to improve your experience.” In mental health, an area too often overlooked, we can be too quick to ignore the potential for damage in the seemingly beneficent actions of those in power. Data allows you to make models, and models shape decisions. If you successfully turn a thing into data, you can turn it into a commodity. These questionnaires, given with consent, may allow Google to confirm its hunches about who is and isn’t depressed.
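It is worth remembering how little machinery sits behind that ‘result’. The PHQ-9 is simple arithmetic: nine answers, each scored from 0 to 3, summed and banded against standard published severity cut-offs. The Python sketch below uses those standard cut-offs; the function name and structure are purely illustrative, not Google’s implementation.

```python
# Illustrative PHQ-9 scoring: nine items, each answered 0-3
# (0 = not at all, 1 = several days, 2 = more than half the days,
#  3 = nearly every day), summed into a 0-27 total.

def score_phq9(answers):
    """Return the PHQ-9 total and the standard severity band."""
    if len(answers) != 9 or any(a not in (0, 1, 2, 3) for a in answers):
        raise ValueError("PHQ-9 expects nine answers, each scored 0-3")
    total = sum(answers)
    if total <= 4:
        band = "minimal"
    elif total <= 9:
        band = "mild"
    elif total <= 14:
        band = "moderate"
    elif total <= 19:
        band = "moderately severe"
    else:
        band = "severe"
    return total, band

print(score_phq9([1, 2, 1, 0, 1, 2, 1, 0, 1]))  # (9, 'mild')
```

Nine taps of a thumb become a number, and a number is exactly the kind of thing a model can learn from.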

"Google would know you better than you know yourself, which in turn might allow others to know you better than you know yourself, too."

In October 2010, Eric Schmidt, then CEO of Google, told James Bennet, editor of US magazine The Atlantic: "We don't need you to type at all. We know where you are. We know where you've been. We can more or less know what you're thinking about." Soon, he implied, Google would know you better than you know yourself, which in turn might allow others to know you better than you know yourself, too.

Patterning

Decision-making systems like algorithms are becoming more advanced, predicting and shaping rather than just responding. To do this they must learn, and what they learn must be validated. Speaking to the New York Times in 2012, Google fellow Jeff Dean explained how Google X, Google's then research arm, had used a ‘neural network’ of 16,000 computers to look at thumbnails from millions of YouTube videos, and how the machines assembled a "dreamlike digital image of a cat." Said Dean: “We never told it during the training, ‘This is a cat.’ It basically invented the concept of a cat.” The machines had spotted a pattern, which the researchers validated by recognising it as a cat.

For machine learning to work it needs data, and lots of it. Google already knows how people use its searches and the patterns they form. People experiencing depression may well have an outline in the data as distinctive as our feline overlords. But how to validate that? Where to get that already-validated data? In the case of depression, the validation might come neatly from the thumbs of the individual, in the form of questionnaire responses. But that isn’t the only way.
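To make that mechanism concrete, here is a deliberately toy sketch of the general supervised-learning pattern being described: invented search-behaviour features as inputs, with questionnaire results as the labels that validate the model. Every feature, number and threshold below is hypothetical; it illustrates the technique in general, not anything Google has confirmed it does.

```python
# Toy illustration of supervised learning from behavioural data:
# made-up per-user search features, labelled by whether a self-reported
# questionnaire score crossed a screening threshold.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-user features: [late-night searches per week,
# mood-related queries per week, days active in the last month]
X = np.array([
    [12, 9, 28],
    [1, 0, 20],
    [8, 6, 30],
    [2, 1, 15],
])
# Hypothetical labels: 1 if the questionnaire score crossed a threshold.
y = np.array([1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# Once validated this way, the model can label users who never
# answered the questionnaire at all.
print(model.predict([[10, 7, 29]]))
```

The point of the sketch is the asymmetry: the questionnaire is answered once, with consent, but the model it validates can then be pointed at anyone whose behaviour leaves a similar outline.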

Safeguarding data in the UK

In 2015 Google’s artificial intelligence arm DeepMind, acquired by Google in 2014, cut a deal with the Royal Free NHS Trust for access to 1.6 million patient records in return for the provision of an application which would be trialled to look for acute kidney injury and alert doctors and clinicians. The Information Commissioner’s Office (ICO) ruled in July 2017 that the Royal Free had not done enough to safeguard patient data. Critics suggested the terms of the Royal Free agreement made it possible for Google to do whatever it wanted with what it had mined from the data.

Dr Julian Huppert, chair of the DeepMind Health Independent Review Panel, said: “People are concerned about the power of big technology firms.... It is not sustainable to offer the app for free, and I suspect DeepMind intends to make money but whether that is in the UK or not we don't know.” In the case of depression, Google already owns all of the knowledge derived from people’s search histories and will own their aggregated questionnaire responses, too. The question is: how will this be used, if at all?

The authors of a 2017 paper on the ethics of recommending digital technology to mental health patients make the point that who is collecting data makes a huge difference: “At first glance, the use of personal data for commercial profiling and medical monitoring purposes may look identical. But the motivation for using algorithms to define emotions or mental state for commercial organizations is to make money, not to help patients.”

Automated profiling

Algorithms increasingly run our lives in ways of which we are not aware: “The collected data based on tracking behaviors enable automated decision-making, such as consumer profiling, risk calculation, and measurement of emotion. These algorithms broadly impact our lives in education, insurance, employment, government services, criminal justice, information filtering, real-time online marketing, pricing, and credit offers.”

I contacted the Google UK press team to ask: ‘How will the anonymised data kept by Google be used to improve user experience?’ ‘Who has access to this anonymised data within Google?’ ‘Will the anonymised data be combined with other anonymised data that Google holds to better profile those experiencing depression?’ I did not receive a response before publication.

In his book The Psychopath Test, Jon Ronson speaks to a producer on daytime television programme The Jeremy Kyle Show, asking them how guests were chosen: "I'd ask them what medication they were on. They'd give me a list. Then I'd go to a medical website to see what they were for and I'd assess if they were too mad to come onto the show or just mad enough."

Being able to recognise people who may be experiencing depression is not a neutral tool. What results from it depends entirely upon the intent of those who use that knowledge. While it may sound a sour note amongst the celebrations and plaudits, we must ask of Google’s actions: why this, why now, and what comes next?

How does this make you feel? Join our Twitter chat at 12pm today, Wednesday 30 August, using #MHTchat.

Mark Brown is development director of social enterprise Social Spider and writes extensively on mental health and technology. @markoneinfour