*The mental health clinician knocks on the dorm room door for the 4th set of 3 loud knocks. No response. A Boston College police officer opens the door, accompanied by the dorm resident assistant. The two professionals and the RA enter to find a young male college student who looks disheveled; he admits he has not eaten in 2 days and reluctantly agrees to talk.
“How did you know?” asks the student. The clinician explains that students’ social media posts are monitored for mental health patterns and, if a pattern is urgent enough, an emergency intervention is initiated. “What do you mean? I don’t understand,” says the student.
“You posted on Instagram, public information on the net. The post was scraped off the net by the University Counseling bot and run through pattern recognition and data mining algorithms, which concluded that your situation was very likely a depressive episode with a significant risk of suicide. The bot sent an alert to the counselor on call: me. I called, emailed, and texted you, and you did not respond. For your safety, we came to help you.”
The counselor learns the student has been contemplating suicide and implements a care plan: the student will receive daily physical and mental care. The student pulls out of the nosedive toward suicide, learns cognitive behavioral techniques, and will have regular follow-up visits with a counselor.
This student had no history of mental health issues; his parents divorced 1 year ago, and 3 days ago he broke up with his girlfriend of 2 years. It is the week before final exams. The combination of factors pushed him over the edge.
*Theoretical, technically possible, fictional story of a Boston College University Counseling Psychological Emergency Clinician coming to the aid of a depressed college student.
- Depression is the leading cause of disability in the US for people aged 15-44
- Suicide is the 2nd leading cause of death for ages 15-44
- 7.4% of adults aged 18 to 25 have serious thoughts of suicide, the highest rate of any adult age group
- 42,773 deaths by suicide in the US in 2014
- 80% of US patients treated for depression show improvement within 4-6 weeks
- 66% of people with depression do not seek or receive proper treatment
Analytics: data mining and pattern recognition
Data mining is the analysis of data for relationships that have not yet been discovered. For example, data mining may discover that 19-year-old women from New Jersey who have recently traveled out of the country show a particularly high incidence of mild depression on the Sunday after Christmas. Pattern recognition also produces correlational relationships in data. The difference: data mining reports which outputs follow from which inputs without explaining the relationship or pattern; pattern recognition predicts the outputs from the inputs and also indicates how the outputs are related to the inputs.
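The distinction can be sketched in code. In this toy Python example, every detail (the record fields, the values, and the weights) is invented for illustration: the “data mining” pass scans attribute combinations for subgroups with elevated incidence without any prior hypothesis, while the “pattern recognition” model states explicitly how each input contributes to the output.

```python
from itertools import combinations

# Hypothetical toy records; the fields and values are invented for illustration.
records = [
    {"age_19": True,  "traveled": True,  "depressed": True},
    {"age_19": True,  "traveled": True,  "depressed": True},
    {"age_19": True,  "traveled": False, "depressed": False},
    {"age_19": False, "traveled": True,  "depressed": False},
    {"age_19": False, "traveled": False, "depressed": False},
    {"age_19": False, "traveled": False, "depressed": True},
]

def rate(rows):
    """Incidence of depression within a set of records."""
    return sum(r["depressed"] for r in rows) / len(rows) if rows else 0.0

baseline = rate(records)  # overall incidence across all records

# "Data mining": exhaustively scan attribute combinations for subgroups whose
# incidence exceeds the baseline. This surfaces the association but says
# nothing about why the inputs relate to the output.
features = ["age_19", "traveled"]
findings = []
for n in range(1, len(features) + 1):
    for combo in combinations(features, n):
        subgroup = [r for r in records if all(r[f] for f in combo)]
        if subgroup and rate(subgroup) > baseline:
            findings.append((combo, rate(subgroup)))

# "Pattern recognition": an explicit model (here, a hand-set linear score)
# that states how each input contributes to the output. The weights are
# assumed for this sketch; a real system would learn them from data.
weights = {"age_19": 0.3, "traveled": 0.4}

def risk_score(row):
    return sum(w for f, w in weights.items() if row[f])
```

Here `findings` names the correlated subgroups without explaining them, while `weights` makes the input-output relationship explicit.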
Researchers at the Penn Social Media & Health Innovation Lab are learning how to analyze conversations and posts on social media, with the goal of identifying trends, making predictions, and lending insight into using social media for health surveillance, prevention, and management. Director Raina Merchant says, “If someone has a lot of posts that may suggest that they’re depressed, they may not be as overt as ‘feeling sad,’ or ‘blue,’ or ‘unhappy,’ but there may be other words … that suggest depression, that aren’t as obvious.”
Traditional research, or newer research using crowd-sourced dictionaries such as those created by researchers using Amazon’s Mechanical Turk (a site that pays people to recognize or rate online images, text, etc.), can be used to attribute emotions to words on a weighted scale (Schwartz and Ungar 2015).
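A minimal sketch of that idea in Python, assuming a tiny hand-made lexicon; the words and weights below are invented, and real crowd-sourced dictionaries are far larger and empirically rated:

```python
import re

# A tiny hand-made sadness lexicon; the words and weights are assumptions
# for this sketch, not from any published dictionary.
sadness_lexicon = {
    "alone": 0.6, "tired": 0.4, "pointless": 0.9, "sad": 0.8, "empty": 0.7,
}

def sadness_score(post):
    """Average sadness weight per word, normalized by post length."""
    words = re.findall(r"[a-z']+", post.lower())
    hits = [sadness_lexicon[w] for w in words if w in sadness_lexicon]
    return sum(hits) / len(words) if words else 0.0
```

A post like “so tired and alone, everything feels pointless” scores well above neutral text even though it never uses an obvious word like “depressed” — the kind of non-obvious signal Merchant describes.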
Asch, Rader, and Merchant (2015) use the term “social mediome” to compare a person’s social media behavior to their genome. To understand a person’s health, these researchers look at multiple cues in social media:
- explicit language, such as “end it all” or “awesome”
- photographic cues: emotions displayed, other people, drugs, alcohol, weapons
- frequency of posts
- change in frequency or timing of posts
- tone of posts
- increase or decrease of followers or friends
- change to social status
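The cues above could be reduced to a feature vector along these lines. This is only a sketch: the post structure, the phrase list, and the particular features are my assumptions for illustration, not the researchers’ actual method.

```python
from datetime import datetime

# Assumed phrase list for the explicit-language cue; a real system would use
# a much richer, clinically validated lexicon.
risk_phrases = ["end it all", "no way out"]

def extract_features(posts, follower_counts):
    """Reduce a user's recent activity to a small feature vector.

    posts: list of {"text": str, "time": datetime}
    follower_counts: follower totals over time, oldest first
    """
    texts = [p["text"].lower() for p in posts]
    times = sorted(p["time"] for p in posts)
    # length of the observed window, for the posting-frequency cue
    span_days = max((times[-1] - times[0]).days, 1) if times else 1
    # late-night posting as a crude timing-of-posts signal
    night = sum(1 for t in times if t.hour >= 23 or t.hour < 5)
    return {
        "risk_phrase_hits": sum(t.count(p) for t in texts for p in risk_phrases),
        "posts_per_day": len(posts) / span_days,
        "night_post_share": night / len(posts) if posts else 0.0,
        "follower_delta": follower_counts[-1] - follower_counts[0],
    }
```

Photographic cues (emotions, drugs, weapons in images) would need separate computer-vision models and are omitted here.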
The technology to perform the data mining and pattern recognition is not new. However, in order for the results to be useful, the algorithms must “learn” by validating the social media patterns against the electronic medical health records. This is currently being done and will take significantly more time.
But I don’t want to be analyzed!
Driving on I-95 North into Maine under the E-ZPass scanners without slowing down, I wonder who has access to the piece of (when, where, who, driving what) data I just created. When I post this blog, is some government bot scraping it off the web and analyzing it for the likelihood of terrorist intentions, insider trading, or drug trafficking? Is Procter & Gamble doing the same to predict the likelihood I may buy Pampers diapers? (Yes to the government monitoring; no to P&G, unless they have a way to market to the poster, which in this case they don’t.) The worst-case scenario is when sophisticated criminals or enemies use analytics against us.
When we post publicly, we are signing an implicit agreement to be analyzed. The analysis is going to get very sophisticated, linking you in every possible way to the data you leave behind, like Hansel and Gretel’s bread crumbs, just by living your average American life. We know this, but to avoid discomfort, we block it from the front of our consciousness. We would be safer and live better if we faced this reality and advocated for systems and laws that optimize both the safeguards and the benefits for our citizens.
Even without using social media actively, a person (unless they are a privacy-seeking technophobe) is likely leaving behind data. We have passed the point of having the privacy of a generation ago, and it is very unlikely that we will ever get it back. The benefits AND risks are larger than we have experienced before.
Steps to improve healthcare by using Social Media data and analytics (Asch, Rader, & Merchant 2015):
- Enhance our ability to collect and interpret information from social media sources
- Link the information available from social media sources with the validated clinical data that typically reside in electronic health records or insurance claims databases
- Turn associations and observations into interventions that prevent and cure
- Mining of social media data needs to occur in a way that is transparent to patients and consistent with their preferences for privacy
We are well on our way on step 1 and beginning to make progress on step 2. Since the laws are always years behind the technology, we must be vigilant about our own choices, especially the networks we link to and the access we allow, while advocating for the benefits of the new technology where appropriate. We cannot assume the system will do the right thing for our well-being; the system is still trying to learn what, in fact, the right thing is. Would I want a scenario like the Boston College mental health clinician alerted to a student in need who otherwise would not get timely help? Yes.