Facebook Suicide Prevention Initiative

On February 25th, Facebook’s safety division announced an extension of their suicide prevention initiative. They describe the initiative as being based on work with suicide prevention organisations, clinical research, and the lived experience of people who have struggled with their mental health.

From what I’ve seen so far, parts of this initiative seem beneficial and genuinely useful for helping people through a bad night or a self-destructive impulse. However, there are still some concerning areas, and there has already been at least one example of just how badly this initiative can go wrong.

THE BENEFITS

Firstly, the good parts. For a potentially suicidal person, the idea of pointing out that they seem upset or distressed from what they have posted is a good one: it might be what makes them realise they are having difficulties beyond the ordinary strains of life, and might encourage them to look at the help on offer.

For the person who flags a status, having the option to send an anonymous “someone thinks you might be in trouble” message reduces one of the barriers people often face in talking about mental health issues. It starts the conversation in a low-risk way, without requiring a face-to-face question that many people simply don’t know how to broach.

Facebook’s post showed some pictures of the support options. The support page offers the following message:

“You’re not alone. We do this for many people every month.”

This is nicely worded, as it should make people feel less like they’re being singled out or marked as deviant, and more like it’s a general service for all kinds of difficulties. This should hopefully reduce their anxiety about responding to the prompt and make them less likely to ignore it.

When a status is flagged as potentially suicidal, there are a few options to choose from. The person in need has the option to contact a friend or a helpline (currently the National Suicide Prevention Lifeline, as the service is US-only for now), or to read some advice and tips. None of the posts showed the content of this advice, so its usefulness remains to be seen.

There are also options for the person who reported a status. They can choose to directly call or message the person in need, or contact a suicide helpline volunteer themselves.

If the service stopped here, and only went further once the status of the person in need was confirmed, I would consider it a sensitive yet useful initiative. But Facebook’s additional options may be a cause for concern, and something that puts people off using the service.

THE PROBLEMS

Here’s where the problems start. From Facebook’s announcement:

“If someone on Facebook sees a direct threat of suicide, we ask that they contact their local emergency services immediately. We also ask them to report any troubling content to us. We have teams working around the world, 24/7, who review any report that comes in. They prioritize the most serious reports, like self-injury, and send help and resources to those in distress.”

Reporting troubling content to Facebook means it’s outsourced to a moderator. The moderator reads the flagged status, decides how serious or risky the content is, then routes it to the appropriate category for Facebook’s response.

Someone’s intensely personal information being read by a stranger is an invasion of their privacy. But their status being analysed for something as serious as suicide risk by someone who has no idea what that person is normally like, and no context to put their status in? That’s going to be inaccurate. People are potentially going to fall through the cracks simply because of how they express themselves.

The opposite could also happen. In fact, it happened straight away. After the program was announced, a man tested the initiative by posting a fake suicidal status. He informed his friends it was fake. Yet local police were called, and his access to Facebook was blocked. When the police contacted him, he was handcuffed and placed on a 72-hour mental health hold. Despite his (and his wife’s) full explanation of the experiment, he was held for the full length of time and subjected to unnecessary medical procedures.

The connection between the suicide prevention feature and Facebook themselves calling local police to “check up” on the person is what turns this service from caring to invasive. In a perfect world, involving the police would be a benefit, an extra layer of security.

But in practice, it’s not.

People experiencing mental health issues who aren’t diagnosed or haven’t used any services will usually have kept their experiences hidden for fear of drawing attention to them: calling the police to their house won’t help with that. Some will feel unable to ask for professional help for fear of losing their autonomy and what control they do have over the situation: bringing the police in won’t help with that either.

Some people diagnosed with mental health issues will already have had very negative experiences with medical staff or police: having a mental health condition often (unfortunately) means drawing the short straw in terms of compassionate medical or legal treatment.

In the UK, people as young as 14 have been left in adult prisons overnight because the NHS hasn’t been able to find a bed in a mental health ward for them. In the US, the stakes are even higher, as the majority of people killed by police have a mental illness.

Until there is better police training for handling situations involving mentally ill and/or suicidal people, they simply aren’t the best or most appropriate form of help. It would make sense to involve the police only if all other options have failed, or if there is a definite reason for police support, such as the person in need showing homicidal intentions or a strong indication that they are about to commit a crime.

Facebook could make this service far more trustworthy by prioritising friends and helplines. Helplines are designed for exactly this purpose and staffed by people trained for these situations. They are also more personal, less anxiety-inducing, and allow the person in need to retain greater autonomy. Having a friendlier first experience of contacting help means people will be more likely to follow up, and more likely to turn to those services again in the future.

For now, it’s a good idea, executed too invasively. It risks compromising its own goals by jumping to the highest level of emergency straight away, making itself less trustworthy (and therefore less usable) in doing so.
