Tuesday, December 5, 2017

Facebook using AI to scan posts for suicide risks

Facebook is using artificial intelligence (AI) to scan and analyse users' posts for signs of suicidal thoughts.
When the AI detects someone who could be in danger, the post is flagged and sent to human moderators.

The moderators respond by sending the user resources on mental health and, in more urgent cases, contacting first responders to try to find the individual.
Facebook has been testing the tool for months in the United States and may now extend it to other countries. However, the AI system will not be active in any EU country, since it would conflict with EU data protection laws.
The rationale for the project

Mark Zuckerberg, the CEO of Facebook, said in a Facebook post that he hoped the tool would remind people that AI is "helping save people's lives today". He said that during October alone the AI tool had helped flag cases to first responders more than a hundred times.

Zuckerberg said: "If we can use AI to help people be there for their family and friends, that's an important and positive step forward."
How the AI tool works
The AI program scans for key words and phrases, such as "Are you ok?" and "Can I help?", in comments. However, Facebook is not providing many details about how it decides which posts need to be flagged. Human moderators will do the work of assessing each flagged case and responding.
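Since Facebook has not published how its classifier works, the following is only a minimal sketch of the general idea of keyword-based flagging; the phrase list, the one-match rule, and the function names are assumptions made for illustration, not Facebook's actual system.

# Purely illustrative sketch of keyword-based flagging; not Facebook's system.
# The phrase list and the one-match threshold are assumptions for demonstration.
CONCERN_PHRASES = ["are you ok", "can i help"]

def flag_for_review(comments):
    """Return True if any comment contains a phrase associated with concern."""
    for comment in comments:
        text = comment.lower()
        if any(phrase in text for phrase in CONCERN_PHRASES):
            return True
    return False

# Example: worried replies from friends cause the post to be flagged
# and routed to a human moderator.
print(flag_for_review(["Are you ok?", "Call me tonight"]))  # True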
A recent study used AI to try to predict who would attempt suicide within the next two years, and did so with 80 to 90 percent accuracy. The study focused on people who had been admitted to hospital after self-harming; studies of individuals in the general population have yet to be published.
Facebook will also use AI to prioritize the riskiest or most urgent reports so that moderators address them more quickly. Facebook has 80 local partners, such as Save.org, the National Suicide Prevention Lifeline, and Forefront.
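Facebook has not described its triage mechanism either, but the idea of prioritizing reports by risk can be sketched with an ordinary priority queue; the risk scores and names below are hypothetical, chosen only to show the ordering.

# Illustrative sketch of prioritizing flagged reports by a model-assigned risk
# score, so that the most urgent case is reviewed first. Scores are negated
# because Python's heapq is a min-heap. All values here are hypothetical.
import heapq

class ReviewQueue:
    def __init__(self):
        self._heap = []
        self._counter = 0  # tie-breaker so equal scores keep insertion order

    def add_report(self, risk_score, report):
        heapq.heappush(self._heap, (-risk_score, self._counter, report))
        self._counter += 1

    def next_report(self):
        """Return the highest-risk report for a human moderator to review."""
        return heapq.heappop(self._heap)[2]

queue = ReviewQueue()
queue.add_report(0.35, "post A")
queue.add_report(0.92, "post B")  # most urgent, reviewed first
queue.add_report(0.60, "post C")
print(queue.next_report())  # post B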
Concerns about privacy
Some are worried about the privacy implications of the project. Your posts are monitored and the data analysed without your consent or approval. You are then sent messages of advice that you did not request, and you may even find first responders at your door.
Facebook previously worked with surveillance agencies such as the NSA.
Alex Stamos, Facebook's chief security officer, said that the "creepy/scary/malicious use of AI will be a risk forever" and that it was important to weigh "data use versus utility".
When Josh Constine, a writer at TechCrunch, asked Facebook how the company would prevent misuse of the AI system, he received no response. Perhaps it will come in time.
Facebook as social worker whether you like it or not
One would think that Facebook is a means of communicating with friends by text, videos, links, etc. We can expect some ads as a way of paying for this service, but even now Facebook uses what are called native advertising techniques to make an ad look like a post, since it appears along with the other posts. You can identify these ads by the words "promoted", "sponsored", or "suggested".
Now, however, Facebook is not just going to use data gleaned from your posts to tailor ads to what seem to be your needs and desires; it will also be checking on your mental health. If the AI and Facebook's moderators find that you have suicidal tendencies, they will send you unsolicited advice and perhaps even first responders to your door.
Facebook may expand its social welfare role. CEO Zuckerberg writes that "in the future, AI will be able to understand more of the subtle nuances of language, and will be able to identify different issues beyond suicide as well, including more kinds of bullying and hate".
Facebook will become a monitor to ensure that all those bad things such as suicide, bullying, and hate are tracked before they even take place. It all sounds as if Facebook should receive an award for social responsibility.
Questions and critical comments
You can't opt out of being spied upon to see if you have suicidal thoughts and need advice. In other words, your posts are going to be scanned and tagged by AI whether you like it or not. Even if you consider this an invasion of your privacy, you cannot stop Facebook from its actions. Your only choice is to leave Facebook.
The justification for not being able to opt out is very paternalistic. A Facebook spokesperson said that the feature is designed to enhance user safety and that the support resources Facebook offers can be quickly dismissed if a user does not want to see them.
It is not clear how the project enhances user safety. At most it may lead some people with suicidal thoughts to decide not to commit suicide. It has zilch to do with user safety in general.
Where does AI monitoring stop?
Zuckerberg suggests that the AI tool could be extended to tag bullying and hate speech. No doubt AI tools could also be developed to detect potential alcoholics and pedophiles.
The problem of biased algorithms

The AI uses algorithms to sort through posts and tag people with suicidal thoughts. However, algorithms are often biased, and the algorithm(s) used by Facebook need to be rigorously examined for bias; a rough sketch of one basic check appears at the end of this section.
A recent article in TechCrunch points out that biased algorithms are ubiquitous; recent examples include flawed systems for ranking teachers.
Many stakeholders, including the companies that develop and apply these AI systems, show little interest in limiting algorithmic bias. Perhaps the bias often works in their interests, or perhaps they do not realise it is even present.
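What would such an examination look like? One very basic check, sketched below, is to compare how often a flagging model marks posts from different groups of users; the groups, data and numbers here are hypothetical, and real bias audits are considerably more involved.

# Illustrative sketch of a simple disparity check: compare flag rates across
# (hypothetical) user groups. A large gap between groups would warrant scrutiny.
from collections import defaultdict

def flag_rate_by_group(records):
    """records: iterable of (group_label, was_flagged) pairs."""
    flagged = defaultdict(int)
    totals = defaultdict(int)
    for group, was_flagged in records:
        totals[group] += 1
        flagged[group] += int(was_flagged)
    return {group: flagged[group] / totals[group] for group in totals}

# Hypothetical audit sample: each pair is (group, whether the post was flagged).
sample = [("group_1", True), ("group_1", False), ("group_1", False),
          ("group_2", True), ("group_2", True), ("group_2", False)]
print(flag_rate_by_group(sample))  # {'group_1': 0.33..., 'group_2': 0.66...}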
Suicide
Although suicide is a crime in some countries, it is not in Canada or the U.S., nor in most other countries. Should a social media outlet attempt to prevent someone from doing something that is not a crime when they have not asked for help?
For people with incurable and very painful diseases, suicide may be quite a reasonable action, and it is not clear that there should be attempts to prevent it. In Canada assisted suicide is lawful in such cases, as it is in some US states. Does the AI distinguish these cases?
One might ask what training the human moderators in this project have. Are they paid? By whom?
One might also ask how one knows how many lives, if any, are saved by these actions. Those given advice might not have committed suicide anyway, and some may even have committed suicide after having it pointed out to them that they had suicidal thoughts. No real proof is given that this project actually saves lives. At most one could say that it may have saved some lives, at least for a while!
Previously published in Digital Journal
