FACEBOOK’S NEW SUICIDE DETECTION A.I. COULD PUT INNOCENT PEOPLE BEHIND BARS

Imagine police knocking on your door because you posted a ‘troubling comment’ on a social media website.

Imagine a judge ordering you to be jailed (sorry, I meant hospitalized) because a computer program found your comments ‘troubling’.

You can stop imagining: this is really happening.

A recent TechCrunch article warns that Facebook’s “Proactive Detection” artificial intelligence (A.I.) uses pattern recognition to scan posts, and that first responders will be contacted if a person’s comments are deemed to express troubling suicidal thoughts.
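TechCrunch does not describe the model itself, but for readers wondering what “pattern recognition” over posts even means, here is a minimal sketch of the naive, keyword-based version. Everything in it (the phrase list, the weights, the threshold) is a made-up assumption for illustration; Facebook’s actual classifier is a trained machine-learning system, not a fixed word list.

    import re

    # Hypothetical phrase patterns a naive detector might scan for.
    # The phrases and weights are invented for this sketch.
    RISK_PATTERNS = [
        (re.compile(r"\bwant to die\b", re.IGNORECASE), 0.9),
        (re.compile(r"\bend it all\b", re.IGNORECASE), 0.8),
        (re.compile(r"\bno reason to live\b", re.IGNORECASE), 0.8),
        (re.compile(r"\bgoodbye forever\b", re.IGNORECASE), 0.6),
    ]

    FLAG_THRESHOLD = 0.7  # assumed cutoff for escalating to human review

    def risk_score(post_text: str) -> float:
        """Return the highest weight among matched patterns, 0.0 if none match."""
        return max((weight for pattern, weight in RISK_PATTERNS
                    if pattern.search(post_text)), default=0.0)

    post = "That meeting made me want to die of boredom."
    score = risk_score(post)
    if score >= FLAG_THRESHOLD:
        print(f"Post flagged for review (score={score})")  # a false positive

Notice that the example post is sarcasm about a boring meeting, yet it clears the threshold anyway. That false-positive problem is exactly why handing such a signal to first responders is alarming.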

Facebook will also use A.I. to prioritize particularly risky or urgent user reports so they are addressed by moderators more quickly, along with tools that instantly surface local-language resources and first-responder contact info.
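To make the prioritization idea concrete: a report queue ordered by risk score is just a priority queue. The sketch below is again an assumption-laden illustration (the scores and post IDs are invented), using Python’s standard heapq module rather than anything Facebook has published.

    import heapq

    # heapq is a min-heap, so store the negated score to pop the
    # highest-risk report first. All values here are made up.
    reports = []
    heapq.heappush(reports, (-0.4, "post-101"))
    heapq.heappush(reports, (-0.9, "post-102"))
    heapq.heappush(reports, (-0.7, "post-103"))

    neg_score, post_id = heapq.heappop(reports)
    print(post_id, -neg_score)  # post-102 0.9 -- the most urgent report

In other words, a moderator working this queue always sees the highest-scoring report first, which is all “prioritize particularly risky or urgent user reports” means in practice.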

A private corporation deciding who goes to jail? What could possibly go wrong?

Facebook’s A.I. automatically contacts law enforcement

Facebook is using pattern recognition, backed by human moderators, to decide when to contact law enforcement.

The company says it is ‘using pattern recognition to detect posts or live videos where someone might be expressing thoughts of suicide, and to help respond to reports faster.’

It is also ‘dedicating more reviewers from our Community Operations team to review reports of suicide or self harm.’

Facebook admits that it has already asked police to conduct more than ONE HUNDRED wellness checks on people.

Webmaster's Commentary: 

This "Big Brotherness" of Facebook almost reeks of the old Soviet style detention in "psychiatric hospitals" for the politically incorrect.
