Facebook is rolling out new artificial intelligence (AI) software that can detect warning signs from users who might attempt suicide. The site has previously used AI software to scan for inappropriate content that violates its community standards.
In this case, the AI software scans the posts of most of Facebook's more than 2 billion users, looking for comments and phrases such as "Are you okay?" and "Can I help?" to flag posts before they are reported.
In May, Facebook CEO Mark Zuckerberg hired 3,000 new employees for its "community operations" team, which monitors videos reported for inappropriate content. Zuckerberg posted to his Facebook page: "we're also building better tools to keep our community safe. We're going to make it simpler to report problems to us, faster for our reviewers to determine which posts violate our standards, and easier for them to contact law enforcement if someone needs help."
Facebook's latest use of AI serves as a first pass to determine which posts should be reviewed, giving certain posts priority. This became a priority for the company after a series of suicides were live-streamed on Facebook.
With the new AI software implemented, Facebook hopes to detect these posts early and connect the potentially troubled users with a friend, hotline, or law enforcement official.
Facebook has been testing this program in the United States and will expand it to many of the other countries where it operates, provided doing so does not violate local Internet privacy laws.