The social media giant announced on Wednesday that it will significantly improve the suicide prevention features on its website in hopes of building a safer and more supportive community. While Facebook has had suicide prevention tools available for more than a decade, this update puts a burgeoning new technology to work for something good. That’s right, artificial intelligence will play a part in keeping people safe on Facebook. For those of you already crying Skynet, don’t worry. The new features are noninvasive and will likely be genuinely helpful in supporting people who feel that suicide is the only option. They will not auto-report problematic behavior without your knowledge. Rather, the feature will use pattern recognition to notice concerning language in posts and comments, such as friends repeatedly asking “are you okay?”, and will then make the option to report self-injury or suicide risk more prominent the next time the user logs in.
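Facebook hasn’t published the details of its detection system, but the general idea can be illustrated with a short sketch. The minimal Python example below flags a post for easier reporting once enough concerned-sounding comments accumulate; the phrases, function name, and threshold here are hypothetical illustrations, not Facebook’s actual model.

```python
import re

# Illustrative patterns for concerned-sounding comments.
# (Hypothetical examples; Facebook's real system is far more sophisticated.)
CONCERN_PATTERNS = [
    re.compile(r"\bare you ok(ay)?\b", re.IGNORECASE),
    re.compile(r"\bi'?m worried about you\b", re.IGNORECASE),
    re.compile(r"\bplease talk to (me|someone)\b", re.IGNORECASE),
]

def should_surface_reporting(comments: list[str], threshold: int = 2) -> bool:
    """Return True once enough comments match a concern pattern,
    signaling that reporting options should be made more prominent."""
    hits = sum(
        1 for comment in comments
        if any(pattern.search(comment) for pattern in CONCERN_PATTERNS)
    )
    return hits >= threshold

# Example: two worried comments on a post trip the flag.
comments = ["are you okay??", "call me", "I'm worried about you"]
print(should_surface_reporting(comments))  # True
```

In this toy version, nothing is reported automatically; crossing the threshold merely changes what the user sees, which matches the noninvasive behavior described above.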

While these features are a noble attempt at quelling a serious issue on social media, Facebook’s commitment to suicide prevention isn’t coming out of the blue. After footage of the live-streamed suicide of a 12-year-old girl spread seemingly unstoppably across a wide range of social media sites, Facebook decided that something needed to be done about the growing problem. And according to a number of notable health professionals, this might do the trick. The negative aspects of social media are all too familiar at this point in the game. Whether it’s the offensive trolls of Twitter, the lewd photographers of Snapchat, or the cyberbullies of Facebook, logging on has become an exercise in navigating the darker side of social interaction. Fortunately, as long as these social media companies stay committed to solving the problems they create, the benefits of social media will continue to outweigh the drawbacks.
