Facebook is the first website that comes to mind when thinking about an online forum full of fake news and hatred. But it's hardly alone, and platforms from Twitter to Wikipedia are all debating the correct remedies for the spread of malice and misinformation. Companies are taking drastically different approaches to regulating their content, spurring a debate about the line between moderation, deplatforming, and all-out censorship of free speech. The internet fosters toxicity like few other places on earth – so much so that governments are beginning to draft legislation to help control the spread of fake news and limit online hatred. When it comes to moderation, everyone has an opinion, but there's precious little consensus on what to do.

There isn't much money in self-moderation. By cutting someone off for saying something factually incorrect or harmful, a company mostly just loses a customer, so the issue boils down to something almost impossible: asking a large company to put morality over profit.

It's no surprise that social media has its issues. Anyone over the age of 13 can create a Twitter or Facebook account and present themselves and their lives in any way they like, as (un)truthfully and (un)controversially as they wish, for the world to see and respond to. Prolonged use of social media has been found to result in greater feelings of sadness and loneliness, and a recent rise in anti-science sentiment has even slowed the COVID-19 pandemic response, with the impact on loss of life likely to be analyzed and debated for years to come.

The problems aren't limited to personal profile-led sites like Facebook and Twitter, either. Professional social network LinkedIn has a persistent problem with men trying to romantically solicit women, and online gaming platforms have been tackling aggression (from both foes and teammates) since the creation of the medium itself.

While governments can make strides to prevent hate speech or fake news, the sheer volume at which both can be produced is higher than almost anyone can imagine. It falls on the individual platforms to take responsibility for their products, and to impose swift, harsh penalties on anyone knowingly sharing false information or hate speech. However, that's where the line gets hazy. How harsh should the penalty be if someone shared something false without realizing it? Who decides whether something is hate speech or just a divisive opinion? The platforms themselves will be the ones deciding, and any decision that costs them customers is a decision they'll be reluctant to make.
How Does Facebook Handle Moderation?
Facebook has become synonymous with a laissez-faire, arguably lazy, side of moderation. Its sheer size gives it an equally massive content moderation problem: an absurd volume of fake news and hate posts can spread on its platform, and the problems extend to Messenger and WhatsApp, too. Facebook knows it has a problem, and an overhaul of its approach is now close to unavoidable. Discourse on Facebook has become so frequently toxic and violent that an estimated 70% of its most active civic groups can't be recommended to other Facebook users. Some of these groups were used to organize the Capitol Hill riots through Facebook, though Facebook COO Sheryl Sandberg said she believed the riots were mostly orchestrated elsewhere.
How Does Twitter Handle Moderation?
Ex-president Donald Trump has left behind quite a legacy, including elevating Twitter toxicity to unprecedented levels. The platform always had its fair share of vitriolic conversation, but as toxic politicized debate on Twitter was spurred on by his near-constant tweets, the mood of the website took a turn, and moderation became a hot topic. Twitter has usually taken a similar approach to Facebook's – tackling smaller cases of absolutely unacceptable racism or personal attacks, but doing little to address root causes. After four years of stirring the pot, Trump was finally banned, but only when he had days left of his presidency, and after Twitter had already milked huge levels of traffic from his prominence on the platform. The result? An immediate debate over where moderation ends and censorship begins, with polarized choruses of backlash and support for Twitter's move against Trump.

Twitter is also getting experimental with the moderation and reporting of misinformation, even devolving some of that responsibility to a select group of approved users with its new Birdwatch initiative. It's new ground, and though outsourcing moderation responsibility is certainly novel, it raises questions of how users are chosen for the initiative and how they themselves will be protected should they bear the brunt of accusations of censoring others.

What Happened to Parler?

Parler, which pitched itself as a free speech alternative with next to no moderation, quickly devolved into a hotbed of racism, antisemitism, and dangerous, anti-science conspiracy theories. And while the platform itself didn't interject in what its users were posting, the app stores that hosted Parler soon demanded some degree of moderation. First, companies like Apple and Google said that either Parler would control its users, or they would delist the app from their stores. After Parler took no steps to moderate its content, the app was quickly taken down. Then, after the riots at Capitol Hill, Parler was removed from the internet entirely when Amazon pulled its web hosting. The company itself is still hanging around, but who knows for how long? Its CEO was just fired by the board, and it's possible this is one step of many towards Parler's downfall.

How Does Wikipedia Handle Moderation?

One thing Wikipedia has done to stand apart from the moderation approaches of other platforms is to be quite deliberate in its efforts to embrace wider representation among its moderators. On top of its already militant regulation of content updates, Wikipedia has recently introduced a new initiative that encourages a wider array of perspectives and representation, both across its team and its site.

What Does Moderation Mean for Business Users?

For business users, social media marketing can be one of the most profitable channels for sales, brand growth, and connecting with your target market. But where there's social media, there's a risk of divisive or hostile comments, which can be unnerving on a professional account feed. Knowing that content moderation (or the lack of it) may affect or limit your small business's posts can be a scary prospect.

It can pay to use a social media management tool to organize your social media marketing campaigns and activity – these tools help businesses see new responses, reply to them smartly, and decide when a comment needs flagging. Rather than rushing out a campaign that contains false information, or drowning under a storm of messages after making a mistake or faux pas, you can draft messages and organize your viewers' engagement to keep everything under control.
Or as controlled as it can be, until giants like Facebook put their consciences above their capital.