
Social Media, Cyberbullying, and the Need for Content Moderation

A sad and scared young boy with a laptop, suffering cyberbullying and online harassment.

The growth in internet users and social media platforms, coupled with the spread of the mobile internet, has led to a surge in the creation and consumption of User Generated Content (UGC). Social media platforms have become a major channel for disseminating, circulating, and exchanging information among billions of people online. But this vast scale of information sharing has also fueled online harassment, creating a pressing need for online content moderation.

For those unfamiliar with the term, online content moderation is the practice of managing and monitoring user-generated content in order to guide how users behave online. Moderation practices must evolve as content dynamics and user behavior on social sharing platforms change. Restricting what users can share protects others from cyberbullying, and it also allows social sharing sites to review and modify content whenever necessary.

Rising online abuse has given way to cyberbullying in many forms: exclusion, harassment, cyberstalking, impersonation, trolling (intentionally provoking a negative response), and catfishing (using fake profiles to deceive others). In the absence of appropriate regulations for online content sharing, users exploit this freedom. Alongside educating users about what is appropriate to share on social media sites, platforms need to partner with content moderation service providers so that abusive content is addressed before it takes a psychological toll on the targeted users.

Cyberbullying Soared with the Rise of Social Media

From Google’s Orkut to Facebook, Instagram, Snapchat, and Twitter, social media platforms have captivated the world since their release. Smartphones acted as a catalyst by giving users easy access to these platforms. Users now spend more time on their smartphones, and much of that time goes into social sharing.


Cyberbullying is repeated behavior intended to intimidate, enrage, or defame the targeted users. Examples include:

  • Circulating fabricated claims about incidents or users, or posting embarrassing photos or videos of someone on social media platforms;
  • Sending malicious, offensive, or intimidating messages, graphics, images, or videos via social platforms;
  • Mocking someone with the intent to humiliate them, or sending mean messages to others on their behalf or through fake accounts.

Social media platforms have become a battleground for likes, comments, and shares. Short video formats then gained ground in the digital landscape, giving users even more room to share content online, including abusive content, which further fueled cyberbullying.

Social Media Platforms Are Intolerant of Bullying

With government scrutiny in place, no social media platform tolerates bullying or similar abuse and insults. As soon as an offensive post appears in a site’s news feed, the platform takes immediate action to remove it, warn the user, or, in extreme cases, block the user from the site.

Both social sharing sites and users themselves can now report incidents of bullying using report links, typically found in the ‘Help’ section of social networking sites. The platform then examines the reported content and informs the reporter of the outcome, based on the nature of the post.

The rising incidence of bullying on social media platforms underscores the need for text moderation, which can help social sites, governments, and users keep the online environment safe from social abuse. Governments and social media platforms have now developed community standards that enforce a zero-tolerance policy toward any content that provokes or promotes bullying in any form.

Content Moderation: Why It’s Needed

Social media operators maintain policies that prohibit users from posting certain content, such as graphic violence, child sexual exploitation, and hateful speech. An operator may temporarily or permanently ban users who violate its policies, depending on how severe it judges the violations to be.

There is no uniform standard for user-generated content moderation, so practices vary across social media sites. Some operators have chosen to release reports containing information on their content moderation practices, such as the amount of content removed and the number of appeals, but they are not required to do so.

Social media operators rely on several mechanisms to:

  1. Flag or remove user content that appears abusive or offensive;
  2. Block users from accessing the site;
  3. Involve government authorities to take action against offending users in extreme cases of bullying.

Social networking sites can flag inappropriate, unethical, or unlawful posts for content moderators to review and remove when applicable. Automated systems can also flag and remove posts. Content and comment moderators, who are principally contractors, can identify nuanced violations of content sharing policy, such as violations that depend on the context of a statement.
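To make this flag-then-review workflow concrete, here is a minimal Python sketch. It is only an illustration under assumed names and thresholds: the abuse_score placeholder, the blocklist, and the cutoff values are hypothetical, and a real platform would use a trained toxicity model and a far richer review process.

```python
# Minimal sketch: automated scoring with a human review queue for borderline cases.
# All thresholds, labels, and the scoring function are hypothetical assumptions.
from dataclasses import dataclass, field
from typing import List

REMOVE_THRESHOLD = 0.95   # scores above this are removed automatically (assumed value)
REVIEW_THRESHOLD = 0.60   # scores above this go to a human moderator (assumed value)

@dataclass
class Post:
    post_id: str
    author_id: str
    text: str

@dataclass
class ModerationQueue:
    pending_review: List[Post] = field(default_factory=list)

def abuse_score(post: Post) -> float:
    """Placeholder scorer: a real system would call a trained toxicity model.
    Here we only count hypothetical blocklisted words to keep the sketch runnable."""
    blocklist = {"idiot", "loser"}  # illustrative only
    words = post.text.lower().split()
    hits = sum(1 for word in words if word in blocklist)
    return min(1.0, hits / 3)

def moderate(post: Post, queue: ModerationQueue) -> str:
    """Route a post: auto-remove, send to human review, or allow."""
    score = abuse_score(post)
    if score >= REMOVE_THRESHOLD:
        return "removed"                      # clear-cut violation, handled automatically
    if score >= REVIEW_THRESHOLD:
        queue.pending_review.append(post)
        return "queued_for_human_review"      # nuanced case, needs human context
    return "allowed"

if __name__ == "__main__":
    queue = ModerationQueue()
    print(moderate(Post("p1", "u1", "You are such a loser and an idiot"), queue))
    print(moderate(Post("p2", "u2", "Great article, thanks for sharing!"), queue))
```

The key design point is the middle band of scores: clear-cut violations can be handled automatically, while borderline, context-dependent posts are routed to a human moderator rather than removed outright.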

The Need for New Reforms in Online Content Moderation

Before integrating AI, Facebook could detect only 16% of the user content that would provoke bullying and harassment. With AI, by contrast, Facebook blocked 99% of violent, graphic, and child-nudity content in the first quarter of 2020 before any user reported it. But what about the rest of the offensive content shared on social sites that still goes unrecognized even with AI in place?

There remains a vast scope for human content moderators, who are more likely to detect abusive content with greater precision, regardless of the type or nature of the content users share.

We have identified eight actions that can make a big difference in tackling bullying on social media platforms:

  1. Focus on the process, not the content: look at how content is amplified or restricted;
  2. Ensure that real people, not just algorithms or automated systems, review content;
  3. Ensure that content-based rules and restrictions are grounded in clear, narrowly tailored laws that are proportionate and non-discriminatory;
  4. Transparency is critical: both states and social media platforms should have transparent policies about how they correct, curate, or moderate the content users post on social networking sites;
  5. States need to monitor how content is shared on social networking sites and how the platforms use and share users’ information across platforms and with other users;
  6. Ensure that users have legal avenues to appeal decisions they believe are unfair, and that remedies are available when actions by social media platforms or states put their rights in question;
  7. Independent courts should have the final say over the lawfulness of content circulating on social sharing sites;
  8. Participation is essential: make sure civil society and governing bodies are involved in designing and evaluating content sharing regulations.

Final Thought 

The role of intermediaries today extends beyond simply distributing content and facilitating interactions. These service providers now have near-total control over users’ online experience, as well as over content and comment moderation. Although platforms qualify for some of the same liability exemptions as traditional intermediaries, they have distinct characteristics, so there is much debate over whether they should be regulated more strictly.

As part of automated notice-and-takedown procedures, platforms must remove banned content, and proactive, automated measures are recommended for discovering it. Automating content moderation decisions reduces enormous workloads, so algorithmic decision-making would appear to be the most efficient way to ensure consistent enforcement.
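As a rough illustration of how such automated takedown decisions might be recorded, appealed, and summarized for the kind of transparency report mentioned earlier, here is a small Python sketch. Every class, field, and method name is a hypothetical assumption, not any platform’s actual system.

```python
# Minimal sketch of an automated notice-and-takedown log with an appeal path.
# The record fields and appeal flow are assumptions for illustration only.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Dict, Optional

@dataclass
class TakedownRecord:
    post_id: str
    reason: str                            # e.g. "graphic_violence", "bullying"
    decided_by: str                        # "automated" or a moderator id
    decided_at: datetime
    appeal_outcome: Optional[str] = None   # "upheld", "reinstated", or None while pending

class TakedownLog:
    """Keeps takedown decisions so they can be audited, appealed, and reported on."""

    def __init__(self) -> None:
        self.records: Dict[str, TakedownRecord] = {}

    def take_down(self, post_id: str, reason: str, decided_by: str = "automated") -> None:
        self.records[post_id] = TakedownRecord(
            post_id, reason, decided_by, datetime.now(timezone.utc)
        )

    def appeal(self, post_id: str, reviewer_upholds: bool) -> str:
        """A human reviewer resolves the appeal; content is reinstated if the takedown is overturned."""
        record = self.records[post_id]
        record.appeal_outcome = "upheld" if reviewer_upholds else "reinstated"
        return record.appeal_outcome

    def transparency_summary(self) -> Dict[str, int]:
        """Counts of the kind a transparency report might publish."""
        appealed = sum(1 for r in self.records.values() if r.appeal_outcome is not None)
        reinstated = sum(1 for r in self.records.values() if r.appeal_outcome == "reinstated")
        return {"removed": len(self.records), "appealed": appealed, "reinstated": reinstated}

if __name__ == "__main__":
    log = TakedownLog()
    log.take_down("p1", "bullying")
    log.appeal("p1", reviewer_upholds=False)
    print(log.transparency_summary())   # {'removed': 1, 'appealed': 1, 'reinstated': 1}
```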