
The Feasibility of Algorithmic Detection and Decentralised Moderation for Protecting Women from Online Abuse

CoRR (2023)

Abstract
Online abuse is becoming an increasingly prevalent issue in modern-day society, with 41 percent of Americans having experienced online harassment in some capacity in 2021. People who identify as women, in particular, can be subjected to a wide range of abusive behaviour online, with gender-specific experiences cited broadly in recent literature across fields such as blogging, politics, and journalism. In response to this rise in abusive content, platforms have been found to largely employ "individualistic moderation" approaches, aiming to protect users from harmful content through the screening and management of singular interactions or accounts. Yet, previous work by the author of this paper has shown that, for women in particular, these approaches can often be ineffective, failing to protect users from multidimensional abuse spanning prolonged time periods, different platforms, and varying interaction types. In recognition of this increasing complexity, platforms are beginning to outsource content moderation to users in a new, decentralised approach. The goal of this research is to examine the feasibility of using multidimensional abuse indicators in a Twitter-based moderation algorithm aiming to protect women from female-targeted online abuse. This research outlines three indicators of multidimensional abuse, explores how these indicators can be extracted as features from Twitter data, and proposes a technical framework for deploying an end-to-end moderation algorithm using these features.
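To make the abstract's core idea concrete, the sketch below shows one way multidimensional indicators might be derived as features from a stream of interactions aimed at a single target account. This is purely illustrative: the paper's actual indicators, feature definitions, and thresholds are not reproduced here, and all names and values below are assumptions.

```python
from datetime import datetime

def extract_abuse_features(interactions):
    """Summarise interactions aimed at one target account.

    Each interaction is a dict with 'timestamp' (ISO 8601 string),
    'author', and 'interaction_type' (e.g. 'reply', 'mention', 'quote').
    The returned features loosely mirror the multidimensional aspects
    named in the abstract: prolonged time periods, varying interaction
    types, and multiple participating accounts.
    """
    if not interactions:
        return {"duration_days": 0, "interaction_types": 0, "distinct_authors": 0}
    times = sorted(datetime.fromisoformat(i["timestamp"]) for i in interactions)
    return {
        "duration_days": (times[-1] - times[0]).days,
        "interaction_types": len({i["interaction_type"] for i in interactions}),
        "distinct_authors": len({i["author"] for i in interactions}),
    }

def flag_multidimensional(features, min_days=7, min_types=2, min_authors=3):
    # Illustrative thresholds only; a deployed system would learn or
    # tune these from labelled data rather than hard-code them.
    return (features["duration_days"] >= min_days
            and features["interaction_types"] >= min_types
            and features["distinct_authors"] >= min_authors)

# Example usage with hypothetical data:
interactions = [
    {"timestamp": "2022-01-01T10:00:00", "author": "a", "interaction_type": "reply"},
    {"timestamp": "2022-01-05T12:00:00", "author": "b", "interaction_type": "mention"},
    {"timestamp": "2022-01-10T09:00:00", "author": "c", "interaction_type": "reply"},
]
features = extract_abuse_features(interactions)
```

A decentralised deployment could then let users choose or tune such feature thresholds themselves, rather than relying on a single platform-wide filter.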
Key words
decentralised moderation