President Donald Trump's permanent ban from Twitter in January 2021 marked a turning point, as platforms increasingly relied on artificial intelligence to enforce stricter moderation of violent content and misinformation.
Around the time of Trump's ban, social media platforms rapidly expanded their use of AI and automated systems. As user-generated content grew, many platforms turned to mass-scale technology capable of monitoring every user's posts.
Eventually, most platforms adopted hybrid moderation models that pair automated systems with human reviewers. Human moderators, however, rarely have the capacity to review all content, leaving most decisions to machines.
As noted by the Oversight Board, “Most content moderation decisions are now made by machines, not human beings, and this is only set to accelerate.”
Although AI and automated systems don't face the same limitations as human moderators, they bring challenges of their own. Unlike humans, AI struggles to weigh the full complexity and context of words, leading both to over-censorship and, at times, to problematic content slipping through.
USM Assistant Professor of Communication Studies Brent Hale said AI models inevitably absorb some biases from their training data.
He said OpenAI takes a "big kind of firehose approach," grabbing everything available to use for training.
“You inherently will end up with the biases that existed on the internet already,” he added. “There is no way to not do that because you feed it that content. That means that you also, [whether] you intended to do this or not, you feed it content that was created to be provocative, that was created to marginalize, that was created on…you know, let's say, white supremacy websites, like that sort of thing was absorbed into the model.”
Because AI absorbs content without understanding context, it often flags or censors words based on biased patterns in its training data. An article in the Journal of Research in Social Science and Humanities noted, “AI censorship disproportionately targets specific groups and ideological perspectives…”
LGBTQ+ student Angel Pickett shared her experience with AI censorship in discussions related to the LGBTQ+ community.
“We are still talking about it…it is still being said but there are more warnings or ‘This content might be offensive’…which just overall block when it comes to anything that has to do with the situation, or the community…” Angel said. “I can actually think of something this morning…it blocks immediately viewing for something about the community… I believed it was some sort of dance back from my hometown saying that it might be offensive and it was just literally a dance for the LGBTQ+ community…put on by the local fairground back home…how us getting together going to a dance can be offensive to anybody?”
Black student Jaythan Comegys shared his experience with AI censorship.
"I have noticed that I see a lot less…I have a Twitter account, well, I have X…I don't see a lot of things I would normally see,” Comegys said. “I am not sure if it is related but I know a lot of things I used to see in my feed get repressed."
Comegys said it can be harmful for AI to decide what to censor and what not to.
“I think it can be very harmful, especially taking away the voices on a platform from people that definitely don't have one,” he added. “I feel like it can really take away important issues that are going on…that people should know about…something like this can easily take away from everything that is going on…take away the important voices that a lot of people don't have access to."
However, not everyone's experience with AI censorship has been the same. Samantha Romeo, a student who is not part of a marginalized community, said she has never had an account restricted by AI.
"I have yet to have an account restricted because of AI but I feel like what was restricted depends on whether it was valid or not," she said. "I feel like social media has very strict guidelines and if you are not following them, it automatically is a reason to be removed and when you sign up, you sign the conditions to follow up."