Imagine Rita Rich-Mulcahy’s surprise when Facebook flagged her for an offense she knew she didn’t commit.
This widowed 81-year-old knits to pass the time. And she says it nearly got her banned for “hate speech.”
“Facebook obviously use a bot to trawl around Facebook and I had made two comments, totally innocent, which the bot saw as hate speech,” Rich-Mulcahy, who lives in Adelaide, Australia, told the Shropshire Star last month. “It may seem a small thing to most people, but to someone who had never even had an overdue library book, being charged with using hate speech was frightening.”
Rich-Mulcahy, who describes herself as a “porcophile,” joined a knitting group on Facebook to help cope with the loss of her husband and decided to set a target of knitting 100 pigs.
The trouble started after she described her knitted stuffed piglets as “white pigs” in a Facebook comment.
The comment earned her the first of two warnings that she feared would eventually lead to her account’s removal.
“The second time was when I posted a picture and I said ‘hi-viz piggy,’” Rich-Mulcahy said.
“Now I have two strikes against me with no way to appeal. So the bot will watch everything I type now. It is ludicrous. If I ditch Facebook I would lose my great connection with my Shropshire friends.”
One more strike would lead to a permanent ban, Rich-Mulcahy said Facebook told her.
Her only crime was knitting fuzzy animals, yet she feared that any action she took on the social media giant would eventually alienate her from her community of knitting friends.
Rich-Mulcahy’s story is only one of many instances of Big Tech censorship.
Many users lament similar experiences, from false “hate speech” accusations to obsessive “fact-checking” of articles shared with friends.
Social media’s censorship efforts have reached new heights as bots and algorithms trawl innocent accounts and punish users for specific trigger words, even when those words appear in harmless contexts.
Using artificial intelligence to police language poses a big problem. Computers lack the discernment humans have to determine a user’s intent. In this case, human review proved to be the solution to a bot’s mistake.
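The failure mode is easy to see in miniature. The sketch below is a hypothetical toy filter, not Facebook’s actual system: a naive blocklist check flags any comment containing a listed phrase, with no awareness of context, so an innocent remark about knitted toys trips it just as abusive content would.

```python
# Toy illustration (hypothetical, NOT Facebook's real moderation system)
# of why context-blind keyword matching produces false positives.

FLAGGED_PHRASES = {"white pigs"}  # hypothetical blocklist entry


def naive_flag(comment: str) -> bool:
    """Flag a comment if any blocklisted phrase appears, ignoring context."""
    text = comment.lower()
    return any(phrase in text for phrase in FLAGGED_PHRASES)


# An innocent comment about knitted toys is flagged exactly the same
# way genuinely hateful usage of the phrase would be.
print(naive_flag("I finished three little white pigs today!"))  # True
print(naive_flag("My hi-viz piggy is done."))                   # False
```

A human reviewer resolves the ambiguity instantly; a substring match cannot, which is why appeals and human review matter in systems like this.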
If a term as simple as “white pigs” could warrant this response, what other harmless language could lead to baseless bans?
Better yet, who or what should possess the authority to determine what constitutes or goes beyond acceptable speech?
In Rich-Mulcahy’s case, the censorship proved erroneous. Facebook admitted and apologized for its mistake and restored her posts.
“Our systems made a mistake here and the comments have now been reinstated. We do sometimes make mistakes when reviewing content, which is why we give people the opportunity to appeal against our decisions,” Facebook told the Star.
Facebook admits ‘mistake’ in threat to ban 81-year-old knitter for hate speech https://t.co/PAzNyUZCpf pic.twitter.com/xFSDSUYRqC
— New York Post (@nypost) February 25, 2021
Other social media users may not fare as well when they run into these censorship efforts, however.
Of course, platforms will observe and report suspicious activity on their sites. That much is the virtual equivalent of the classic prohibition on “falsely shouting fire in a crowded theater”: free speech is allowed, with limits that demand accountability.
But what can users do when “fact-checkers” and social media administrators inject their own biases into judgments or use bots to track and censor users’ language? Will their appeals yield the same result as Rich-Mulcahy’s?
Society seems to be exploring that question today, in a political and social climate deeply polarized and reproachful of opposition.
Unless something changes, we may not like the answer we receive.
This article appeared originally on The Western Journal.