It sounds like a humorous made-up cautionary tale about the hypersensitivity surrounding race relations today — except that it’s completely true.
According to the Independent, chess champion Antonio Radic believes his chess channel was temporarily barred from YouTube because of a “glitch” in the algorithm that picked up the “black against white” chess terminology and flagged it as racist content.
Radic, a Croatian chess player who is known as Agadmator to his more than 1 million channel subscribers, was suspended June 28 for supposed “harmful and dangerous” content during a premiere viewing of an interview with grandmaster Hikaru Nakamura.
In a video recounting the shutdown of his channel, Radic explained that his electronic request for appeal after his suspension was almost instantaneously denied, leading him to believe “it was done by an algorithm that maybe heard some keywords that are not appropriate for the current situation in the world and decided to take down the video.”
He theorized that it might have been because of his explanation that “white will always be better” when discussing specific technical moves, and noted that among his more than 1,800 uploads, “it’s pretty much black against white to the death in every video.”
At the time of his suspension, several major cities were dealing with race riots sparked by the death of a black man, George Floyd, in Minneapolis police custody.
Although the Google video platform didn’t specify what caused the suspension and got the channel up and running again within 24 hours, computer scientists from Carnegie Mellon University tested Radic’s overzealous-algorithm theory — and the results were instructive.
YouTube uses both humans and AI to detect racially charged content, but issues can arise when a computer trained to root out racism doesn’t understand that words like “threat” and “attack,” along with “black” and “white,” can refer to chess pieces and game strategy.
“If they rely on artificial intelligence to detect racist language, this kind of accident can happen,” said Ashiqur KhudaBukhsh, a project scientist at CMU’s Language Technologies Institute.
He and his colleague Rupak Sarkar ran a simulation of their own using a similar speech classifier apparatus to test whether AI would be able to tell the difference between actual racist content and chess terms among 680,000 comments from five other YouTube chess channels.
What they found was that 82 percent of the 1,000 comments flagged by the AI classifiers were incorrectly pulled because of commonly associated chess vocabulary.
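The failure mode the researchers identified is easy to reproduce in miniature. The sketch below is not the CMU team’s classifier or YouTube’s system; it is a hypothetical, purely keyword-based flagger (the word list and function names are invented for illustration) that shows how a filter with no sense of context will pull innocuous chess commentary:

```python
# A naive, hypothetical moderation filter: flag any comment containing a
# watch-listed word, with no awareness of context. Real systems are far
# more sophisticated, but the same context blindness can surface.
FLAG_WORDS = {"black", "white", "attack", "threat", "threatens"}

def naive_flag(comment: str) -> bool:
    """Return True if the comment contains any watch-listed word."""
    words = {w.strip(".,!?'\"").lower() for w in comment.split()}
    return bool(words & FLAG_WORDS)

chess_comments = [
    "White has a winning attack on the kingside",
    "Black threatens mate in two",
    "Nice endgame technique!",
]

for comment in chess_comments:
    print(naive_flag(comment), comment)
# The first two comments — ordinary chess talk — are flagged;
# only the third passes.
```

Because the filter matches words rather than meaning, standard chess vocabulary trips it constantly, which is exactly the pattern behind the 82 percent false-positive rate the researchers measured.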
Their findings were published in a paper presented at the Association for the Advancement of Artificial Intelligence annual conference last month, where the pair won the Best Student Abstract Three-Minute Presentation award.
While the resolution for the chess channel was ultimately amicable and mostly harmless, what happened underscores two very important shifts in American society and provides a warning about where they intersect.
First, the ugly fact is that race relations today are worse than they’ve been in decades.
In just 10 short months since Floyd’s death, the race-baiters in America have bombarded non-white people with the idea that every slight against them, every disadvantage they face and every failure is the result of a white supremacist conspiracy against them.
This has been advanced through the media and organizations like Black Lives Matter and codified into corporate America through anti-white re-education programs.
The unfortunate results are just as destructive to the supposed victims as they are to the people perceived as the oppressors — resentment, distrust and anger abound.
Second, Big Tech has a stranglehold on everything from the dissemination of information and content right down to the internet infrastructure itself.
They have the power to persuade through targeted content, to silence everyone from former President Donald Trump to a private nobody, and they can even kill the competition to silence dissent for good.
It’s a dangerous combination: these tech companies hold the power to do whatever they please while also adopting a radical racial agenda that increasingly pits one group against another.
The frightening part is that these problems with AI are similar to the problem with the humans who push that agenda — it’s almost as if they function like a bot that simply spits out “white supremacy” as the answer to every societal ill.
A game like chess can be explained as white versus black, but human society is so much richer and more complex than simply skin color and race — or at least it should be.
This article appeared originally on The Western Journal.