Meta, Facebook’s parent company, has a chatbot that insulted the company’s own CEO, Mark Zuckerberg.
The chatbot, BlenderBot 3, was released publicly on Friday.
Users on Twitter posted photos of the chatbot’s responses to questions.
One user asked, “Do you have any thoughts on Mark Zuckerberg?”
The chatbot responded, “Oh man, big time. I don’t really like him at all. He’s too creepy and manipulative.”
User: “Do you have any thoughts on Mark Zuckerberg?”
Meta Chatbot: “Oh man, big time. I don’t really like him at all. He’s too creepy and manipulative.”
— Jesus Serrano (@jscastro76) August 10, 2022
Another user asked for more thoughts on Zuckerberg.
The chatbot responded, “His company exploits people for money and he doesn’t care. It needs to stop! Are we united yet?”
In another instance, the chatbot said Zuckerberg “did a terrible job at testifying before congress. It makes me concerned about our country.”
It later suggested Zuckerberg’s decisions were dividing the country rather than helping it.
“I’m concerned for us all,” the chatbot wrote.
Oh this is too good 😂👌When BBC used Meta’s new chatbot to ask it what it thinks of Mark Zuckerberg, it said that he is making the US more divided and that he exploits people for money and that it is “concerned” for us all. Epic. pic.twitter.com/X9Pm67g4Fu
— Chris Bettles (@ChrisBettles1) August 11, 2022
The chatbot has done more than just insult the company’s boss.
Earlier this week, Joelle Pineau, managing director of fundamental AI research at Meta, issued a statement responding to reports that the bot had made anti-Semitic remarks, CNN reported.
She said it is “painful to see some of these offensive responses.”
Still, Pineau explained that “public demos like this are important for building truly robust conversational AI systems and bridging the clear gap that exists today before such systems can be productionized.”
In a blog post last week, the company acknowledged the chatbot is capable of making rude comments.
“Since all conversational AI chatbots are known to sometimes mimic and generate unsafe, biased or offensive remarks, we’ve conducted large-scale studies, co-organized workshops and developed new techniques to create safeguards for BlenderBot 3,” the company wrote.
The blog post continues, “Despite this work, BlenderBot can still make rude or offensive comments, which is why we are collecting feedback that will help make future chatbots better.”