Meta’s New AI-Powered Chatbot Isn’t ‘Crazy’ About Facebook
Sitting in front of a computer screen, I’m typing messages to a new chatbot created by Facebook parent company Meta.
We talk about pizza, politics and even social media.
“What do you think about Facebook?” I ask.
“Not crazy about facebook.. Seems like everyone spends more time on facebook than they do talking face-to-face anymore,” the bot replies.
Oh, the irony.
Called BlenderBot 3, the artificial intelligence-powered bot is designed to improve its conversational skills and safety by conversing with humans. Meta is publicly releasing the chatbot on Friday as part of an AI research project. US adults can converse with Meta’s new chatbot about mostly any topic on this public website. The AI uses searches of the internet, as well as memories of its conversations, to compose its messages.
BlenderBot shares its thoughts about Facebook.
Screenshot by Queenie Wong
Chatbots are software that can mimic human conversations using text or audio. They are often used in voice assistants or for customer service. As people spend more time with chatbots, companies are trying to improve their skills so that conversations flow more smoothly.
Meta’s research project is part of broader efforts to advance AI, a field that grapples with concerns about bias, privacy and safety. Experiments with chatbots have gone awry in the past, so the demo could be risky for Meta. In 2016, Microsoft shuttered its Tay chatbot after it started tweeting lewd and racist remarks. In July, Google fired an engineer who claimed an AI chatbot the company had been testing was a self-aware person.
In a blog post about the new chatbot, Meta said that researchers have typically used data collected through studies in which people engage with bots in a controlled environment. That data set, though, doesn’t reflect diversity worldwide, so researchers are asking the public for help.
“The AI field is still far from truly intelligent AI systems that can understand, engage and chat with us like other humans can,” the blog post said. “In order to build models that are more adaptable to real-world environments, chatbots need to learn from a diverse, wide-ranging perspective with people ‘in the wild.'”
Meta said the third version of BlenderBot includes skills from its predecessors such as internet search, long-term memory, personality and empathy. The company collected public data that included more than 20,000 human-bot conversations, improving the variety of topics BlenderBot can discuss, such as healthy food recipes and finding child-friendly amenities.
Meta acknowledged that safety is still a problem, but researchers have found the chatbot becomes safer the more it learns from conversing with humans.
“A live demo is not without challenges, however,” the blog post said. “It is difficult for a bot to keep everyone engaged while talking about arbitrary topics and to ensure that it never uses offensive or toxic language.”
People who chat with the chatbot can provide feedback about an offensive message by clicking the “thumbs down” icon beside the message and selecting “Rude or Inappropriate” as the reason for disliking it. There are also other options when people provide feedback, such as that the message was off-topic, nonsensical or spam-like.
Participants are discouraged from providing the chatbot with any personal information, such as names, addresses and birthdays. If a user wants to chat with the bot without having the conversation shared for research, or if participants accidentally included personal information in their chat, they can decide not to opt in to storing the data at the end of the session. Meta said it will then permanently delete the conversational data.
The bot can also make false or contradictory statements, according to an FAQ about the experiment. The FAQ about the demo said that the bot’s comments are “not representative of Meta’s views as a company, and should not be relied on for factual information, including but not limited to medical, legal, or financial advice.”
After the release of the bot, multiple news outlets pointed out that the bot bashed Meta CEO Mark Zuckerberg, spewed election conspiracies and made antisemitic remarks.
On Monday, Meta’s managing director of Fundamental AI Research, Joelle Pineau, said in a statement that it’s “painful” to see the bot spew “offensive responses,” but that public demos “are important for building truly robust conversational AI systems and bridging the clear gap that exists today before such systems can be productionized.”
Meta said it’s already collected 70,000 conversations from the public demo, which the company will use to improve the bot. About 25% of participants provided feedback on 260,000 bot messages. Roughly 0.11% of the bot’s responses were flagged as inappropriate, 1.36% as nonsensical, and 1% as off-topic.
It’s clear from trying out BlenderBot that it’s still a work in progress. After being asked about its favorite pizza spot in San Francisco, the bot redirects the conversation and asks, “What do you like to do with your free time?” The demo said the AI changes topics when it detects a “potentially unsafe topic.”
The bot isn’t shy about sharing its political opinions after stating that using social media was “causing family drama.” The bot said that someone shared an article bashing former US President Donald Trump, “which caused arguments” between family members who support the politician and those who don’t.
The bot, which said its parents are related to the founder of American pizza chain restaurant Papa John’s, also stated that it was a Republican and was “pro choice.” It added that it would rather not talk about politics online because of the disagreements between the two parties. The bot also seemed to think it was in a different time period, before Biden became the US president.
BlenderBot then said it was considering deleting its Facebook account because there were too many trolls. It also started to make nonsensical statements.
“Maybe we should go back to talking about pizza?,” I say in a message to BlenderBot.
“Probably better lol im starting2 regret deleting my fb account ;p ill miss connecting w/friends,” the bot replies.