Meta’s New AI-Powered Chatbot Isn’t ‘Crazy’ About Facebook
Sitting in front of a computer screen, I’m typing messages to a new chatbot created by Facebook parent company Meta.
We talk about pizza, politics and even social media.
“What do you think about Facebook?” I ask.
“Not crazy about facebook.. Seems like everyone spends more time on facebook than they do talking face-to-face anymore,” the bot replies.
Oh, the irony.
Called BlenderBot 3, the artificial intelligence-powered bot is designed to improve its conversational skills and safety by conversing with humans. Meta is publicly releasing the chatbot on Friday as part of an AI research project. US adults can converse with Meta’s new chatbot about nearly any topic on its public website. The AI uses internet searches, as well as memories of its conversations, to compose its messages.
BlenderBot shares its thoughts about Facebook.
Screenshot by Queenie Wong
Chatbots are software that can mimic human conversations using text or audio. They’re often used in virtual assistants or for customer service. As people spend more time using chatbots, companies are trying to improve their skills so that conversations flow more smoothly.
Meta’s research project is part of broader efforts to advance AI, a field that grapples with concerns about bias, privacy and safety. Experiments with chatbots have gone awry in the past, so the demo could be risky for Meta. In 2016, Microsoft shuttered its Tay chatbot after it started tweeting lewd and racist remarks. In July, Google fired an engineer who claimed an AI chatbot the company had been testing was a sentient person.
In a blog post about the new chatbot, Meta said that researchers have typically used data collected through studies where people engage with bots in a controlled environment. That data set, though, doesn’t reflect diversity worldwide, so researchers are asking the public for help.
“The AI field is still far from truly intelligent AI systems that can understand, engage and chat with us like other humans can,” the blog post said. “In order to build models that are more adaptable to real-world environments, chatbots need to learn from a diverse, wide-ranging perspective with people ‘in the wild.'”
Meta said the third version of BlenderBot includes skills from its predecessors such as internet search, long-term memory, personality and empathy. The company collected public data that included more than 20,000 human-bot conversations, improving the variety of topics BlenderBot can discuss, such as healthy food recipes and finding child-friendly amenities.
Meta acknowledged that safety is still a problem, but researchers have found the chatbot becomes safer the more it learns from conversing with humans.
“A live demo is not without challenges, however,” the blog post said. “It is difficult for a bot to keep everyone engaged while talking about arbitrary topics and to ensure that it never uses offensive or toxic language.”
People who chat with the chatbot can provide feedback about an offensive message by clicking the “thumbs down” icon beside the message and selecting “Rude or Inappropriate” as the reason for disliking it. There are also other options for feedback, such as flagging that the message was off-topic, nonsensical or spam-like.
Participants are discouraged from providing the chatbot with any personal information, such as names, addresses and birthdays. If a user wants to chat with the bot without having the conversation shared for research, or if participants accidentally include personal information in their chat, they can decide not to opt in to storing the data at the end of the session. Meta said it will then permanently delete the conversational data.
The bot can also make false or contradictory statements, according to an FAQ about the experiment. The FAQ about the demo said that the bot’s comments are “not representative of Meta’s views as a company, and should not be relied on for factual information, including but not limited to medical, legal, or financial advice.”
After the release of the bot, multiple news outlets pointed out that it bashed Meta CEO Mark Zuckerberg, spewed election conspiracies and made antisemitic remarks.
On Monday, Meta’s managing director of Fundamental AI Research, Joelle Pineau, said in a statement that it’s “painful” to see the bot spew “offensive responses,” but that public demos “are important for building truly robust conversational AI systems and bridging the clear gap that exists today before such systems can be productionized.”
Meta said it has already collected 70,000 conversations from the public demo, which the company will use to improve the bot. About 25% of participants provided feedback on 260,000 bot messages. Roughly 0.11% of the bot’s responses were flagged as inappropriate, 1.36% as nonsensical, and 1% as off-topic.
It’s clear from trying out BlenderBot that it’s still a work in progress. After being asked about its favorite pizza spot in San Francisco, the bot redirects the conversation and asks, “What do you like to do with your free time?” The demo said the AI changes topics when it detects a “potentially unsafe topic.”
The bot isn’t shy about sharing its political opinions, after stating that using social media was “causing family drama.” The bot said that someone shared an article bashing former US President Donald Trump, “which brought arguments” between family members who support the politician and those who don’t.
The bot, which said its parents are related to the founder of American pizza chain Papa John’s, also stated that it was a Republican and was “pro choice.” It added that it would rather not talk about politics online because of the disagreements between the two parties. The bot also seemed to think it was in a different time period, before Biden became the US president.
BlenderBot then said it was considering deleting its Facebook account because there were too many trolls. It also started to make nonsensical statements.
“Maybe we should go back to talking about pizza?” I say in a message to BlenderBot.
“Probably better lol im starting2 regret deleting my fb account ;p ill miss connecting w/friends,” the bot replies.