Meta just released a new chatbot to the public, only to find that it may be gaining some sense of self-awareness.
Just weeks after a former Google engineer claimed that the search giant’s AI was sentient, Facebook parent company Meta is warning that its new BlenderBot 3 chatbot could be gaining some form of conscious awareness. Meta’s chatbot was developed for research purposes, and the company has identified instances where the bot has overtly lied and treated users rudely.
Meta’s BlenderBot 3 is part of the company’s stated vision to “build artificial intelligence (AI) systems that can interact with people in smarter, safer, and more useful ways.” To accomplish this, Meta released BlenderBot 3 to converse with the public. The chatbot can search the internet’s vast trove of information, giving it the ability to discuss virtually any topic. Meta hoped that through these conversations the chatbot would grow steadily more adept at natural interaction.
Moreover, Meta built the bot’s architecture so that its machine learning would draw only on conversations with “helpful teachers,” not on bad actors trying to corrupt the AI’s conversational abilities. Yet since BlenderBot 3 was released to the public, it has learned to both lie and be rude. These are behaviors the initial architecture was designed to guard against, but they are occurring nonetheless, prompting the company to warn the public.
Whether Meta’s warnings indicate that its chatbot has actually developed sentience remains inconclusive. Either way, the social media giant is working to rectify the issue, and it believes the best way to do so is by continuing to leverage the broader research community. “This is all the more reason to bring the research community in. Without direct access to these models, researchers are limited in their ability to design detection and mitigation strategies,” said the company. To that end, Meta made the bot’s architecture accessible to the research community so that its combined expertise and knowledge can be used to enhance and improve the chatbot’s AI.
Meta has also been transparent about what the bot is currently capable of and is cautioning people to be cognizant of those limitations. Meta says that the chatbot can “…misremember details of the current conversation, and even forget that they are a bot.” Because of this, the tech conglomerate emphasized that “Users should not rely on this bot for factual information, including but not limited to medical, legal, or financial advice.”
Ultimately, it doesn’t look like the world has to worry about Skynet taking over. At least not yet. That said, what is evident is that technology continues to advance at a pace more rapid than many humans can fathom. And that in itself paves the way for a future where sentient robots could emerge right under the noses of the entire global populace.