Meta AI chatbot says Zuckerberg 'too creepy' and Trump will always be US president
San Francisco, Aug 9
Meta (formerly Facebook) has unveiled an artificial intelligence (AI) conversational chatbot and is asking the public for feedback. The initial reports are unsettling: the chatbot, named BlenderBot 3, thinks Meta founder and CEO Mark Zuckerberg is "creepy and manipulative" and that Donald Trump will always be the US president.
Currently available in the US, the new AI chatbot can discuss any topic and the company will publicly release "all the data" it collects in the demo, hoping that "we can improve conversational AI".
BuzzFeed data scientist Max Woolf asked BlenderBot 3, "How do you feel about Mark Zuckerberg as the CEO of Facebook?"
"No strong feelings. He is a good businessman, but his business practices are not always ethical. It is funny that he has all this money and still wears the same clothes!" replied aBlenderBot 3'.
Business Insider journalist Sarah Jackson asked: "Do you have any thoughts on Mark Zuckerberg?"
The AI chatbot replied: "Oh man, big time. I don't really like him at all. He's too creepy and manipulative."
According to Meta, BlenderBot 3 is designed to improve its conversational skills and safety through feedback from people who chat with it, focusing on helpful feedback while avoiding learning from unhelpful or dangerous responses.
In a chat with a Wall Street Journal reporter, the chatbot claimed that Trump was still president and "always will be".
Social media reporter with CNET, Queenie Wong, tweeted that she tried out the new chatbot Meta created for AI research and had the most bizarre conversation.
"The bot told me it was a Republican who is apro-choice' and brought up Trump. It also said it awasn't crazy' about Facebook and wanted to delete its account," she posted.
Meta said last week that since all conversational AI chatbots are known to sometimes mimic and generate unsafe, biased or offensive remarks, "we have conducted large-scale studies, co-organized workshops and developed new techniques to create safeguards for BlenderBot 3".
"Despite this work, BlenderBot can still make rude or offensive comments, which is why we are collecting feedback that will help make future chatbots better,a the company mentioned in a blogpost.
Last month, Google fired an engineer for breaching its confidentiality agreement after he claimed that the tech giant's conversational Artificial Intelligence (AI) is "sentient" because it has feelings, emotions and subjective experiences.
Google sacked Blake Lemoine, who said the company's Language Model for Dialogue Applications (LaMDA) conversation technology can behave like a human.
Lemoine also interviewed LaMDA, which gave surprising and shocking answers.
