Google’s AI chatbot Bard makes factual error in first demo


We’ve all been there — you ask a question and the answer you get back is completely wrong. But what happens when that wrong answer comes from a powerful AI chatbot? On Monday, Google announced its AI chatbot Bard — a rival to OpenAI’s ChatGPT — which is due to become “more widely available to the public in the coming weeks.” Unfortunately, Bard’s very first demo was marred by a factual error, prompting experts to question the accuracy of AI chatbots.

In this blog post, we’ll explore the implications of Bard’s stumble and the potential pitfalls of relying on AI chatbots for accurate information. We’ll also look at what Google and other companies are doing to keep their chatbots accurate. If you’re curious about how these systems get things wrong and what their mistakes mean for users, read on!

What Went Wrong With Bard?

Google shared a GIF of Bard answering the question: “What new discoveries from the James Webb Space Telescope can I tell my 9 year old about?” Bard offered three bullet points in return, including one claiming that the telescope “took the very first pictures of a planet outside of our own solar system.” However, a number of astronomers on Twitter pointed out that this is incorrect: the first image of an exoplanet was taken in 2004, as noted on NASA’s website.

The mistake was quickly picked up by experts, including astrophysicist Grant Tremblay, who tweeted: “Not to be a ~well, actually~ jerk, and I’m sure Bard will be impressive, but for the record: JWST did not take ‘the very first image of a planet outside our solar system.’” Bruce Macintosh, director of the University of California Observatories at UC Santa Cruz, also pointed out the error. “Speaking as someone who imaged an exoplanet 14 years before JWST was launched, it feels like you should find a better example?” he tweeted.

The Problem With AI Chatbots

The mistake made by Bard highlights a major problem for AI chatbots like ChatGPT and Bard: their tendency to confidently state incorrect information as fact. These systems frequently “hallucinate” — that is, make up information — because they are essentially autocomplete systems. Rather than querying a database of proven facts to answer questions, they are trained on huge corpora of text and learn statistical patterns that predict which word is likely to come next in any given sentence. In other words, they are probabilistic, not deterministic — a trait that has led one prominent AI professor to label them “bullshit generators.”
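To make that idea concrete, here is a minimal sketch of next-word prediction: a toy bigram model built from a three-sentence sample corpus. It has nothing like the scale or architecture of Bard or ChatGPT (which use large neural networks, not word counts), and the corpus and function names are invented for illustration only. What it shows is the underlying principle: the system generates whatever continuation is statistically likely given its training text, not what a database of facts says is true.

```python
import random
from collections import defaultdict, Counter

# Toy training corpus. A real chatbot is trained on billions of words;
# this text exists only to illustrate the principle.
corpus = (
    "the telescope took pictures of a distant planet . "
    "the telescope took the first pictures of a galaxy . "
    "the first pictures of a planet were taken in 2004 ."
).split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start, length=8):
    """Sample a continuation word by word, weighted by observed frequency.

    The model has no notion of truth; it only knows which words tended to
    follow which in its training text, so a fluent but factually wrong
    continuation is entirely possible.
    """
    words = [start]
    for _ in range(length):
        counts = following.get(words[-1])
        if not counts:
            break
        nxt = random.choices(list(counts), weights=list(counts.values()))[0]
        words.append(nxt)
    return " ".join(words)

print(generate("the"))
# e.g. "the telescope took the first pictures of a planet"
# fluent and statistically plausible, but not checked against any facts
```

The point of the sketch is simply that fluency and accuracy are separate properties: the output sounds equally confident whether or not it happens to be true, which is exactly the failure mode Bard displayed.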

What Are Companies Doing to Ensure Accuracy?

The internet is already full of false and misleading information, but the issue is compounded by Microsoft and Google’s desire to use these tools as search engines. There, the chatbots’ answers take on the authority of a would-be all-knowing machine. Microsoft, which demoed its new AI-powered Bing search engine yesterday, has tried to preempt these issues by placing liability on the user. “Bing is powered by AI, so surprises and mistakes are possible,” says the company’s disclaimer. “Make sure to check the facts, and share feedback so we can learn and improve!”

A spokesperson for Google, Jane Park, gave The Verge this statement: “This highlights the importance of a rigorous testing process, something that we’re kicking off this week with our Trusted Tester program. We’ll combine external feedback with our own internal testing to make sure Bard’s responses meet a high bar for quality, safety and groundedness in real-world information.”

Conclusion

Google’s AI chatbot Bard has the potential to revolutionize the way we access information. But, as its first demo showed, mistakes can and will be made. Google and Microsoft say they are testing and refining their chatbots, but it’s up to us, the users, to stay vigilant and double-check the facts. Only then can we be confident that the answers these chatbots give us are accurate.
