Google Bard Employees Question Usefulness of AI Models

Why Google Employees Are Questioning Bard’s Helpfulness

A screenshot from a Discord server dedicated to Google’s artificial intelligence (AI) chatbot, Bard, suggests that some Google employees question the helpfulness of large language model (LLM) chatbots.

AI chatbots became mainstream in 2023 due to OpenAI’s ChatGPT and Google’s Bard. However, are AI chatbots, in their current state, really helpful?

Google Bard Experience Lead Questions Whether LLMs Are Making a Difference

According to Bloomberg, there is a Discord server for heavy users of Google Bard, including some employees of the search engine giant. On the server, members discuss the effectiveness and utility of the chatbot.

Bloomberg collected screenshots of conversations from two of the Discord server’s members between July and October. In August, the user experience lead for Google Bard, Cathy Pearl, wrote:

“The biggest challenge I’m still thinking of: what are LLMs truly useful for, in terms of helpfulness? Like really making a difference. TBD!”

Read more: Most Popular Machine Learning Models in 2023

Screenshot of Google employees’ Discord chat. Source: Bloomberg

Community members believe Bard cannot answer even basic questions. An X (Twitter) user wrote:

“Anyone who has used Bard would probably agree. It can’t even answer basic questions about how Google’s own apps work, eg Analytics, reCAPTCHA etc. Go test it.”

Google Product Manager Believes That Output Cannot Be Trusted

There were also discussions about the reliability of the data the chatbots generate. Dominik Rabiej, a product manager for Bard, said that LLMs are not yet at a stage where users can trust their outputs without independently verifying them. He said:

“My rule of thumb is not to trust LLM output unless I can independently verify it”

LLMs are AI models trained with a large set of data to generate human-like outputs. Chatbots such as ChatGPT and Bard use LLMs.

Because the output is not completely reliable, Rabiej suggests that Google Bard is best used for brainstorming rather than relied upon as a source of information. In fact, when a user first opens Bard, it displays a message stating, “Bard is an experiment.” The message further reads:

“Bard will not always get it right. Bard may give inaccurate or offensive responses. When in doubt, use the Google button to double-check Bard’s responses.”

Read more: ChatGPT vs. Google Bard: A Comparison of AI Chatbots

Google Bard’s message. Source: Official website



Disclaimer

In adherence to the Trust Project guidelines, BeInCrypto is committed to unbiased, transparent reporting. This news article aims to provide accurate, timely information. However, readers are advised to verify facts independently and consult with a professional before making any decisions based on this content.
