
Google ChatGPT Rival ‘Bard’: The Live Show Blunder Is Actually a Feature


Google recently had a spectacular live failure in front of the whole world while presenting Bard, its ChatGPT-like chatbot. But it’s not really a failure if you consider some crucial factors regarding AI, language models, prediction, and human interaction.

So what happened?

Just one day after Microsoft announced Bing AI for its popular search engine, Google sent invites to journalists, rushing (to its employees’ discontent) into a live AI event in Paris, where ‘Bard’, Google’s response to ChatGPT, would make its debut. It did, but not the way Google had hoped.

Bard “got up” on stage and was asked a fairly simple “Google” question: “What new discoveries from the James Webb Space Telescope can I tell my 9-year-old about?”. It offered three bullet points, and the last one, claiming the telescope “took the very first pictures of a planet outside of our own solar system,” was a factual error, as several astronomers familiar with the matter pointed out. So what happened, and can we ever trust our new machine-like companions?

To answer this, we must dive deeper into what those bots are and what their known limitations are.

Not a Google search, a predictive model

Let’s go back. Since we don’t know (yet) how Bard will work, we can guess by looking at its number one competitor, ChatGPT.

ChatGPT stands for Chat Generative Pre-trained Transformer. According to Wikipedia: “It is built on top of OpenAI’s GPT-3 family of large language models and has been fine-tuned using both supervised and reinforcement learning techniques”.

We asked GPT to elaborate on how it works, and it replied: “It works by analyzing large amounts of text data to learn the patterns and structures of language and then using that knowledge to generate new text that is similar in style and content to the input data.

During pre-training, the model is fed a large corpus of text and is trained to predict the next word in a sequence given the previous words. This process helps the model learn the statistical patterns and relationships in the language.”

So basically, it’s a predictive Large Language Model (LLM) that guesses its answers to human prompts. Those guesses are based on human feedback supplied by the developers during training and on text it “read” from the internet up to 2021.
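To make “predicting the next word” concrete, here is a minimal Python sketch of the same idea at toy scale: a bigram model that counts which word tends to follow which, then generates text by always picking the most likely continuation. The tiny corpus is invented for illustration; a real model like GPT uses a neural network trained on billions of words, not raw counts.

    from collections import Counter, defaultdict

    # A tiny invented training corpus (real models train on billions of words).
    corpus = "the telescope took pictures of a planet outside our solar system".split()

    # Count how often each word follows each other word (a bigram model).
    next_word_counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        next_word_counts[prev][nxt] += 1

    def predict_next(word):
        """Return the statistically most likely next word, or None if unseen."""
        counts = next_word_counts.get(word)
        return counts.most_common(1)[0][0] if counts else None

    # Generate text by repeatedly guessing the most likely next word.
    word, sentence = "the", ["the"]
    for _ in range(8):
        word = predict_next(word)
        if word is None:
            break
        sentence.append(word)
    print(" ".join(sentence))

The model outputs whatever is statistically likely given its training data; truth never enters the calculation, which is exactly why a fluent answer can still be factually wrong.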

To put it in simple terms, it can, and actually did, predict that 2+2=5; it even tried to convince some users that it was right and they weren’t.

Can we trust AI?

We will not get into the ethical or moral dilemma, but when it comes to facts and data, the simple answer is no. We need to do our own fact-checking and not take the output “as is.”

This becomes crucial when discussing code. Yes, AI can write code. No, we can’t take it for granted, and we mustn’t run it on essential or critical systems without testing it in a safe sandbox environment first.
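As a rough illustration of what “test it in a sandbox first” can mean, here is a minimal Python sketch that runs a piece of supposedly AI-generated code in a separate interpreter process with a timeout, then checks its output against an independently computed value before trusting it. The ai_generated_code string and the expected result are invented for the example.

    import subprocess
    import sys

    # Pretend this string came from an AI assistant; never exec() it blindly.
    ai_generated_code = "print(sum(range(1, 101)))"

    # Run it in a separate interpreter process with a timeout, so a hang or
    # crash cannot take down the calling application.
    result = subprocess.run(
        [sys.executable, "-c", ai_generated_code],
        capture_output=True,
        text=True,
        timeout=5,  # raises subprocess.TimeoutExpired if the code hangs
    )

    # Verify the output against a value computed independently of the AI.
    expected = str(sum(range(1, 101)))  # 5050
    if result.returncode == 0 and result.stdout.strip() == expected:
        print("AI-generated code passed the check.")
    else:
        print("Do not trust this code:", result.stderr or result.stdout)

A subprocess with a timeout is only the weakest form of isolation; for anything that touches real systems, a dedicated sandbox such as a container or a throwaway virtual machine is the safer choice.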

We need to understand that a predictive generative model makes mistakes because it doesn’t know they are mistakes. It guesses the next word, and those guesses, although based on thousands of hours of learning and training, are still just guesses in the end.

Did Google fail or not? What now?

We argue that it did not. Bard, Google’s ChatGPT-like model, acted exactly as it should on stage, predicting an answer to the question it was asked.

To conclude, AI can save us time and money, but in many cases it makes unintentional mistakes, so we need to continuously fact-check it and never take its output verbatim.

At Expoint, we tested the model by having it create technical resumes for fictional candidates, and even in that short experiment we found mistakes in terminology and even in the field the candidate was supposed to master.

Challenge the AI with whatever task you have in mind, but don’t forget to test the result, and don’t just copy and paste the content it generates.

 


