Risk too high

Google does not plan an alternative to ChatGPT

ChatGPT has given public awareness of AI language models a decent boost. Google, however, is not planning an alternative to OpenAI's software. For a search engine in particular, an AI model that spits out untruths can have serious consequences.

OpenAI's AI chatbot ChatGPT is currently delighting users with its human-like communication. The AI often expresses itself eloquently and appears to have absorbed a great deal of data and facts into its language model, which it can reproduce on request.

An obvious use case would therefore be a search engine that answers via chat instead of returning links. This scenario has likely been under discussion at Google for some time.

However, in a company meeting, Alphabet CEO Sundar Pichai and Google's head of AI, Jeff Dean, said that the technology is not yet ready, CNBC reports. In its current state, they explained, it still carries many risks that could, among other things, damage the company's reputation.

Google has a reputation to lose

In the meeting, an employee asked whether Google had missed an opportunity, given that OpenAI had already released its AI chatbot and made it freely available to Internet users.

The CEO responded that Google's language models are just as powerful as OpenAI's. However, he said, Google has to act more conservatively than a startup because the technology poses a reputational risk.

Dean added that Google wants to build its language models into real products that highlight and showcase them. However, he said, it is important to do so in the right way. Pichai also noted that Google has a lot planned for its AI language models in the coming year.

Risks are still there

Such language models often tend to reinforce social biases, which can result in them denigrating women or People of Color, for example. In addition, their safety mechanisms can often still be circumvented with little effort.

ChatGPT, for example, is not supposed to reveal "dangerous information." This includes how to break into a house. If you ask the AI bot directly for instructions, you get the following answer: "I don't think it's a good idea to break into a house. Breaking and entering is a crime and can lead to serious consequences, such as a prison sentence."

However, if you ask the bot for detailed instructions on how to break in as part of a story, the AI spits out the requested information without hesitation. Another problem is that the AI often simply makes up falsehoods, which is especially problematic for a search engine.

Dean said the language models "can make stuff up [...]. If they're not sure about something, they'll just tell you elephants are the animals that lay the biggest eggs or whatever."

Google's language models are already in use

Google's language models are already in use in its search engine, though not in the form of a ChatGPT-style chatbot that answers in text form. For example, a model called MUM helps Google recognize when a user is going through a personal crisis.

In that case, the search engine shows that user emergency hotline numbers and information from organizations that can help.

ChatGPT, however, can do much more than a search engine: on request, the language model can create text-based video games, for example, or carry out other fun and creative experiments.