Are hallucinating GenAI models careless or just plain ignorant? Google researchers found out

LLM investigators devise "WACK" approach for understanding why generative AI models give incorrect or totally far-out answers to user prompts.

ChatGPT's rather creepy depiction of a not-very-clever LLM experiencing a hallucination

Large language models (LLMs) are notorious for "hallucinating" responses containing totally false information which bears little resemblance to reality.

But are the models making mistakes because they're ignorant or because of some other error which causes them to slip up?

Answering this question is becoming ever more important as LLMs move beyond simply answering consumers' questions and become embedded in mission-critical enterprise functions, where a wrong answer could be an expensive mistake or a major security risk.

The thorny topic of hallucinations is made even more complex by the fact that wrong answers can "snowball", so that one mistake is followed by subsequent errors as the model attempts to justify or compensate for its slip-ups.

To work out whether GenAI models are careless or just plain ignorant, researchers from Google and the Technion - Israel Institute of Technology devised a new system called Wrong Answer despite having Correct Knowledge (WACK) that can test why hallucinations take place.

"Large language models are susceptible to hallucinations - outputs that are ungrounded, factually incorrect, or inconsistent with prior generations," they wrote in a pre-print paper.

READ MORE: Elon Musk to double power of world's largest AI supercomputer

The research sets out two types of hallucination, which occur either when the model "does not hold the correct answer in its parameters" or when it "answers incorrectly despite having the required knowledge". In other words, either it doesn't know the answer or it has simply made a mistake.

"We argue that distinguishing these cases is crucial for detecting and mitigating hallucinations," they continued.

"This differentiation is crucial for understanding hallucinations’ underlying mechanisms and developing targeted detection and mitigation strategies."

Understanding why a model hallucinates requires a deep understanding of its "inner state" - with ignorance and error represented differently.

To work out whether models are ignorant or careless, the team built a dataset made specifically for each model which "captures the distinction between the two types of hallucinations".

This allowed them to work out whether hallucinations were caused by a lack of knowledge - a relatively easy problem to fix - or by some other issue causing the model to answer incorrectly despite knowing the answer (which is contained in its training data).

"We show it is possible to distinguish between the two hallucination types," the team wrote.

READ MORE: LinkedIn slams the brakes on GenAI data-harvesting in the UK amid privacy storm

However, each model requires its own training dataset in order to identify the nature of its hallucinations, showing that each GenAI model trips out in its own unique way.

"Datasets constructed using WACK exhibit variations across models, demonstrating that even when models share knowledge of certain facts, they still vary in the specific examples that lead to hallucinations," the researchers wrote.

"Training a probe on our WACK datasets leads to better hallucination detection... than using the common generic one-size-fits-all datasets."

In a report released earlier in October, investigators from the Government Accountability Office (GAO) warned that vendors' focus on the capabilities of their models often comes at the expense of honesty about the true problems with GenAI.

The GAO probe set out a range of major issues with GenAI and warned that the public has been left in the dark about key aspects of the training of models as well as their propensity for hallucinations and "confabulations".

"Commercial developers face some limitations in responsibly developing and deploying generative AI technologies to ensure that they are safe and trustworthy," GAO wrote. "Developers recognize that their models are not fully reliable, and that user judgment should play a role in accepting model outputs.
