The tendency of large language models (LLMs) to “hallucinate” continues to trouble CIOs eyeing production use cases, even as work on mitigations like fine-tuning and retrieval-augmented generation (RAG) continues.
