Scientists Develop New Algorithm to Spot AI 'Hallucinations'

The article discusses the issue of AI hallucinations, where AI tools like ChatGPT confidently assert false information, leading to embarrassing mistakes and legal consequences. A new study published in Nature presents a method for detecting when an AI tool is likely hallucinating, distinguishing correct from incorrect answers about 79% of the time. The method focuses on confabulations, cases where an AI model gives inconsistent wrong answers to factual questions. The researchers use semantic entropy: they sample several answers to the same question, measure how similar the answers’ meanings are, and treat high variability in meaning as a sign the model is confabulating. This approach outperforms other methods for detecting AI hallucinations and could lead to more reliable AI systems in the future.
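The core idea can be illustrated with a minimal sketch: sample several answers, group them into clusters that share a meaning, and compute the entropy of the cluster distribution. The actual study judges whether two answers mean the same thing with an entailment model; here a trivial string comparison stands in for that step, which is an assumption for illustration only.

```python
import math

def semantic_entropy(answers, same_meaning):
    """Cluster sampled answers by meaning, then return the entropy
    (in nats) of the distribution over meaning-clusters."""
    clusters = []  # each cluster is a list of answers with one shared meaning
    for a in answers:
        for c in clusters:
            if same_meaning(a, c[0]):
                c.append(a)
                break
        else:
            clusters.append([a])
    n = len(answers)
    probs = [len(c) / n for c in clusters]
    return -sum(p * math.log(p) for p in probs)

# Toy meaning check: the study uses an entailment model to decide if two
# answers are semantically equivalent; lowercased string equality is a
# crude stand-in used here only to make the sketch runnable.
same = lambda a, b: a.strip().lower() == b.strip().lower()

consistent = ["Paris", "paris", "Paris", "Paris"]      # one meaning
confabulating = ["Paris", "Rome", "Berlin", "Madrid"]  # four meanings

print(semantic_entropy(consistent, same))      # 0.0 (model is consistent)
print(semantic_entropy(confabulating, same))   # ln(4) ≈ 1.386 (likely confabulating)
```

A low entropy means the model keeps giving the same meaning in different words, while a high entropy means the meanings themselves disagree, which is the signal the researchers use to flag likely confabulation.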

The study’s lead researcher, Sebastian Farquhar, believes the method could enhance the reliability of AI systems, particularly in high-stakes settings. However, some experts caution against overestimating its immediate impact, noting the challenges of integrating it into real-world applications. While advances in AI models have reduced hallucination rates, the problem may persist because of the intrinsic nature of large language models. As AI capabilities grow, the boundary between the tasks AI can reliably perform and the tasks people want to use it for remains a complex issue with no straightforward technical solution.

