Researchers at the University of Illinois Urbana-Champaign have created KnowHalu, a framework designed to detect and address hallucinations in large language models (LLMs) like ChatGPT. These AI hallucinations occur when LLMs produce nonsensical or inaccurate responses that do not align with user prompts. While LLMs have revolutionized the AI field, concerns about their reliability have emerged.
KnowHalu operates in two phases to verify the accuracy of LLM outputs. The first phase targets non-fabrication hallucinations: responses that are factually correct but irrelevant to the user's query. The second phase performs fact-checking through step-wise reasoning, knowledge retrieval, and a final judgment based on multiple forms of knowledge.
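The two-phase idea can be illustrated with a minimal sketch. Everything below — the tiny knowledge base, the stop-word relevance heuristic, and the function names — is a hypothetical simplification for illustration, not the KnowHalu team's actual implementation, which uses LLM-based reasoning and retrieval over real knowledge sources.

```python
# Hypothetical two-phase hallucination check in the spirit of KnowHalu.
# The knowledge base, heuristics, and names are illustrative assumptions.

KB = {"capital of france": "paris",
      "author of hamlet": "shakespeare"}

STOP = {"a", "an", "the", "is", "has", "of", "what", "who", "it"}

def content_words(text):
    """Lowercase, strip punctuation, and drop stop words."""
    cleaned = text.lower().replace("?", "").rstrip(".")
    return {w for w in cleaned.split() if w not in STOP}

def is_responsive(query, answer):
    """Phase 1: an answer that adds no content words beyond the query
    (e.g. 'France has a capital') is treated as a non-fabrication
    hallucination -- possibly true, but it does not answer the question."""
    return bool(content_words(answer) - content_words(query))

def retrieve(query, kb):
    """Phase 2a: fetch knowledge relevant to the query (a stand-in for
    retrieval over structured and unstructured sources)."""
    key = query.lower().replace("?", "")
    return [fact for topic, fact in kb.items() if topic in key]

def check(query, answer, kb=KB):
    """Run both phases and report a verdict."""
    if not is_responsive(query, answer):
        return "non-fabrication hallucination"
    facts = retrieve(query, kb)  # Phase 2b: judge answer against knowledge
    if facts and not any(f in answer.lower() for f in facts):
        return "factual hallucination"
    return "no hallucination detected"

print(check("What is the capital of France?", "Paris."))
print(check("What is the capital of France?", "France has a capital."))
print(check("What is the capital of France?", "Lyon."))
```

The phase ordering matters: an irrelevant-but-true reply would pass a pure fact check, so the responsiveness filter runs first, before any knowledge is retrieved.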
Bo Li, the project advisor, emphasized the importance of effectively leveraging real-world knowledge in addressing LLM hallucinations. Tests conducted by Li and her team demonstrated that KnowHalu outperformed other baseline methods and LLM hallucination detection tools, including GPT-4. The research also revealed that different prompts and models perform better with specific types of knowledge.
The study highlighted the impact of user queries on the quality of LLM responses, emphasizing the need for specific prompts that align with the models’ databases. The team aims to further explore diverse forms of knowledge and adapt the framework for applications in areas like autonomous driving and healthcare.
Source link: https://www.cnbctv18.com/technology/us-researchers-develop-a-framework-to-filter-out-ai-nonsense-19408463.htm/amp