
Uncovering hallucinations: Fátima’s story. #MentalHealthAwareness

Detecting Hallucinations. Hi, I'm Fátima and I'm going to tell you how… | by fatima lira | will bank | Jul, 2024

The article discusses the challenge of detecting hallucinations in Large Language Models (LLMs) and how a model was built at Will to address it. The exponential growth in LLM usage brings a new challenge: identifying and preventing the hallucinations these models generate. After hallucinations were observed in customer service chat messages, a model was developed to detect them.

The first challenge was curating human-labeled data: manually flagging hallucinations and understanding the different types that occurred. A dataset was built from this curation, and the messages were converted into numerical vectors using techniques such as TF-IDF. After reducing the dimensionality of the message vectors, clusters of hallucinations emerged, allowing messages to be separated into hallucinations and non-hallucinations.

A KNN model was then trained for classification, reaching an accuracy of 0.98 and an F1 score of 0.71. Because the priority was sensitivity, the focus was on recall, which also reached 0.98. The model's predictions can be integrated into chat systems to score the probability that a message is a hallucination, and the results demonstrate the effectiveness of the detection model. The article concludes by inviting readers to share their own experiences with hallucinations and how they are handling them.
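To make the pipeline concrete, here is a minimal sketch of the approach the summary describes: TF-IDF turns chat messages into numerical vectors, and a nearest-neighbour vote labels a new message as hallucination or not. This is an illustration under assumptions, not Will's actual code; the toy messages, labels, and the 1-NN choice are hypothetical, and the real model also involved dimensionality reduction and clustering before classification.

```python
# Illustrative sketch only (not the article's implementation):
# TF-IDF vectorization + 1-nearest-neighbour classification by cosine similarity.
import math
from collections import Counter

# Hypothetical labeled chat messages: 1 = hallucination, 0 = normal.
docs = [
    "your refund was approved on mars yesterday",   # invented fact -> 1
    "you can check your invoice in the app",        # normal -> 0
    "our office is open on mars every sunday",      # invented fact -> 1
    "your card arrives within ten business days",   # normal -> 0
]
labels = [1, 0, 1, 0]

def tfidf(corpus):
    """Build a vocabulary, IDF weights, and TF-IDF vectors for the corpus."""
    n = len(corpus)
    tokenized = [d.split() for d in corpus]
    df = Counter(w for toks in tokenized for w in set(toks))
    vocab = sorted(df)
    idf = {w: math.log(n / df[w]) + 1.0 for w in vocab}
    vectors = []
    for toks in tokenized:
        tf = Counter(toks)
        vectors.append([tf[w] / len(toks) * idf[w] for w in vocab])
    return vocab, idf, vectors

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

vocab, idf, train_vecs = tfidf(docs)

def predict(message):
    """Return the label of the most similar training message (1-NN)."""
    toks = message.split()
    tf = Counter(toks)
    vec = [tf[w] / max(len(toks), 1) * idf[w] for w in vocab]
    best = max(range(len(docs)), key=lambda i: cosine(vec, train_vecs[i]))
    return labels[best]

print(predict("is the office open on mars"))  # -> 1 (nearest: the mars message)
```

In a production chat system, one would replace the 1-NN vote with the probability output of a trained KNN classifier (e.g. the fraction of the k nearest neighbours labeled as hallucinations) so each incoming message gets a hallucination score rather than a hard label.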


Source link: https://medium.com/tech-will/detectando-alucina%C3%A7%C3%B5es-ae5c7499d03d?source=rss——large_language_models-5

