

Enhancing Trust in Large Language Models: Fine-Tuning for Calibrated Uncertainties in High-Stakes Applications

Large language models (LLMs) struggle to accurately represent uncertainty in their outputs, especially in decision-making applications such as healthcare. The problem is compounded by the linguistic variability of free-form generation, and existing estimation methods split into black-box approaches, which rely only on model outputs, and white-box approaches, which require access to internal states. Researchers have explored several techniques to address this, including reading off the LLM's distribution over possible outcomes, prompting the model for verbalized uncertainty estimates, and training linear probes on hidden representations to classify whether an answer is correct.
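
To make the probe-based approach concrete, here is a minimal sketch that trains a logistic-regression probe on hidden states to predict answer correctness. Everything in it is an illustrative assumption rather than the paper's setup: gpt2 stands in for the LLaMA-class models in the study, the last-layer/last-token pooling is one choice among many, and the two toy calibration examples would be thousands of graded QA pairs in practice.

```python
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumed stand-in; the paper studies LLaMA-2/Mistral-scale models
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, output_hidden_states=True)
model.eval()

def last_token_features(text: str) -> torch.Tensor:
    """Last-layer hidden state of the final token (one possible pooling choice)."""
    inputs = tok(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    return out.hidden_states[-1][0, -1]  # shape: (hidden_dim,)

# Calibration set: (question + model answer, 1 if the answer was graded correct).
examples = [
    ("Q: What is the capital of France? A: Paris", 1),
    ("Q: What is the capital of Australia? A: Sydney", 0),
    # ... many more graded QA pairs in a real calibration set
]
X = torch.stack([last_token_features(t) for t, _ in examples]).numpy()
y = [label for _, label in examples]

# The probe maps hidden states to a probability that the answer is correct.
probe = LogisticRegression(max_iter=1000).fit(X, y)
p_correct = probe.predict_proba(X)[:, 1]
```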

To improve uncertainty calibration in LLMs, researchers from New York University, Abacus AI, and Cambridge University propose fine-tuning models specifically for better uncertainties. The method teaches a language model to recognize what it doesn't know using a held-out calibration dataset, and the authors investigate which parameterizations are effective and how much data is needed for good generalization. Fine-tuning significantly outperforms the baselines, with the quality of the uncertainty estimates evaluated against accuracy on models such as LLaMA-2, Mistral, and LLaMA-3.
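
One common way to frame this kind of calibration fine-tuning is to train the model to judge its own answers on a graded dataset. The sketch below follows that framing, but it is a hedged illustration, not the authors' exact recipe: the prompt template, the Yes/No target tokens, the gpt2 stand-in model, and the bare training loop are all assumptions made for brevity.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumed stand-in; the paper uses LLaMA-2, Mistral, LLaMA-3
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

TEMPLATE = "Question: {q}\nProposed answer: {a}\nIs the proposed answer correct?"
calibration_set = [
    {"q": "What is the capital of France?", "a": "Paris", "correct": True},
    {"q": "What is the capital of Australia?", "a": "Sydney", "correct": False},
    # ... thousands of graded QA pairs in practice
]

# " Yes" and " No" are single tokens in the GPT-2 vocabulary.
yes_id = tok(" Yes", add_special_tokens=False).input_ids[0]
no_id = tok(" No", add_special_tokens=False).input_ids[0]

model.train()
for ex in calibration_set:
    input_ids = tok(TEMPLATE.format(q=ex["q"], a=ex["a"]), return_tensors="pt").input_ids
    logits = model(input_ids).logits[0, -1]  # next-token logits after the question
    target = torch.tensor(yes_id if ex["correct"] else no_id)
    loss = torch.nn.functional.cross_entropy(logits.unsqueeze(0), target.unsqueeze(0))
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# At test time, the probability assigned to " Yes" under the same prompt
# serves as the model's calibrated confidence in its answer.
```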

The study finds that out-of-the-box uncertainties from LLMs are unreliable for open-ended generation, whereas the proposed fine-tuning procedure produces calibrated uncertainties with practical generalization properties: it is sample-efficient and robust to distribution shift. The method focuses on black-box techniques for estimating uncertainty, using perplexity as a metric for open-ended generation and exploring prompting strategies for eliciting uncertainty estimates from model outputs. Overall, fine-tuning for uncertainties is a promising route to more reliable uncertainty estimates in LLMs.
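
For reference, the sketch below shows one standard way to compute perplexity over a generated continuation as a black-box confidence signal (lower perplexity on its own output suggests higher model confidence). The model choice is again an illustrative stand-in, and the prompt/generation boundary handling assumes the tokenizer splits the prefix identically in both encodings.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumed stand-in for a LLaMA/Mistral-class model
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def sequence_perplexity(prompt: str, generation: str) -> float:
    """Perplexity of `generation` conditioned on `prompt`."""
    prompt_ids = tok(prompt, return_tensors="pt").input_ids
    full_ids = tok(prompt + generation, return_tensors="pt").input_ids
    labels = full_ids.clone()
    labels[:, : prompt_ids.shape[1]] = -100  # score only the generated tokens
    with torch.no_grad():
        loss = model(full_ids, labels=labels).loss  # mean NLL over the generation
    return torch.exp(loss).item()

ppl = sequence_perplexity("The capital of France is", " Paris.")
print(f"perplexity: {ppl:.2f}")  # lower values suggest higher confidence
```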


Source link: https://www.marktechpost.com/2024/06/15/enhancing-trust-in-large-language-models-fine-tuning-for-calibrated-uncertainties-in-high-stakes-applications/?amp

