Study finds AI chatbots answer 2024 election questions incorrectly 27% of the time #AIChatbots

AI chatbots got questions about the 2024 election wrong 27% of the time, study finds

A study by GroundTruthAI found that popular AI chatbots, including Google’s Gemini 1.0 Pro and OpenAI’s ChatGPT, gave incorrect answers 27% of the time when asked about voting and the 2024 election. The researchers sent 216 questions to the models and analyzed their responses; Gemini 1.0 Pro initially answered correctly only 57% of the time. The study found discrepancies in answers about voter registration and candidates’ ages, revealing inconsistencies in the models’ knowledge, and accuracy fluctuated over time. The authors warned that voters who rely on inaccurate chatbot answers could be misled, and that building AI into search functions could spread misinformation despite efforts to improve transparency and accuracy. They urged caution when using AI-generated information for decisions as consequential as voting, and stressed verifying election information against reliable sources rather than relying solely on chatbot responses. The study also noted that OpenAI is working to surface updated election information through ChatGPT to improve the transparency and accuracy of election-related content.

Source link: https://www.nbcnews.com/news/amp/rcna155640
