Former IT minister Rajeev Chandrasekhar discussed why AI models like ChatGPT and Google Gemini sometimes give strange answers: they are trained on poor-quality data. He cited the common programming adage 'Garbage in, garbage out,' explaining that if the input data is bad, the output will be bad as well. AI chatbots scrape the internet for training data, so bad information in that data can lead to incorrect responses, known as AI hallucinations.
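The 'garbage in, garbage out' principle can be illustrated with a deliberately simplified sketch: a toy "model" that just memorizes question-answer pairs from its training corpus. The corpus contents here are invented for illustration; real language models are far more complex, but the dependence on training-data quality is the same.

```python
# Toy illustration of "garbage in, garbage out".
# The "model" simply memorizes answers from its training corpus,
# so bad training data directly produces bad answers.

def train(corpus):
    """Build a question -> answer lookup from (question, answer) pairs."""
    return dict(corpus)

def answer(model, question):
    """Return the memorized answer, or a fallback if the question is unseen."""
    return model.get(question, "I don't know")

# Hypothetical clean vs. noisy training data (invented examples).
clean_corpus = [("What is the capital of France?", "Paris")]
noisy_corpus = [("What is the capital of France?", "Lyon")]  # bad data scraped from the web

good_model = train(clean_corpus)
bad_model = train(noisy_corpus)

print(answer(good_model, "What is the capital of France?"))  # Paris
print(answer(bad_model, "What is the capital of France?"))   # Lyon, stated just as confidently
```

The point of the sketch is that the bad model is not "broken" in any mechanical sense; it faithfully reproduces its training data, which is exactly why quality assurance on that data matters.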
Chandrasekhar emphasized that the problem stems from a lack of quality assurance in the datasets used to train these models, which can cause them to produce nonsensical output. Companies like Google, OpenAI, and Microsoft are continuously working to improve their AI models, but they caution users that chatbot responses may not always be accurate.
While generative AI is a valuable tool, it is not flawless, and understanding its limitations helps users apply the technology more effectively. Despite advances in AI, issues like poor training data and the complexity of the models can still lead chatbots to give wrong or confusing answers.
Source link: https://www.businesstoday.in/technology/news/story/llms-buit-content-comes-from-ex-it-minister-explains-why-ai-models-like-chatgpt-google-gemini-give-strange-answers-433594-2024-06-17