
AI detectors can often detect students using large language models. #AcademicIntegrity

Students are using large language models and AI detectors can often detect their use

Students have been using digital writing tools for decades, but recent advances in large language model (LLM) artificial intelligence have changed the landscape. LLMs such as BERT and GPT are trained on large collections of text to predict missing or upcoming words in a sentence. Educators see potential for LLMs to enhance learning by shifting attention from grammar and mechanics to higher-level skills, but they also worry that students will use LLMs to cheat on assessments. Universities are grappling with policies on LLM use: some prohibit it outright, while others allow it with restrictions.

AI detectors have emerged to distinguish human-written from AI-generated text, with mixed success; some can identify AI-generated text, but false positives remain a challenge. Students use LLMs for a range of tasks, and some engage in unethical behavior by having LLMs answer exam questions or write entire essays. In the study, students who reported greater familiarity with AI were not more successful at avoiding detection by AI detectors. The findings highlight the need for clear guidelines on ethical LLM use in education and the limitations of AI detectors in identifying AI-generated text.
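To make the "next-word prediction" idea in the summary concrete, here is a minimal sketch of how a causal language model scores the next token of a sentence. This is purely illustrative and not code from the study; the model choice ("gpt2") and the Hugging Face transformers library are assumptions made for the example.

```python
# Minimal sketch: next-word prediction with a small causal language model.
# Assumes the "transformers" and "torch" packages are installed; "gpt2" is
# an illustrative model choice, not one used in the study.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Students are using large language models to"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # logits has shape (batch, sequence_length, vocab_size)
    logits = model(**inputs).logits

# The model's guess for the next word is the highest-probability token
# following the last position of the prompt.
next_token_id = logits[0, -1].argmax().item()
print(prompt + tokenizer.decode([next_token_id]))
```

AI detectors exploit a related idea: text that a language model finds highly predictable, token by token, is more likely to have been machine-generated, which is also why such statistical signals can misfire and produce false positives.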


Source link: https://www.frontiersin.org/journals/education/articles/10.3389/feduc.2024.1374889/full

