Students have used digital writing tools for decades, but recent advances in large language model (LLM) artificial intelligence have changed the landscape. LLMs such as BERT and GPT are trained on large bodies of text to predict the next word in a sequence. Educators are excited about their potential to enhance learning by shifting attention from grammar to higher-level writing skills, but concerns persist about students using LLMs to cheat on assessments. Universities are grappling with policies on LLM use: some prohibit it outright, while others allow it with restrictions.

AI detectors have emerged to distinguish human-written from AI-generated text, with mixed success. While some detectors can identify AI-generated text, false positives remain a challenge. Students use LLMs for a variety of tasks, and some engage in unethical behavior, using them to answer exam questions or write entire essays. Notably, students who were more familiar with AI were no more successful at evading AI detectors. The study highlights the need for clear guidelines on ethical LLM use in education and the limitations of AI detectors in identifying AI-generated text.
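The summary above notes that LLMs are trained on text to predict the next word. As a toy illustration only (this is a minimal bigram sketch, unrelated to the actual architecture of BERT or GPT), the same next-word idea can be shown at a tiny scale:

```python
from collections import Counter, defaultdict

# Hypothetical miniature "training corpus" for illustration.
corpus = (
    "the student wrote the essay and the student revised the text "
    "and the student submitted the essay"
).split()

# Count how often each word follows each preceding word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))      # "student" follows "the" most often here
print(predict_next("student"))  # one of "wrote", "revised", "submitted"
```

Real LLMs replace these raw counts with neural networks trained on billions of words, but the underlying task, scoring likely continuations of a text, is the same.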
Source link: https://www.frontiersin.org/journals/education/articles/10.3389/feduc.2024.1374889/full
AI detectors can often detect students using large language models. #AcademicIntegrity
![Students are using large language models and AI detectors can often detect their use](https://i0.wp.com/webappia.com/wp-content/uploads/2024/06/1374889_Thumb_400.jpg?fit=400%2C263&quality=89&ssl=1)