The rise of generative artificial intelligence (GenAI) has reshaped the educational landscape. Students now have access to powerful tools that can generate essays, answer questions, and simulate thought processes. While these tools present exciting opportunities for enhancing learning, they also raise significant concerns related to plagiarism, over-reliance, and erosion of academic integrity [1, 2]. As a result, institutions are grappling with how best to respond.
Many universities have defaulted to AI-detection software as a primary enforcement mechanism. However, these detection tools have limitations and are often applied in a punitive context. They may foster adversarial dynamics between students and faculty, especially when students are falsely accused [1]. Detection alone does not foster ethical engagement with technology or prepare students for the complex world in which GenAI will be the norm.
Rather than positioning AI literacy and AI policing as opposing paradigms, recent research emphasizes the value of a balanced, integrated framework [3, 4, 5]. This article argues that combining clear AI-use policies with proactive educational efforts will help students become informed, ethical users of AI technologies—both in academic contexts and in the workplace.
AI-detection tools operate by analyzing linguistic patterns, probabilities, and other characteristics to assess whether content is likely AI-generated. However, these tools are not consistently accurate. Research shows they frequently misclassify human-written work as AI-generated and sometimes fail to catch AI-written content at all [1].
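To make that mechanism concrete, the sketch below illustrates one statistical signal of the kind such tools draw on: how predictable a passage is under a language model, measured as perplexity. This is a deliberately simplified illustration, not the method of any particular commercial detector, and it assumes the open-source torch and Hugging Face transformers packages with the small GPT-2 model. Human writers routinely produce highly predictable prose as well, which is one reason signals like this misfire in the ways described next.

```python
# Toy illustration of a perplexity-based signal, NOT a reliable detector.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    # Tokenize the passage; passing labels equal to input_ids makes the
    # model return its average cross-entropy loss over the sequence.
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(input_ids=enc["input_ids"], labels=enc["input_ids"]).loss
    # Lower perplexity means more predictable text; some detectors treat
    # very predictable prose as a weak, error-prone hint of machine generation.
    return float(torch.exp(loss))

print(perplexity("Generative AI has reshaped how students draft and revise essays."))
```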
Such false positives can carry serious consequences for students. A misidentified paper may lead to penalties, academic probation, or even expulsion if institutions fail to verify results properly. Moreover, the emotional toll of being wrongly accused of dishonesty can be substantial.
Weber-Wulff et al. emphasize that excessive reliance on these tools shifts the institutional focus from learning to policing [1]. Students may become more concerned with avoiding detection than developing their analytical or ethical reasoning skills. Institutions need to move away from a purely reactive model and instead adopt a more supportive, educationally focused approach.
AI literacy is defined as the ability to understand how AI systems work, evaluate their outputs critically, and use them ethically [6]. However, as Chan argues, literacy must be embedded in a broader institutional framework that includes clear, well-communicated policies on AI use [4]. Policy without literacy results in enforcement without understanding; literacy without policy leads to confusion and inconsistency.
Cordero et al. emphasize that the most effective institutional models combine AI ethics education with transparent usage guidelines [3]. In their study of best practices in higher education, they found that policies aligned with educational strategies produced higher levels of student engagement and improved learning outcomes. Policies should clearly define acceptable use cases for AI, state whether AI-assisted brainstorming, summarization, or grammar checking is permitted, and explain the consequences of misuse. When faculty and students are aligned through both policy and pedagogy, the result is a culture of trust and innovation. This dual approach enables students to explore AI's potential within clearly defined boundaries.
Because AI literacy encompasses understanding, critical evaluation, and informed, ethical use of AI tools [6], faculty should integrate it into their curricula rather than penalize students for engaging with AI, teaching students to assess AI-generated content for accuracy, bias, and ethical implications. Developing these competencies enables students to use AI as a tool to enhance learning, rather than as a shortcut to bypass intellectual effort.
To cultivate AI literacy, institutions should consider four practical and scalable strategies, each grounded in current research and case studies.
1. Transparent AI use guidelines. Clarity is essential when introducing AI-related policies. Faculty should explicitly outline what constitutes acceptable and unacceptable AI use, offering examples and rationale to help students understand the distinctions. A clear policy not only reduces confusion but also reinforces ethical decision-making.
Kasneci et al. report that students are more likely to adhere to guidelines when they are explained thoroughly rather than imposed [5]. Chan also recommends a tiered approach to policy, where departmental adaptations and customized classroom instructions support institutional rules [4]. This multi-level model provides students with a consistent foundation while allowing instructors to tailor policies to course-specific contexts. Embedding these policies into syllabi, assignment instructions, and course discussions further normalizes the conversation around responsible AI use. It turns abstract rules into everyday practice.
2. AI-assisted critical thinking exercises. Rather than banning AI tools outright, educators can utilize them to enhance students' critical thinking skills. One effective method involves assigning students to generate a response using an AI tool and then critique it. This fosters analytical skills and teaches students to question and verify information instead of accepting it blindly.
Smolansky et al. found that such comparison tasks improved students’ evaluative thinking and academic confidence [2]. Teachers also observed that students became more engaged and thoughtful in their written work when they were asked to challenge the quality or logic of AI-generated text. By positioning AI as a cognitive partner rather than a content generator, instructors can reframe how students approach learning. These activities teach students not only how to consume content but how to examine it through a critical lens.
3. AI ethics assignments. Ethics should be at the heart of AI education. Assignments that invite students to reflect on the implications of AI, such as privacy, algorithmic bias, misinformation, or labor displacement, help build awareness of the societal impacts of these tools. These reflections promote ethical reasoning and civic responsibility.
Cordero et al. highlight the value of integrating ethical discussions into disciplinary contexts [3]. For example, business students might explore the role of AI in marketing transparency, while future educators might assess its implications for differentiated instruction. When ethics is woven into students’ areas of study, it becomes more tangible and relevant. Such assignments can take many forms, including research papers, case studies, debates, or service-learning projects that involve evaluating the use of AI in local organizations. The goal is to ensure that students not only understand how AI works but also comprehend its real-world consequences.
4. Collaborative AI exploration. Working with AI does not have to be a solitary task. Group-based AI activities can foster collaborative learning and encourage students to reflect on the benefits and limitations of the tools. Through discussion and feedback, students gain multiple perspectives and can better understand the complexity of AI’s role in their learning.
Tzirides et al. found that collaborative AI exploration increased students’ confidence and promoted a more nuanced understanding of AI capabilities [7]. It also helped normalize the use of AI tools in academic spaces, reducing fear and uncertainty. After these activities, instructors should facilitate structured reflection. Students can share what they learned, where they saw value, and what concerns they still have. This collaborative approach supports deeper learning and a more balanced perspective on AI technology.
While student preparation is essential, faculty attitudes often shape how AI is perceived and used in the classroom. Many educators worry that GenAI will lower academic standards or encourage plagiarism. Others feel overwhelmed by the pace of AI development and lack confidence in their ability to keep up [8].
Palmer et al. found that these concerns are often based on assumptions rather than evidence [9]. Faculty tend to overestimate the extent of student misuse while underestimating the potential of AI as a learning tool. To address this, institutions must provide targeted professional development that helps instructors better understand the possibilities and pitfalls of AI.
Mollick and Mollick recommend immersive, hands-on training sessions that allow faculty to experiment with AI tools and explore how they can be used to support learning [8]. These workshops may include demonstrations, case studies, and small-group problem-solving exercises. When instructors can see how AI supports rather than replaces their work, they are more likely to model responsible use for their students.
AI literacy in education is still a relatively new field, and more research is needed to refine best practices. Longitudinal studies could determine whether AI literacy instruction leads to lasting changes in ethical awareness or academic outcomes. These studies should include diverse student populations and account for disciplinary differences.
Cross-institutional comparisons would also be valuable. How do different colleges interpret and implement AI guidelines? What models of faculty development or student assessment show the most promise? Answering these questions could inform national or even global standards for ethical AI integration in education.
Moreover, research should explore how equity factors into AI access and instruction. Do students from under-resourced backgrounds have the same opportunities to learn about and use AI responsibly? Identifying gaps can help institutions allocate resources more fairly and inclusively.
GenAI is transforming the way we learn, teach, and assess knowledge. While AI-detection tools may serve a limited role in flagging misuse, they should not be the cornerstone of institutional responses. Overemphasis on surveillance undermines trust and fails to prepare students for the reality of working in AI-enhanced environments.
Instead, institutions should invest in AI literacy by integrating it into curricula, grounding it in transparent policies, and providing professional development to educators. This holistic approach enables students to explore, question, and engage with AI in an ethical and effective manner. Ultimately, preparing students to thrive in an AI-driven world requires more than guarding against misuse. It demands that we teach them how to use these tools wisely, ethically, and with a deep understanding of their power and potential. Higher education is uniquely positioned to lead that charge.
[1] Weber-Wulff, D. et al. Testing detection tools for AI-generated text. International Journal for Educational Integrity 19, 1 (2023), 1–20.
[2] Cotton, D. R. E., Cotton, P. A., and Shipway, J. R. Chatting and cheating: Ensuring academic integrity in the era of ChatGPT. Innovations in Education and Teaching International 61, 2 (2024), 228–239.
[3] Cordero, J., Torres-Zambrano, J., and Cordero-Castillo, A. Integration of generative artificial intelligence in higher education: Best practices. Education Sciences 15, 1 (2024), Article 32.
[4] Chan, C. A comprehensive AI policy education framework for university teaching and learning. International Journal of Educational Technology in Higher Education 20, 1 (2023), Article 56.
[5] Kasneci, E. et al. ChatGPT for good? Opportunities and challenges of large language models for education. Learning and Individual Differences 103 (2023), Article 102274.
[6] Zhai, X. ChatGPT for education: A pedagogical discussion. Frontiers in Psychology 14 (2023), Article 1130085.
[7] Tzirides, A. et al. Combining human and artificial intelligence for enhanced AI literacy in higher education. Computers and Education Open 5 (2024), Article 100024.
[8] Mollick, E. and Mollick, L. Assigning AI: Seven approaches for students with AI tools like ChatGPT. The Wharton School Research Paper (September 23, 2023).
[9] Palmer, E. et al. Findings from a survey looking at attitudes towards AI and its use in teaching, learning and research. ASCILITE Publications (2023), 212–223.
Rick Holbeck, M.Ed., M.S. is executive director of online instruction at Grand Canyon University. His research focuses on online learning, student engagement, and instructional technology. He explores ways to use technologies to foster student engagement and increase teaching effectiveness. In addition, he is currently exploring ways to use artificial intelligence to support teaching and learning. Holbeck is an active researcher and presenter in online education.
© Copyright 2025 held by Owner/Author. 1535-394X/2025/05-3729174 $15.00 https://doi.org/10.1145/3735548.3729174