Where’s the Line? Navigating AI Use in Student Assessments
- Rikki Archibald
- May 2
- 6 min read
Artificial Intelligence (AI) is being used in classrooms whether educational institutions are ready or not. Students are already experimenting with it, some more confidently than others, and some not at all. The big issue isn’t just whether students are using AI, but how educators and institutions are responding.
Students need clear guidelines on how AI can be used so that everyone has access to the same tools and assistance to achieve their goals. When academic policies aren’t clear, two very different student experiences emerge:
One group may push the boundaries and benefit.
The other may hold back for fear of getting it wrong.
The second group risks falling behind, not just in their coursework but in developing the digital skills that are increasingly essential in the workplace (Hampton & DeFalco, 2022). An overarching policy, backed by training, is critical to ensure that students are not receiving mixed messages from different academics, resulting in under- or overuse of AI.
The Problem with AI Detection in Assessment

Many universities are turning to AI detectors such as ZeroGPT, Originality.ai, and GPTZero to identify potential misuse of generative AI tools like ChatGPT. But using AI to detect AI brings its own set of risks, especially around bias.
Selection Bias
If only certain assignments are being put through an AI detector, who is deciding which ones? If that decision isn’t consistent, selection bias may creep in, and students from particular backgrounds or demographics could end up under greater scrutiny (Ammanath, 2022).
Confirmation Bias
When an assignment has been flagged for potential AI use, confirmation bias may lead the assessor to unconsciously look for evidence that supports the flag rather than to review the work objectively (Ammanath, 2022, p. 29). It’s human nature, and it poses a real risk when no policies are in place to mitigate it.
Unconscious & Conscious Bias
Unconscious bias (and sometimes conscious bias) exists in most humans. Writing style, student name, perceived language ability, and past performance can all trigger assumptions in the assessor. High performers might be seen as “too good”, while students who suddenly improve might be met with suspicion. This scrutiny isn’t applied equally, and the students it falls on may find their work examined far more closely than their peers’.
Implicit Bias
Assessors may also carry their own explicit and implicit biases, such as prejudicial views about the academic ability and/or commitment of certain groups of students. These views could be based on race, nationality, or religion, and may be triggered simply by a student’s name or writing style, without the assessor knowing the student directly (Ammanath, 2022, p. 30; Hampton & DeFalco, 2022, p. 165).
AI detectors also lack context. They have no knowledge of a student’s previous work, progression, or improvement, any of which might legitimately explain a strong submission. Conversely, a human marker who knows a student’s history may spot AI use that the detector misses.
What Needs to Happen
To move forward responsibly, we need to do a few key things:
Set clear, consistent policies around AI use in assessments. Don’t leave it up to individual academics or vague guidelines.
Train students on how to use AI ethically and effectively. That includes showing them the risks and the opportunities.
Train staff on how to assess fairly in the age of AI, including how to recognise and avoid bias when using AI detectors.
Without this kind of structured approach, there is a risk of harming student outcomes, damaging institutional credibility, and even sending graduates into the workforce without the skills they need, or with an overdependence on tools they don’t fully understand (Hampton & DeFalco, 2022).
Train Students on Appropriate Use of AI
The risks of students using AI in assessments:

Misconduct and plagiarism: Don’t ask ChatGPT (or any AI tool) to answer an assessment question for you and then try to reword it. That’s still academic misconduct, even if it sounds like your own voice. And it is genuinely difficult to rephrase something that is already well written.
Over-reliance harms your learning: If you let AI do the thinking for you, you won’t develop the critical thinking or problem-solving skills the course is meant to build.
AI can be wrong or misleading: So rather than asking AI to “write” the answer, try answering it yourself first, then ask: “Here are the four points I’ve come up with. Have I missed anything?” Make your prompts specific so the AI only returns dot points or short notes on what you’re missing.
Data privacy is a real concern: Never paste private or sensitive company information into AI tools. If you’re doing an applied business assignment, uploading company information could breach confidentiality agreements with your employer. You never know where your data might end up!
Opportunities for students to use AI to support (not replace) learning:
Use it to brainstorm: If you're choosing from several assignment topics, use AI to help you explore ideas. Try: “Give me 5 potential directions for an essay on [topic]”. Then dig deeper into the ones you like.
Refine your thinking: For material you’re already familiar with, ask AI to compare theories, summarise the pros and cons of a model, or explain a concept in simpler terms, as if you were asking your academic a question. Always double-check the answers and sources.
Create an outline (not a full draft): Prompt AI to suggest headings only based on your assignment brief. Example: “Give me 5 possible subheadings for an essay on digital transformation in higher education”.
Clarify your argument: If your ideas feel messy, try explaining them to AI and asking it to help tighten your logic or check if the structure flows clearly.
Policy First: What Institutions Need to Do to Ensure Fair AI Assessment
When it comes to managing student AI use, the focus often lands on individual academics. But the real responsibility lies with institutions. Without clear, consistent policies, well-intentioned staff are left to interpret grey areas, leading to inconsistent outcomes.
If universities want to uphold academic integrity and student trust, they need to lead with policy. Here’s what that looks like:
Policy must ensure ethical and fair use of AI detection tools
Use detection tools consistently. Define a clear, institution-wide approach: either all student work is checked, or a documented random sampling method is applied (a sketch of one such method follows this list). Ad hoc use introduces bias and exposes staff to scrutiny.
Establish a standardised investigation process. Create clear documentation outlining how suspected AI misuse will be handled, including timeframes, review procedures, and appeals processes.
Require human oversight and transparency. AI tools should never make final decisions. Policies must require assessors to consider how the detector arrived at its decision, along with the overall context: prior work, academic progress, and legitimate improvement.
Ensure transparency. Let students know what tools are being used, how their work will be assessed, and what happens if AI use is suspected. Clarity builds trust.
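To make “documented random sampling” concrete, here is a minimal Python sketch of how a reproducible, auditable sample might be drawn. Everything in it is illustrative: the function name, submission IDs, and audit string are hypothetical, not features of any particular detection product.

```python
import hashlib
import random

def select_for_screening(submission_ids: list[str], rate: float, audit_seed: str) -> list[str]:
    """Pick a reproducible random sample of submissions for AI screening."""
    # Derive a deterministic seed from a published audit string
    # (e.g. "2024-S1-BUS101" recorded in the unit's assessment record),
    # so the sample can be regenerated and reviewed later.
    seed = int(hashlib.sha256(audit_seed.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    sample_size = max(1, round(len(submission_ids) * rate))
    # Sort first so the result doesn't depend on submission order.
    return rng.sample(sorted(submission_ids), sample_size)

# Example: screen 20% of 200 submissions, reproducibly.
ids = [f"SUB-{n:04d}" for n in range(1, 201)]
print(select_for_screening(ids, rate=0.20, audit_seed="2024-S1-BUS101"))
```

Because the seed string is published in the assessment record, anyone reviewing the process can regenerate the exact sample, which removes individual discretion from the selection step.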
Policies must actively address bias and systemic inequality
Implement bias-mitigation strategies. Policy should mandate anonymised marking where possible, especially in flagged or borderline cases.
Monitor assessment patterns. Institutions must track who is being flagged or investigated and review this data regularly to identify any disproportionate impact on particular student groups (see the sketch after this list).
Require team-based case review. Avoid individual judgment calls in misconduct investigations. Multi-person review helps reduce bias and improve consistency.
Embed regular staff training. Ensure academics and assessment teams are trained to understand confirmation bias, unconscious bias, and best practices for ethical AI assessment.
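As a minimal sketch of what that monitoring could look like in practice, an institution might periodically compute detector flag rates per cohort and review any large gaps. The field names and group labels below are hypothetical; in practice you would use whatever cohort attributes your institution already reports on.

```python
from collections import Counter

def flag_rates_by_group(records: list[dict]) -> dict[str, float]:
    """Share of submissions flagged by the detector, per student group."""
    totals: Counter = Counter()
    flagged: Counter = Counter()
    for rec in records:
        totals[rec["group"]] += 1
        if rec["flagged"]:
            flagged[rec["group"]] += 1
    # Rate per group; a persistent gap between groups is a signal
    # that the screening process needs review.
    return {group: flagged[group] / totals[group] for group in totals}

# Illustrative review data (group labels and fields are hypothetical).
records = [
    {"group": "domestic", "flagged": False},
    {"group": "domestic", "flagged": True},
    {"group": "international", "flagged": True},
    {"group": "international", "flagged": True},
]
for group, rate in flag_rates_by_group(records).items():
    print(f"{group}: {rate:.0%} flagged")
```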
Policies must also address legal compliance and data protection

Ensure compliance with data laws. Any AI detection tool in use must comply with relevant privacy laws, including the General Data Protection Regulation (GDPR), the EU AI Act, and the California Consumer Privacy Act (CCPA) where applicable.
Review third-party tools. Universities must vet any AI detection software for compliance, including where and how student data is stored and processed.
Obtain informed consent. If detection tools process any personal data, students must be clearly informed, and their consent must be properly obtained and recorded.
Without these safeguards, institutions risk much more than flawed assessments. They risk alienating students, breaching legal obligations, and damaging their academic credibility. Strong, thoughtful policy is no longer optional; it is essential.
Need support with your AI policy?

Whether you need a ✍️ fresh policy written from scratch, a 🔍 critical review of what you already have, or 📚 practical training materials to educate your students and staff — I can help.
I work with education providers to create clear, ethical, and legally sound AI policies that support learning, reduce risk, and build trust.
✅ Let’s make sure your institution is ready.
📅 Book a free 15-minute discovery call to discuss how we can help your organisation take the next step forward.
👉 Contact me to get started
References
Ammanath, B., 2022. Trustworthy AI: A business guide for navigating trust and ethics in AI. John Wiley & Sons.
Chierici, A., 2021. The ethics of AI. New Degree Press.
Hampton, A. J. & DeFalco, J. A., 2022. The frontlines of artificial intelligence ethics: Human-centric perspectives on technology's advance. Routledge.
Harvard Business Publishing Education, 2024. How students are actually using generative AI. [Online] Available at: https://hbsp.harvard.edu/inspiring-minds/how-students-are-actually-using-generative-ai [Accessed 26 April 2024].
United Nations Educational, Scientific and Cultural Organization (UNESCO), 2023. UNESCO Recommendation on the Ethics of Artificial Intelligence. UNESCO.