AI Detection for Schools: A Department Guide
A practical guide for heads of department considering AI detection tools. Covers policy, tool selection, rollout, and how to handle the inevitable difficult conversations.
Your school or department needs an approach to AI detection. Not just a tool — a policy, a process, and a way to handle the conversations that follow.
This guide is for heads of department, academic leads, and school administrators who need to make practical decisions about AI detection.
Start with policy, not tools
Before choosing a tool, your department needs to answer three questions:
1. What is your position on student AI use?
Common positions include:
- Complete ban — no AI tools permitted in any assignment
- Allowed with disclosure — students can use AI but must declare how
- Permitted for specific tasks — AI allowed for research and brainstorming, not for writing
- Case by case — individual teachers decide
There's no universally correct answer. But everyone in the department needs to work from the same policy, and students need to understand it clearly.
2. What happens when AI use is detected?
Define the consequences before you need them:
- First instance: conversation, resubmission?
- Repeat instances: formal warning, grade penalty?
- Severe cases: disciplinary proceedings?
Unclear consequences lead to inconsistent handling, which leads to fairness complaints.
3. How will you handle false positives?
This is the question most departments skip, and it causes the most problems. Define an appeals process:
- Who investigates?
- What evidence is considered?
- How long does investigation take?
- What support is available to the student during investigation?
How should departments choose an AI detection tool?
For a department rollout, you need:
Consistency. Every teacher using the same tool with the same thresholds avoids the problem of one teacher being strict and another lenient.
Explainability. Teachers need to understand and explain results. A tool that says "78% AI" gives a teacher nothing to work with. A tool that says "this passage was flagged because of formulaic hedging language" gives them a starting point for a conversation.
Volume. A department of 10 teachers, each marking 5 classes of 30 students, needs to screen 1,500+ essays per term. Individual free tiers won't cut it.
Privacy. Student data handling matters — especially under the UK GDPR and FERPA (US). Check whether the tool stores student work, uses it for training, or shares it with third parties.
Rollout recommendations
Phase 1: Pilot (1 month)
Pick 2-3 teachers to test the tool on a set of marked essays. They should:
- Run a mix of known human and known AI-generated text through the tool
- Test with ESL student work specifically
- Note false positives and false negatives
- Assess whether the results are actionable
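The last two bullets are easiest to act on if the pilot teachers record each result in a consistent form. A minimal sketch of how to tally pilot outcomes — the data below is entirely hypothetical, and no particular detection tool is assumed:

```python
# Sketch: summarising a pilot run. Each record pairs the ground truth
# ("human" or "ai") with whether the tool flagged the essay.
# All records below are made-up pilot data for illustration.

pilot = [
    ("human", False), ("human", True),  ("human", False), ("human", False),
    ("ai", True),     ("ai", True),     ("ai", False),    ("human", False),
]

false_pos = sum(1 for truth, flagged in pilot if truth == "human" and flagged)
false_neg = sum(1 for truth, flagged in pilot if truth == "ai" and not flagged)
humans = sum(1 for truth, _ in pilot if truth == "human")
ais = len(pilot) - humans

print(f"False positive rate: {false_pos / humans:.0%}")  # flagged human work
print(f"False negative rate: {false_neg / ais:.0%}")     # missed AI work
```

Even a rough table like this gives the department something concrete to discuss before rollout: is a 20% false positive rate on known-human work acceptable for your process?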
Phase 2: Staff training (1 session)
Before the tool goes department-wide, every teacher needs to understand:
- What the tool measures and how
- What scores mean (probability, not proof)
- How to interpret flagged passages
- The department's policy and process for follow-up
- How to have a fair conversation with a flagged student
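The "probability, not proof" point is worth making concrete in training, because base rates surprise people. A worked example with hypothetical numbers (the prevalence, sensitivity, and false positive rate below are illustrative assumptions, not figures for any real tool):

```python
# Why a "high" score is not proof: a worked base-rate example.
# All three numbers are hypothetical, chosen only to illustrate the point.

prevalence = 0.10   # assume 10% of submitted essays actually used AI
sensitivity = 0.90  # assume the tool flags 90% of genuine AI essays
fpr = 0.05          # assume it wrongly flags 5% of genuine human essays

# P(essay is AI | tool flagged it), by Bayes' rule
flagged_ai = sensitivity * prevalence
flagged_human = fpr * (1 - prevalence)
p_ai_given_flag = flagged_ai / (flagged_ai + flagged_human)

print(f"P(AI | flagged) = {p_ai_given_flag:.0%}")
```

Under these assumptions, roughly one in three flagged essays is genuine human work. That is the number teachers should have in mind before any conversation with a student.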
Phase 3: Department rollout
Roll out with clear guidelines:
- Detection is a screening tool, not a verdict
- No formal action based on a score alone
- Flagged essays go through the defined process
- Teachers document their investigation and reasoning
Phase 4: Review (end of term)
Review how it's going:
- How many essays were flagged?
- How many were genuine AI use vs false positives?
- Were ESL students disproportionately flagged?
- Did the process feel fair to students?
- What needs adjusting?
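The ESL question in particular is answerable with simple counts rather than impressions. A sketch of the check, using made-up end-of-term figures — the group labels and counts are hypothetical:

```python
# Sketch: checking whether one group is flagged disproportionately.
# The counts below are hypothetical end-of-term figures.

groups = {
    # group: (essays screened, essays flagged)
    "ESL":     (120, 18),
    "non-ESL": (480, 24),
}

rates = {g: flagged / screened for g, (screened, flagged) in groups.items()}
for g, rate in rates.items():
    print(f"{g}: {rate:.0%} flagged")

ratio = rates["ESL"] / rates["non-ESL"]
print(f"ESL essays flagged {ratio:.1f}x as often")
```

A ratio well above 1 doesn't prove the tool is biased on its own, but it is exactly the kind of finding that should trigger an adjustment to thresholds or process before the next term.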
What are the common pitfalls of AI detection in schools?
Over-reliance on scores. A department that treats "70%+ = guilty" will generate false accusations. Scores are starting points, not conclusions.
Inconsistent application. If some teachers use the tool and others don't, students will notice. Apply it consistently or don't use it.
Ignoring the false positive problem. ESL students and students with learning differences are flagged at higher rates. Your process needs to account for this explicitly.
Not communicating with students. Tell students you're using AI detection. Explain why. Explain the process. Transparency prevents the adversarial dynamic that makes everything worse.
Department pricing
Most AI detection tools offer institutional pricing. Is It AI? offers:
- Teacher Lite (£9.99/month) — 50 scans for individual teachers
- Teacher Pro (£19.99/month) — 200 scans with flagged passages and full scan history
- Department plans — custom pricing for multi-seat, volume access. Contact us to discuss.
The bottom line
AI detection works best as part of a clear policy with a fair process. The tool is the easy part. The policy, training, and conversations are what make it work.
Start with policy. Choose a tool that explains its results. Train your staff. Communicate with students. And review regularly.
Contact us about department plans — we'll set up a pilot for your team.