Buyer's Guide · 2026

Anti-cheating software for higher education: a 2026 buyer's guide.

Five product categories, deployment models, pricing realities, a usable RFP template, and the red flags every procurement team should know.

TL;DR

  • "Anti-cheating software" is not a single product category. It's five overlapping categories: lock-down browsers, online proctoring SaaS, AI-text detectors, LMS-native integrity modules, and integrated AI-proof platforms.
  • The right purchase depends on what assessment surface you're defending. Take-home essays need different tools from high-stakes proctored exams. Most institutions need a layered combination, not a single vendor.
  • Three deployment models matter: SaaS, on-prem, and hybrid. POPIA / NDPR / GDPR pressures push toward on-prem or sovereign-cloud for biometric-heavy products.
  • Pricing models vary widely: per-active-student, per-exam-attempt, per-faculty flat-fee, and source-license. Per-active-student is the dominant model; flat-fee favours large institutions; source-license suits regulators and specialist bodies.
  • The RFP template at the end of this guide lists the questions every procurement team should put to vendors. The red flags section tells you what to walk away from.

Who this guide is for

This guide is written for the people who actually run procurement at higher-ed institutions and certification bodies: Vice-Chancellors, Registrars, Deputy VCs (Academic), IT Directors, Procurement Officers, and the operational staff who own the assessment platform. It assumes you've already accepted that the post-ChatGPT threat model is real and that policy alone isn't sufficient. If you haven't gotten there yet, start with our piece on what AI-proof exam proctoring actually means.

The guide does not name and rank specific vendors. The vendor landscape changes too fast for that to be useful in print, and procurement contexts vary too much for a single ranking to apply. Instead, it gives you the framework to run your own evaluation honestly.

The five categories

1. Lock-down browsers

Examples: Respondus LockDown Browser, Safe Exam Browser (open source), various LMS-bundled equivalents. These are browsers that, when launched, restrict what the student can do — typically blocking other tabs, copy-paste, screenshots, and printing. They're the cheapest category and the longest-established.

What they do well: prevent casual cheating in low-stakes assessments, integrate cleanly with most LMSs, and keep cost and complexity low. What they don't do: prevent same-device AI agents (browser-only doesn't see the OS layer), prevent second-device cheating (no camera coverage), prevent paid impersonation (no continuous identity), or work offline. (More detail in our piece on browser vs OS-level lock-screens.)

Right fit: low-stakes assessments where the institution wants a deterrent but not a defensible audit trail. Wrong fit: high-stakes finals, certification exams, anything that has to survive a well-run appeal.

2. Online proctoring SaaS

Examples: Honorlock, ProctorU/Meazure Learning, Examity, ProctorTrack. The legacy category: webcam-based proctoring with continuous video upload and either live human invigilators or post-hoc review. Bolt-on AI text detection is now standard.

What they do well: established procurement category, broad integrations, mature support operations, strong familiarity in US higher ed. What they don't do well in 2026: native OS-level lock, dual-camera, on-device flagging, true offline mode, sovereignty-sensitive deployment. (See our direct comparison.)

Right fit: institutions in low-load-shedding, high-bandwidth, US-FERPA-style regulatory contexts where the legacy architecture still fits. Wrong fit: SA institutions, most of Africa, anywhere with intermittent power, anywhere with strict data sovereignty rules.

3. AI-text detectors

Examples: GPTZero, Turnitin AI Writing Detection, Originality.ai, Copyleaks. Standalone or LMS-integrated tools that score essay submissions on probability of AI generation.

What they do: produce a probability score per document, sometimes with sentence-level highlighting. What they fail at: defensible decision support in high-stakes cases, with false-positive rates that fall hardest on ESL writers. (See why detection-only fails.) Several major detectors have been quietly de-emphasised by their own publishers since 2023.

Right fit: screening tool for generating discussion-starter signals on take-home work, when paired with human review and never used as sole basis for misconduct findings. Wrong fit: any context where the score itself is treated as evidence.

4. LMS-native integrity modules

Examples: Moodle's Quiz with secure browser settings, Canvas's New Quizzes with Studio, Blackboard's SafeAssign, D2L Brightspace's quiz security features. Built-in features of the learning management system that handle some integrity functions natively.

What they do well: zero integration cost, no second vendor relationship, baseline coverage. What they don't do: anything beyond the LMS's built-in capabilities, which typically lack OS-level lock, dual-camera support, or on-device flagging. They're a feature set, not an architecture.

Right fit: institutions that have already standardised on a single LMS and want minimum-viable integrity with no procurement overhead. Wrong fit: high-stakes exams, certification, anywhere needing defensibility beyond "the LMS has a setting for that."

5. Integrated AI-proof platforms

Examples: a smaller and newer category — Crux (built for SA / Africa), several US- and EU-based platforms emerging since 2023, plus regional players in Asia and Latin America. These platforms ship the full prevention-first stack: OS-level lock-screen, continuous identity, dual-camera, on-device flagging, offline-capable, evidence-chain logging.

What they do well: address all four post-ChatGPT cheating modes architecturally; produce defensible evidence chains; work in adverse-condition markets. What's harder: newer category means smaller installed base, longer sales cycles for risk-averse procurement, sometimes less mature integrations with legacy LMS stacks.

Right fit: high-stakes assessment, certification, anywhere the credential's defensibility matters. Wrong fit: institutions that only need lock-down browser-level deterrence and want to optimise for cost.

Choosing a deployment model

Three options, each with implications.

SaaS. Vendor-hosted, usually US or EU regions. Lowest setup cost, fastest time-to-first-exam. Hard to deploy in jurisdictions with strict cross-border rules (POPIA, NDPR, GDPR with biometric processing). Right for institutions with no sovereignty constraint.

On-prem. Deployed in the institution's own infrastructure (or sovereign-cloud region). Higher setup cost, requires institutional IT capability to run, but data never leaves the institution's control. Right for public-sector institutions, certification bodies, anywhere data sovereignty is a procurement non-starter.

Hybrid. Vendor-managed control plane (orchestration, updates, support) + customer-controlled data plane (where biometric and exam data lives). Increasingly common for integrated AI-proof platforms. Combines SaaS operational simplicity with on-prem data sovereignty.

SaaS-only, US-hosted vendors processing biometric data are, in 2026, an increasingly hard procurement story to tell at any African public institution.
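To make the hybrid split concrete, here is a minimal sketch of the residency check a procurement team is implicitly running: only the data plane (where biometric and exam data lives) is held to the jurisdiction's rules, while the control plane may sit elsewhere. The class structure, region names, and rule are illustrative assumptions, not any vendor's actual policy engine.

    from dataclasses import dataclass

    @dataclass
    class DeploymentOption:
        name: str                  # "saas", "on_prem", or "hybrid"
        data_plane_region: str     # where biometric and exam data lives
        control_plane_region: str  # where orchestration and updates run

    def residency_compliant(option: DeploymentOption, required_region: str) -> bool:
        # Simplifying assumption: POPIA/NDPR/GDPR-style residency rules
        # constrain only the data plane; the control plane may sit elsewhere.
        return option.data_plane_region == required_region

    options = [
        DeploymentOption("saas", data_plane_region="us-east", control_plane_region="us-east"),
        DeploymentOption("on_prem", data_plane_region="za", control_plane_region="za"),
        DeploymentOption("hybrid", data_plane_region="za", control_plane_region="eu-west"),
    ]

    viable = [o.name for o in options if residency_compliant(o, required_region="za")]
    print(viable)  # ['on_prem', 'hybrid'] (the US-hosted SaaS option fails)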

Pricing models you'll see

Anti-cheating software pricing is not standardised, but the common models are these:

Per-active-student-per-year. The dominant model. The institution pays an annual fee per student who actually uses the platform during the year. Range varies hugely — from R30/student/year for lock-down browsers to R300+/student/year for full AI-proof platforms. Aligns vendor incentives with usage, but exposes institutions to enrolment-driven cost variability.

Per-exam-attempt. Common for proctoring SaaS. The institution pays per exam taken. Typical range R20–R200 per attempt depending on the level of human invigilation involved (live human review is at the high end). Right for institutions with light exam load; expensive at scale.

Per-faculty flat-fee. A single annual price for unlimited use within a defined faculty or department. Common for institutional-tier contracts. Ranges from roughly R200,000 to R2M per faculty per year depending on size and feature set. Predictable budgeting; favours large institutions.

Source license. Some vendors offer source-code access under a license fee, with annual support fees. Pricing varies enormously. Right for certification bodies and specialist regulators that need full code-level audit and roadmap influence.

What to watch for in pricing: per-faculty floor pricing that turns "per-student" into a flat fee for small departments. Hardware bundles that fold tablet procurement in at vendor markup. Setup fees out of proportion to the actual integration work. Auto-renewal terms with short notice windows. None of these is necessarily wrong, but they all need explicit attention in the contract.
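To see how these models interact with cohort size, here is an illustrative back-of-envelope comparison in Python. Every figure (cohort size, rates, setup fee) is a placeholder drawn from the ranges above, not a quote from any vendor; plug in your own numbers before drawing conclusions.

    STUDENTS = 20_000        # active students per year (placeholder)
    EXAMS_PER_STUDENT = 4    # proctored attempts per student per year
    FACULTIES = 6
    YEARS = 3
    SETUP_FEE = 250_000      # hypothetical one-off setup cost, in rand

    def per_active_student(rate=300):      # R/student/year, top of the range above
        return SETUP_FEE + rate * STUDENTS * YEARS

    def per_exam_attempt(rate=80):         # R/attempt, mid-range
        return SETUP_FEE + rate * STUDENTS * EXAMS_PER_STUDENT * YEARS

    def per_faculty_flat(rate=1_000_000):  # R/faculty/year, mid-range
        return SETUP_FEE + rate * FACULTIES * YEARS

    for model in (per_active_student, per_exam_attempt, per_faculty_flat):
        print(f"{model.__name__}: R{model():,}")
    # per_active_student: R18,250,000
    # per_exam_attempt: R19,450,000
    # per_faculty_flat: R18,250,000

At these placeholder numbers the models converge, which is the point: the right question is not "which model is cheapest in the brochure" but "which model is cheapest at our actual exam load, over the full contract term".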

The 90-day pilot framework

No serious institutional procurement should sign a multi-year contract without a structured pilot. The 90-day framework:

Days 1–30: scope and contract. Define the pilot faculty, the exam window, the technical requirements, the support requirements. Sign a pilot agreement that's pilot-scoped, not a multi-year contract hidden in a pilot wrapper. Run the DPIA. Communicate to students and lecturers.

Days 31–60: integration and diagnostic. Deploy the platform. Run integration testing with the LMS, the SIS, the identity-management system. Run a low-stakes diagnostic exam. Capture every issue. Iterate.

Days 61–90: real exam and debrief. Run one real high-stakes exam end-to-end. Capture the operational data: support tickets, candidate experience, lecturer experience, time-to-flag-resolution, evidence-package quality. Run a structured debrief with all stakeholders. Make the go/no-go decision based on the actual data, not the vendor's pitch deck.

Vendors that resist this structure are vendors to walk away from.

RFP template (10 questions)

The questions every procurement document for anti-cheating software in 2026 should put to vendors:

  1. OS-level lock-screen: Do you ship a native lock-screen on macOS, Windows, and Android? Browser-only or browser-extension does not satisfy this requirement.
  2. Continuous identity: Is identity verification continuous throughout the exam (every 30–90 seconds), or one-time at start?
  3. Dual-camera: Does your platform support a workspace-coverage second camera, separate from the device camera?
  4. On-device flagging: Does behavioural anomaly detection run on the student device (output: clipped segments) or stream continuous video to your servers?
  5. Offline mode: Can the exam run fully offline after a single handshake to download? What happens if connectivity drops mid-exam?
  6. Deployment: What deployment models do you support — SaaS, on-prem, hybrid? Which regions are SaaS-deployable?
  7. Data sovereignty: Where does student personal information live, by default? Can we contractually constrain this to our jurisdiction?
  8. Evidence chain: Are flag records cryptographically timestamped and tamper-evident? Can the candidate review their own evidence pre-appeal? (A minimal sketch of tamper-evident logging follows after this list.)
  9. POPIA / NDPR / DPA / GDPR posture: Will you sign our standard operator agreement? What's your breach notification SLA?
  10. Pricing: What's the all-in cost for [our cohort size] over 3 years, including setup, hardware co-procurement (if any), and support?

A vendor that answers all ten cleanly is operating at the level your procurement needs. A vendor that hedges on three or more is exporting compliance, operational, or commercial risk to your institution.
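On question 8, "tamper-evident" has a concrete meaning worth probing in technical evaluation. Below is a minimal sketch of one standard approach, assuming a SHA-256 hash chain over timestamped flag records: editing any record breaks every hash after it. Real platforms would add signed timestamps from a trusted time source; the field names and events here are hypothetical.

    import hashlib, json, time

    def append_flag(chain: list, event: dict) -> dict:
        prev_hash = chain[-1]["hash"] if chain else "0" * 64
        record = {
            "timestamp": time.time(),  # when the flag was raised
            "event": event,            # e.g. a clipped-segment reference
            "prev_hash": prev_hash,    # links this record to its predecessor
        }
        payload = json.dumps(record, sort_keys=True).encode()
        record["hash"] = hashlib.sha256(payload).hexdigest()
        chain.append(record)
        return record

    def verify(chain: list) -> bool:
        # Recompute every hash; any edited record breaks the chain from
        # that point on, which is what makes the log tamper-evident.
        prev = "0" * 64
        for rec in chain:
            body = {k: rec[k] for k in ("timestamp", "event", "prev_hash")}
            expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if rec["prev_hash"] != prev or rec["hash"] != expected:
                return False
            prev = rec["hash"]
        return True

    log: list = []
    append_flag(log, {"type": "gaze_away", "clip": "segment_014.mp4"})
    append_flag(log, {"type": "second_face", "clip": "segment_021.mp4"})
    assert verify(log)
    log[0]["event"]["type"] = "edited"  # tamper with the first record
    assert not verify(log)              # verification now fails

A vendor whose answer to question 8 amounts to "the records live in our database" is describing access control, not tamper evidence. Ask for the chaining or signing mechanism by name.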

Red flags in vendor pitches

Things that should make a procurement team pause and ask follow-up questions:

  • Resistance to a structured, pilot-scoped trial before a multi-year commitment.
  • Hedging on three or more of the ten RFP questions above.
  • AI-detector scores presented as standalone evidence rather than as signals for human review.
  • SaaS-only, US-hosted biometric processing pitched into a sovereignty-constrained jurisdiction.
  • Browser-based lock-down described as equivalent to an OS-level lock-screen.
  • Auto-renewal terms with short notice windows, opaque setup fees, or hardware bundled at vendor markup.

Conclusion

Anti-cheating software is not the place to optimise for procurement convenience. The credential value of a degree or certification is downstream of the integrity of the assessment that produces it. Cutting corners on the platform that defends that integrity is cutting corners on the institution's core product.

Run the pilot honestly. Ask the ten questions. Walk away from the red flags. The platform you end up with should be one you'd be willing to defend, in front of a Senate, a tribunal, or a regulator — because in some exam window, you'll be asked to.


Run the ten questions on us.

Send your RFP. We'll answer all ten questions in writing, in plain language, with reference customers and contract terms — no marketing redirection.

Request a demo