Norwegian universities are flagging AI cheating more often: 190 students were sanctioned for suspected use of artificial intelligence in exam-related cheating in 2025, according to figures collected by TV 2 from the country's ten largest higher education institutions.
Under Norway’s legal framework for higher education, confirmed cheating can lead to the exam being annulled and the student being excluded (suspended) from studies and from the right to sit exams for a set period: typically up to one year, and up to two years in particularly serious cases.
AI cheating cases doubled in 2025
TV 2’s data indicate that the number of students sanctioned for suspected AI-assisted cheating doubled year on year: 190 cases in 2025, up from 95 in 2024 (as previously reported by the higher education outlet Khrono). The figures cover Norway’s ten largest institutions and reflect cases where students were formally found to have cheated.
Several institutions reported that AI-related cases now represent a significant share of their overall caseload. At the University of Oslo (Universitetet i Oslo, UiO), 52 of 71 confirmed cheating cases in 2025 involved suspected AI use. Oslo Metropolitan University (OsloMet) reported 24 AI-related cases out of 70, while the Norwegian University of Science and Technology (Norges teknisk-naturvitenskapelige universitet, NTNU) reported 10 out of 35.
Other institutions cited in the dataset include the University of Bergen (Universitetet i Bergen, UiB), the University of Agder (Universitetet i Agder, UiA), UiT The Arctic University of Norway (UiT Norges arktiske universitet, UiT), and the University of South-Eastern Norway (Universitetet i Sørøst-Norge, USN), among others.

The minister’s warning: trust in degrees is at stake
Norway’s Minister of Research and Higher Education (Forsknings- og høyere utdanningsminister) Sigrun Aasland said the increase is “not good” and stressed that cheating must have consequences.
Aasland framed the issue as more than a disciplinary problem: exams are meant to measure students’ actual skills, and employers and society must be able to trust that grades and diplomas reflect real competence. At the same time, she noted that the majority of students follow the rules, and the policy response should not treat higher education as a system built around suspicion.
Fewer take-home exams, more controlled assessments
The figures are feeding into a broader debate in Norway about whether universities should rethink assessment methods.
A government-appointed expert committee on artificial intelligence in higher education (often referred to in Norwegian coverage as the Malthe-Sørenssen committee, after its chair Anders Malthe-Sørenssen) has advised institutions to reduce the use of take-home exams and other non-controlled formats, which can be difficult to safeguard against misuse of generative AI.
The committee argues that the sector does not need to revert to “old-style” exams as they looked decades ago, but it does need assessment designs that remain meaningful in a world where powerful text and code generation tools are widely available. Suggested alternatives include a stronger mix of in-person written exams, oral examinations, and supervised assignments, as well as assessment formats that test process and reasoning rather than final text alone.

Why AI detection tools remain controversial
Norwegian experts and student representatives have also pointed to a key dilemma: many AI-detection tools are unreliable, hard to audit, and risk harming students’ legal protection (rettssikkerhet) if used as the primary basis for accusations.
Professor Morten Goodwin (University of Agder), who researches artificial intelligence, has argued that it can be difficult both to prove that a student used AI to cheat and to prove that they did not, especially as students adjust their writing to avoid detection and as models produce increasingly “human-like” text. In this context, universities face pressure to build assessment systems that are robust even without relying on automated detectors.
Student representatives have made similar points. The Norwegian Students’ Organisation (Norsk studentorganisasjon, NSO) has called for clearer guidance and teaching on responsible AI use, arguing that students need structured training on what is allowed, what must be disclosed, and how to use AI tools in ways that support learning rather than replace it.
A wider Nordic debate about assessment in the AI era
Norway’s debate reflects a wider discussion across the Nordic region, where universities have been balancing digital assessment, academic integrity, and student rights for years—often relying on trust-based systems and flexible exam formats.
What is changing is the scale and accessibility of generative AI. As Norway considers reducing take-home exams and increasing controlled formats, the policy challenge will be to protect the credibility of degrees without turning higher education into a surveillance-heavy environment—while still teaching students how to use AI tools responsibly in academic and professional life.
In the coming months, the government is expected to use the expert committee’s work as an input for further guidance to universities and colleges. The key question for 2026 is whether institutions can redesign assessments quickly enough to keep pace with AI capabilities, while maintaining fairness, transparency, and trust in Norway’s higher education system.