By Emil Bjerg, journalist and editor
Some MBA students are paying six-figure tuition to let a language model do their thinking. The credential looks identical to everyone else’s. The employers hiring them are only now realising what they’re getting.
In the spring of 2025, researchers at MIT’s Media Lab published findings that carry direct implications for anyone running – or paying for – a business education. A preprint study titled “Your Brain on ChatGPT” divided 54 adults into three groups and asked them to write essays: one group using ChatGPT, one using Google, one using no tools at all. Researchers monitored brain activity using EEG throughout. ChatGPT users showed the weakest neural connectivity of any group by a significant margin. The researchers named the cumulative effect “cognitive debt.” After the sessions, 83% of ChatGPT users could not quote a single line from the essay they had just finished writing. The other two groups had no such problem.
The Scale of the Problem
At King’s Business School, part of King’s College London, a peer-reviewed study published in 2024 produced the most precise picture available of how AI-assisted non-compliance actually works. During the 2023-24 academic year, the school introduced a mandatory AI declaration form as part of every coursework submission. Students were required to confirm either that they had used AI tools or that they had not. The policy explicitly stated that declared AI use, within guidelines, would carry no grade penalty. Transparency had no cost.
The study found that 74% of students who used AI nonetheless failed to declare it. In interviews, students explained why: declaring felt, as one put it, like “admitting to plagiarism.” AI use had become so normalised in group work that one student described ChatGPT as “the fourth man” on their project team. Non-disclosure was the default because disclosure made you conspicuous.
This is the behavioural logic business schools are now managing: a student body trained to weigh costs against benefits, in an environment where the cost of using AI is near-zero, the benefit is a completed assignment, and the probability of detection is low.
Wharton’s AI Experiment
The issue is exacerbated by the fact that generative AI has been able to produce output strong enough to pass MBA assessments for years.
In January 2023, Wharton professor Christian Terwiesch published a white paper titled “Would ChatGPT Get a Wharton MBA?” after feeding his own Operations Management final exam into ChatGPT. The bot passed, scoring a B to B-. On the first question – identifying the bottleneck in a seven-part iron-ore refinery process – Terwiesch would have awarded an A+. On inventory turns and working capital requirements, another A+. It stumbled on complex multi-variable calculations but recovered when given human hints, as a student might.
“ChatGPT,” Terwiesch wrote, had shown “a remarkable ability to automate some of the skills of highly compensated knowledge workers – specifically the knowledge workers in the jobs held by MBA graduates including analysts, managers, and consultants.”
His intention was to prompt a rethinking of the curriculum. What he also demonstrated was that the instrument MBA students were being assessed with could be defeated by a $20-a-month subscription. The models have improved considerably in the three years since Terwiesch’s paper.
MBAs Have Always Had an Integrity Issue
The willingness to cut corners has always existed among students – even in expensive MBA programmes. AI has simply made cutting them easy.
A 2025 article in Springer’s Journal of Academic Ethics offered the most honest structural diagnosis in the available literature: “AI does not create the crisis of academic integrity; it exacerbates existing structural vulnerabilities within the contemporary higher education system. Massified, depersonalised, standardised assessment regimes, designed for administrative convenience rather than pedagogical fidelity, are especially susceptible.”
The MBA case analysis – for decades the signature assessment of graduate business education, designed to test judgment through complexity – maps almost perfectly onto what a large language model does well: synthesise information, apply frameworks, produce coherent prose under a deadline. The King’s Business School research found that students were not declaring AI use even when it carried no penalty, because the assessment structure gave them no reason to engage beyond the output. Only the final document matters – the process is invisible.
Oral defences, live case presentations, and time-pressured in-room assessments are harder to outsource. They are also more expensive and time-consuming to run, which is why most business schools have not restructured their programmes around them at scale – yet.
What Employers Are Starting to See
Employers are starting to notice. In 2025, 59% of hiring managers reported suspecting candidates of using AI tools to misrepresent their abilities during assessments. The AACSB, the global accreditation body for business schools, identified the structural consequence in April 2026: “When anyone can produce impressive outputs, AI-assisted work is no longer proof of competence — it’s merely a starting point. Employers must look for additional evidence that a candidate can do the work reliably, not merely present it well. They want to see how job candidates frame problems when the data is incomplete, how they choose trade-offs when stakeholders disagree, how they validate claims when AI produces plausible text, and how they stay accountable when the easy move is to outsource thinking to a tool.” That description is a precise inventory of everything a student skips when AI writes the assignment.
The Arithmetic
Students who use AI to complete MBA assessments without engaging with the material are not making an irrational decision. Within a system that prices the credential over demonstrated competence – and where detection is unreliable and the degree still opens every door regardless of how it was earned – the calculation is straightforward.
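That calculation can be sketched as a simple expected-value comparison. The probabilities and payoffs below are purely illustrative assumptions for the sake of the sketch – they are not figures from the King’s, MIT, or Wharton studies cited above.

```python
# Illustrative expected-value sketch of the student's decision.
# All numbers are hypothetical assumptions, not data from this article's sources.

def expected_payoff(benefit: float, p_detection: float, penalty: float) -> float:
    """Expected payoff of using AI on an assignment:
    the benefit if undetected, minus the penalty weighted by detection risk."""
    return (1 - p_detection) * benefit - p_detection * penalty

# Assumed values: a completed assignment is worth 1.0, detection is rare,
# and the penalty (say, a failed module) is costly but survivable.
honest_effort_cost = 0.4  # normalised cost of doing the work yourself
ai_route = expected_payoff(benefit=1.0, p_detection=0.05, penalty=3.0)
honest_route = 1.0 - honest_effort_cost

print(f"AI route:     {ai_route:.2f}")      # 0.80 under these assumptions
print(f"Honest route: {honest_route:.2f}")  # 0.60 under these assumptions
```

Under these assumed numbers the AI route dominates; the point of the sketch is that only a much higher detection probability or a much harsher penalty flips the inequality.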
What those students are not building is the thing the degree is supposed to certify: the ability to walk into a room with incomplete information and real consequences, and produce a defensible analysis without a machine to do the thinking. That moment comes for every MBA graduate – perhaps in a client presentation, a board meeting, or a deal that is going wrong at speed.
The MIT researchers have a name for what accumulates in its place during two years of AI-assisted coursework. They call it debt. The difference from financial debt is that it carries no interest rate and sends no reminders. It simply waits – invisible on a CV, undetectable in many interviews – until the first time genuine judgment is required, under pressure, without a prompt box in sight.