Interview with Nitish Jain of SP Jain Group
Artificial intelligence is changing business education, but the real challenge is helping students build judgment, adaptability, and confidence beyond technical skills.
As artificial intelligence reshapes how students learn and how institutions operate, business schools are rethinking what future leadership education should deliver. In this interview, Nitish Jain explains why judgment, global exposure, and the ability to question technology may matter more than traditional classroom knowledge.
Your career has been closely tied to the growth and global presence of SP Jain School of Global Management. When you reflect on that journey, was there a moment that changed how you think about leadership or the role of business schools?
In the early years, I spent a lot of time doing what most institution builders do: I studied the best schools in the world and asked how we could match them. How could we build something credible?
But credible compared to what? The schools I was benchmarking against had barely changed in decades. Prestigious, yes. But built for a world that was quietly becoming unrecognisable.
That’s when the real question hit me—not “how do we build a great business school” but “what does a business school actually need to do?” And I didn’t like the answer most schools were living by: hand students a body of knowledge, attach a credential, and wish them well. I thought that was a false promise.
So we made a decision that, at the time, many thought was impractical. No home campus. Every student would live and work across Dubai, Singapore, Sydney, London, and Mumbai. Not visit, live. Because if the world is genuinely global and genuinely unpredictable, you can’t prepare people for it inside a single building in a single city. You just can’t.
Many leaders experience a moment that reshapes their perspective. Was there an experience that made you rethink how business schools should prepare students for the realities of today’s business world?
I’d be standing at a graduation, looking at students who had worked hard and trusted us with twelve months of their lives, and quietly wondering how much of what they’d learned would still be applicable in five years. The honest answer was: less than we’d like to believe.
The skills that were being taught as core MBA competencies—financial modelling, data analysis, competitor analysis—were genuinely valuable when we designed the program. Then AI arrived and could do all of that in minutes. The knowledge hadn’t become wrong. It had just stopped being the differentiator.
That’s not a design problem. It’s a moral one. Are we building programs around what students should know, or around what they should be capable of when the world looks nothing like the one we prepared them for? Those two things sound similar. They’re not, and it’s the question the entire industry needs to sit with.
Business education is changing quickly, especially with the rise of artificial intelligence. From your perspective, what are the biggest changes taking place in how students learn and how schools operate?
For most of the history of education, teaching has been an act of faith. You lecture students, you set the exam, and somewhere in between, you hope the learning happened. You never really knew. AI has changed that, and I think it’s the most underappreciated shift in education today.
We can now see inside the learning process as it unfolds. Where a student is struggling, which concept isn’t landing, and what needs to change. Faculty can adjust in real time rather than discovering the problem at the end of the semester. And the learning itself can be continuously shaped around each student. Their pace, their gaps, their career aspirations. A student heading into supply chain and one heading into entrepreneurship are learning the same concepts through completely different lenses.
For the first time in the history of education, we don’t have to choose between scale and personalisation. That’s never been possible before.
Technology can make learning faster and more accessible, but education is also about discussion, mentorship, and judgment. How do you see the relationship between AI tools and the human side of learning evolving?
Most of the education world is getting the framing slightly wrong. The debate is presented as a tension. AI versus human interaction, efficiency versus depth. I think that’s the wrong frame entirely.
What AI has actually done, if you design it properly, is give great educators more room to teach. When a student arrives at class having already worked through the foundational material with an AI tutor, the faculty member doesn’t have to spend the hour delivering content. They can provoke. They can challenge. They can sit with a student in the discomfort of a genuinely hard question rather than rushing to the next slide. The best teachers have always wanted to do that, but haven’t had the space to.
So the relationship isn’t AI versus the human side of learning. It’s AI creating the conditions for the human side to actually flourish. That’s a very different conversation, and a much more interesting one.
Companies today expect graduates to work confidently with new technologies while still making sound decisions. How should business schools rethink the learning experience so students develop both practical skills and strong judgment?
There’s an assumption buried in this question that’s worth challenging. The assumption is that practical technology skills and sound judgment are two separate things that need to be developed in parallel. I think they’re converging, and fast.
As AI becomes more capable, the practical skills gap closes quickly. Students can learn to work with new tools in weeks. Judgment takes years. And here’s the paradox: the better AI gets at the practical side, the more judgment becomes the only real differentiator. The graduate who can use every tool in the market but can’t tell when the output is wrong, or when the question itself is wrong, is less valuable than they appear.
So the rethink I’d push schools toward isn’t about the balance between technology and judgment. It’s about recognising that judgment is the destination, and everything else, including technology, is in service of that.
As students begin using tools like AI-ELT, what have you observed about how it changes the way they prepare for classes, work on projects, or build confidence in their understanding?
The change that has surprised me most is the confidence it has built, and it’s a specific kind of confidence. The tutor isn’t just delivering content; it’s shaping how a student engages with that content based on where they’re headed in their careers.
A student aspiring to a career in investment banking is being taught economics through the lens of pricing models, yield curves, and capital markets. One going into marketing is learning the same concepts through consumer behaviour and brand elasticity. Same topic, same classroom, completely different preparation. The system knows where each student is headed and shapes the learning accordingly.
We’ve seen graduates clear job interviews in roughly half the attempts compared to earlier cohorts, and I think that’s because they’ve learned not just the subject, but how it applies to the career they’re actually building. That’s far more powerful than generic preparation.
When new technologies appear, leaders often face pressure to adopt them quickly. How do you decide when a technology truly adds value to learning and when it might simply be a trend?
I ask one question: Does this solve a real pedagogical problem, or does it solve an optics problem?
Most institutions adopt technology to appear current, to signal to prospective students and rankings bodies that they’re not standing still. I understand that pressure. But it’s a terrible basis for decisions about how people learn.
The test I apply is this: what specific limitation in our current model does this address? Is there evidence, even preliminary, that it improves genuine learning outcomes? And what does it cost in terms of faculty time, student attention, and institutional focus? If I can’t answer that first question concretely, the tech doesn’t belong in the classroom yet.
We built ELO in 2018, two years before COVID, not because we saw the pandemic coming but because we had a concrete problem: teaching students across multiple cities simultaneously without sacrificing classroom quality. The technology followed the problem, not the other way around. That sequencing, problem first, technology second, is an important discipline to cultivate.
In times of rapid change, leaders often have to make decisions without having all the answers. How do you approach those moments, especially when the choices could shape how students learn in the years ahead?
When it comes to decision-making, what I’ve learned is that the goal isn’t certainty; it’s directional clarity. You can’t know exactly where you’ll end up, but you can be clear about the direction and the values guiding your decisions. If those are solid, you can adjust execution as new information arrives without losing your footing.
When we committed to a multi-campus model, we didn’t have a spreadsheet proving it would work. When we invested in ELO studios, we didn’t have longitudinal data. We had a clear understanding of the problem and enough early evidence to move. That has to be sufficient because complete certainty is rarely on offer.
The rest is adjustment. You stay honest about what the evidence is telling you, and you correct as you go. Some would call that reckless, but it’s the only way to lead when the ground is shifting.
As artificial intelligence becomes a normal part of business and everyday work, what do you think will matter most in preparing the next generation of business leaders?
The most important thing we can do is teach students to disagree with a machine. That sounds flippant. It isn’t.
When AI can produce a confident, well-reasoned answer in seconds, the real danger isn’t that students don’t know enough; it’s that they defer too readily to something that sounds authoritative. The skill that will separate the best leaders is knowing when to push back and having the conviction to do it. That’s not a technical skill. It’s closer to intellectual courage.
We can’t teach that by adding an AI module to the curriculum. We teach it by putting students in situations where they have to form a view, defend it under pressure, and sometimes discover they were right when the AI said otherwise.