By Craig Wilson
AI is changing how developers work. It helps teams move faster, but something important is being lost: developers are skipping the hard parts where real learning happens. This article looks at how AI affects developer growth, why it matters, and what companies can do to keep their teams strong and skilled.
Everyone keeps asking: Will AI replace developers? Wrong question.
The real issue is that AI isn’t taking jobs — it’s quietly removing the steps developers used to go through to get good at them. And the effects might not show up in sprints or velocity charts, but they’ll hit hard when no one knows how the code really works anymore.
In hands-on experiments across engineering teams, the pattern is becoming clear: productivity is up, but understanding is down. Developers skip past the uncomfortable middle — the part where skills are forged. And that middle used to be essential for turning good coders into great engineers.
The Disappearing Middle: What Developers Are No Longer Learning
Today’s junior developers can ship code in their first month. With copilots, chat-based debugging tools, and UI generators, they’re immediately productive. But in five years, when they’re “senior” on paper, will they be able to make changes to that code — or even understand how it works?
In one of our experiments, a frontend prototype went from Figma to working code in under 48 hours using AI tools. The speed was impressive. But when the team tried to extend or debug that code, it became clear: the developer who built it didn’t fully grasp the structure. Key decisions had been made by the model, not the human. The developer couldn’t retrace the logic or justify the architecture — because they hadn’t built it themselves.
Another team used GPT to generate comprehensive unit tests. When asked why certain test cases existed, the team couldn't say. The tests passed, but no one knew exactly what was being verified or why those conditions mattered.
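To make that concrete, here is a hypothetical sketch of what such a suite often looks like. The function, tiers, and rates below are invented for illustration; they are not from that team's actual codebase.

```python
def apply_discount(price: float, tier: str) -> float:
    """Toy pricing function standing in for real production code."""
    rates = {"basic": 0.0, "plus": 0.10, "pro": 0.25}
    return round(price * (1 - rates.get(tier, 0.0)), 2)


def test_apply_discount():
    # Every assertion passes, but nothing records the reasoning behind the
    # expected values: is 0.25 the agreed "pro" rate? Is silently charging
    # full price for an unknown tier intended behavior, or a latent bug?
    assert apply_discount(100.0, "pro") == 75.0
    assert apply_discount(100.0, "plus") == 90.0
    assert apply_discount(100.0, "enterprise") == 100.0
```

Run under pytest, everything is green. What's missing is the reasoning a human author would have captured in test names, review comments, or commit messages.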
The middle of the learning curve — where devs used to build intuition through trial and error — is being automated out. We call this growth compression. And the risk is clear: a generation of engineers who can prompt, edit, and deploy, but not truly understand.
This also creates an internal tension. Senior engineers are increasingly pulled into review roles, not because juniors can't code, but because no one fully understands what the AI wrote. Meanwhile, juniors are less likely to ask for help, because AI answers faster and without critique. Mentorship is quietly eroding, replaced by chatbot suggestions that don't teach principles, just patterns.
And when seniors are reduced to validators of AI-generated content, rather than mentors or architects, they risk burnout and disengagement. The joy of building is replaced with the burden of quality control.
The Other Feedback Loop: When AI Starts Learning from Itself
It’s not just the developers who are stagnating. The AI is too.
AI models are trained on public data — repos, blogs, forums. Increasingly, that data is being generated by AI itself. Developers post AI-written code. Blogs get filled with LLM-generated tips. Models scrape that content. And the cycle repeats.
What happens when AI trains on AI? You get regression. Outputs become more uniform. Creativity declines. Errors compound. And the system starts reinforcing mediocrity instead of progress.
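A toy simulation makes the dynamic easy to see. The sketch below is our construction, not a real training pipeline: each generation of a corpus is produced purely by resampling the previous one, the way models increasingly learn from AI-written text. Distinct content can be lost on every pass but never regained.

```python
import random

random.seed(7)
corpus = list(range(100))  # 100 distinct "ideas" in the original human-written corpus

for generation in range(1, 11):
    # Each new generation is "written" only by sampling what already exists,
    # so its set of ideas is always a subset of the previous generation's.
    corpus = [random.choice(corpus) for _ in range(len(corpus))]
    print(f"gen {generation:2d}: {len(set(corpus)):3d} distinct ideas survive")
```

Within a handful of generations, a large share of the original variety is gone, and nothing in the loop can bring it back. Real model collapse is subtler than this, but the direction of travel is the same.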
We’ve seen early signs of this already — hallucinated citations, recycled phrases, code suggestions that “look right” but lack context. And as the proportion of human-authored content declines, models will have fewer reference points grounded in expert understanding. They’ll be trained on approximations of approximations.
This doesn’t just affect quality — it affects trust. When no one understands how something works, but everyone uses it anyway, you get fragility at scale. And once AI becomes part of your delivery pipeline, the consequences compound quickly.
AI Is Changing How We Think About Work
One of the unintended consequences of AI integration is cultural. Engineering has always been a craft: a mix of logic, trade-offs, and experience. But when AI intermediates that process, some of the essential feedback loops vanish.
We’ve seen how developers begin to approach work differently once AI becomes embedded in the workflow. Instead of breaking down a problem, many jump straight to the prompt. Instead of discussing trade-offs, they rely on outputs. The result is faster delivery, but often shallower thinking.
This shift affects team dynamics, hiring expectations, onboarding, and long-term product maintainability. It changes how developers perceive their own value and how organizations measure it.
We believe the future of engineering won’t be defined by how well you prompt an AI assistant, but by how well you guide, correct, and build on top of what it produces.
Why Most AI Projects Will Stall and What to Do Instead
Gartner predicts that 60% of AI projects will be abandoned through 2026 for lack of AI-ready data. Not because AI isn't powerful, but because too many companies chase results without building the foundation to support them.
The real challenge isn’t the model. It’s the environment around it.
AI can’t fix a broken architecture. It can’t create clarity where there’s no structure. And it can’t generate insight from poor data. That’s why the next wave of real AI value won’t come from more impressive outputs — it’ll come from more robust inputs.
We're focused on strengthening the data foundation that makes AI genuinely useful: systems that are reliable, well-structured, and high in quality. No matter how advanced the model is, without clean, contextual data and an environment it can operate within, it simply won't deliver value.
The companies that win with AI won’t be those with the flashiest demos — they’ll be the ones with the cleanest data, the clearest processes, and the most thoughtful engineering cultures.
Where We Go From Here
We don’t need to slow down AI adoption. We need to make it smarter and more human-centered.
We believe in:
- Giving developers the tools they need, without replacing the growth they deserve.
- Preserving mentorship, reasoning, and decision-making alongside automation.
- Designing engineering environments where AI augments — but never erodes — long-term capability.
We can build faster. But we also need to build better. That means protecting the steps, the questions, and the challenges that make developers great at what they do.
Otherwise, we’re just automating ourselves into irrelevance.
Let’s make sure the future of engineering doesn’t forget how to think.

