# Junior LLM

I recently stumbled upon an interesting—though German-language—podcast episode discussing how AI and large language models (LLMs) are impacting our work as web developers. Beyond the usual talking points (“We’re all going to be unemployed soon… or maybe not”), one particular thread of the conversation genuinely startled me.

The claim was made that many companies have already stopped hiring junior developers because the kind of work juniors typically do can now be handled by chatbots.

Frankly, I can’t even begin to understand what kind of massive misunderstanding of the concept of “junior developer” must be behind that thinking. It’s so fundamentally wrong on so many levels that the dystopian notion of us all being jobless in a year and the internet collapsing two weeks later doesn’t seem all that far-fetched anymore.

This perspective assumes that juniors are simply cheap labor brought in to handle the low-hanging fruit—simple tasks that an AI can now automate. I imagine a dimly lit basement where juniors, to the beat of a drum, crop images, copy boilerplate code, and scrub data. And after three years of that, they’re promoted to “developer” and promptly quit.

In that world, yes, juniors are easily replaced by LLMs. And honestly, thank goodness—because no young talent should be subjected to such a soul-crushing environment.

But my idea of a junior developer is entirely different. First and foremost, a junior is there to be a junior. The whole point is to learn what it means to be a developer—not to carry out mindless busywork. At our company, juniors are embedded into the regular development workflow from day one. Early on, they’re paired with a mentor and tackle real tasks together as part of the team. The junior decides how far they want to go and gradually expands their skills, pushing their boundaries at their own pace.

That process is absolutely not something an LLM can replace. Because if we stop training junior developers, we won’t have any senior developers left in a few years—and no one capable of guiding or even understanding what the LLMs are doing. That, to me, is one of the greatest risks of the current AI boom: if we offload all the “crunch work” to machines, we risk degrading skills at every level of the profession.

Of course, one could argue that developers are automating themselves out of existence, and the last CTO will just turn off the lights. But this might also be a massive overestimation of what current AI systems will actually be capable of in the foreseeable future. A self-sustaining, continuously improving system that works indefinitely without human guidance? That still feels like science fiction to me.

But if the people expected to run these systems are the same ones who already see junior developers as disposable cheap labor… well, then I’m genuinely worried.
