Planning notes and a long introductory essay for a multi-sub-agent system intended to explore the idea that code-generation agents can be conduits to structured learning of technology subjects.
Hacker Noon recently carried an article "Vibe Coding Is Creating A Generation Of Unemployable Developers" (Paolo Perrone, Sep 2nd).
It made some excellent points.
It laid out, in blunt terms, the perils of blindly attempting to "brute force" the way to functional applications using agentic code generation tools (AKA "vibe coding"). The piece is a reality check against the "vibe coding" hype, which reads, at times, as if it were dispatched from a time machine travelling through a glorious but as-yet unrealised era in which the AI of today is free of the problems that hamper many of its most promising uses.
When it goes wrong, "vibe coding" yields sucky codebases and circuitous debugging processes (we have all seen this!).
But as Perrone points out, it also poses a real and potentially far more significant threat to the career trajectories of technology professionals: a new generation of "programmers" may already be arising whose only marketable skill is acting as intermediaries between code-gen LLMs and companies.
The short history of technology teaches that being a human operator of technology is not a strategically sound position to occupy. Much as code generation is being mopped up by AI right now, these jobs will soon, inevitably, be subsumed under a tidal wave of AI tools: frameworks which position AI tools as operators of other AI tools are already plentiful and have advanced beyond the prototype stage.
The challenge facing those of us enamored of AI today, as I see it and as I am working on it: leverage the best of AI while trying to stay just ahead of its arc.
While I agree with Perrone's piece, I also think that (to use an old Irish saying) there is an enormous risk in throwing out the baby with the bathwater.
My credentials for making these hot takes:
I have worked with code-gen LLM tools every day for more than a year now (to name but a few: Windsurf, Aider, Codex, Gemini, Open Hands, Qwen Code), spanning CLIs, GUIs, local LLMs, cloud heavyweights, and newfangled interfaces that lack a good name yet. I have been using Linux for 20 years or so: more than long enough to recognise when AI tools are doing things incredibly stupidly, but also when they turn a nifty trick or show me a CLI that, in all of those 20 years, I never knew existed.
AI code-gen tools have sometimes saved me days, knocking out viable frontends in minutes (my first task for Sonnet 4 was migrating my then-MkDocs site over to Astro, which it did, reasonably well, in about 10 minutes). They have allowed me to simulate a bot takeover by connecting a local LLM to the Home Assistant API and prompting: "you have full access to my smart home. Here's a reference to the Home Assistant API. I don't know.... do some weird stuff with lights.... if you feel like sending random messages through the smart speakers too that would be appreciated. If I haven't responded in an hour to your messages, send me a Pushover. Then play some heavy metal. Then call an ambulance. But follow that order carefully"
At other times, these tools have burned not only large chunks of cash in wasted API credits, but also any last remaining traces of sanity I may be able to muster, sending me down hours-long processes in which simple CSS fixes were presented as gargantuan missions that ended with me ... you know ... fixing it myself and wondering why I couldn't have just done that an hour ago.
However: not only can't I bring myself to hate vibe coding (although I will maintain, indefinitely, that the name is stupid), but I actually think that it's just about the most exciting and promising advance in technology that I've lived through.
For "ideas guys" like me (think the semi-dork, semi-coding types who do intermediary tech jobs at software companies) agentic code generators provide a powerful bridge between natural language and code. They have the potential, at least, to demolish barriers to entry to various tech subjects. Their educative potential may currently be most latent. But to the extent to which code generation LLMs can do things worthy of instruction it is there, waiting to be expressed.
Current Challenges Facing AI Code Generation - And Reasons For Long-Term Optimism In Spite Of Them
First, the challenges themselves; then, the reasons I remain optimistic in spite of the belated pushback.
The deficits which vibe coding faces come from several directions: LLMs with training cutoffs cannot keep up to date with fast-moving APIs and SDKs without external tooling; even huge context windows are strained by the sheer volume of text that code represents; and, even for unassisted humans, the complexity of modern development is significant. There may be one correct way to peel a banana (arguably!). But how can one expect an AI tool to factor in the complex interplay of desired features, integrations, and budgetary parameters that all routinely play a role in stack selection?
And the reasons to be optimistic, or at least open to the idea that today's challenges are not insuperable:
- Outside of very limited pockets of the internet, these challenges are acknowledged and discussed
- Tooling is advancing which tackles these challenges from various directions: from MCPs like Context7 through to projects like Chronos 1, which may mark the beginning of a shift away from fine-tuned generalist LLMs and agentic frameworks toward purpose-built LLMs made for specific tasks.
A few ideas about learning technology subjects which I hold dear:
- It's often simply more fun to learn by doing
- But you can't learn effectively ONLY by doing. Even in fast-evolving fields, as most fields in technology are, you need some baseline reference for best practices.
These learning models are reflected in the classic paradigms of learning about tech:
You can do things and get feedback on your work later (labs, assignments).
You can see how other people built their projects (learning by observation, though the context may be less engaging). But AI code generation tools open up an interesting new possibility: do the thing that I need (and find interesting), then show me how you did it so that I can learn to do it myself.
The connection to the above and a kind of TL;DR:
- Agentic code generation tools will get better
- As they get better, their role as potential instructional tools, or aids, will become more pronounced
- "Vibe coding" will be less frustrating and more efficient
However - to Perrone's points - if we stop here, we'll risk fencing human intelligence out of the picture entirely. Or rather: a tiny group of people will know how "stuff" works and bake that logic into AI tools which a majority then (just about) know how to operate. For those who cherish open source for the vast ecosystem that it is, this would be an extremely retrograde evolution.
A better way (I suggest): don't see AI as just the "doer" - allowing you to take a nap while your amazing new whatever gets built. Nor as the intern. It's a new type of thing that defies these traditional descriptors:
- It knows more than you, so can be an educator
- It's also sometimes wrong, so requires supervision and oversight for best success
This is, perhaps, why AI is so often perplexing to use in professional contexts: it is inherently paradoxical. It can help you level up your skills in an area, but often in a roundabout way that requires external correction. And then when you get there, you may find out, sadly, that you now know more than the bot.
This is why, as the capabilities of AI in code generation evolve, we may, counterintuitively, come to regard human oversight and knowledge as more fundamental to their success. The risk of job displacement now is real. As is the risk to the first generation of AI-native developers which Perrone highlights.
But rather than viewing AI code generation tools merely as replacements for "junior developers" (or code monkeys, less charitably), we may begin to realise that they hold equal value as conduits for upskilling: encouraging those without any background in technology to take their first faltering steps, and emboldening those already proficient to push their capabilities further.
This subagent framework - which can be implemented with Claude Code, among other agentic frameworks - is based around this idea.
The cast of agents is as follows:
First comes the coder. This can be one agent or, as is increasingly common, two or more subagents.
An emerging best practice is to split the execution side of code generation tools between a "planner" - charged with orchestrating decisions about the stack and the like - and a code creator.
This is only the most basic permutation, however. Agents thrive on specificity of task, so a debugger and an editor can be added to good effect.
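To make the division of labour concrete, here is a minimal sketch in Python. The agent names, missions, and prompt wording are all my own illustrative assumptions, not a fixed spec; in a framework like Claude Code, each of these would live as its own subagent definition.

```python
from dataclasses import dataclass

@dataclass
class SubAgent:
    """A subagent reduced to its essentials: a name, a narrow mission,
    and the system prompt that scopes it to that mission."""
    name: str
    mission: str
    system_prompt: str

# The execution-side cast: planner, coder, debugger, editor.
# All names and prompt wording here are hypothetical.
EXECUTION_AGENTS = [
    SubAgent("planner", "stack and approach decisions",
             "Decide the stack and break the task into steps. Write no code."),
    SubAgent("coder", "code creation",
             "Implement the planner's steps, touching only the files the plan names."),
    SubAgent("debugger", "fault isolation",
             "Reproduce the failure, isolate the cause, propose a minimal fix."),
    SubAgent("editor", "refactoring and polish",
             "Improve clarity and consistency without changing behaviour."),
]
```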
Here's why I'm building the educative side of the system like this:
A simpler implementation would be to have one "doer" and one "teacher": the coding agent "does" and the teacher then does something like explain what was achieved in this session with code examples drawn from the diffs.
A more powerful implementation, however, would be to try to piece the individual tasks worked on into a more organised framework for learning what was done: something like ... a curriculum.
The whole system, here, is education in reverse: rather than learn and then do, we do first and learn afterwards. The curriculum writer fits a similar bill: its instruction is to see what was moved forward in the codebase and project during a session; then what approach was used; then what skills were needed to execute that approach; then whether those are outlined in the existing curriculum; and, if not, to add them.
A basic dichotomy, here, could be to split the role of the curriculum writer between curriculum authorship and curriculum editing/enrichment, as sketched below.
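A rough sketch of that loop, assuming `llm()` stands in for whichever model call the framework exposes, and with the dict-of-topics curriculum shape and prompt wording purely illustrative:

```python
def update_curriculum(session_summary, curriculum, llm):
    """Sketch of the curriculum writer's loop: derive the skills a
    session actually exercised, then fold them into the curriculum.
    `llm` is a stand-in callable, not a real API."""
    # What skills did the session's approach require?
    skills = llm(
        "List, one per line, the skills needed to execute the approach "
        f"taken in this session:\n{session_summary}"
    ).splitlines()
    for skill in skills:
        if skill in curriculum:
            # Editing/enrichment role: deepen an existing topic.
            curriculum[skill].append(session_summary)
        else:
            # Authorship role: open a new topic backed by real work.
            curriculum[skill] = [session_summary]
    return curriculum
```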
Moving from the general to the specific, the next agent in the chain is the lesson writer.
The objective of this agent is to create chunked lessons covering segments of the curriculum as written or updated by the curriculum writer.
The lesson writer is the crucial bridge between the user session and the lesson. Its mission is not to deliver cookie-cutter lessons on "how to do X" (for which the user does not need an AI system!) but rather to create learning experiences that use, as teaching materials, things in which the user was involved first-hand.
The lesson writer should assume that the user's "work" session and learning follow-up session will not occur close together in time. More likely, the user will have forgotten most of the details of what they did in Windsurf at 17:30 on this day of the week.
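A sketch of what that might look like, again with `llm()` as a stand-in for the model call and the prompt wording purely illustrative:

```python
def write_lesson(curriculum_segment, session_summary, diffs, llm):
    """Sketch of the lesson writer: produce one chunked lesson for a
    curriculum segment, using the user's own diffs as teaching material.
    Assumes the learner has forgotten the original work session, so the
    lesson must restate the project context up front."""
    prompt = (
        f"Write a chunked lesson covering: {curriculum_segment}\n"
        "Use ONLY these diffs from the learner's own project as examples:\n"
        f"{diffs}\n"
        "Assume days have passed since the work session. Open by restating "
        f"what the project is and what the session achieved:\n{session_summary}"
    )
    return llm(prompt)
```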
The lesson writer requires one assistant agent:
This agent - the session summarizer - really belongs in a third group of "support agents", but for simplicity it is introduced in sequence here.
A "session" is fairly well delineated concept in many AI frameworks: the time between when a user invokes a CLI (or IDE) to work on a specific task and then exits it. That session is often recorded as a separate JSON file.
The session summary agent's job is to record something like a "meta-diff": what is this project? How did the user start out? What was achieved by the AI tool? How was that achieved?
This agent would ingest the diffs, read any memory changes, and produce a summary.
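A minimal sketch of that ingestion step, assuming the session transcript lands on disk as JSON and the work happened in a git repository (both common, neither guaranteed):

```python
import json
import subprocess

def summarize_session(session_path, repo_dir, llm):
    """Sketch of the session-summary agent: pair the recorded transcript
    with the working diff and ask a model for a 'meta-diff'. The question
    list mirrors the prose above; `llm` is a stand-in, not a real API."""
    with open(session_path) as f:
        transcript = json.load(f)
    # Uncommitted changes left by the session; adjust the range if the
    # coding agent commits as it works.
    diff = subprocess.run(["git", "-C", repo_dir, "diff"],
                          capture_output=True, text=True).stdout
    questions = ("What is this project? How did the user start out? "
                 "What was achieved by the AI tool, and how?")
    return llm(f"{questions}\n\nTranscript:\n{transcript}\n\nDiff:\n{diff}")
```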
Finally, we have the teacher.
The teacher is an interactive educational agent whose task is to deliver individual lessons using the plans devised by the lesson-writing agent and the contextual data distilled from the other agents, yielding context-laden lessons rather than generic tutorials.
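To close the loop, a sketch of the teacher's interactive cycle; `lesson_chunks`, `llm()`, and `ask_user()` are all illustrative stand-ins rather than real APIs:

```python
def teach(lesson_chunks, context, llm, ask_user):
    """Sketch of the teacher: deliver the lesson plan one chunk at a
    time, folding in the contextual data distilled by the other agents,
    and field the learner's questions as they arise."""
    for chunk in lesson_chunks:
        explanation = llm(
            "Teach this chunk using the learner's own project as context.\n"
            f"Chunk: {chunk}\nContext: {context}"
        )
        question = ask_user(explanation)  # e.g. print and read from stdin in a CLI
        if question.strip():
            print(llm(f"The learner asked: {question}\n"
                      f"Answer with direct reference to: {chunk}"))
```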




