No serious developer still expects AI to magically do their work for them. We've settled into a more pragmatic, if still slightly uncomfortable, consensus: AI makes a great intern, not a replacement for a senior developer. And yet, if that is true, the corollary is also true: If AI is the intern, that makes you the manager.
Unfortunately, most developers aren't great managers.
We see this every day in how developers interact with tools like GitHub Copilot, Cursor, or ChatGPT. We toss out vague, half-baked instructions like "make the button blue" or "fix the database connection" and then act surprised when the AI hallucinates a library that hasn't existed since 2019 or refactors a critical authentication flow into an open security vulnerability. We blame the model. We say it's not smart enough yet.
But the problem usually isn't the model's intelligence. The problem is our lack of clarity. To get value out of these tools, we don't need better prompt engineering tricks. We need better specs. We need to treat AI interaction less like a magic spell and more like a formal delegation process.
We need to be better managers, in other words.
The missing skill: Specification
Google engineering manager Addy Osmani recently published a masterclass on this exact topic, titled simply "How to write specs for AI agents." It is one of the most practical blueprints I've seen for doing the job of AI manager well, and it's a great extension of some core principles I laid out recently.
Osmani is not trying to sell you on the sci-fi future of autonomous coding. He's trying to keep your agent from wandering, forgetting, or drowning in context. His core point is simple but profound: Throwing a massive, monolithic spec at an agent often fails because context windows and the model's attention budget get in the way.
The solution is what he calls "smart specs." These are written to be useful to the agent, durable across sessions, and structured so the model can follow what matters most.
This is the missing skill in most "AI will 10x developers" discourse. The leverage doesn't come from the model. The leverage comes from the human who can translate intent into constraints and then translate output into working software. Generative AI raises the premium on being a senior engineer. It doesn't lower it.
From prompts to product management
If you have ever mentored a junior developer, you already know how this works. You don't simply say "Build authentication." You lay out the specifics: "Use OAuth, support Google and GitHub, keep session state server-side, don't touch payments, write integration tests, and document the endpoints." You provide examples. You call out landmines. You insist on a small pull request so you can inspect their work.
Osmani is translating that same management discipline into an agent workflow. He suggests starting with a high-level vision, letting the model expand it into a fuller spec, and then editing that spec until it becomes the shared source of truth.
This "spec-first" approach is quickly becoming mainstream, moving from blog posts to tools. GitHub's AI team has been advocating spec-driven development and released Spec Kit to gate agent work behind a spec, a plan, and tasks. JetBrains makes the same argument, suggesting that you need review checkpoints before the agent starts making code changes.
Even Thoughtworks' Birgitta Böckeler has weighed in, asking an uncomfortable question that many teams are quietly dodging. She notes that spec-driven demos tend to assume the developer will do a great deal of requirements analysis work, even when the problem is unclear or large enough that product and stakeholder processes typically dominate.
Translation: If your organization already struggles to communicate requirements to humans, agents won't save you. They will amplify the confusion, just at a higher token cost.
A spec template that actually works
A good AI spec is not a request for comments (RFC). It's a tool that makes drift expensive and correctness cheap. Osmani's suggestion is to start with a concise product brief, let the agent draft a more detailed spec, and then correct it into a living reference you can reuse across sessions. That's great, but the real value stems from the specific elements you include. Based on Osmani's work and my own observations of successful teams, a useful AI spec needs a few non-negotiable components.
First, you need goals and non-goals. It's not enough to write a paragraph about the goal. You need to list what's explicitly out of scope. Non-goals prevent accidental rewrites and "helpful" scope creep, where the AI decides to refactor your entire CSS framework while fixing a typo.
Second, you need context the model won't infer. This includes architecture constraints, domain rules, security requirements, and integration points. If it matters to the business logic, you have to say it. The AI can't guess your compliance boundaries.
Third, and perhaps most importantly, you need boundaries. You need explicit "don't touch" lists. These are the guardrails that keep the intern from deleting the production database config, committing secrets, or modifying the legacy vendor directories that hold the system together.
Finally, you need acceptance criteria. What does "done" mean? This should be expressed in checks: tests, invariants, and the edge cases that tend to get missed. If you're thinking that this sounds like good engineering (or even good management), you're right. It is. We're rediscovering the discipline we had been letting slide, dressed up in new tools.
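Acceptance criteria are easiest to enforce when they're executable. As a minimal sketch of the idea (the `slugify` task, its implementation, and these particular checks are my own illustration, not from Osmani's article), the "done" definition for a small task might be committed as tests that any agent-written implementation must pass:

```python
# Hypothetical acceptance criteria for a "slugify" task, written as
# executable checks rather than prose. The reference implementation
# below stands in for whatever the agent produces.

import re

def slugify(title: str) -> str:
    # Lowercase, replace runs of non-alphanumerics with "-", trim edges.
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    return slug or "untitled"

# Happy path
assert slugify("Hello, World!") == "hello-world"

# Invariant: output contains only URL-safe characters
assert re.fullmatch(r"[a-z0-9-]*", slugify("Ünicode & Symbols"))

# Edge cases that tend to get missed
assert slugify("") == "untitled"
assert slugify("---") == "untitled"
```

Checks like these double as the spec's contract: the agent can iterate freely, but "done" is unambiguous.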
Context is a product, not a prompt
One reason developers get frustrated with agents is that we treat prompting like a one-shot activity, and it's not. It's closer to setting up a work environment. Osmani points out that large prompts often fail not only due to raw context limits but because models perform worse when you pile on too many instructions at once. Anthropic describes this same discipline as "context engineering." You need to structure background, instructions, constraints, tools, and required output so the model can reliably follow what matters most.
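In practice, that structuring can be as simple as assembling the prompt from named sections instead of one undifferentiated blob. A minimal sketch, with section names and contents that are purely illustrative (not a standard or anyone's published template):

```python
# A rough sketch of "context engineering": building a prompt from
# named sections so constraints and output requirements stay visible,
# instead of burying them in a wall of text.

SECTIONS = {
    "Background": "Django monolith; auth lives in accounts/.",
    "Instructions": "Add GitHub as an OAuth provider.",
    "Constraints": "Server-side sessions only. Do not touch payments/.",
    "Required output": "A small diff plus updated integration tests.",
}

def build_prompt(sections: dict[str, str]) -> str:
    # Emit sections in a fixed order under clear headers, so the same
    # structure can be reused across sessions and tasks.
    return "\n\n".join(f"## {name}\n{body}" for name, body in sections.items())

print(build_prompt(SECTIONS))
```

The point isn't the helper function; it's that the context has a stable shape you can review, version, and reuse, which is what makes it a product rather than a throwaway prompt.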
This shifts the developer's job description to something like "context architect." A developer's value is not in knowing the syntax for a particular API call (the AI knows that better than we do), but rather in knowing which API call is relevant to the business problem and ensuring the AI knows it, too.
It's worth noting that Ethan Mollick's post "On-boarding your AI intern" puts this in plain language. He says you have to learn where the intern is useful, where it's annoying, and where you shouldn't delegate because the error rate is too costly. That is a fancy way of saying you need judgment. Which is another way of saying you need expertise.
The code ownership trap
There is a danger here, of course. If we offload the implementation to the AI and focus only on the spec, we risk losing touch with the reality of the software. Charity Majors, CTO of Honeycomb, has been sounding the alarm on this specific risk. She distinguishes between "code authorship" and "code ownership." AI makes authorship cheap, near zero. But ownership (the ability to debug, maintain, and understand that code in production) is becoming expensive.
Majors argues that "when you overly rely on AI tools, when you supervise rather than doing, your own expertise decays rather quickly." This creates a paradox for the "developer as manager" model. To write specs, as Osmani advises, you need deep technical understanding. If you spend all your time writing specs and letting the AI write the code, you might slowly lose that deep technical understanding. The solution is likely a hybrid approach.
Developer Sankalp Shubham calls this "driving in lower gears," using the analogy of a manual transmission car. For simple, boilerplate tasks, you can shift into a high gear and let the AI drive fast (high automation, low control). But for complex, novel problems, you need to downshift. You might write the pseudocode yourself. You might write the rough algorithm by hand and ask the AI only to write the test cases.
You remain the driver. The AI is the engine, not the chauffeur.
The future is spec-driven
The irony in all this is that many developers chose their career specifically to avoid becoming managers. They like code because it's deterministic. Computers do what they're told (mostly). Humans (and by extension, interns) are messy, ambiguous, and require guidance.
Now, developers' primary tool has become messy and ambiguous.
To succeed in this new environment, developers need to develop soft skills that are actually quite hard. You have to learn how to articulate a vision clearly. You have to learn how to break complex problems into isolated, modular tasks that an AI can handle without losing context. The developers who thrive in this era won't necessarily be the ones who can type the fastest or memorize the most standard libraries. They will be the ones who can translate business requirements into technical constraints so clearly that even a stochastic parrot can't mess it up.
