Rob Colantuoni

April 11, 2022

Tags: AI and Leadership

What I've learned coding with Copilot

The copilot model and what comes after

GitHub Copilot’s been out in technical preview for several months now, and I’ve been using it daily. The reactions from engineers have been predictably split: some people swear by it, others brush it off as fancy autocomplete that spits out buggy code. After living with it for several months, I think both sides are missing something.

Copilot isn’t the end state. It’s just the first real proof that AI-assisted development can actually work. And once you accept that it works, the implications start to reshape the whole profession.

So what does it actually do well?

The thing Copilot nails is cutting through the boring stuff. Need to parse a CSV? Give it the function signature and it’ll crank out a reasonable implementation. Writing a unit test? Hand it the function name and it infers the test cases. Boilerplate? Gone.
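To make that concrete, here’s the shape of the CSV case — sketched by hand rather than copied from actual Copilot output. You type the signature and docstring; the tool fills in the obvious stdlib-based body:

```python
import csv
from typing import Dict, List

def parse_csv(path: str) -> List[Dict[str, str]]:
    """Parse a CSV file into a list of rows keyed by the header columns."""
    # The developer supplies the signature and intent; the completion
    # is the mechanical part: open the file, hand it to the stdlib.
    with open(path, newline="") as f:
        return list(csv.DictReader(f))
```

Nothing clever, and that’s the point — it’s exactly the code you already knew you were going to write.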

That sounds minor until you actually track your time. A huge chunk of professional coding isn’t creative problem-solving — it’s the mechanical translation of patterns you already know into code. Write the endpoint. Write the data layer. Write the serialization. The developer knows exactly what needs to happen; they’re mostly just typing it out.

Copilot collapses those tasks from minutes to seconds. Over a full day, the savings add up.

Where it falls apart

The failure modes are just as useful to understand. Copilot struggles with:

Novel algorithms. When the problem doesn’t fit a common pattern, the suggestions range from irrelevant to subtly wrong. That makes sense — the model is pattern-matching against training data, not reasoning through the problem.

System-level thinking. Copilot operates at the function level. It has no concept of your broader architecture, production environment, performance constraints, or operational reality. It can write you a database query, but it can’t tell you that query will explode at scale because of your index structure.

Security and correctness. Sometimes the generated code has vulnerabilities or logical errors that look plausible but are wrong. A junior might accept them. A senior catches them — but the cognitive overhead of reviewing AI output eats into some of the time you saved generating it.
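A classic example of the “plausible but wrong” category — a hypothetical sketch, not actual Copilot output, with invented table and function names — is a query built by string interpolation. It works on every friendly input a quick test would try, and it’s injectable:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name: str):
    # Plausible-looking generated code: correct on happy-path input,
    # but the f-string interpolation makes it injectable.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name: str):
    # What a reviewer should insist on: a parameterized query.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # the injection leaks every row
print(find_user_safe(payload))    # the parameterized version returns nothing
```

Both functions pass the casual sniff test; only one survives hostile input. That’s the kind of review the generated-code era demands.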

What I think is actually happening

Beneath all the hype, something structural is shifting: AI code generation is going to compress the value of writing code and expand the value of thinking about code.

If AI can handle the translation from “I know what this function should do” to “here’s the implementation,” then the advantage moves upstream. To understanding what the function should do in the first place. To designing the system. To defining interfaces. To anticipating failure modes. To reasoning through trade-offs.

Those are staff engineer skills. And I think their relative value is about to spike.

What this means if you’re running a team

For engineering leaders, a few questions keep coming up:

Should you adopt it? Yeah, with guardrails. The productivity gains on routine tasks are real. But set up guidelines for when AI-generated code needs extra scrutiny — especially around security-sensitive or performance-critical paths.

Will it reduce headcount? Not in the short term. Software development was never bottlenecked on typing speed. The real bottlenecks — ambiguous requirements, system complexity, coordination overhead, ops burden — barely budge when code generation enters the picture. Medium term, it might shift team composition: fewer junior roles focused on boilerplate, more senior roles focused on design and architecture.

How does it affect code review? Code review gets more important, not less. When code is generated instead of hand-written, the reviewer’s job is to verify that it’s correct, secure, and aligned with your conventions. That actually demands deeper engagement than reviewing something a human carefully crafted.

My prediction

Within three years, AI-assisted development will be the default for professional software engineering. Not using it will be as unusual as not using an IDE. The tools will get a lot better — better context, better codebase awareness, better reasoning about correctness.

But the engineers who thrive won’t be the ones who generate code fastest. They’ll be the ones who can evaluate generated code effectively, who can design systems that withstand the kinds of errors AI introduces, and who can reason at a level of abstraction the models can’t reach.

The skill premium is shifting from code production to code judgment. Plan accordingly.