It’s been about a year and a half since ChatGPT dropped, and honestly the pace has been ridiculous. We went from “can this thing write a decent email?” to models that can reason about code, images, and complex docs all at once. Capabilities are advancing faster than most orgs can even absorb them.
I’ve spent this period helping teams figure out how to integrate AI into their engineering practices. And I’ve noticed some patterns — what tends to work and what definitely doesn’t.
The variance across teams in the same org is wild. Some have fully integrated AI into their daily flow and are seeing real productivity gains. Others haven’t changed a thing. And the difference isn’t technical sophistication. It’s leadership.
Every team that’s adopted AI well has at least one person — usually a senior IC or tech lead — who actually invested time learning the tools, figured out practices that work, and modeled it for everyone else. Adoption spreads through demonstration, not mandates. Top-down “everyone must use Copilot” stuff generates compliance without learning. When people see a respected engineer they trust actually using these tools well, that’s when it catches on.
I wrote a couple of years ago about the risk AI poses to how junior engineers learn, and now I'm watching it happen. The tasks AI handles well — boilerplate, pattern implementation, docs, tests — are exactly what junior engineers used to learn through. So how do they build foundational skills when that on-ramp is getting automated? We haven't solved it yet. But I'm seeing some approaches that seem promising.
Pair programming with AI instead of removing juniors from the coding process. The junior drives, uses AI for suggestions, but has to understand and evaluate every line. That preserves the learning while speeding things up.
Review skills matter more than ever. If more code is AI-generated, the ability to critically review it — not just correctness but design, security, performance, maintainability — becomes the judgment muscle that actually compounds over time.
And there’s an opportunity to get junior engineers into system-level thinking earlier. With AI handling more of the function-level stuff, architecture discussions and design reviews are where human reasoning still dominates. That’s where they can accelerate.
Then there's infrastructure, and this is where things get messy. AI is creating demand for capabilities most orgs don't have yet.
Model serving at scale — when you move beyond prototypes to production AI features, you need reliable, low-latency model serving. GPU management, request batching, model versioning, graceful degradation. Non-trivial systems work.
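Graceful degradation is the piece teams most often skip. A minimal sketch of the idea, with hypothetical model names and a simulated overload standing in for a real serving client:

```python
# Hypothetical model tiers -- the names, the simulated failure, and the
# canned fallback reply are all illustrative, not from any real provider.
def call_model(model: str, prompt: str) -> str:
    # Stand-in for a real inference call; replace with your serving client.
    if model == "large-model":
        raise TimeoutError("simulated overload")  # pretend the big model is saturated
    return f"[{model}] response to: {prompt}"

def serve_with_fallback(prompt: str) -> str:
    """Try the primary model, degrade to a smaller one, then to a canned reply."""
    for model in ("large-model", "small-model"):
        try:
            return call_model(model, prompt)
        except TimeoutError:
            continue  # degrade to the next tier instead of failing the request
    return "Service is busy; please retry."  # last-resort static response

print(serve_with_fallback("summarize this doc"))
```

The point is structural: the request path should know its tiers ahead of time, so overload produces a worse answer rather than an error page.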
Cost management. AI API costs are a new line item and they surprise people. I’ve seen teams launch features without understanding cost per request and then discover they’ve committed to a $50K/month API bill. Oops.
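The arithmetic that catches people out is simple enough to do before launch. A back-of-envelope cost model, with placeholder per-token prices (plug in your provider's actual rates):

```python
# Assumed prices, for illustration only -- check your provider's rate card.
PRICE_PER_1K_INPUT = 0.01    # USD per 1K input tokens
PRICE_PER_1K_OUTPUT = 0.03   # USD per 1K output tokens

def cost_per_request(input_tokens: int, output_tokens: int) -> float:
    return (input_tokens / 1000) * PRICE_PER_1K_INPUT + \
           (output_tokens / 1000) * PRICE_PER_1K_OUTPUT

def monthly_cost(requests_per_day: int, input_tokens: int, output_tokens: int) -> float:
    return cost_per_request(input_tokens, output_tokens) * requests_per_day * 30

# A feature sending 2K-token prompts and getting 500-token replies,
# at 50K requests/day:
per_req = cost_per_request(2000, 500)   # 0.02 + 0.015 = 0.035
print(f"${per_req:.3f}/request, ~${monthly_cost(50000, 2000, 500):,.0f}/month")
```

At these assumed rates that's about $52,500/month. Three and a half cents per request sounds like nothing until you multiply it out.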
Evaluation pipelines. AI outputs are probabilistic, so measuring quality is harder than for traditional features. You need test datasets, quality metrics, and regression detection; without automated eval pipelines you're flying blind.
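The shape of such a pipeline fits in a few lines. A toy regression check, where the dataset, the stand-in model, the metric, and the baseline number are all illustrative:

```python
# Minimal regression-style eval harness -- everything here is a placeholder
# for your real dataset, model client, and metric.
eval_set = [
    {"input": "2+2", "expected": "4"},
    {"input": "capital of France", "expected": "paris"},
]

def model_answer(query: str) -> str:
    # Stand-in for the model under test.
    return {"2+2": "4", "capital of France": "Paris"}.get(query, "")

def score(dataset) -> float:
    """Exact-match accuracy after normalization; swap in task-appropriate metrics."""
    hits = sum(
        model_answer(ex["input"]).strip().lower() == ex["expected"]
        for ex in dataset
    )
    return hits / len(dataset)

BASELINE = 0.90  # accuracy of the previously shipped version (assumed)

accuracy = score(eval_set)
if accuracy < BASELINE - 0.05:  # tolerate small noise before flagging a regression
    raise SystemExit(f"Regression: {accuracy:.2f} vs baseline {BASELINE:.2f}")
print(f"accuracy={accuracy:.2f}, no regression")
```

Run this in CI on every prompt or model change, and "did we just make it worse?" stops being a matter of vibes.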
And the most effective enterprise AI apps I’ve seen are the ones grounded in organizational data. Docs, code, tickets, architecture. Building pipelines to extract, process, and serve that context to models is a serious infrastructure investment.
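The pipeline shape is the same regardless of scale: chunk, score for relevance, select, assemble the prompt. A toy sketch using word overlap for scoring (production systems would use embeddings and a vector store; the document names and contents here are made up):

```python
# Toy grounding pipeline: retrieve relevant org docs, then build the prompt.
# Word-overlap scoring is a deliberate simplification of embedding similarity.
docs = {
    "runbook.md": "Restart the payment service with kubectl rollout restart.",
    "adr-12.md": "We chose Postgres over DynamoDB for transactional consistency.",
}

def relevance(query: str, text: str) -> int:
    """Count shared words between query and document (crude stand-in for embeddings)."""
    return len(set(query.lower().split()) & set(text.lower().split()))

def build_prompt(query: str, k: int = 1) -> str:
    """Rank docs by relevance, keep the top k, prepend them as context."""
    ranked = sorted(docs.items(), key=lambda kv: relevance(query, kv[1]), reverse=True)
    context = "\n".join(f"[{name}] {text}" for name, text in ranked[:k])
    return f"Context:\n{context}\n\nQuestion: {query}"

print(build_prompt("how do I restart the payment service?"))
```

The hard part in practice isn't this loop; it's the extraction and freshness plumbing that keeps `docs` populated and current.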
Through all of this, I keep landing on the same thing: be honest about what you know and what you don’t.
We don’t know how capable these models will be in two years. We don’t know which jobs get hit hardest. We don’t know which architectural patterns will stick. Anyone claiming certainty about the AI trajectory is either selling something or kidding themselves.
What we can do is build organizational muscle for adaptation. Invest in learning. Create space for experimentation. Measure honestly. Be willing to change course when the evidence warrants it. And be transparent with your teams about the uncertainty.
The leaders who navigate this well won’t be the ones who predicted the future. They’ll be the ones who built orgs that could adapt to whatever showed up.