I started writing this blog ten years ago. Seems like a good time to take stock. I’ve worked across distributed systems, cloud infrastructure, and most recently AI infrastructure. I’ve led from behind, made predictions that landed and predictions that didn’t, and accumulated enough scar tissue to have opinions worth sharing.
This isn’t a victory lap. It’s an honest accounting.
Containers and Kubernetes would become fundamental. I wrote about that back in 2016, arguing that containers and orchestration would become as fundamental as version control. That one landed. Kubernetes isn’t perfect, and the complexity concerns I raised were valid, but the declarative infrastructure model won. Almost every org of meaningful scale runs on containers now, and the ecosystem has matured impressively.
ML would leave the lab. Back in 2017 I argued the gap between ML research and production would close, and operating ML systems would become as established as DevOps. MLOps is now a discipline with its own toolchain and career path. The bit about ML infrastructure being the differentiator rather than algorithms held up too.
Language models would change interfaces. In 2020, after GPT-3 launched, I predicted natural language would become a standard input modality for dev tools within five years. We’re ahead of schedule. Every major IDE, cloud console, and dev tool has a natural language interface now. The “trust problem” I mentioned — how to present model outputs in a way that’s easy to verify — still keeps designers busy.
AI-assisted development would become the default. My 2022 prediction that it’d be standard within three years was conservative. Happened in about two. The shift from code production to code judgment that I described is well underway.
I underestimated the microservices correction. In 2018 I predicted a swing back toward pragmatism, away from aggressive decomposition. The correction happened, but it was more dramatic than I expected. The “modular monolith” I mentioned as a possible middle ground has become mainstream. Many orgs have actively consolidated services. I should’ve been bolder.
I was too optimistic about async-first adoption. My 2020 post on async-first organizations assumed the remote work experiment would produce lasting cultural change. In practice, many orgs reverted to meeting-heavy, synchronous cultures as soon as offices reopened. The ones that truly embraced async-first are thriving, but they’re a minority. Culture change is harder than I gave it credit for.
I didn’t see the GPU shortage coming. Even working in AI infrastructure by 2023, I didn’t fully anticipate how severe and prolonged the supply constraint would be. I should have recognized earlier that exponential growth in model size, combined with limited chip manufacturing capacity, would create a multi-year bottleneck.
Across a decade and multiple technology waves, some things have proven durable.
Leading from behind is still the most effective form of technical leadership. The post I wrote in 2016 about informal influence and trust-based leadership is the one I’d change the least. The mechanism is timeless: earn trust through demonstrated judgment, make your reasoning visible, invest in making others better. No technology trend changes that.
Culture eats strategy. I’ve seen brilliant technical strategies fail because the culture couldn’t execute them, and mediocre strategies succeed because the culture was strong enough to adapt. If I had to choose between a great strategy and a great culture, I’d choose culture every time. (Probably would’ve said that ten years ago too.)
Boring technology is usually the right choice. The temptation to adopt the newest, most exciting thing is constant. In most cases the mature, well-understood option is better because total cost of ownership — learning, debugging, operational expertise — is lower. This has saved me from countless mistakes.
The best engineers make others better. Individual brilliance matters, but it has a ceiling. The engineers who elevate everyone around them — mentoring, clear communication, thoughtful design, generous collaboration — have unlimited upside. Every great team I’ve been on had at least one person like this.
Writing is thinking. My insistence on written communication — design docs, RFCs, post-mortems, blog posts — has been one of the most valuable practices of my career. Writing forces clarity, exposes gaps in reasoning, and creates durable artifacts that scale beyond the room. I’ve never regretted investing in it.
The junior engineer pipeline. AI tools are automating the tasks through which juniors traditionally learned. If we’re not intentional about creating new on-ramps, we risk hollowing out the pipeline that produces senior engineers. It’s a slow-moving crisis the industry hasn’t adequately addressed.
Concentration of AI capability. The resources required to train frontier models are concentrating capability in a small number of organizations, with serious implications for competition, innovation, and the distribution of economic benefit. I work on making GPU infrastructure more accessible, so this is a challenge I think about daily.
Technical debt at AI speed. AI makes it easy to generate code fast. It doesn’t automatically generate well-designed code. Orgs that use AI to accelerate development without proportionally investing in architecture, review, and maintenance are accumulating technical debt at unprecedented speed. The bill will come due.
I don’t know what the next decade holds. If the last decade taught me anything, it’s that predictions get increasingly unreliable as the time horizon extends. But I do know this: the fundamentals don’t change. Technology evolves; human nature doesn’t. The organizations that invest in their people, their culture, and their technical foundations will continue to outperform the ones chasing trends.
I plan to keep writing, keep building, keep learning. The best part of a career in technology is that the interesting problems never run out. The landscape keeps shifting, and that’s not a bug — it’s what makes this work endlessly engaging.
Thanks for reading along.