The AI adoption chasm in software engineering
I’m seeing a pattern in AI discourse among enthusiastic adopters:
You’re cooked if you don’t use LLMs to write code
This framing is counterproductive.
I believe that all technologically minded folks, regardless of their stance on particular tools, have things they can learn from each other.
There’s more value in working across these groups, inside and outside technology, than in isolating ourselves.
The unproductive discourse
To be specific, discourse gets unproductive when adopters project that other software practitioners are “cooked” or “not going to make it” because they’re not adopting AI.
As someone who has enthusiastically adopted several different applications of AI, most notably using AI as a coding partner and often as a coding driver, what I find most perplexing is the divide in experiences. For every person who finds these tools incredibly useful, there are many who tried them and found them frustrating, tiresome, irritating, or ineffective.
I am genuinely curious what differs in the approaches and contexts of those folks, such that something I find really useful is, to them, not useful at all.
Tools and preferences
In part, I wonder if it’s a bit like a Vim versus VS Code/full-fledged IDE divide. I’ve never been able to be productive with Vim; it doesn’t fit how my brain works. Plenty of folks swear by it, and I don’t deny its benefits for them.
Nor would I say a Vim user couldn’t be a successful software engineer. There are plenty of successful engineers who use Vim, just as there are successful engineers who use more fully featured IDEs.
To me, deciding whether to use AI to write software is similar to Vim vs. an IDE. I find AI an incredibly powerful and useful tool, but it’s not uniformly better than a person in every capacity.
The impact of feeling the pain
You build new paradigms when the current ones block you. For folks coding with language models today, notable pains include parallelizing agents without them interfering with each other, scaling up inference, and finding ways for agents to run independently for longer.
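To make the interference problem concrete, here is a minimal sketch of one common mitigation: giving each agent its own git worktree and branch so parallel edits can't clobber one another. The agent command (`my-agent`) and the task names are hypothetical placeholders, not a real tool.

```python
# Sketch: isolating parallel coding agents in separate git worktrees.
# "my-agent" and the task names below are hypothetical placeholders.
import subprocess

TASKS = ["fix-auth-bug", "add-retry-logic", "refactor-config"]

def launch_agent(task: str) -> subprocess.Popen:
    workdir = f"../worktrees/{task}"
    # Each agent gets its own branch and its own working tree,
    # so concurrent edits never touch the same checkout.
    subprocess.run(
        ["git", "worktree", "add", "-b", f"agent/{task}", workdir],
        check=True,
    )
    # Run the agent in the background, scoped to its own tree.
    return subprocess.Popen(["my-agent", "--task", task], cwd=workdir)

procs = [launch_agent(t) for t in TASKS]
for p in procs:
    p.wait()  # let each agent run independently to completion
```

Isolation like this sidesteps interference, but it pushes the pain elsewhere: someone still has to review and merge the resulting branches.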
I don’t yet see many folks using language models to design new programming languages that meaningfully change the programming paradigm. Someone coding through a model doesn’t feel the same pain as someone struggling through writing in the language by hand.
A long-term perspective
We don’t know how all this will evolve, or which skills will be most conducive to enduring success. That’s fine.
If you plan to build software for decades, think long-term about craft, mentorship, and culture. A posture of animosity toward non-adopters is not a durable strategy.
Let people work how they think is best. This technology is going to be around for a long time.
What will best serve us right now is to understand why current tools don’t resonate with people who aren’t excitedly adopting them.
I don’t yet have a good explanation for why one person finds AI compelling and another doesn’t. To understand it, we need to keep telling and hearing more specific stories: what you tried, what hurt, what felt like magic, and what felt like a waste of time.
Tell me where AI has failed you or saved you. The specifics are how we close the gap.
This is a living document in my digital garden. It may grow, change, or branch into new ideas over time.