You can't automate what you don't understand

matt
ai, productivity, programming

AI is automation. That's not a hot take; it's the same story we've been living through for decades. We automate the things we know how to do so we can do them faster, cheaper, or at scale. What's different now is that the automation can look like a conversation instead of a script. Under the hood it's still the same deal: you're asking a system to produce outputs that match your intent. You cannot automate things you do not understand.

Why this keeps coming up

AI doesn't look like the kind of automation we saw a decade ago. When I moved into DevOps, cloud computing was just taking off and automation was in hot demand. We built ETL pipelines, step functions, graphs, and all sorts of things to automate the boring stuff. All of that looked like infrastructure and process engineering. Because Large Language Models are conversational, everything now looks much fuzzier than that.

Back then my process was deeply rooted in operations culture, which, on a non-sequitur note, is why I got trapped in operations for so long. Basically, I'd do something by hand many times over until I was very confident that I'd discovered every way in which things could go sideways. Then I'd write code against the problem, building in idempotency and immutability where appropriate. Eventually, a recurring task was no more. Today there's an impression that an LLM can bridge the gap to writing code, or doing anything technical for that matter, as long as you have some technical background. It's just not true.

I'm not saying you need to be the world's expert before you touch an LLM. I'm saying the ceiling of what you can reliably get out of AI is the ceiling of what you can evaluate. If you can't read the code and know it's wrong, you'll ship the wrong code. Worse, if you have no way to judge the trajectory the model is taking, you'll have no idea that you're headed straight for an iceberg until it takes days or weeks of work - or a production incident - to dig out. If you can't tell a coherent product spec from a plausible-sounding one, you'll end up with the latter. The model doesn't know your users, your constraints, or your definition of "done." You do. Or you don't, and then the automation is a liability.

The best use of AI I've seen is when someone who already knows the domain uses it to go faster. The worst is when someone who doesn't uses it to skip learning. One scales; the other accumulates hidden debt.

Understanding as a filter

Think of your understanding as a filter. Everything the model suggests passes through it. You keep what's right, you correct or discard what's wrong, and you notice when the whole direction is off. Without that filter you're just accepting outputs. That's not automation; that's hope.

This shows up everywhere: in code (did that refactor preserve behavior?), in writing (does that paragraph say what I mean?), in design (does that flow match how people actually work?). The model can propose. It can't sign off. You sign off. And signing off on things you don't understand is how you get bugs, bad copy, and systems that pass the demo and fail in the wild.

What "understanding" actually means here

I was recently in a conversation with an engineer who's looking to broaden the work that they do. It helps that they're already a great frontend programmer (not all engineers are programmers, much less great ones). Modern frontend code is just as complex as modern backend code: you're dealing with many of the same systems, concerns, tradeoffs, and patterns in what amounts to a sandboxed workspace. This is a long way of saying they don't need a primer on programming principles. What they need is a primer on the language they're seeking to write in.

In our case the language is Go. I suggested picking up The Go Programming Language by Donovan and Kernighan. Read the book front to back; understand the type system, the idioms, and the concurrency model. Once you understand those, you're more than qualified to write and debug production Go code.

In general, you'd repeat this exercise with any language you're seeking to write in. This isn't a new paradigm at all; it is quite literally the polyglot way. Every engineer I know who programs in multiple languages follows a process very similar to this. The beauty of being polyglot is that you begin to recognize the similarities and differences between languages and even between their features. You start to intuit patterns, the way things should work, and how to use them to your advantage.

I talked a lot about programming here, but the same principles apply to other domains.

Conclusion

Use AI to automate the parts you understand. Use it to draft, to scaffold, to explore options, to handle rote work. Don't use it to skip building understanding. The moment you do, you're not automating: you're gambling. And the house has no reason to care if your project succeeds.

So get good at the thing, then get good at wielding the tool. Order matters.