Syntax Is Dead. Long Live Human Engineering
*** written by an actual human ***
Software engineers are not being replaced by AI. Stop saying and believing that. Syntax generation is being replaced by AI. And that’s a very good thing.
(If you don’t read the rest of this post but fundamentally internalize that, mission accomplished. But I hope you stick with me.)
Here’s the thing: software engineers were never meant to be human autocomplete engines. We only did that because we had no other choice at the time. Now that part of the job is being retired, and it’s scaring the hell out of people. But (and let’s put our honesty hats on here) this shouldn’t scare anyone, because it’s actually been obsolete for a long time.
I was a software engineer at Microsoft when “if you’re not copying and pasting from Stack Overflow, are you even writing code?” was a running joke. Before that, we had snippet sites, forums, and code repositories passed around like sacred relics. Anyone old enough to remember planetsourcecode.com or Experts Exchange? The moment search became cheap (see Google), syntax stopped being the job. Everyone was borrowing from, enhancing, modifying, and improving on everyone else’s work. As it should be. Any other way and we’d still be manually managing our own jumps in Assembly. Or rubbing two sticks together to make fire in a cave somewhere.
In short, we moved on for a reason.
The hard truth is, syntax was always outsourced; we just kept changing who we outsourced it to. Now we’re outsourcing it to an incredibly eager and fundamentally inexpensive robot who has read every Stack Overflow article ever written and has the enthusiasm and energy of a toddler with a sugar rush.
Thus, LLMs didn’t fundamentally change the core purpose of engineering. They did change what we do (we type less code), but they didn’t change the goal (solving problems). What they did was automate the most brute-force, mechanical part of the work: generating and debugging predictable code faster than humans ever could. Thank God.
Where LLMs don’t excel is innovation. And they most likely never will. And if you understand how LLMs actually work, you’ll breathe a lot easier and feel better about your existence as a human as well as your continued employment.
LLMs are models trained to predict the next token (word piece) in a sequence. That’s it. And that sounds underwhelming because it is, a little. But the magic trick is that next-token prediction over vast datasets produces systems that are extremely good at generating and writing software code at machine speed, among other things. It’s really hard, but as we can all see, it’s mostly working with increasingly good results.
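To make “predict the next token” concrete, here’s a deliberately tiny sketch: a bigram model that, given the current token, predicts the most frequent follower it saw in training. This is an illustration of the objective only, not how a real LLM is built; real models use deep networks over vast corpora, but the shape of the task is the same.

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    # For each token, count which tokens followed it in the training data.
    follows = defaultdict(Counter)
    for cur, nxt in zip(tokens, tokens[1:]):
        follows[cur][nxt] += 1
    return follows

def predict_next(follows, token):
    # Predict the most frequently observed next token (None if unseen).
    if token not in follows:
        return None
    return follows[token].most_common(1)[0][0]

corpus = "the cat sat on the mat the cat ran".split()
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" more often than "mat"
```

No understanding, no intent, just statistics over what came before. That’s the underwhelming part, and also the magic trick.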
But with that framing comes the understanding that LLMs are leverage tools, not workers. They interpolate and remix existing patterns. They do not independently reframe problems or invent new approaches. They don’t think, in other words.
A recent real-life example: Ralph (the “Ralph Wiggum” Claude Code plugin). It’s so simple: all it does is run an LLM in a loop against its own output until it converges on a better answer, or hallucinates that better answer out of thin air. It’s a delightfully small idea. A handful (a literal handful) of lines of code. No new model architecture. No breakthrough paper. Just a human noticing a quirk in how the system behaves and exploiting it creatively.
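The loop-until-convergence idea can be sketched in a few lines. To be clear, this is a hypothetical illustration, not the plugin’s actual code: `call_llm` and `score` are stand-ins for a real model call and a real quality check (tests, lint, human review).

```python
def call_llm(prompt):
    # Stand-in: a real implementation would call a model API here.
    return prompt + "!"

def score(answer):
    # Stand-in: a real implementation would run tests, lint, etc.
    return min(len(answer), 10)

def ralph_loop(task, max_iters=5):
    # Feed the model's own output back to it until the score stops improving.
    answer = call_llm(task)
    best = score(answer)
    for _ in range(max_iters):
        candidate = call_llm(answer)
        s = score(candidate)
        if s <= best:  # converged: no improvement this round
            break
        answer, best = candidate, s
    return answer
```

The entire trick is the loop itself, not anything inside the model.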
The way an LLM would have attempted to solve this problem is to generate a gazillion lines of code. And knowing what an LLM is and isn’t makes that approach perfectly reasonable: it tries to predict an answer, not think about what the solution to a problem would be. HUGE difference.
And that’s the whole thing. That idea didn’t come from token prediction. It came from curiosity, playfulness, and the willingness to try something that isn’t “best practice” yet. In short, a very human thing.
And we should be grateful, because this is what AI has actually done for software engineers: it has mercifully automated the part of the job we were already brute-forcing manually, and absorbed much of the debugging along the way. And in return, it gave us back our most precious resources: time and cognitive space.
Time to learn. Time to think. Time to design. Time to invent.
So you see: the role of the software engineer doesn’t disappear. It moves up the stack, towards architecture, systems thinking, and creative problem solving. Where it should have been all along.
TL;DR in all this: AI didn’t replace engineers. It replaced the illusion that syntax was ever the point. And now the harder part is unavoidable: invention requires judgment, taste, continued learning, and original thought.
That’s still on us.
(crossposted to Medium)
Yes, 100. My crusade, if you can call it that, is that coding is such a small part of software production that the zealots were totally overselling the wins. I love agentic coding, but it isn't the human replacement that accountants, execs and consultants say it is.
Very refreshing to read content on LI written by actual human beings. Thank you.
Spot-on!! At the end of the day, AI gives us back our time and what really matters is how we use it. I completely agree that we should be channeling that time into architecture, systems design, out-of-the-box thinking, and creative problem solving.