Scattered Thoughts About Building with AI
TL;DR - I have more questions than answers
I have been heavily using AI coding agents for a little over one year now and, in case you haven't noticed, things are moving FAST. I have so many thoughts, ideas, opinions, etc. that have popped into my head throughout this time. Each of these snippets below could probably be its own post, but keeping it “scattered” is more truthful to how I actually feel about it: I have more questions than answers, I’m excited AND anxious at the same time, and I don’t yet know how to tie these things up into a nice cohesive narrative. So I’m not going to try, enjoy 😀
Vibe coding? More like doom coding. I get a way bigger dopamine hit by telling Claude Code to implement a feature from my phone than I do watching NFL memes on Instagram.
People are saying:
- Learning to code / getting into software engineering is useless.
- Senior engineers are the ones benefiting most from AI coding, thanks to deep expertise built up over years.
These can't both be true.
I may be writing basically 0% of my code now, but I’m having more fun than ever building. I can go from idea to something working in minutes. I can one-shot previously dreaded and tedious things and it just works. Magical.
One could argue, “AI coding is just doing the same thing, but faster!” This is true. You could say the same of horses -> cars. It’s “just” faster. But orders of magnitude faster means new things are possible that simply weren’t before.
Will automated integration / e2e testing be moved fully to AI? Instead of brittle Selenium tests based on CSS selectors, maybe there’s just a pipeline step: “Hey Claude, test the app to make sure this change didn’t break any core functionality, and give me screenshots as evidence.” The models aren’t quite good enough yet for this type of UI verification work, but it probably won’t be long before they are.
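A minimal sketch of what that pipeline step could look like, with heavy caveats: the `claude -p` invocation is illustrative (a headless-agent CLI is assumed, not a guaranteed interface), and the runner is injectable so the gate logic can be exercised without a real agent.

```python
import subprocess

def ui_smoke_check(prompt: str, run=None) -> bool:
    """Hypothetical CI gate: hand a verification prompt to a coding agent
    and treat a zero exit code as "core functionality still works".
    The runner is injectable so the gate is testable without a real agent."""
    if run is None:
        # Assumed agent invocation; swap in whatever CLI you actually use.
        run = lambda p: subprocess.run(["claude", "-p", p]).returncode
    return run(prompt) == 0

# Stub runner standing in for the agent; in CI you would drop the `run=`
# argument and let the real CLI execute the prompt and capture screenshots.
ok = ui_smoke_check("Exercise the app's core flows; attach screenshots.",
                    run=lambda p: 0)
print(ok)
```

The interesting design question is what the failure contract looks like: a nonzero exit code is easy to gate on, but the real value would be the agent's evidence (screenshots, transcripts) attached to the build.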
I used to think I needed to read and understand every single line of code I wrote (I even wrote a post about it). I’m rethinking this. If I have a senior engineer coworker and I trust the quality of their work, do I really need to read every line? This is what coding with Opus 4.5 feels like. It just... works. High-level? Of course. Putting guardrails in place? Absolutely. Read and scrutinize every single line? Not so sure anymore.
What happens when AI coding goes from me conducting AI and directly providing instructions -> AI sessions periodically starting and proactively asking me for input / direction?
What happens when context windows go from 200K to 20 million or 200 million? What happens if the continual learning problem is cracked?
Using anything but the latest models is a waste of time. When Opus 4.6 or 5.0 or whatever comes out, I will throw Opus 4.5 in the trash and never look back.
Things that previously blocked me in side projects (lack of frontend or mobile app dev expertise) are no longer blockers. Sure, I’m no frontend expert. But I know what “good” looks like. AI helps to fill the gaps.
1.5 years ago I scoffed at $10 / month because it was glorified auto-complete that was wrong 70% of the time. Now $100 / month feels like a steal.
Historically, moving up the IC career ladder has been all about expanding influence and impact. At a certain point, you couldn’t meaningfully increase impact by being more productive individually, so to move up you had to get better at influencing others to get more leverage. What happens to the career ladder when wielding AI becomes higher leverage than influencing other teams and engineers? Are we already there?
Will AI do to software what social media did to entertainment? Will the “solo dev” at scale become more viable in the same way the influencer became viable at scale with social media / content creation?
(some) Tech debt has just been refinanced at a much lower rate.
CLIs are cool again. Who needs an MCP server when you have a simple CLI?
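The point is easy to see in code: a CLI’s `--help` text is self-describing documentation an agent can read and act on, with no schema registration or server process required. A toy sketch (the `notes` tool is made up):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # The --help output argparse generates is exactly the kind of plain-text
    # interface description an agent can discover and use on its own.
    p = argparse.ArgumentParser(prog="notes",
                                description="Toy note-taking CLI (hypothetical).")
    sub = p.add_subparsers(dest="command", required=True)
    add = sub.add_parser("add", help="Add a note")
    add.add_argument("text", help="Note body")
    sub.add_parser("list", help="List all notes")
    return p

# An agent would run `notes --help`, then issue commands like:
args = build_parser().parse_args(["add", "buy milk"])
print(args.command, args.text)
```

Compare that with standing up an MCP server: same capability surface, but the CLI version is one file and zero infrastructure.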
Local-first software will be an unexpected beneficiary of the AI era. Perfect example: Obsidian. They have no need to build a fancy AI wrapper, because agents can already read and understand markdown files. Sure, a SaaS could build an MCP server, but that’s a lot more friction than just “hey Claude, go read my files”. Shameless plug for my side project: Treeline, a local-first personal finance app built on DuckDB 😀
I love Git, but it was not built for parallel work on the same machine. We need something else. Whatever it is, I suspect it will be some mashup of Git and Docker: version control paired with isolated runtimes / filesystems. I’m watching what Turso is doing here with AgentFS which might be a piece of the puzzle.
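Git does already have a partial answer to the parallel-checkout half of this: worktrees give each agent session its own working directory and branch from a single repository, though they do nothing for the isolated-runtime half that Docker-style tooling would cover. A quick sketch:

```python
import subprocess, tempfile
from pathlib import Path

def sh(*cmd, cwd):
    subprocess.run(cmd, cwd=cwd, check=True, capture_output=True)

# One repo, two independent checkouts -- e.g. one per agent session.
root = Path(tempfile.mkdtemp())
repo = root / "repo"
repo.mkdir()
sh("git", "init", cwd=repo)
sh("git", "-c", "user.email=demo@example.com", "-c", "user.name=demo",
   "commit", "--allow-empty", "-m", "init", cwd=repo)

# Each worktree is a separate directory with its own branch, but all of
# them share the same object store and history.
sh("git", "worktree", "add", str(root / "agent-a"), "-b", "agent-a", cwd=repo)
sh("git", "worktree", "add", str(root / "agent-b"), "-b", "agent-b", cwd=repo)

out = subprocess.run(["git", "worktree", "list"], cwd=repo,
                     capture_output=True, text=True).stdout
print(out)
```

Worktrees isolate files but not processes, ports, or dependencies, which is exactly the gap the Git-plus-Docker mashup would need to close.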
The pendulum seems to be swinging back from cloud to local computing. AI will surface things from your computer so you can have them on the go. One of the most brilliant examples I’ve seen is Happy Code. Scan a QR code and - boom - you can code from your phone by talking to your laptop at home. The server is just a dumb pipe mediating messages between your phone and your laptop. This pattern is powerful and could apply well beyond just coding. Why pay Google $30 a year to store your 200GB of photos and documents when your personal laptop has 1TB? Historically, it’s because of convenience and to have access to it anywhere. I wonder if AI will shift the paradigm here, and if personal computers will start to turn into mini personal servers.
People say “but LLMs are just token predictors! They don’t have actual intelligence!” True. But productivity != intelligence. The marketplace usually rewards productivity, not intellect. LLMs may not be truly “intelligent” in the same way a human is, but they absolutely are productive. Make of that what you will.
Sometimes I wonder if the only thing standing between me and unemployment is that my brain has a bigger context window than an LLM (for now).
I would much prefer a local LLM so I don’t give all kinds of data to some corporation. But right now, the frontier models are so much better than any self-hostable alternative that I won’t. If that gap narrows to something marginal, I would absolutely consider self-hosting.
I got into software engineering because I was fascinated by the ability to build something with just an idea and a computer. Never has that been more true than right now.
“Prompt engineer” as a job title seems to have been short-lived, thankfully. And I still am not sure what an “AI engineer” is.
I really hope this is the last year before AI can fully file my taxes for me 😂
Apps used to be destinations. Now AI is the destination, and apps are just detours. Not everything can happen in a chat, but if my AI can’t interact with your app at all, I probably don’t want it.
It’s possible AI could render my career obsolete at some point (not being doomsday-ish, just recognizing it’s possible). But if it doesn’t, at minimum it will heavily disrupt it (already has). My best shot at “making the cut” further into the AI era as a software engineer is by riding this wave. Fortunately, my career so far has taught me to “surf”, so for now, I’m enjoying the ride 🤙
