mike cao

My Journey to AI

I’ve been writing software for a long time. I do it because I love it. I mostly write software for myself, to scratch an itch, to learn something new, to see if I can make a thing that didn’t exist before. I’ve been doing open source for as long as I can remember, and I believe in building things in the open. Code has always been the way I think through problems.

So when AI showed up, you’d think I would have been first in line. I wasn’t. I was one of the last.

The skeptic years

When the AI hype cycle started, I watched from a distance. Everyone was talking about AGI like it was six months away. Every product launch was framed as a revolution. The Rabbit R1 was going to replace your phone. Devin was going to replace developers. GitHub Copilot was supposedly writing production code, but every developer I knew hated using it.

I tried the products. All of them. Mainly because I’m a developer and curiosity is part of the job. I tried ChatGPT, Claude, Perplexity, Cursor, and various models. I tested them on real problems — writing functions, debugging code, generating tests. The results were consistently bad. Models would hallucinate APIs that didn’t exist. They’d write code that looked plausible but fell apart the moment you ran it. If a model couldn’t write a simple test without introducing syntax errors, how was it ever going to be good enough for real development?

None of it changed my habits.

The one thing that stuck

The only behavior change I noticed was small: I stopped googling things. ChatGPT replaced Google for me, and more specifically, it replaced Stack Overflow. Instead of reading through five answers to find the one that actually applied to my situation, I’d ask a question and get a direct, contextualized response. That was genuinely useful.

I also tried AI autocomplete in my editor. It was neat. Sometimes it guessed right, and there was a small thrill in watching it complete a line the way I would have written it. But it wasn’t a game changer. It was a faster tab key.

That was the extent of my AI adoption for a long time. A search replacement and a slightly smarter autocomplete. Everything else felt like hype.

Vibe coding and other bad ideas

Then the concept of “vibe coding” started making the rounds: the idea that you could build software just by prompting an AI, without really understanding or reviewing the code it produced. I hated it. I thought it would lead to a flood of garbage software riddled with security vulnerabilities and subtle bugs that nobody would catch because nobody was reading the code.

To me, vibe coding represented everything wrong with the hype cycle — the willingness to sacrifice quality for speed, the assumption that understanding didn’t matter as long as the output looked right. I’d spent my career caring about the code I wrote. The idea of not even reading it felt like malpractice.

December 2025

I kept hearing about how good Claude Code was. Not the usual hype — specific, concrete accounts from developers I respected about what it could actually do. So I tried it. Not with high expectations. Just out of curiosity, the same curiosity that had led me to try and then dismiss every other AI tool.

It wasn’t just that it could write code. I’d seen that before, and it had never impressed me. It was that the code worked. Not “worked with some tweaks.” Not “worked after I fixed three bugs.” It worked. Tests passed on the first try. Syntax was clean. The code did what I asked it to do, in the way I would have done it, sometimes in ways I wouldn’t have thought of.

Things that hadn’t worked before suddenly did. Tests that would have taken me an hour to debug were generated correctly in one shot. Errors that used to cascade through a codebase were caught and handled. The gap between intent and implementation, the gap I’d spent my entire career learning to bridge, had narrowed to almost nothing.

The convert

I started vibe coding. Real vibe coding — the thing I’d railed against. I’d describe what I wanted, watch the code appear, and run it without scrutinizing every line. And it worked. Not sometimes. Reliably.

It was fun. That’s the part that surprised me the most. Programming had always been fun for me, but this was a different kind of fun. It was the fun of building without the friction. The fun of having an idea and producing a working prototype an hour later. The fun of moving at the speed of thought instead of the speed of typing.

I had spent years arguing that vibe coding was dangerous, that AI-generated code was unreliable, that the hype was overblown. And now here I was, doing the exact thing I’d criticized, and finding new joy as a developer.

What I got wrong

Looking back, my skepticism wasn’t irrational. The early tools really were bad. The hype really was overblown. The products really did fail to deliver on their promises.

Where I went wrong was in assuming the ceiling was the floor. I saw what AI could do in 2023 and 2024, and I extrapolated that forward as a permanent limitation. I confused “not good enough yet” with “never going to be good enough.” That’s a mistake I should have known better than to make. I’ve been in tech long enough to know that tools improve, and sometimes they improve fast enough to make yesterday’s skepticism look foolish.

The other thing I got wrong was treating my skepticism as an identity. Being the AI skeptic felt comfortable. It felt like having standards. It’s easy to be the person in the room saying “this doesn’t work” because you’re right often enough that it feels like wisdom. But at some point, not updating your beliefs in the face of new evidence isn’t wisdom. It’s stubbornness.

Where I am now

I still write software to scratch an itch. I still care about open source. I still believe in understanding what you build. But the way I build has fundamentally changed, and I don’t think it’s going back.

AI forced me to rethink ideas I’d held for years. About what it means to write code. About what parts of the process actually matter. About whether reading every line is diligence or just habit.

The journey from skeptic to convert wasn’t a sudden flip. It was a long period of watching and waiting, followed by a single experience that was undeniable. I think a lot of developers are somewhere on that timeline right now. Some are where I was in 2024, trying tools and being unimpressed. Some are where I was in early 2025, grudgingly using AI for search but dismissing everything else. And some haven’t tried it at all.

If you’re a skeptic, I get it. I was you. All I’d say is: keep trying. The tools that failed you last year might not fail you this year. And when one doesn’t fail you, pay attention. That’s the moment everything changes.