Apr 14, 2026 · 8 min read · Workflow, AI

On shipping software with AI without vibe-coding.

Fourteen months ago I started letting an AI write code with me. I kept the parts of my practice that mattered — slow reading, type safety, code review by a human who cares — and I let go of the parts that were, honestly, typing. This is a report from inside that experiment.


The most useful thing I can say about working with AI coding assistants is that the productivity gains are real and almost entirely lateral. I do not type faster. I type less. What changes is the shape of the work: fewer hours spent grinding through boilerplate, more hours spent deciding whether the boilerplate should exist at all.

This sounds like a small shift. It is not. When the cost of writing a file drops, the cost of having the wrong file stays the same — and suddenly the wrong file is the expensive thing. Good taste becomes the bottleneck, which is a very different bottleneck from the one I had three years ago.

The hard part was never typing. The hard part was knowing what not to build.

What “vibe-coding” actually means

The phrase “vibe-coding” has come to mean, roughly, accepting whatever the model produces without reading it — shipping code you could not defend in a review. It is seductive because it is fast, and it is dangerous for the reasons fast things are usually dangerous: the bill arrives later, in a different currency.

I don’t vibe-code. I read every diff. When the model writes something I don’t understand, I ask it to explain, and if the explanation is not convincing I throw the code out. This sounds pious but it’s just self-interest: I am the one on call at 2am, and the model is not.

The one rule I actually follow

If I would not have written this code myself given enough time, I do not ship it. Everything else — style, idiom, comment density — is negotiable. This rule alone explains most of how my practice has changed.

A concrete example

Last week I was adding a feature to PeopleShift. A user should be able to flag a payroll discrepancy, and the flag should route to the right reviewer based on the department. Here is the shape of the hook I ended up with, after three rounds of back-and-forth:

// Route a payroll flag to the correct reviewer
export function routePayrollFlag(flag: PayrollFlag): Reviewer {
  const dept = flag.employee.department;
  const reviewer = REVIEWER_MAP[dept] ?? REVIEWER_MAP.default;
  if (flag.amount > ESCALATION_THRESHOLD) {
    return reviewer.escalation;
  }
  return reviewer.primary;
}

The first draft the model produced was fine. It used a switch statement, it was slightly longer, and it had an implicit assumption about how the reviewer map was shaped. The third draft — above — is the one I would have written. Getting there cost me about four minutes. Writing it from scratch would have cost me twenty.
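For reference, here is roughly what the function above assumes about its surroundings. These declarations are my reconstruction for illustration — the names, shapes, and threshold are guesses, not the actual PeopleShift source:

```typescript
// Illustrative supporting declarations for routePayrollFlag.
// All names and values here are invented for the example.
interface Reviewer {
  id: string;
}

// Each department maps to a primary reviewer plus an escalation
// contact for discrepancies above the threshold.
interface ReviewerPair {
  primary: Reviewer;
  escalation: Reviewer;
}

interface PayrollFlag {
  employee: { department: string };
  amount: number;
}

const ESCALATION_THRESHOLD = 10_000; // invented cutoff, for illustration

const REVIEWER_MAP: Record<string, ReviewerPair> = {
  engineering: {
    primary: { id: "eng-lead" },
    escalation: { id: "eng-director" },
  },
  // The `default` key is what the ?? fallback in the function reaches for.
  default: {
    primary: { id: "hr-generalist" },
    escalation: { id: "finance-head" },
  },
};
```

The implicit assumption the first draft got wrong lived here: whether the map is keyed by department at all, and whether a fallback entry exists.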

[Fig. 01 — Estimated split of engineering hours, by activity (typing, reading, deciding), 2023–2026.]

How I structure a session

A working session with AI assistance, for me, looks like this:

  1. Write the test first. The test is a contract I understand; the model’s job is to satisfy it.
  2. Describe the function at the file level — a paragraph, not a class diagram. Let the model propose shape.
  3. Read the proposal critically. Ask questions. Reject freely.
  4. Refactor in my editor, not in chat. My brain is still the better refactorer.
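To make step 1 concrete with the routing example from earlier: the contract comes before the body. Everything below is an illustrative stand-in — the types and values mirror my sketch of the example, not real PeopleShift code:

```typescript
// Step 1 in practice: the assertions are written before the body.
// All declarations here are illustrative stand-ins.
interface Reviewer { id: string }
interface ReviewerPair { primary: Reviewer; escalation: Reviewer }
interface PayrollFlag { employee: { department: string }; amount: number }

const ESCALATION_THRESHOLD = 10_000;
const REVIEWER_MAP: Record<string, ReviewerPair> = {
  engineering: { primary: { id: "eng-lead" }, escalation: { id: "eng-director" } },
  default: { primary: { id: "hr-generalist" }, escalation: { id: "finance-head" } },
};

// The body is what the model has to satisfy; at step 1 it can be a stub.
function routePayrollFlag(flag: PayrollFlag): Reviewer {
  const pair = REVIEWER_MAP[flag.employee.department] ?? REVIEWER_MAP.default;
  return flag.amount > ESCALATION_THRESHOLD ? pair.escalation : pair.primary;
}

// The contract: known departments route to their primary reviewer...
console.assert(
  routePayrollFlag({ employee: { department: "engineering" }, amount: 500 }).id
    === "eng-lead",
);
// ...unknown departments fall back, and large amounts escalate.
console.assert(
  routePayrollFlag({ employee: { department: "marketing" }, amount: 50_000 }).id
    === "finance-head",
);
```

The point of writing the assertions first is that they encode intent I already understand; whatever the model proposes for the body is judged against them, not against my patience.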

The second step is the one I get asked about most. I used to think specs had to be precise; in practice, prose is underrated. The model is good at inferring intent from reasonably expressed prose — better, honestly, than from my half-formed type signatures.

Tools that matter

I use a handful of things every day. Cursor for editor-integrated edits. Claude for longer conversations about architecture. A local script that runs my test suite and feeds failures back to the model when I’m iterating on a hard bug. None of these are secret; the leverage is not in the tool choice, it’s in knowing when to switch.
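The feedback script is nothing clever. A minimal sketch of the loop, assuming a Node environment — the `sendToModel` function is a placeholder for whatever API you actually call, and the round limit is arbitrary:

```typescript
// Sketch of a test-feedback loop: run the suite, capture failures,
// hand them to a model, repeat. sendToModel is a placeholder --
// wire it to whichever API you use.
import { execSync } from "node:child_process";

function runTests(command: string): { ok: boolean; output: string } {
  try {
    const output = execSync(command, { encoding: "utf8", stdio: "pipe" });
    return { ok: true, output };
  } catch (err: any) {
    // A non-zero exit means failures; stdout/stderr hold the details.
    const output = [err.stdout, err.stderr].filter(Boolean).join("\n");
    return { ok: false, output };
  }
}

// Placeholder for the actual model call and patch application.
async function sendToModel(failures: string): Promise<void> {
  /* ... */
}

async function iterate(command: string, maxRounds: number): Promise<boolean> {
  for (let round = 1; round <= maxRounds; round++) {
    const { ok, output } = runTests(command);
    if (ok) return true;
    console.log(`Round ${round}: tests failing, asking the model...`);
    await sendToModel(output);
  }
  return false;
}
```

The useful property is that the model only ever sees real failure output, not my summary of it — the loop keeps me from paraphrasing a stack trace badly.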

· · ·

What I don’t do

I don’t let the model write tests for code it just wrote. I don’t let it touch security-sensitive code unsupervised. I don’t use it as a rubber duck — it’s too agreeable to be useful there. And I don’t use it for code reviews; I want a human who can say “this is the wrong abstraction” in a way that survives pushback.

If I had to summarize: the model is a very fast junior developer who has read everything and understood nothing. That is more useful than it sounds, and less useful than it looks. The practice of software engineering is, as far as I can tell, exactly the same as it was three years ago. The hands on the keyboard are different. The brain is not.

Written by
Tri Bagus
Perth-based web developer. Building things from zero.