
On Working with AI Tooling, From First Principles

AI coding tools are genuinely useful, but only if you stay in the driver's seat. Some thoughts on what that actually looks like in practice.

I've been using Claude Code to build this site, and I want to be honest about what that experience is actually like — not the pitch version, and not the backlash version either.

The honest version: it's useful in direct proportion to how much I understand what it's doing.

The trap

The failure mode with AI coding tools isn't that they write bad code. It's that they write plausible code confidently, and if you're not paying attention, you'll ship something you can't explain.

That's a real problem not because the code will necessarily break (it might work fine), but because it puts you in debt. Every piece of your codebase you don't understand is a liability the next time something goes wrong. And when it goes wrong at 11pm before a release, "I'm not sure, the AI wrote this part" is a bad place to start debugging.

The code has a particular texture when this happens: correct on the surface, slightly strange underneath. Abstractions in the wrong places. Patterns that don't fit the language's idioms. A helper function that exists for five lines of code that could have just been five lines of code.
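
To make that texture concrete, here's a hypothetical sketch (the names are invented, not taken from any real codebase): indirection that adds a layer without adding meaning.

```ts
// Hypothetical illustration only: helpers that wrap logic which would have
// read fine inline.
type User = { name: string; isActive: boolean };

const allUsers: User[] = [{ name: "Ada", isActive: true }];

// What the generated code tends to look like: a named predicate plus a
// generic filter wrapper, each used exactly once.
const isActiveUser = (user: User): boolean => user.isActive;

function filterUsers(users: User[], predicate: (u: User) => boolean): User[] {
  return users.filter(predicate);
}

const activeUsers = filterUsers(allUsers, isActiveUser);

// What the call site could have been, with no loss of clarity:
const activeUsersInline = allUsers.filter((user) => user.isActive);
```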

The real risk isn't AI-generated code that breaks. It's AI-generated code that works — until it doesn't, and you have no idea why.

What actually works

The better frame: AI tools are fast at the things I'm slow at, and I'm good at the things they're inconsistent at.

They're fast at:

  • Boilerplate and scaffolding
  • Surfacing API shape without a trip to the docs
  • Translating fuzzy intent into a working first draft
  • Holding a lot of context in a single operation

I'm good at:

  • Knowing when the abstraction is wrong
  • Noticing when "clean" is actually hiding complexity
  • Having opinions about what the code should feel like six months from now
  • Catching the thing that almost makes sense but doesn't quite

So the workflow that's been working:

  1. Describe what I want in enough detail that I could evaluate the output
  2. Read every file before it gets committed — actually read it, not skim it
  3. Push back when something feels off, not just when it's clearly broken
  4. Ask "why" when I don't understand a choice

That last one matters more than it sounds. "Why did you use a useEffect here instead of deriving this value directly?" isn't just a sanity check — it's how you stay calibrated. If the answer makes sense, you learned something. If it doesn't, you caught a problem before it became your problem.
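
As a concrete (and hypothetical) illustration of that useEffect question: the first component below stores a derived value in state and keeps it in sync with an effect, which adds an extra render and a second source of truth; the second just derives the value during render. Neither is from this site, it's only a sketch of the pattern worth asking about.

```tsx
import { useEffect, useState } from "react";

type Item = { name: string; price: number };

// The pattern worth asking "why" about: state plus an effect for a value
// that is fully determined by props.
function CartTotalWithEffect({ items }: { items: Item[] }) {
  const [total, setTotal] = useState(0);
  useEffect(() => {
    setTotal(items.reduce((sum, item) => sum + item.price, 0));
  }, [items]);
  return <p>Total: {total.toFixed(2)}</p>;
}

// Deriving the value directly during render: same output, less machinery.
function CartTotal({ items }: { items: Item[] }) {
  const total = items.reduce((sum, item) => sum + item.price, 0);
  return <p>Total: {total.toFixed(2)}</p>;
}
```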

The skill that transfers

The engineers who get the most out of these tools are the ones who could do the work without them. The tool amplifies taste. If your taste is underdeveloped, it amplifies that too.

That's not an argument against using them. It's an argument for staying engaged — for treating the output as a draft worth scrutinizing, not an answer worth shipping. The parts of your brain that evaluate and judge need to stay on even when the parts that generate are on autopilot.

The goal is still to understand what you're building. The tool just changes how fast you get to a draft worth understanding.


The best analogy I've found: it's like having an extremely fast typist who knows every API by heart and has strong opinions about code style. Incredibly useful. Still needs a good editor.