Field note · Agentic coding

How I use Codex and Claude Code to ship working prototypes.

AI coding agents are useful when the work has a real product surface, a real repo, and a concrete definition of done. The advantage is not better prompts. It is tighter loops between product judgment, implementation, and verification.

Cong Fan · Brisbane · Updated May 4, 2026

I use Codex and Claude Code most effectively when the project is already grounded: a founder wants to validate one product workflow, a small team has a messy internal process, or a repo has a user-visible failure that needs to be traced through the stack.

The agent is not the strategy. It is a way to compress the implementation loop once the target is clear enough to inspect, build, and verify.

Where this beats a normal ChatGPT session

Most ChatGPT sessions stop at advice. Good agentic coding work touches the live materials: the repository, the deployment target, the form, the database, the logs, the screenshots, the payment flow, or the automation history. That is where the useful truth is.

For a founder, this means the conversation turns into a working artifact faster. For a small business, it means an automation can be tested against the real workflow instead of described in a slide. For a product team, it means repo rescue can move from "maybe this is the issue" to a narrow patch with evidence.

What I still keep human

I keep product judgment, scope control, taste, and acceptance criteria outside the agent. The agent can write code, find mistakes, compare options, and produce drafts. It cannot know which tradeoff matters to the customer unless the operator makes that explicit.

If you have a real workflow, prototype, or repo issue, the useful first step is a project brief: what exists, what hurts, who uses it, and what done looks like.
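Those four questions can be sketched as a structured record. This is a minimal illustration, not a prescribed template — the field names and example values below are my own hypothetical choices, not something the post standardizes:

```python
from dataclasses import dataclass


@dataclass
class ProjectBrief:
    """One-screen project brief. Field names mirror the four questions above."""

    what_exists: str          # current state: the repo, form, workflow, or automation
    what_hurts: str           # the user-visible failure or friction
    who_uses_it: str          # the people affected, and how often
    definition_of_done: str   # a concrete, verifiable end state


# Hypothetical example values, for illustration only.
brief = ProjectBrief(
    what_exists="checkout flow in a small e-commerce repo",
    what_hurts="payment step intermittently fails for returning customers",
    who_uses_it="roughly 200 weekly buyers",
    definition_of_done="failure reproduced, patched, and verified in staging",
)
```

Writing the brief down before the first agent session is the point: it gives the loop something to verify against.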