My AI Debugging Workflow (Plan, Build, Verify, Ship)

AI can write a lot of code. But debugging is where it becomes genuinely powerful.

Over time I have developed a workflow for debugging with AI that is fast, reliable, and prevents the model from hallucinating solutions or guessing. This is the exact process I use when building projects with tools like Claude Code, Cursor, and ChatGPT.

The key is treating AI like a smart collaborator, not a magic box.

Start With Plan Mode

Before writing any code, I almost always start by putting the AI into plan mode.

Instead of immediately generating code, I ask the AI to create a plan for how the feature should be implemented. Planning forces the model to think through the problem first. During this stage it will often take into account things like:

  • the project's structure
  • framework conventions
  • existing architecture
  • Cursor rules or CLAUDE.md instructions

A typical prompt might look like this:

Before writing any code, create a step-by-step plan for implementing this feature.
Take into account the project's existing architecture and coding standards.

Once the plan is generated, I read through it carefully.

AI plans are often good, but sometimes you will notice red flags: unnecessary abstractions, incorrect assumptions, or something that does not fit the current codebase. When that happens, I ask the AI to revise the plan before any code gets written.

Update the plan to avoid introducing unnecessary abstractions.
Make sure the solution stays consistent with the current project structure.

Taking a few minutes to validate the plan saves a lot of debugging later.

Use a Second Agent to Find Gaps

One technique I have found extremely useful is feeding the generated plan to another AI agent.

The goal is not to regenerate the plan. It is to critique it.

Review this implementation plan and identify gaps, edge cases, or potential improvements.
Suggest anything that might have been missed.

This often surfaces things like:

  • missing edge cases
  • overlooked features
  • performance considerations
  • architectural improvements

Sometimes the second agent suggests ideas I had not even considered, which can significantly improve the final implementation.

Think of it like a design review before coding begins.

Let the AI Help Build the Feature

Once the plan looks solid, I move on to implementation.

AI is excellent at scaffolding features quickly:

  • building components
  • generating API routes
  • wiring up UI behavior
  • creating database queries
  • handling validation

AI can often get you 80 to 90 percent of the way there. After that, verification and debugging become important.
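As a concrete example of the kind of scaffolding I mean, here is a minimal input-validation helper of the sort an AI assistant might generate alongside an API route. The `CreatePostInput` shape and its field rules are hypothetical, just a sketch of the pattern:

```typescript
// Hypothetical input shape for a "create post" API route.
interface CreatePostInput {
  title: string;
  body: string;
}

// Validate untrusted request data before it reaches the database layer.
// Returns the typed input on success, or a list of error messages.
function validateCreatePost(data: unknown): { input?: CreatePostInput; errors: string[] } {
  const errors: string[] = [];
  const obj = (data ?? {}) as Record<string, unknown>;

  if (typeof obj.title !== "string" || obj.title.trim().length === 0) {
    errors.push("title is required");
  } else if (obj.title.length > 200) {
    errors.push("title must be 200 characters or fewer");
  }
  if (typeof obj.body !== "string" || obj.body.trim().length === 0) {
    errors.push("body is required");
  }

  if (errors.length > 0) return { errors };
  return {
    input: { title: (obj.title as string).trim(), body: (obj.body as string).trim() },
    errors: [],
  };
}
```

AI is very good at producing this kind of repetitive, well-understood code; the judgment calls (what counts as valid, where validation lives) are still yours.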

Write Tests and Manually Verify Behavior

After the feature is implemented, I verify that everything works properly.

My typical workflow looks like this:

  1. Write unit tests
  2. Run the app locally
  3. Manually test the feature
  4. Try edge cases

Manual testing helps me understand exactly what behavior is failing, which gives much better information to provide to the AI when something goes wrong.

Always Start With Error Messages

When something is not working, the best place to start is with error messages.

Check things like:

  • browser console errors
  • Next.js logs
  • terminal output
  • failing test results

Error messages give AI a clear starting point. Instead of asking something vague like "Why doesn't this work?" you can give the AI something concrete:

Here is the error message:

TypeError: Cannot read properties of undefined (reading 'map')

Here is the component code that triggers it...

This dramatically improves the quality of AI responses. AI tools hallucinate less when they have concrete information to reason about instead of guessing.
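That particular TypeError usually means a component mapped over data that had not loaded yet. A common defensive fix, sketched here with a hypothetical `items` prop, is to fall back to an empty array:

```typescript
// Hypothetical props: items may still be undefined while data is loading.
interface Props {
  items?: string[];
}

// Before: props.items.map(...) crashes while items is undefined.
// After: the nullish fallback guarantees .map always has an array to work on.
function renderLabels(props: Props): string[] {
  return (props.items ?? []).map((item) => item.toUpperCase());
}
```

Pasting both the error and the triggering code lets the AI point at the exact line instead of speculating.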

Provide Behavioral Context

Error messages are good, but behavioral context is even better.

When prompting the AI, I include things like:

  • what I expected to happen
  • what actually happened
  • steps to reproduce the issue
  • UI behavior I noticed

Example:

When I click the submit button the API request fires,
but the UI does not update.

The console shows this error:
[error message]

Here is the component code...

The more context you give the AI, the easier it is for it to reason about the problem. This is the same principle behind structuring your repository for AI tools. Context is everything.

If the AI Gets Stuck, Ask It to Gather More Context

Sometimes AI will suggest fixes that do not actually solve the problem.

When that happens, the worst thing you can do is keep asking the same question. Instead, ask the AI to gather more context.

Investigate the codebase and identify where this value originates.
Look for related files that might influence this behavior.

This pushes the AI to explore the codebase instead of guessing. Debugging with AI often becomes a back-and-forth investigation process. The more structured that process is, the faster you converge on the real issue.

Use Higher Effort for Complex Bugs

Some bugs require deeper reasoning.

When I am dealing with something complex, I increase Claude's effort setting, which adjusts how much reasoning the model uses. I usually set effort to High or Max when debugging difficult problems.

The maximum effort mode provides the deepest reasoning available. This allows Claude to:

  • reason through more complex logic
  • analyze larger parts of the codebase
  • avoid shallow guesses
  • produce more thoughtful debugging steps

If you are dealing with a complicated issue, turning the effort level up can make a big difference.

Run Automated Safeguards Before Committing

Once the issue is fixed and everything works locally, I run several automated checks.

I always verify that:

  • linting passes
  • the build succeeds
  • unit tests pass

I strongly recommend using a pre-commit hook to enforce this automatically.

npm run lint
npm run build
npm test

If any of these fail, the commit is blocked. This protects you from accidentally committing broken code. The earlier post on debugging AI code covers setting up linting and guard rails in more detail.
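As a sketch, a Husky-style pre-commit hook that enforces those three checks could look like this. The script names assume a standard npm project; adjust them to match your own package.json scripts:

```shell
#!/bin/sh
# .husky/pre-commit (or .git/hooks/pre-commit, made executable)
# Abort the commit as soon as any check fails.
set -e

npm run lint
npm run build
npm test
```

With `set -e`, the first failing command stops the script with a nonzero exit code, which is what tells git to block the commit.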

Add an AI Code Review Step

Even after everything passes locally, I like having another layer of protection.

Before merging code, I open a pull request and have an AI review tool like CodeRabbit analyze the changes.

AI code reviewers are good at identifying:

  • potential bugs
  • missing edge cases
  • performance issues
  • risky patterns

It is an easy way to catch things you might have missed during manual review.

Run a Security Review Before Shipping

One step many developers skip is security scanning.

If you are using Claude Code, I recommend running:

/security-review

This scans your codebase for potential vulnerabilities.

The last thing you want is to ship code with a known exploit. Adding this step significantly reduces that risk.

The Full Workflow

Putting everything together, my workflow looks like this:

  1. Start in plan mode
  2. Review the AI-generated plan
  3. Have another AI agent critique the plan
  4. Implement the feature with AI assistance
  5. Write tests and manually verify behavior
  6. Check console errors and logs
  7. Provide detailed debugging context to the AI
  8. Ask the AI to gather more context when stuck
  9. Increase Claude's effort level for complex problems
  10. Run linting, build checks, and tests
  11. Run an AI code review
  12. Perform a security review

This process helps prevent AI hallucinations and ensures the code you ship is reliable.

Where to Go From Here

AI can dramatically accelerate development, but debugging still benefits from structure and discipline. The key is giving AI the right inputs: clear error messages, detailed context, and access to the relevant code.

When you combine good debugging practices with a well-structured project, AI becomes an extremely powerful development partner. But that structure has to exist before the AI starts writing code. Without it, you spend more time correcting drift than building features.

That's why we built ShipKit.

ShipKit is a rule-driven Next.js architecture designed specifically for AI coding tools like Cursor and Claude. It provides clear project conventions, structured patterns, and guidance files that help AI assistants generate code that actually fits your codebase.

Once that structure is in place, you can move much faster when starting new projects.

That is where ShipUI comes in.

ShipUI is our collection of production-ready Next.js starter themes built on top of ShipKit. Each theme includes real components, real project structure, and everything wired up so you can begin building immediately.

ShipKit gives your AI tools the structure they need to write better code. ShipUI gives you a clean, production-ready starting point so you can ship faster.

Buy once, own forever. Start building immediately.
