This 1 Trick Turned AI Code Fixes from Garbage to Gold
Build a repeatable prompt loop from your tools’ feedback and stop AI from guessing.
Transform your development tool errors from frustrating noise into precise, actionable AI prompts that deliver consistent, accurate code fixes.
You’ve seen it before: a wall of ESLint errors after using AI to help write code. It feels like a tax on velocity. So you copy the lint output into ChatGPT hoping for clean fixes… and instead get more problems:
- Lazy `any` types.
- Fixes that don’t compile.
- New errors that didn’t exist before.
I used to treat these error dumps as annoying noise—until I realized: they’re structured feedback begging to be weaponized.
ℹ️ ESLint is a popular JavaScript/TypeScript linter that flags bugs, enforces style, and keeps your codebase clean and consistent.
The Shift: Treat Errors as Specs
Rather than dumping raw output into an AI chat and hoping for the best, I started treating each ESLint error like a mini-spec. That one change turned everything around.
Inspired by how tools like CodeRabbit structure code review prompts, I built a schema to convert each ESLint error into a structured prompt.
Instead of dumping the output wholesale, I distilled them into a template.
The Template
I want you to take these ESLint errors and generate concrete prompts using this format:
In {file_path} around lines {line_start} to {line_end}, the {parameter_or_variable} {extracted_from_or_used_in} {context} lacks {validation_type}. Add {solution_type} to ensure {parameter_or_variable} is {validation_criteria}. If the validation fails, return an appropriate error response to prevent {consequence}.
Variables to fill:
- `{file_path}` - The file location where the issue occurs
- `{line_start}` and `{line_end}` - Line number range
- `{parameter_or_variable}` - The specific variable/parameter causing the issue
- `{extracted_from_or_used_in}` - How the variable is obtained (e.g., "extracted from the request body", "passed as function argument")
- `{context}` - Additional context about where/how it's used
- `{validation_type}` - Type of validation missing (e.g., "validation", "type checking", "sanitization")
- `{solution_type}` - The fix to implement (e.g., "input validation", "type guards", "schema validation")
- `{validation_criteria}` - Specific requirements (e.g., "a valid number within an acceptable range", "a non-empty string", "a valid email format")
- `{consequence}` - What problem this prevents (e.g., "unexpected behavior", "runtime errors", "security vulnerabilities")
For each `any`-type prompt, add a note that the codebase enforces strict typing.
Create two sections:
- `any` type errors
- anything else
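To make this concrete, here's what a filled-in prompt might look like. The file path, line numbers, and variable name below are invented for illustration; a `@typescript-eslint/no-explicit-any` error could become something like:

```text
In src/routes/user.ts around lines 42 to 45, the userId parameter extracted
from the request body in the profile handler lacks type checking. Add type
guards to ensure userId is a non-empty string. If the validation fails,
return an appropriate error response to prevent runtime errors. Note: this
codebase enforces strict typing, so do not introduce `any`.
```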
With this template, ESLint errors go from chaos to clarity. I feed in the linter output and the AI generates reliable, context-aware fixes—consistently.
Beyond ESLint: A Universal Pattern
This isn’t just about linting. The pattern works with:
Compiler errors
Type checker output
Security scanner logs
Test failure messages
If a tool outputs structured feedback, you can build a schema for it.
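For ESLint specifically, the extraction step can even be automated. ESLint's `--format json` output is an array of result objects with `filePath` and a `messages` list (`ruleId`, `message`, `line`, `endLine`, etc.), so a small script can pre-fill the mechanical fields of the template and leave the semantic blanks for the AI. This is a rough sketch, not the author's tooling; `toPromptStubs` is a name I made up:

```typescript
// Shape of the fields we use from ESLint's `--format json` output.
interface LintMessage {
  ruleId: string | null;
  message: string;
  line: number;
  endLine?: number;
}

interface LintResult {
  filePath: string;
  messages: LintMessage[];
}

// Fill in the mechanical template fields (file, line range, rule, message).
// The semantic blanks ({validation_type}, {consequence}, ...) are left for
// you or the model to complete.
function toPromptStubs(results: LintResult[]): string[] {
  return results.flatMap((r) =>
    r.messages.map(
      (m) =>
        `In ${r.filePath} around lines ${m.line} to ${m.endLine ?? m.line}, ` +
        `fix the ${m.ruleId ?? "unknown rule"} issue: ${m.message}`,
    ),
  );
}

// Hand-written sample standing in for `eslint . --format json` output.
const sample: LintResult[] = [
  {
    filePath: "src/routes/user.ts",
    messages: [
      {
        ruleId: "@typescript-eslint/no-explicit-any",
        message: "Unexpected any. Specify a different type.",
        line: 42,
        endLine: 42,
      },
    ],
  },
];

console.log(toPromptStubs(sample)[0]);
```

In practice you'd pipe the linter's JSON output into this instead of a hand-written object, then paste the resulting stubs into your prompt template.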
The Real Playbook
Extract key fields from your tool’s output
Map them into a templated prompt format
Let AI follow structure, not guess at context
This is a human-AI collaboration pattern: your domain knowledge gives the context; the AI does the lifting—at scale.
Takeaway: Friction Becomes Fuel
Red squiggles aren’t just friction. They’re latent prompts waiting to be structured.
If you can templatize how you respond to common error patterns, your AI assistant stops hallucinating and starts performing.
Prompt architecture turns error feedback into a reliable, repeatable feedback loop.
So the next time your tools throw errors, don’t fix them ad hoc—build the pattern once, reuse it forever.
From the rubble of “how it’s always been,”
Chase ✨🤘
Insights Summary
Error Template Strategy - Transform raw error messages into structured prompt templates that provide AI with precise, actionable instructions rather than chaotic noise.
ESLint Prompt Engineering - Convert ESLint errors into detailed specifications using templated prompts that specify file paths, validation types, and expected solutions for consistent AI fixes.
Universal Feedback Pattern - The template approach works beyond ESLint for any structured tool output including compilers, test runners, and security scanners by mapping error fields to prompt schemas.
Domain-AI Partnership - Combine developer domain expertise in understanding error meanings with AI generative power through structured prompts to create scalable, accurate solutions.
Errors as Specifications - Treat error messages as rich specifications rather than roadblocks by standardizing them into prompt architectures that enable dependable AI assistance.