
Code Quality

Verification Before Completion

Run tests, type checks, lint, and build commands before claiming any work is complete. Evidence before assertions.

Last reviewed Mar 2, 2026

Install

Create this file in your project:

.claude/skills/verification-before-completion/SKILL.md
```markdown
---
name: verification-before-completion
description: Use this skill before marking any task as complete, before committing code, or before telling the user that work is done.
---

# Verification Before Completion

## When to Use

- Before saying "done," "complete," "finished," or "ready"
- Before committing code changes
- Before marking a task or ticket as complete
- After making any code change, before moving to the next task

## Process

1. **Identify verification commands** for the project:
   - Tests: `npm test`, `pytest`, `cargo test`, or whatever the project uses
   - Type checking: `npx tsc --noEmit`, `mypy`, or equivalent
   - Linting: `npm run lint`, `ruff check`, or equivalent
   - Build: `npm run build`, `cargo build`, or equivalent
2. **Run all applicable verification commands** in order: types, lint, tests, build.
3. **Review the output** of each command. If any command fails:
   a. Fix the failure.
   b. Run all verification commands again from the beginning.
   c. Repeat until all commands pass.
4. **Report results** -- include the actual output showing success, not just "tests pass."

## Rules

- Never claim work is complete without running verification commands.
- Never say "tests should pass" or "this should work." Run the commands and show the output.
- If a verification command does not exist for the project, note that explicitly.
- Fix all failures before completing. Do not leave known failures for later.
- Run the full suite, not just the tests related to your changes.
- Evidence before assertions. Show the output, then state the conclusion.
- If the build takes too long for a full run, at minimum run type checks and relevant tests.
```
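The ordered check sequence in the Process section can be sketched as a small runner. This is a minimal illustration, not part of the skill itself; the npm/tsc commands are assumptions standing in for whatever the project actually uses:

```python
import subprocess

# Hypothetical verification order: types, lint, tests, build.
# The exact commands depend on the project.
CHECKS = [
    ["npx", "tsc", "--noEmit"],  # type check
    ["npm", "run", "lint"],      # lint
    ["npm", "test"],             # full test suite
    ["npm", "run", "build"],     # build
]

def run_checks(commands=CHECKS):
    """Run each verification command in order.

    Returns (True, None) if everything passes, or (False, cmd)
    for the first command that fails, so it can be fixed and the
    whole sequence re-run from the beginning.
    """
    for cmd in commands:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            return False, cmd
    return True, None
```

Stopping at the first failure (rather than collecting all failures) matches the skill's rule that a fix triggers a full restart of the sequence.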

What this skill does

Verification Before Completion is a simple but powerful guardrail: before claiming any work is done, you must prove it works by running the project's verification commands. No exceptions, no "it should work," no skipping steps.

The core principle is "evidence before assertions." When an AI agent says "I've implemented the feature and all tests pass," this skill requires that the agent actually ran the tests and can show the output. This eliminates the common failure mode where code looks correct but has a subtle type error, a missing import, or a test that was never actually executed.

This skill is especially effective as a complement to other skills. TDD ensures tests are written. Systematic Debugging ensures bugs are properly fixed. Verification Before Completion ensures that all of that work is actually validated before being shipped.

Example workflow

After implementing a new API endpoint, before saying "done," the agent will:

  1. Run `npx tsc --noEmit` -- passes, 0 errors.
  2. Run `npm run lint` -- passes, no warnings.
  3. Run `npm test` -- all 47 tests pass, including the 3 new tests.
  4. Run `npm run build` -- build succeeds.
  5. Report: "All verification checks pass. Type check clean, lint clean, 47/47 tests passing, build successful."
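The fix-and-rerun rule behind this workflow (on any failure, repair it, then restart the whole sequence from the beginning) can be sketched as a loop. This is a minimal illustration; the `fix` callback is hypothetical, standing in for the agent actually editing code:

```python
import subprocess

def verify_until_clean(commands, fix, max_attempts=5):
    """Re-run the full verification sequence after every fix.

    On any failure, call `fix` (a hypothetical repair hook), then
    restart from the first command rather than resuming mid-list.
    Returns True once every command passes in a single clean run.
    """
    for _ in range(max_attempts):
        for cmd in commands:
            if subprocess.run(cmd, capture_output=True).returncode != 0:
                fix(cmd)   # repair the failure...
                break      # ...then restart from the beginning
        else:
            return True    # every command passed this pass
    return False
```

Restarting from the top after a fix matters because a change that repairs the tests can reintroduce a type or lint error that already passed earlier in the sequence.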

Tips

  • Install this skill to prevent the "it compiles on my machine" problem
  • Works great as a pre-commit hook mental model -- the agent won't commit until everything passes
  • If verification reveals issues the agent didn't expect, that's the skill working as intended
  • For very large test suites, the agent can run targeted tests first, then the full suite at checkpoints
