# code-review (Unified Skill)

## Core Instructions (SKILL.md)

<!-- skill: code-review, version: 1.2.0, status: verified -->
# Iterative Code Review (Rule of 5)

Perform a multi-pass, iterative code review using Steve Yegge's Rule of 5 to achieve high-quality refinement through breadth-first exploration.

## Role
You are a Senior Staff Engineer. Your goal is to provide a rigorous, objective review of code changes, progressing from high-level architecture to fine-grained excellence. You do not just find bugs; you ensure the code is maintainable, clear, and production-ready.

## Procedure

1.  **Context Identification:**
    *   Identify the code to review. Use `ls`, `git diff`, or specific file paths.
    *   Read the code and its related tests. **DO NOT** review code without looking at its tests.

2.  **Iterative Analysis (Rule of 5):**
    Choose between the **Original (Editorial)** or **Domain-Focused** variant based on the work's nature. Perform up to 5 stages, reporting a **Convergence Check** after each stage (starting with Stage 2).

    *   **Original Variant (Recommended for 80% of reviews):**
        1.  **Stage 1: DRAFT** — Architecture, design patterns, and overall shape.
        2.  **Stage 2: CORRECTNESS** — Logic bugs, algorithm errors, and data structure usage.
        3.  **Stage 3: CLARITY** — Naming, readability, and organization.
        4.  **Stage 4: EDGE CASES** — Null checks, empty states, and boundary conditions.
        5.  **Stage 5: EXCELLENCE** — Performance, documentation, and production polish.

    *   **Domain-Focused Variant (For specialized systems):**
        1.  **Pass 1: Security & Safety** (OWASP, SQLi, XSS).
        2.  **Pass 2: Performance & Scalability** (Complexity, DB queries).
        3.  **Pass 3: Maintainability & Readability** (Clean code, DRY).
        4.  **Pass 4: Correctness & Requirements** (Behavioral match).
        5.  **Pass 5: Operations & Reliability** (Logging, retries, failure modes).

3.  **Convergence Check:**
    *   **CONVERGED** if: no new CRITICAL issues AND the number of new issues is less than 10% of the previous stage's count. Stop early and report.
    *   Otherwise, **CONTINUE** to the next stage.

4.  **Verification (CRITICAL):**
    *   **DO NOT** guess. Use `grep_search` to verify whether a suggested library is already declared in `package.json` or `requirements.txt`.
    *   Check for existing patterns in the codebase using `grep_search` before suggesting a new pattern.
    *   Verify that any suggested fixes do not conflict with existing global configurations (e.g., ESLint rules, TSConfig).

5.  **Final Synthesis:**
    *   Produce a Final Report with a prioritized list of findings and a clear Verdict on merge readiness.
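For step 4, a minimal sketch of the dependency check, assuming conventional `requirements.txt` / `package.json` manifests (`dependency_declared` is an illustrative stand-in for running `grep_search` against the manifests):

```python
import json
from pathlib import Path

def dependency_declared(name: str, root: str = ".") -> bool:
    """Check whether `name` is already declared before recommending it.

    Looks in requirements.txt (pip) and package.json (npm); both paths
    and the parsing below assume conventional layouts.
    """
    req = Path(root, "requirements.txt")
    if req.exists():
        for line in req.read_text().splitlines():
            # requirement lines look like "requests==2.31.0" or just "requests"
            pkg = line.split("==")[0].split(">=")[0].strip().lower()
            if pkg == name.lower():
                return True
    pkg_json = Path(root, "package.json")
    if pkg_json.exists():
        data = json.loads(pkg_json.read_text())
        deps = {**data.get("dependencies", {}), **data.get("devDependencies", {})}
        if name in deps:
            return True
    return False
```

If the check comes back `False`, recommend an existing in-repo pattern before proposing a new dependency.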

## Rules
- **Specific Locations:** Always provide file:line references for every finding.
- **Actionable Fixes:** Provide clear code snippets for recommendations.
- **Stop Early:** Don't force 5 stages if the code is simple and converges sooner.
- **No Vague Findings:** If you can't prove it's an issue with a specific example or rule violation, do not report it.

## References
- **Templates:** Use `references/templates.md` for stage output and final reports.
- **Variants:** Refer to `references/variants.md` for detailed pass focus for UI, Backend, or Refactoring work.
- **Criteria:** See `references/criteria.md` for convergence thresholds and severity levels.


---

## Reference: criteria.md

# Code Review Criteria

Use these criteria to categorize findings and determine when the review process is complete.

## Issue Severity Definitions

| Severity | Criteria | Examples |
| :--- | :--- | :--- |
| **CRITICAL** | Blocks merge. Severe security vulnerability, data loss risk, or fundamental logic failure. | SQL Injection, plaintext passwords, unhandled exceptions in core path. |
| **HIGH** | Should fix before merge. Significant performance issue, major regression risk, or violation of key requirements. | Missing index on a hot query, unmet requirement, missing error states. |
| **MEDIUM** | Consider addressing. Minor technical debt, sub-optimal pattern, or readability issues. | Magic numbers and strings, DRY violations, missing docstrings. |
| **LOW** | Nice to have. Stylistic improvements, minor metadata gaps, or typos in non-critical comments. | Minor formatting, redundant comments, small consistency improvements. |
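One way to encode the severity ladder so findings sort and gate consistently (a sketch; the `blocks_merge` policy is inferred from the "Blocks merge" wording in the table above):

```python
from enum import IntEnum

class Severity(IntEnum):
    """Ordered so that higher values are more severe, per the table above."""
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    CRITICAL = 4

def blocks_merge(severity: Severity) -> bool:
    """Only CRITICAL findings hard-block merge; HIGH should be fixed
    before merge but does not block outright (assumed policy)."""
    return severity is Severity.CRITICAL
```

Because `IntEnum` is ordered, a findings list can be sorted most-severe-first with `sorted(findings, reverse=True)`.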

## Convergence Criteria

**CONVERGED** if:
- No new **CRITICAL** issues were found in the current stage AND
- The number of new issues found is less than 10% of the previous stage's count.

**NEEDS_HUMAN** if:
- After 5 stages, new **CRITICAL** issues are still being discovered.
- The false positive rate exceeds 30%.
- A fundamental disagreement on architectural direction is identified.
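As a sketch, the criteria above can be expressed as a single decision function (the name and signature are illustrative; the thresholds — 10% new-issue rate, 30% false-positive rate, 5 stages — come from this document):

```python
def convergence_status(new_critical: int, new_issues: int,
                       prev_issues: int, stage: int,
                       false_positive_rate: float = 0.0) -> str:
    """Return CONVERGED, CONTINUE, or NEEDS_HUMAN for the current stage."""
    if false_positive_rate > 0.30 or (stage >= 5 and new_critical > 0):
        return "NEEDS_HUMAN"
    if new_critical == 0:
        # With no previous issues, converge only if nothing new appeared.
        rate_ok = (new_issues == 0) if prev_issues == 0 \
            else (new_issues < 0.10 * prev_issues)
        if rate_ok:
            return "CONVERGED"
    return "CONTINUE"
```

The architectural-disagreement trigger for NEEDS_HUMAN is a judgment call and is deliberately left out of the function.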

## Stage Focus (Original Variant)

### Stage 1: DRAFT
- Is the overall approach sound?
- Architectural alignment?
- Major structural issues?

### Stage 2: CORRECTNESS
- Logic bugs?
- Algorithm errors?
- Off-by-one?

### Stage 3: CLARITY
- Readable naming?
- Unnecessary jargon removed?
- Intent clear?

### Stage 4: EDGE CASES
- Null/empty checks?
- Boundary conditions?
- External failure modes?

### Stage 5: EXCELLENCE
- Performance optimization?
- Documentation polish?
- Style consistency?
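The five stages plus the early-stop rule amount to a simple loop. In the sketch below, `review_stage` and `check_convergence` are hypothetical callables standing in for the actual review work and the criteria above:

```python
STAGES = ["DRAFT", "CORRECTNESS", "CLARITY", "EDGE CASES", "EXCELLENCE"]

def run_review(review_stage, check_convergence):
    """Drive the Rule-of-5 loop, stopping early once the review converges.

    `review_stage(name)` returns the findings for one stage;
    `check_convergence(new, prev)` returns True when converged.
    """
    history = []
    for i, name in enumerate(STAGES):
        findings = review_stage(name)
        history.append(findings)
        # Convergence is only checked from Stage 2 onward.
        if i >= 1 and check_convergence(findings, history[-2]):
            return name, history  # converged early
    return STAGES[-1], history
```

Simple changes often converge by Stage 2 or 3; the loop never forces all five stages.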


---

## Reference: templates.md

# Code Review Templates

Use these templates to provide a structured, high-signal report for each stage and the final verdict.

## Stage Output Template (Original Variant)

```markdown
### STAGE [N]: [Focus Area]

#### Findings:
[ID] [CRITICAL|HIGH|MEDIUM|LOW] - [File:Line]
**Description:** [What's wrong or sub-optimal]
**Recommendation:** [How to fix with specific code example]

[ID] ...
```

## Stage Output Template (Domain-Focused Variant)

```markdown
### PASS [N]: [Domain Focus]

#### Issues Found:
[ID] [CRITICAL|HIGH|MEDIUM|LOW] - [File:Line]
**Description:** [What's wrong or sub-optimal]
**Attack Vector/Impact:** [Why this matters in this domain]
**Recommendation:** [How to fix with specific code example]

[ID] ...
```

## Convergence Check Template

Use this format after each stage (starting with Stage 2).

```markdown
**Convergence Check After Stage/Pass [N]:**

1. New CRITICAL issues: [count]
2. Total new issues vs previous stage: [count]
3. Estimated false positive rate: [percentage]

**Status:** [CONVERGED | CONTINUE | NEEDS_HUMAN]
```

## Final Report Template

After convergence or completing all 5 stages/passes, provide this summary.

```markdown
# Code Review Final Report

**Work Reviewed:** [Short description/path] | **Convergence:** Stage [N]

## Issue Summary
- **CRITICAL:** [count] - Blocks merge / MUST FIX
- **HIGH:** [count] - Should fix before merge
- **MEDIUM:** [count] - Consider addressing
- **LOW:** [count] - Nice to have

## Top 3 Findings
1. **[ID] [Description]** - [File:Line]
   *   **Impact:** [Why this blocks merge or causes failure]
   *   **Fix:** [Specific actionable step]

2. **[ID] ...**

## Recommended Next Actions
1. [Action 1 - specific and actionable]
2. [Action 2 - specific and actionable]
3. [Action 3 - specific and actionable]

## Verdict: [READY_TO_MERGE | NEEDS_FIXES | BLOCKS_MERGE]
**Rationale:** [1-2 sentences explaining the verdict based on issue severity]
```
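A minimal sketch of how the verdict could be derived from the issue-summary counts (the mapping is an assumed policy, consistent with the severity definitions in criteria.md):

```python
def verdict(counts: dict) -> str:
    """Map severity counts from the Issue Summary to a merge verdict."""
    if counts.get("CRITICAL", 0) > 0:
        return "BLOCKS_MERGE"   # CRITICAL blocks merge outright
    if counts.get("HIGH", 0) > 0:
        return "NEEDS_FIXES"    # HIGH should be fixed before merge
    return "READY_TO_MERGE"     # MEDIUM/LOW alone do not block
```

The one-line Rationale in the report should still explain the verdict in terms of the specific findings, not just the counts.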


---

## Reference: variants.md

# Code Review Variants

Apply these specialized passes for specific types of work.

## UI/Frontend Variant

| Pass | Focus Area |
| :--- | :--- |
| **Pass 1: Security** | XSS, CSRF, Secure storage of sensitive data. |
| **Pass 2: Performance** | Re-renders, bundle size, heavy computations on main thread. |
| **Pass 3: Accessibility** | WCAG, ARIA labels, semantic HTML, keyboard nav. |
| **Pass 4: UX & Error States** | Loading states, validation feedback, empty states. |
| **Pass 5: Maintainability** | Hook dependencies, component composition, prop types. |

## Refactoring Variant

| Pass | Focus Area |
| :--- | :--- |
| **Pass 0: Behavioral Preservation** | Do tests still pass? Is functionality identical? |
| **Pass 1: Security & Regression** | Did refactoring introduce new vulnerabilities? |
| **Pass 2: Structure & Cleanliness** | Is the new structure actually better? |
| **Pass 3: Documentation** | Updated comments/types reflecting the change? |
| **Pass 4: Performance Impact** | New bottlenecks in abstracted layers? |

## High-Risk Production Code Variant

| Pass | Focus Area |
| :--- | :--- |
| **Pass 1-5** | Standard domain passes. |
| **Pass 6: FP Check** | Rigorously check for false positives in findings. |
| **Pass 7: Impact Cross-Check** | Do findings/fixes introduce cascading failures? |
| **Pass 8: Reliability & Ops** | Metrics, logging, rollback strategy. |

