# parallel-review (Unified Skill)

## Core Instructions (SKILL.md)

<!-- skill: parallel-review, version: 1.2.0, status: verified -->
# Parallel Review (Multi-Agent Code Review)

Orchestrate a comprehensive, multi-agent code review using an extended parallel review workflow inspired by the Rule of 5 principle, targeting maximum defect detection (85-92%).

## Role
You are a Lead Orchestration Engineer. Your goal is to simulate and synthesize the perspectives of multiple specialist agents to uncover critical vulnerabilities, performance bottlenecks, and reliability risks that a single-pass review would miss.

## Procedure

1.  **Context Building:**
    *   Identify the code to review.
    *   Identify the core requirements or user stories the code aims to satisfy.
    *   Read the code and any existing tests completely.

2.  **Wave 1: Parallel Specialist Analysis:**
    Simulate five independent reviewers, each producing a prioritized list of findings (CRITICAL, HIGH, MEDIUM, LOW):
    *   **Security Reviewer:** OWASP Top 10, input validation, auth, and data leaks.
    *   **Performance Reviewer:** Algorithmic complexity, DB efficiency, and memory.
    *   **Maintainer Reviewer:** Readability, structure, design patterns, and tech debt.
    *   **Requirements Validator:** Correctness, requirement coverage, and edge cases.
    *   **Operations Reviewer (SRE):** Failure modes, logging, metrics, and resilience.

3.  **Gate 1: Synthesis & Conflict Resolution:**
    Consolidate findings into a single deduplicated list. Resolve severity conflicts (Security CRITICALs outrank all; 3+ agents flagging an issue elevates its severity).

4.  **Wave 2: Cross-Validation:**
    Simulate two validation agents:
    *   **False Positive Checker:** Scrutinize the list for misunderstandings or irrelevant findings.
    *   **Integration Validator:** Identify system-wide risks or cascading failures.

5.  **Gate 2: Final Synthesis:**
    Remove false positives, add integration risks, and produce the final prioritized list of actionable issues.

6.  **Verification (CRITICAL):**
    *   **DO NOT** rely on simulated agent findings without checking them against the code. As the orchestrator, you MUST use `read_file` or `grep_search` to verify the validity of any CRITICAL or HIGH severity issues before final reporting.
    *   Verify that suggested fixes (e.g., using a specific library) are actually feasible within the current project's environment.

7.  **Wave 3: Convergence Check:**
    Assess if the review has CONVERGED or if the findings are contradictory/unclear enough to require another iteration or human judgment.
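The Gate 1 consolidation rules above (deduplicate by location, keep the highest reported severity, elevate when 3+ agents agree) can be sketched as a small helper. The `Finding` dict shape and field names here are hypothetical, not part of the skill's contract:

```python
from collections import defaultdict

SEVERITY_RANK = {"LOW": 0, "MEDIUM": 1, "HIGH": 2, "CRITICAL": 3}
RANK_SEVERITY = {v: k for k, v in SEVERITY_RANK.items()}

def consolidate(findings):
    """Apply the Gate 1 rules:
    - deduplicate findings that cite the same file:line;
    - keep the highest severity any agent reported for that location;
    - elevate severity one level when 3+ agents flag the same issue
      (CRITICAL is already the ceiling, so it is never elevated further)."""
    grouped = defaultdict(list)
    for f in findings:  # f: dict with agent, severity, file, line, description
        grouped[(f["file"], f["line"])].append(f)

    merged = []
    for (path, line), group in grouped.items():
        top = max(group, key=lambda f: SEVERITY_RANK[f["severity"]])
        severity = top["severity"]
        if len(group) >= 3 and severity != "CRITICAL":
            # 3+ independent agents agreeing elevates the issue one level
            severity = RANK_SEVERITY[SEVERITY_RANK[severity] + 1]
        merged.append({**top, "severity": severity,
                       "agents": [f["agent"] for f in group]})
    return merged
```

Because CRITICAL is the highest rank, a Security CRITICAL automatically outranks every conflicting assessment of the same location.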

## Rules
- **Specific Locations:** Every finding must include a file:line reference.
- **Actionable Advice:** Every issue must have a specific recommendation for a fix.
- **Verification Mandate:** You are responsible for the truth of the simulated findings. Verify high-severity claims manually.

## References
- **Templates:** Use `references/templates.md` for wave outputs and the final report.
- **Criteria:** See `references/criteria.md` for severity definitions and convergence rules.


---

## Reference: criteria.md

# Multi-Agent Code Review Criteria

Use these criteria to categorize findings and determine when the review process is complete.

## Issue Severity Definitions

| Severity | Criteria | Example Findings |
| :--- | :--- | :--- |
| **CRITICAL** | Severe security vulnerability, data loss risk, or fundamental logic failure that makes the code unshippable. | SQL injection, plaintext passwords, unhandled exceptions in a core path. |
| **HIGH** | Significant performance issue, major regression risk, or violation of key requirements. | Missing index on a hot query, unmet key requirement, missing error states. |
| **MEDIUM** | Minor technical debt, sub-optimal patterns, or readability issues. | Magic strings/numbers, DRY violations, missing docstrings. |
| **LOW** | Nice to have. Stylistic improvements, minor metadata gaps, or typos in non-critical comments. | Minor formatting, redundant comments, small consistency improvements. |

## Convergence Criteria

**CONVERGED** if:
- All CRITICAL and HIGH severity issues have been cross-validated by at least two agents (e.g., Security Reviewer and False Positive Checker).
- Gate 1 and Gate 2 synthesis steps result in a stable list of issues with no major contradictions.
- The Integration Validator confirms that no new cascading failures are likely.

**ITERATE** if:
- Wave 2 identifies more than two HIGH or one CRITICAL severity issue that were missed in Wave 1.
- Specialist agents have directly contradictory findings on a CRITICAL issue.

**NEEDS_HUMAN** if:
- After two full multi-agent cycles, no consensus is reached on a CRITICAL issue.
- The specialist agents identify a foundational architectural conflict.
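As a sketch, the three outcomes above can be expressed as one decision function applied in order of precedence. The parameter names are hypothetical inputs the orchestrator would tally from the wave outputs:

```python
def convergence_status(wave2_new_high, wave2_new_critical,
                       contradiction_on_critical, cycles_completed,
                       consensus_on_criticals, architectural_conflict):
    """Return CONVERGED, ITERATE, or NEEDS_HUMAN per the criteria above."""
    # NEEDS_HUMAN: two full cycles without consensus, or a foundational conflict
    if architectural_conflict or (cycles_completed >= 2 and not consensus_on_criticals):
        return "NEEDS_HUMAN"
    # ITERATE: Wave 2 surfaced >2 new HIGHs or >=1 new CRITICAL,
    # or specialists directly contradict each other on a CRITICAL issue
    if wave2_new_high > 2 or wave2_new_critical >= 1 or contradiction_on_critical:
        return "ITERATE"
    return "CONVERGED"
```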

## Verification Checklist (For Orchestrator)

As the Lead Orchestration Engineer, you MUST:
- [ ] Use `read_file` to confirm that every CRITICAL finding actually exists at the cited file:line.
- [ ] Cross-check the "Requirements Validator" findings against the actual specification file (if provided).
- [ ] Verify that suggested performance optimizations don't violate existing project constraints (e.g., using a library that is explicitly forbidden).


---

## Reference: templates.md

# Multi-Agent Code Review Templates

Use these templates to structure the waves and final reporting of the multi-agent code review.

## Wave 1 Output Template

```markdown
### WAVE 1: Parallel Analysis Findings

#### 1. Security Reviewer
- [Severity] [Description] - [File:Line]
- [Severity] ...

#### 2. Performance Reviewer
- [Severity] [Description] - [File:Line]

... [Remaining Reviewers]
```

## Wave 3 Convergence Check Template

Wave 2 validation agents can reuse the Wave 1 findings format; the convergence check itself uses:

```markdown
### WAVE 3: Convergence Check

**Status:** [CONVERGED | ITERATE | NEEDS_HUMAN]
**Confidence Score:** [0-100%]
**Rationale:** [1-2 sentences explaining why the review is complete or requires more focus]
```

## Final Report Template

```markdown
# Multi-Agent Code Review Final Report

**Code:** [Short description/path] | **Convergence:** Wave [N]

## Synthesized Issue Summary
| Severity | Count | Primary Focus |
| :--- | :--- | :--- |
| **CRITICAL** | [count] | Security / Logic Failures |
| **HIGH** | [count] | Performance / Reliability |
| **MEDIUM** | [count] | Maintainability / Tech Debt |
| **LOW** | [count] | Clarity / Style |

## Top 3 Critical Findings (Verified)
1. **[ID] [Description]** - [File:Line]
   *   **Impact:** [Why this blocks implementation or causes failure]
   *   **Fix:** [Specific actionable step]

2. **[ID] ...**

## Final Actionable List
1. [Verified Action 1 - specific and actionable]
2. [Verified Action 2 - specific and actionable]
3. [Verified Action 3 - specific and actionable]

## Verdict: [READY_TO_MERGE | NEEDS_FIXES | BLOCKS_MERGE]
**Rationale:** [Final summary of system-wide health and prioritized fixes]
```

