# design-review (Unified Skill)

## Core Instructions (SKILL.md)

# Design Artifact Review

Review design artifacts for rigor, completeness, and adherence to Design in Practice principles.

## Setup

- If an artifact path is provided, read it completely.
- If no path is provided, ask which artifact to review or list available design documents.

## Procedure

1. Identify the artifact type:
   - Problem statement
   - Decision matrix
   - Scope document
   - Full design package
2. Run the appropriate review:
   - For problem statements, use `references/problem-statement.md`.
   - For decision matrices, use `references/decision-matrix.md`.
   - For scope documents, use `references/scope-document.md`.
   - For full design packages, use `references/combined-review.md`.
3. Generate the final output using `references/report-template.md`.
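
The dispatch in steps 1-2 can be sketched as a simple lookup. This is an illustrative sketch only: the keyword heuristics below are assumptions for demonstration, not part of the skill's actual detection logic.

```python
# Illustrative sketch: map a detected artifact type to its review reference.
# The detection keywords are hypothetical; a real reviewer reads the artifact.

REVIEW_REFERENCES = {
    "problem_statement": "references/problem-statement.md",
    "decision_matrix": "references/decision-matrix.md",
    "scope_document": "references/scope-document.md",
    "full_package": "references/combined-review.md",
}


def detect_artifact_type(text: str) -> str:
    """Guess the artifact type from keywords (assumed heuristic)."""
    lowered = text.lower()
    if "| status quo" in lowered or "decision matrix" in lowered:
        return "decision_matrix"
    if "non-goals" in lowered or "out of scope" in lowered:
        return "scope_document"
    if "problem statement" in lowered:
        return "problem_statement"
    return "full_package"  # multiple or unknown artifacts get the combined review


def reference_for(text: str) -> str:
    """Return the reference file to load for a given artifact's text."""
    return REVIEW_REFERENCES[detect_artifact_type(text)]
```

In practice the artifact type is usually evident from its headings; the fallback to the combined review mirrors step 2's handling of full design packages.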

## Rules

- Reference exact evidence from the artifact.
- Use the artifact-specific output format from the reference file.
- Lead with the most critical issue rather than an exhaustive list of nitpicks.
- If multiple artifacts are supplied, review each individually before synthesizing.


---

## Reference: combined-review.md

## Full Design Package Review

Use this when reviewing multiple related artifacts together.

### Alignment Check

Validate:
- The problem statement describes the actual problem without embedding the solution.
- The decision matrix evaluates approaches that actually address that problem.
- The scope document or plan respects the chosen approach and non-goals.

Output:

```text
## Alignment Review

Problem -> Options alignment:
- [Evidence of alignment or drift]

Decision -> Scope alignment:
- [Evidence of scope adherence or creep]

Decision -> Plan execution:
- [Does the plan execute on the decision rationale?]
```

Run the artifact-specific reviews first, then append this alignment section before generating the final report.


---

## Reference: decision-matrix.md

## Decision Matrix Review

### Check 1: Status Quo Baseline

Question: Is "do nothing" the first column?

Output:

```text
Status Quo Baseline: [PRESENT | MISSING]

If missing:
- Impact: Cannot compare alternatives to current state
- Fix: Add Status Quo column with honest assessment
```

### Check 2: Fact vs. Judgment Separation

Question: Is cell text factual and neutral, with judgment shown separately?

Contaminated cells:
- "Good performance"
- "Easy to implement"
- "Better than X"

Clean cells:
- "< 10ms p99 latency"
- "Requires changes to 3 files"
- "Uses existing auth pattern"

Output:

```text
Fact/Judgment Separation: [CLEAN | CONTAMINATED]

Contaminated cells:
| Row | Column | Current Text | Factual Rewrite |
|-----|--------|--------------|-----------------|
| [Row] | [Col] | "[Text]" | "[Suggestion]" |

Assessment indicators:
- Are judgments shown via color or symbol, not text? [YES/NO]
```
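
A first pass over cell text can be mechanized with a keyword heuristic. This is a hypothetical sketch: the word list is an illustrative assumption, not an exhaustive rule, and flagged cells still need human (or reviewer) judgment.

```python
# Hypothetical heuristic: flag matrix cells whose text carries judgment
# rather than neutral fact. The keyword list is illustrative, not exhaustive.

JUDGMENT_WORDS = {"good", "bad", "easy", "hard", "better", "worse", "simple", "best"}


def is_contaminated(cell_text: str) -> bool:
    """True if the cell text contains judgment language instead of neutral fact."""
    # Strip trailing punctuation so "good." still matches "good".
    words = {word.strip(".,;:") for word in cell_text.lower().split()}
    return bool(words & JUDGMENT_WORDS)
```

Cells like "< 10ms p99 latency" pass because they state a measurable fact; "Good performance" fails because the judgment is embedded in the text instead of shown via color or symbol.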

### Check 3: Criteria Completeness

Question: Are all relevant trade-off dimensions represented?

Common missing criteria:
- Maintenance burden
- Rollback difficulty
- Team expertise
- Integration complexity
- Failure modes

Output:

```text
Criteria Completeness: [COMPLETE | GAPS]

Present criteria: [List]

Potentially missing:
- [Criterion]: [Why it matters]
- [Criterion]: [Why it matters]

Recommendation:
- [Which criteria to add]
```

### Check 4: Approach Diversity

Question: Are the approaches fundamentally different, or variations of the same idea?

Output:

```text
Approach Diversity: [DIVERSE | SIMILAR]

Approaches listed:
- [Approach 1]: [Category]
- [Approach 2]: [Category]
- [Approach 3]: [Category]

Missing perspectives:
- [What category of solution was not considered]
```

### Check 5: Cell Verification

Question: Are the facts in the matrix verifiable and accurate?

Output:

```text
Cell Verification: [VERIFIED | MIXED | UNVERIFIED]

Verified cells:
- [Cell Row/Col]: [Claim] -> [Verification result]

Incorrect or unsupported cells:
- [Cell]: [What's wrong and what's correct]
```

### Verdict Template

```text
## Decision Matrix Review Summary

| Check | Result | Action Needed |
|-------|--------|---------------|
| Status Quo | [PRESENT/MISSING] | [Action or "None"] |
| Fact/Judgment | [CLEAN/CONTAMINATED] | [Action or "None"] |
| Criteria | [COMPLETE/GAPS] | [Action or "None"] |
| Diversity | [DIVERSE/SIMILAR] | [Action or "None"] |
| Verification | [VERIFIED/MIXED/UNVERIFIED] | [Action or "None"] |

Verdict: [READY_FOR_DECISION | NEEDS_REVISION | NEEDS_MORE_OPTIONS]
Selected approach justified? [YES/NO]
Top Issue: [Most critical thing to fix]
Recommended Action: [Specific next step]
```


---

## Reference: problem-statement.md

## Problem Statement Review

### Check 1: Solution Contamination

Question: Does the problem statement contain or imply solutions?

Red flags:
- "We need to..."
- Technology names unless they are the problem
- "Add", "implement", "create", "build"
- Comparisons to how other systems work

Test: Could multiple fundamentally different solutions satisfy this statement?
- YES -> clean
- NO -> a solution is embedded; contaminated

Output:

```text
Solution Contamination: [CLEAN | CONTAMINATED]

Evidence:
- [Quote from statement showing contamination, or "None found"]

Recommendation:
- [How to remove solution language]
```
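
The red-flag list above can be turned into a mechanical first-pass scan. This sketch is an assumption-laden starting point, not a verdict: it only covers the phrase-level flags, and cannot detect technology names or comparisons to other systems.

```python
import re

# Illustrative sketch: scan a problem statement for solution-language red flags.
# The patterns mirror the red-flag list above; matches warrant review, not auto-rejection.

RED_FLAG_PATTERNS = [
    r"\bwe need to\b",
    r"\b(add|implement|create|build)\b",
]


def find_red_flags(statement: str) -> list[str]:
    """Return the red-flag phrases found in the statement (case-insensitive)."""
    lowered = statement.lower()
    hits: list[str] = []
    for pattern in RED_FLAG_PATTERNS:
        hits.extend(re.findall(pattern, lowered))
    return hits
```

A statement like "Checkout p99 latency exceeds 2s during flash sales" comes back clean, while "We need to add caching" trips two flags: it prescribes a solution instead of describing the problem.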

### Check 2: Root Cause vs. Symptom

Question: Does this describe the mechanism, or just the observable effect?

Symptom language:
- "Users experience..."
- "The system is slow/broken/failing..."
- "Errors occur when..."

Mechanism language:
- "Because X happens, Y results in Z"
- "The [component] does [action] which causes [effect]"
- Clear causal chain

Test: Ask "Why?" If there is a deeper answer, this is still a symptom.

Output:

```text
Root Cause Analysis: [MECHANISM | SYMPTOM | UNCLEAR]

Depth check:
- Statement says: [quote]
- Ask "Why?": [deeper cause if exists]
- Assessment: [Is this the root or just a layer?]

Recommendation:
- [How to dig deeper, or "Root cause identified"]
```

### Check 3: Specificity

Question: Is the problem precise enough to guide solution selection?

Vague indicators:
- "Performance issues"
- "Users are confused"
- "The code is messy"

Specific indicators:
- Quantified thresholds
- Named components
- Reproducible conditions

Output:

```text
Specificity: [PRECISE | VAGUE | MIXED]

Vague elements:
- [Element 1]: [How to make specific]
- [Element 2]: [How to make specific]

Missing specifics:
- [What metric, threshold, or condition is unclear]
```

### Check 4: Evidence Quality

Question: Is the diagnosis based on verified facts or assumptions?

Look for:
- Evidence cited for each claim
- Ruled-out alternatives
- Hypothesis testing
- Data sources

Output:

```text
Evidence Quality: [VERIFIED | ASSUMED | MIXED]

Claims without evidence:
- [Claim 1]: [What evidence is needed]
- [Claim 2]: [What evidence is needed]

Ruled-out alternatives:
- [Listed? How many? Are they reasonable alternatives?]
```

### Check 5: The Obvious Solution Test

Question: After reading the statement alone, does the right solution direction feel obvious?

Note: This does not conflict with Check 1. A clean statement never names a solution, yet a precise root-cause description should still make the correct direction evident.

Output:

```text
Obvious Solution Test: [PASS | FAIL | PARTIAL]

After reading, the obvious action is: [What seems like the right solution]

If FAIL:
- What's unclear: [What question remains]
- What's missing: [What would make it obvious]
```

### Verdict Template

```text
## Problem Statement Review Summary

| Check | Result | Action Needed |
|-------|--------|---------------|
| Solution Contamination | [CLEAN/CONTAMINATED] | [Action or "None"] |
| Root Cause | [MECHANISM/SYMPTOM/UNCLEAR] | [Action or "None"] |
| Specificity | [PRECISE/VAGUE/MIXED] | [Action or "None"] |
| Evidence | [VERIFIED/ASSUMED/MIXED] | [Action or "None"] |
| Obvious Solution | [PASS/FAIL/PARTIAL] | [Action or "None"] |

Verdict: [READY_FOR_DIRECTION | NEEDS_REVISION | BACK_TO_DESCRIBE]
Top Issue: [Most critical thing to fix]
Recommended Action: [Specific next step]
```


---

## Reference: report-template.md

```text
## Design Artifact Review Report

Artifacts Reviewed: [List]
Date: [YYYY-MM-DD]

### Summary

- Artifact types: [Problem Statement / Decision Matrix / Scope / Full Package]
- Overall readiness: [READY | NEEDS_REVISION | BLOCKED]
- Main concern: [Highest-value issue]

### Critical Findings

1. [Finding 1]
   - Evidence: [Quote, section, or file reference]
   - Impact: [Why it matters]
   - Fix: [Concrete action]

2. [Finding 2]
   - Evidence: [Quote, section, or file reference]
   - Impact: [Why it matters]
   - Fix: [Concrete action]

### Recommended Actions

1. [Most important next step]
2. [Next action after that]
3. [Optional follow-up]

### Overall Verdict

[READY_FOR_DIRECTION | READY_FOR_DECISION | READY_FOR_PLANNING | NEEDS_REVISION | NEEDS_MORE_OPTIONS | BACK_TO_DESCRIBE]

Rationale: [1-2 sentences]
```


---

## Reference: scope-document.md

## Scope Document Review

### Check 1: Explicit Non-Goals

Question: Are out-of-scope items explicitly listed?

Output:

```text
Explicit Non-Goals: [CLEAR | PARTIAL | MISSING]

Missing non-goals:
- [Related item that might get added]
- [Adjacent problem that might expand scope]
```

### Check 2: Constraint Realism

Question: Are the constraints achievable and honest?

Look for:
- Conflicting constraints
- Unstated constraints
- Over-optimistic assumptions

Output:

```text
Constraint Realism: [REALISTIC | CONFLICTED | INCOMPLETE]

Conflicts:
- [Constraint A] vs [Constraint B]: [Why they conflict]

Unstated constraints:
- [What is missing but materially affects the work]
```

### Verdict Template

```text
## Scope Document Review Summary

| Check | Result | Action Needed |
|-------|--------|---------------|
| Non-Goals | [CLEAR/PARTIAL/MISSING] | [Action or "None"] |
| Constraints | [REALISTIC/CONFLICTED/INCOMPLETE] | [Action or "None"] |

Verdict: [READY_FOR_PLANNING | NEEDS_REVISION]
Top Issue: [Most critical thing to fix]
Recommended Action: [Specific next step]
```

