Review Gate

Every course must pass 10 mechanical quality checks before it can be published. The review gate runs locally via graspful review and automatically on the server when you import with --publish.

How review works

CLI review (local)

Run graspful review course.yaml to check your course locally. The command returns a score (e.g., "8/10") with details on each failure. Fix the failures and re-run until you hit 10/10.

graspful review my-course.yaml

# JSON output for CI
graspful review my-course.yaml --format json

Server-side review (on publish)

When you import with --publish or call graspful publish, the server runs the same 10 checks. If any check fails, the course is imported as a draft (not published) and the failure details are returned.

MCP review

The graspful_review_course MCP tool runs the same checks. Agents should review before importing and fix failures iteratively.

Scoring

The score is the number of checks passed out of 10. A score of 10/10 is required to publish.

Stub concepts are ignored. Concepts with no knowledge points (stubs) are treated as graph structure only and are excluded from content quality checks. This lets you publish a course with a scaffolded graph and gradually fill in content.

The 10 checks

1. yaml_parses (blocker)

What it checks

The course YAML is valid and conforms to the Zod schema. All required fields are present, types are correct, and enum values are valid.

How to fix

Run graspful validate to see specific schema errors. Fix the reported field paths.

2. unique_problem_ids (blocker)

What it checks

Every problem ID across the entire course is unique. Duplicate IDs cause conflicts in the adaptive engine's state tracking.

How to fix

Search for the duplicated problem IDs and rename them. Use a consistent naming convention: {concept-id}-kp{n}-p{n}.
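The uniqueness rule amounts to collecting every problem ID in the course and flagging repeats. A minimal sketch — the field names (concepts, knowledgePoints, problems, id) are assumptions for illustration, not the documented schema:

```python
from collections import Counter

def find_duplicate_problem_ids(course: dict) -> list[str]:
    """Collect every problem ID across the course and return those that repeat."""
    ids = [
        problem["id"]
        for concept in course.get("concepts", [])
        for kp in concept.get("knowledgePoints", [])
        for problem in kp.get("problems", [])
    ]
    return [pid for pid, count in Counter(ids).items() if count > 1]

course = {
    "concepts": [
        {"knowledgePoints": [{"problems": [{"id": "vpc-basics-kp1-p1"},
                                           {"id": "vpc-basics-kp1-p1"}]}]}
    ]
}
print(find_duplicate_problem_ids(course))  # → ['vpc-basics-kp1-p1']
```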

3. prerequisites_valid (blocker)

What it checks

Every prerequisite reference in every concept points to a concept ID that exists in the course.

How to fix

Check for typos in prerequisite arrays. Ensure the referenced concept exists. Only list direct prerequisites — transitive ones are inferred.

4. question_deduplication (blocker)

What it checks

No two problems have the same question text at the same difficulty level (compared via normalized text + MD5 hash). Near-duplicate questions reduce the adaptive engine's effectiveness.

How to fix

Rewrite one of the colliding questions to test a different angle of the same concept. Vary the scenario, change the distractor set, or adjust difficulty.
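The comparison described above (normalized text plus an MD5 hash, keyed by difficulty) might look like the following sketch. The specific normalization rules — lowercasing and whitespace collapsing — are assumptions; the real check may normalize differently:

```python
import hashlib

def question_fingerprint(text: str, difficulty: int) -> str:
    """Normalize question text (lowercase, collapse whitespace), then MD5-hash
    it together with the difficulty level."""
    normalized = " ".join(text.lower().split())
    return hashlib.md5(f"{difficulty}:{normalized}".encode()).hexdigest()

# Two problems whose questions differ only in case and spacing collide.
seen: dict[str, str] = {}
problems = [
    {"id": "p1", "question": "What is a  VPC?", "difficulty": 2},
    {"id": "p2", "question": "what is a vpc?", "difficulty": 2},
]
for p in problems:
    fp = question_fingerprint(p["question"], p["difficulty"])
    if fp in seen:
        print(f'{p["id"]} collides with {seen[fp]}')  # → p2 collides with p1
    seen.setdefault(fp, p["id"])
```

Note that the same text at a different difficulty level produces a different fingerprint, so it is not flagged.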

5. difficulty_staircase (blocker)

What it checks

Each authored concept has problems at 2 or more distinct difficulty levels. A single difficulty level means there's no progression — the adaptive engine can't create a learning path.

How to fix

Add problems at different difficulty levels (1-5). Start with recognition (1-2), then application (3), then analysis (4-5).
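The staircase requirement reduces to counting distinct difficulty values per authored concept. A sketch under the same assumed schema as above (stub concepts, with no knowledge points, are skipped per the scoring rules):

```python
def staircase_failures(course: dict) -> list[str]:
    """Return IDs of authored concepts whose problems span fewer than
    2 distinct difficulty levels."""
    failures = []
    for concept in course["concepts"]:
        kps = concept.get("knowledgePoints", [])
        if not kps:
            continue  # stub concept: graph structure only, excluded from checks
        levels = {p["difficulty"] for kp in kps for p in kp.get("problems", [])}
        if len(levels) < 2:
            failures.append(concept["id"])
    return failures

course = {"concepts": [
    {"id": "vpc-basics", "knowledgePoints": [
        {"problems": [{"difficulty": 2}, {"difficulty": 2}]}]},
    {"id": "subnets", "knowledgePoints": []},  # stub: skipped
]}
print(staircase_failures(course))  # → ['vpc-basics']
```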

6. cross_concept_coverage (warning)

What it checks

No single meaningful term dominates the course by appearing in more than 3 concepts' problem text. High overlap suggests concepts aren't distinct enough.

How to fix

Review concepts that share heavy vocabulary overlap. They may need to be merged or their problems need to be more specific to each concept's unique content.
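One way to picture the term-dominance test: map each meaningful word to the set of concepts whose problem text uses it, then flag words that span more than 3 concepts. The stopword list, tokenization, and length cutoff here are all assumptions — the real check's notion of "meaningful term" may differ:

```python
from collections import defaultdict

STOPWORDS = {"the", "a", "an", "of", "to", "is", "what", "which", "in"}

def dominant_terms(course: dict, max_concepts: int = 3) -> dict[str, set[str]]:
    """Return terms that appear in the problem text of more than
    max_concepts distinct concepts."""
    term_to_concepts: dict[str, set[str]] = defaultdict(set)
    for concept in course["concepts"]:
        for kp in concept.get("knowledgePoints", []):
            for p in kp.get("problems", []):
                for word in p["question"].lower().split():
                    w = word.strip(".,?!")
                    if len(w) > 3 and w not in STOPWORDS:
                        term_to_concepts[w].add(concept["id"])
    return {t: c for t, c in term_to_concepts.items() if len(c) > max_concepts}

# Five concepts all asking about the same thing trip the warning.
course = {"concepts": [
    {"id": f"concept-{i}", "knowledgePoints": [
        {"problems": [{"question": "Explain subnet routing"}]}]}
    for i in range(5)
]}
print(sorted(dominant_terms(course)))  # → ['explain', 'routing', 'subnet']
```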

7. problem_variant_depth (blocker)

What it checks

Every knowledge point in authored concepts has at least 3 problems. Fewer than 3 problems means the adaptive engine can't do meaningful selection and retry.

How to fix

Add more problems to the flagged KPs. Each problem should test the same idea from a different angle — different scenario, different distractors, different phrasing.
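Variant depth is a per-knowledge-point count rather than a per-concept one. A sketch under the same assumed schema:

```python
def variant_depth_failures(course: dict, min_problems: int = 3) -> list[str]:
    """Flag knowledge points in authored concepts that have fewer than
    min_problems problems."""
    failures = []
    for concept in course["concepts"]:
        kps = concept.get("knowledgePoints", [])
        if not kps:
            continue  # stub concept: excluded
        for kp in kps:
            if len(kp.get("problems", [])) < min_problems:
                failures.append(f'{concept["id"]}/{kp["id"]}')
    return failures

course = {"concepts": [{"id": "vpc-basics", "knowledgePoints": [
    {"id": "kp1", "problems": [{"id": "p1"}, {"id": "p2"}]}]}]}
print(variant_depth_failures(course))  # → ['vpc-basics/kp1']
```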

8. instruction_formatting (blocker)

What it checks

Instruction text longer than 100 words must include structured content blocks (images, callouts, etc.). Walls of text without visual breaks hurt comprehension.

How to fix

Add instructionContent blocks to long instructions. Use callouts for key rules, images for diagrams, or links for reference material.

9. worked_example_coverage (blocker)

What it checks

At least 50% of authored concepts have at least one knowledge point with a worked example. Worked examples are critical for transfer — they show the concept applied step by step.

How to fix

Add workedExample text to KPs in the flagged concepts. Focus on applied or high-transfer KPs first.
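The coverage ratio is computed over authored concepts only, matching the "1/5 authored concepts have worked examples (20%)" detail string in the JSON output below. A sketch, again with assumed field names:

```python
def worked_example_coverage(course: dict) -> tuple[int, int, float]:
    """Return (covered, authored, ratio): how many authored concepts have at
    least one knowledge point with a worked example."""
    authored = [c for c in course["concepts"] if c.get("knowledgePoints")]
    covered = [
        c for c in authored
        if any(kp.get("workedExample") for kp in c["knowledgePoints"])
    ]
    return len(covered), len(authored), len(covered) / max(len(authored), 1)

course = {"concepts": [
    {"id": "c1", "knowledgePoints": [{"workedExample": "Step 1: ..."}]},
    *[{"id": f"c{i}", "knowledgePoints": [{}]} for i in range(2, 6)],
]}
covered, authored, ratio = worked_example_coverage(course)
print(f"{covered}/{authored} authored concepts have worked examples ({ratio:.0%})")
# → 1/5 authored concepts have worked examples (20%)
```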

10. import_dry_run (blocker)

What it checks

Validates the prerequisite DAG: no unknown references, no cycles. This is the same validation the import endpoint runs — passing this check means the import will succeed.

How to fix

Fix any cycle or broken reference reported in the details. Use graspful validate for the full error list.
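The two DAG conditions — no unknown references, no cycles — can be sketched with a color-marking depth-first search. This is an illustrative reimplementation, not the server's actual validator:

```python
def validate_prerequisite_dag(concepts: list[dict]) -> list[str]:
    """Report unknown prerequisite references and cycles in the concept graph."""
    graph = {c["id"]: c.get("prerequisites", []) for c in concepts}
    errors = []
    for cid, prereqs in graph.items():
        for p in prereqs:
            if p not in graph:
                errors.append(f"{cid}: unknown prerequisite '{p}'")

    WHITE, GRAY, BLACK = 0, 1, 2  # unvisited / in progress / done
    color = {cid: WHITE for cid in graph}

    def visit(cid: str) -> None:
        color[cid] = GRAY
        for p in graph[cid]:
            if p not in graph:
                continue  # already reported above
            if color[p] == GRAY:  # back edge: cycle
                errors.append(f"cycle involving '{cid}' -> '{p}'")
            elif color[p] == WHITE:
                visit(p)
        color[cid] = BLACK

    for cid in graph:
        if color[cid] == WHITE:
            visit(cid)
    return errors

concepts = [
    {"id": "vpc-basics", "prerequisites": []},
    {"id": "subnets", "prerequisites": ["vpc-basics"]},
    {"id": "routing", "prerequisites": ["subnets", "routing-tables"]},
]
print(validate_prerequisite_dag(concepts))
# → ["routing: unknown prerequisite 'routing-tables'"]
```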

Output format

With --format json, the review command returns structured output for CI integration:

{
  "passed": false,
  "score": "8/10",
  "failures": [
    {
      "check": "difficulty_staircase",
      "passed": false,
      "details": "\"vpc-basics\" has problems at only 1 difficulty level(s) — need 2+"
    },
    {
      "check": "worked_example_coverage",
      "passed": false,
      "details": "1/5 authored concepts have worked examples (20%) — need 50%+"
    }
  ],
  "warnings": [],
  "stats": {
    "concepts": 42,
    "kps": 10,
    "problems": 30,
    "authoredConcepts": 5,
    "stubConcepts": 37
  }
}
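A CI step can gate on this JSON by checking the top-level passed flag and surfacing the failure details. This is a sketch of one way to consume the report, not an official integration; in a real pipeline the report dict would come from running graspful review with --format json:

```python
import json
import sys

def gate(report: dict) -> int:
    """Turn a graspful review JSON report into a CI exit code."""
    if report["passed"]:
        print(f'Review passed ({report["score"]})')
        return 0
    for failure in report["failures"]:
        print(f'{failure["check"]}: {failure["details"]}', file=sys.stderr)
    return 1

# In CI this string would be the captured stdout of the review command.
report = json.loads('''{"passed": false, "score": "8/10",
  "failures": [{"check": "difficulty_staircase", "passed": false,
                "details": "need 2+ difficulty levels"}],
  "warnings": [], "stats": {}}''')
exit_code = gate(report)  # prints the failure to stderr, returns 1
```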

Tips

  • Run graspful review early and often during authoring — don't wait until the course is complete.
  • Fix blocker checks first. The cross_concept_coverage check is a warning and passes unless overlap is severe.
  • Use graspful describe to check how many concepts still need KPs and how many KPs still need problems.
  • The review gate is intentionally strict. Every check exists because its absence degrades the adaptive learning experience.