Build mathematically rigorous knowledge graphs from any course materials.
Assess students adaptively. Generate personalized instruction. Plan smarter lectures.
Quick Start · Skills Reference · How It Works · Using the Skills · Schema · Bibliography
Knowledge Spaces is a suite of 10 AI-powered Agent Skills that implement the full Knowledge Space Theory (KST) pipeline. Give it your course materials — a syllabus, textbook, standards document — and it will:
- Extract atomic knowledge items from your materials
- Discover prerequisite relationships between items
- Construct the mathematical knowledge space (all feasible learning states)
- Assess individual students adaptively (like ALEKS)
- Generate personalized learning materials targeting what each student is ready to learn
- Plan class-wide instruction using data from every student's knowledge state
The output is a knowledge graph — a structured JSON file that captures everything: items, prerequisites, competences, learning paths, and student states. It's the mathematical backbone for adaptive education.
- Instructors who want to understand prerequisite structure and plan differentiated instruction
- Instructional designers building adaptive courses or assessments
- EdTech developers who need a principled knowledge model (not ad hoc tagging)
- Researchers in Knowledge Space Theory, educational data mining, or learning analytics
- Anyone using Claude Code who wants to explore KST with real course materials
Unlike keyword tagging or simple topic trees, this suite is grounded in 40+ years of mathematical learning theory:
- Knowledge Space Theory (Doignon & Falmagne, 1999) — the mathematical framework behind ALEKS, used by millions of students
- Competence-Based KST (Heller & Stefanutti, 2024) — the current state-of-the-art, adding a latent skill layer that explains why prerequisites exist
- Formal Concept Analysis — rigorous lattice-theoretic methods for discovering concept hierarchies
- Evidence-Centered Design — principled assessment architecture (Mislevy et al., 2003)
Every skill embeds its theoretical grounding and academic references inline. Using the skills teaches you the theory.
- Claude Code CLI installed
- Python 3.9+ (for the computational utilities — standard library only, no pip installs needed)
# Clone the repository
git clone https://github.com/vanderbilt-data-science/knowledge-spaces.git
cd knowledge-spaces
# That's it. No dependencies to install.
# The skills follow the Agent Skills open standard in .claude/skills/
# The Python utilities use only the standard library.

# Start Claude Code in the project directory
claude
# Step 1: Extract knowledge items from your course materials
> /extracting-knowledge-items path/to/your/syllabus.pdf
# Step 2: Build the concept map and discover prerequisites
> /mapping-concepts-and-competences graphs/your-domain-knowledge-graph.json
# Step 3: Construct the formal prerequisite relation
> /building-surmise-relations graphs/your-domain-knowledge-graph.json
# Step 4: Derive the full knowledge space
> /constructing-knowledge-space graphs/your-domain-knowledge-graph.json
# Step 5: Validate everything
> /validating-knowledge-structure graphs/your-domain-knowledge-graph.json

You now have a mathematically validated knowledge space. Use it to assess students, generate materials, or plan instruction.
┌──────────────────────────────────────────────────────────────────────────────┐
│ PHASE 1: Domain Analysis │
│ │
│ /extracting-knowledge-items ─→ /decomposing-learning-objectives │
│ Course materials → items Learning objectives → atomic items │
│ (Bloom's, DOK, SOLO, Fink's, ECD) │
│ ↓ │
│ /mapping-concepts-and-competences │
│ Concept map & competences (CbKST) │
├──────────────────────────────────────────────────────────────────────────────┤
│ PHASE 2: Structure Construction │
│ │
│ /building-surmise-relations ─→ /constructing-knowledge-space │
│ QUERY algorithm → prerequisites Enumerate states, fringes, paths │
│ ↓ │
│ /validating-knowledge-structure │
│ Mathematical & educational checks │
├──────────────────────────────────────────────────────────────────────────────┤
│ PHASE 3: Application │
│ │
│ /assessing-knowledge-state /generating-learning-materials │
│ Adaptive BLIM assessment Personalized content for outer fringe │
│ │
│ /planning-adaptive-instruction │
│ Class-wide JIT lecture planning │
├──────────────────────────────────────────────────────────────────────────────┤
│ PHASE 4: Maintenance │
│ │
│ /updating-knowledge-domain │
│ Evolve the structure when curriculum changes │
└──────────────────────────────────────────────────────────────────────────────┘
| Skill | Purpose | Input | Output |
|---|---|---|---|
| /extracting-knowledge-items | Extract atomic knowledge items from course materials | Syllabus, textbook, standards | items[] with Bloom's, DOK, competences |
| /decomposing-learning-objectives | Decompose learning objectives into testable items | Learning objectives | Refined items[] with 5-framework classification |
| /mapping-concepts-and-competences | Build concept map, identify competences (CbKST) | Knowledge graph + materials | competences[], preliminary prerequisites, Mermaid diagrams |
| Skill | Purpose | Input | Output |
|---|---|---|---|
| /building-surmise-relations | Construct the prerequisite relation (QUERY algorithm) | Knowledge graph with items | surmise_relations[], competence_relations[] |
| /constructing-knowledge-space | Derive all feasible knowledge states | Knowledge graph with relations | knowledge_states[], learning_paths[], Hasse diagram |
| /validating-knowledge-structure | Validate mathematical and educational properties | Complete knowledge graph | Validation report (PASS/WARN/FAIL) |
| Skill | Purpose | Input | Output |
|---|---|---|---|
| /assessing-knowledge-state | Adaptive assessment using BLIM/PoLIM | Knowledge graph + student ID | Student knowledge state, fringes, competence state |
| /generating-learning-materials | Generate personalized learning materials | Knowledge graph + student state | Explanations, examples, problems (UDL 3.0) |
| /planning-adaptive-instruction | JIT lecture planning from class data | Knowledge graph + all students | Session plan with groupings, targets, peer tutoring |
| Skill | Purpose | Input | Output |
|---|---|---|---|
| /updating-knowledge-domain | Update structure for curriculum changes | Knowledge graph + change description | Updated graph + impact analysis |
A knowledge space is the set of all feasible knowledge states for a domain. Not every combination of items is feasible — if you know calculus, you must also know algebra. The prerequisite relationships (the surmise relation) constrain which combinations are possible.
Student who knows {algebra, geometry, trig} ← feasible state
Student who knows {calculus, but not algebra} ← NOT feasible (violates prerequisites)
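The feasibility condition is just downward closure, which takes only a few lines to check. A minimal sketch, using hypothetical item names and prerequisite pairs (this is illustration, not the repository's kst_utils.py implementation):

```python
# Surmise relation as prerequisite pairs: (a, b) means "a is a prerequisite of b".
# Items and pairs are hypothetical, for illustration only.
PREREQS = {("algebra", "trig"), ("algebra", "calculus"), ("geometry", "trig")}

def is_feasible(state):
    """A state is feasible iff it is downward-closed: every mastered item's
    prerequisites are also mastered."""
    return all(a in state for (a, b) in PREREQS if b in state)

print(is_feasible({"algebra", "geometry", "trig"}))  # → True
print(is_feasible({"calculus"}))                     # → False (missing algebra)
```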
Knowledge Items (Q): The atomic units of knowledge in a domain. Each is testable with a single assessment question. The skills extract these from your course materials and classify them using Bloom's Revised Taxonomy, Webb's Depth of Knowledge, SOLO Taxonomy, and Fink's Taxonomy of Significant Learning.
Surmise Relation: The prerequisite quasi-order. If item A is a prerequisite for item B, then any student who has mastered B must also have mastered A. This is built using the QUERY algorithm (Koppen & Doignon, 1990), with Claude acting as the domain expert.
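Because the surmise relation is a quasi-order, it must be transitively closed: if A is a prerequisite of B and B of C, then A is a prerequisite of C. A minimal closure sketch (the repository exposes this via the kst_utils.py closure command; the function below is an illustrative stand-in):

```python
def transitive_closure(pairs):
    """Close a set of (prerequisite, item) pairs under transitivity:
    (a, b) and (b, c) together imply (a, c). Repeats until no new
    pair is inferred."""
    closure = set(pairs)
    changed = True
    while changed:
        inferred = {(a, d) for (a, b) in closure for (c, d) in closure if b == c}
        changed = not inferred <= closure
        closure |= inferred
    return closure

# Hypothetical chain: algebra → trig → calculus
result = transitive_closure({("algebra", "trig"), ("trig", "calculus")})
# result includes the inferred pair ("algebra", "calculus")
```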
Knowledge States: Feasible subsets of items — sets that are downward-closed under the surmise relation. The family of all such states forms the knowledge space.
Fringes: For any knowledge state:
- The inner fringe is the set of most-recently mastered items (remove any one and the state is still feasible)
- The outer fringe is the set of items the student is ready to learn next (add any one and the state is still feasible)
Fringes are remarkably compact — ALEKS research shows a state with 80 items typically has only ~9 fringe items.
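Both fringes fall out of the feasibility test directly. A sketch under the same hypothetical prerequisite pairs as above (assumes a quasi-ordinal space, where every downward-closed set is a state):

```python
# Hypothetical domain and prerequisite pairs, for illustration only.
PREREQS = {("algebra", "trig"), ("algebra", "calculus"), ("geometry", "trig")}
ITEMS = {"algebra", "geometry", "trig", "calculus"}

def is_feasible(state):
    return all(a in state for (a, b) in PREREQS if b in state)

def fringes(state):
    """inner fringe: items removable without breaking feasibility;
    outer fringe: items addable without breaking feasibility."""
    inner = {q for q in state if is_feasible(state - {q})}
    outer = {q for q in ITEMS - state if is_feasible(state | {q})}
    return inner, outer

inner, outer = fringes({"algebra", "geometry", "trig"})
# inner == {"trig"}      — the most recently masterable item
# outer == {"calculus"}  — ready to learn next
```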
Competences (CbKST): Latent skills that explain why items cluster together. A student might struggle with 5 different items not because they're missing 5 things, but because they're missing one underlying competence. The Competence-Based KST framework (Heller & Stefanutti, 2024) adds this explanatory layer.
The /assessing-knowledge-state skill implements an ALEKS-style adaptive assessment:
- Start with uniform probability over all feasible states
- Ask a question about the item that maximally discriminates between states (~50/50 split)
- Update probabilities using the Basic Local Independence Model (BLIM) with lucky-guess and careless-error parameters
- Repeat until entropy drops below threshold (~20-30 questions for moderate domains)
This is orders of magnitude more efficient than testing every item individually.
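The BLIM update step (item 3 in the loop above) can be sketched as a single Bayesian pass over candidate states. Parameter values and state names here are hypothetical; the skill embeds its own specification:

```python
def blim_update(posterior, item, correct, beta=0.1, eta=0.1):
    """One BLIM-style Bayesian update over candidate knowledge states.
    posterior: dict mapping frozenset(state) -> probability.
    P(correct | item in state)     = 1 - beta  (beta = careless-error rate)
    P(correct | item not in state) = eta       (eta  = lucky-guess rate)"""
    likelihoods = {}
    for state, p in posterior.items():
        p_correct = (1 - beta) if item in state else eta
        likelihoods[state] = p * (p_correct if correct else 1 - p_correct)
    total = sum(likelihoods.values())
    return {s: p / total for s, p in likelihoods.items()}

# Two candidate states with equal prior; a correct answer on "algebra"
# shifts probability mass toward the state that contains it.
states = {frozenset(): 0.5, frozenset({"algebra"}): 0.5}
states = blim_update(states, "algebra", correct=True)
# states[frozenset({"algebra"})] ≈ 0.9
```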
The skills follow the Agent Skills open standard and live in .claude/skills/. Clone this repo and work from within it:
cd knowledge-spaces
claude
# Use any skill with /skill-name and pass arguments
> /extracting-knowledge-items path/to/syllabus.pdf
> /building-surmise-relations graphs/my-course-knowledge-graph.json
> /assessing-knowledge-state graphs/my-course-knowledge-graph.json student-alice

To use the skills in a different project, copy the .claude/skills/ directory, the scripts/ directory, and the schemas/ directory into your project:
# From your project directory
cp -r path/to/knowledge-spaces/.claude/skills/ .claude/skills/
cp -r path/to/knowledge-spaces/scripts/ scripts/
cp -r path/to/knowledge-spaces/schemas/ schemas/
mkdir -p graphs

The skills work identically in Claude Code's IDE integrations. Open the project in your IDE, open the Claude Code panel, and type /extracting-knowledge-items (or any skill name) to invoke it.
The pipeline has natural parallelism that Cowork can exploit:
Phase 1 — Parallel domain analysis:
You can run /extracting-knowledge-items, /decomposing-learning-objectives,
and /mapping-concepts-and-competences in parallel if they operate on different
source materials. They all contribute to the same knowledge graph and will be merged.
Phase 2 — Sequential (each step depends on the previous):
/building-surmise-relations → /constructing-knowledge-space → /validating-knowledge-structure
These must run in order.
Phase 3 — Parallel per student:
Run /assessing-knowledge-state for multiple students simultaneously.
Run /generating-learning-materials for multiple students simultaneously.
Each operates on its own student state independently.
Example Cowork session:
Start 3 agents:
Agent 1: /extracting-knowledge-items syllabus.pdf
Agent 2: /decomposing-learning-objectives objectives.md
Agent 3: /extracting-knowledge-items textbook-ch1.pdf
When all complete, merge results and run:
Agent 4: /mapping-concepts-and-competences graphs/combined-knowledge-graph.json
→ /building-surmise-relations → /constructing-knowledge-space
→ /validating-knowledge-structure
Then fan out for assessment:
Agent 5: /assessing-knowledge-state graphs/course-kg.json student-alice
Agent 6: /assessing-knowledge-state graphs/course-kg.json student-bob
Agent 7: /assessing-knowledge-state graphs/course-kg.json student-carol
Finally, plan instruction:
Agent 8: /planning-adaptive-instruction graphs/course-kg.json
The SKILL.md files are self-contained markdown prompts. You can use them on any platform that supports Claude:
- Copy the skill text from any .claude/skills/<skill-name>/SKILL.md file
- Paste it as a system prompt or prepend it to your message
- Replace $ARGUMENTS with your actual input
- Include scripts/kst_utils.py in the conversation if the skill references it (for computational validation)
- Load reference files from references/ or shared-references/ if the skill mentions them for deeper context
The skills are designed so that an agent with no prior context can execute them — all methodology and output format specifications are embedded in each SKILL.md file, with extended theoretical grounding available in reference files.
import anthropic
client = anthropic.Anthropic()
# Read the skill prompt
with open(".claude/skills/extracting-knowledge-items/SKILL.md") as f:
skill_prompt = f.read()
# Replace $ARGUMENTS with your input
skill_prompt = skill_prompt.replace("$ARGUMENTS", "Analyze the attached syllabus...")
message = client.messages.create(
    model="claude-sonnet-4-5-20250929",
    max_tokens=8096,
    messages=[{"role": "user", "content": skill_prompt}],
)

All skills produce and consume a shared JSON format defined in schemas/knowledge-graph.schema.json.
KnowledgeGraph
├── metadata # Domain name, version, provenance, change log
├── items[] # The knowledge domain Q
│ ├── id, label, description
│ ├── bloom_level # remember/understand/apply/analyze/evaluate/create
│ ├── knowledge_type # factual/conceptual/procedural/metacognitive
│ ├── dok_level # Webb's Depth of Knowledge (1-4)
│ ├── solo_level # SOLO taxonomy level
│ ├── required_competences[] # CbKST: which competences this item needs
│ ├── source_objectives[] # Original learning objectives
│ ├── assessment_criteria # How to test mastery
│ └── tags[]
├── surmise_relations[] # Prerequisite pairs with confidence & rationale
├── competences[] # CbKST: latent skills (optional)
├── competence_relations[] # CbKST: competence prerequisites (optional)
├── knowledge_states[] # All feasible states with fringes (optional)
├── learning_paths[] # Named sequences through the space
└── student_states{} # Per-student tracking
└── [student-id]
├── current_state # Mastered items
├── competence_state # CbKST: possessed competences
├── inner_fringe # Most advanced mastered items
├── outer_fringe # Ready to learn next
├── history[] # State transitions over time
└── assessment_log[] # Assessment interactions
The CbKST fields (competences, competence_relations, required_competences, competence_state) are optional — omit them for a purely item-based workflow.
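A minimal instance built from the field names in the tree above might look like the following. The exact field names inside surmise_relations entries (prerequisite, item, confidence, rationale) are assumptions for illustration; schemas/knowledge-graph.schema.json is the authoritative definition:

```python
import json

# Hypothetical minimal knowledge graph; field names follow the tree above,
# but consult schemas/knowledge-graph.schema.json for the actual requirements.
graph = {
    "metadata": {"domain": "intro-statistics", "version": "0.1.0"},
    "items": [
        {"id": "q1", "label": "Compute a sample mean",
         "bloom_level": "apply", "dok_level": 1},
        {"id": "q2", "label": "Interpret a confidence interval",
         "bloom_level": "understand", "dok_level": 2},
    ],
    "surmise_relations": [
        {"prerequisite": "q1", "item": "q2", "confidence": 0.9,
         "rationale": "Interval interpretation presumes descriptive statistics."},
    ],
    # CbKST fields (competences, competence_relations, ...) omitted:
    # this is a purely item-based graph.
}
print(json.dumps(graph, indent=2))
```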
The scripts/kst_utils.py module provides Python functions for KST math that skills call during execution. It requires only Python 3.9+ standard library — no pip installs.
python3 scripts/kst_utils.py validate <graph.json> # Validate structure
python3 scripts/kst_utils.py closure <graph.json> # Transitive closure
python3 scripts/kst_utils.py enumerate <graph.json> # Enumerate knowledge states
python3 scripts/kst_utils.py paths <graph.json> # Generate learning paths
python3 scripts/kst_utils.py analytics <graph.json> # Class-wide analytics
python3 scripts/kst_utils.py cycles <graph.json> # Detect cycles
python3 scripts/kst_utils.py stats <graph.json>      # Print summary statistics

knowledge-spaces/
├── README.md
├── LICENSE # MIT License
├── CLAUDE.md # Project context for Claude agents
├── .claude/skills/ # Agent Skills (open standard)
│ ├── shared-references/ # Shared reference files
│ │ ├── taxonomy-frameworks.md # Bloom's, DOK, SOLO, Marzano's, Fink's
│ │ ├── cbkst-overview.md # CbKST theory
│ │ ├── kst-foundations.md # Core KST definitions
│ │ └── ecd-framework.md # Evidence-Centered Design
│ ├── extracting-knowledge-items/SKILL.md # Phase 1: Extract items
│ ├── decomposing-learning-objectives/SKILL.md # Phase 1: Decompose objectives
│ ├── mapping-concepts-and-competences/ # Phase 1: Concept map & competences
│ │ ├── SKILL.md
│ │ └── references/fca-methodology.md
│ ├── building-surmise-relations/ # Phase 2: QUERY algorithm
│ │ ├── SKILL.md
│ │ └── references/query-algorithm-detail.md
│ ├── constructing-knowledge-space/ # Phase 2: Knowledge space
│ │ ├── SKILL.md
│ │ └── references/lattice-theory.md
│ ├── validating-knowledge-structure/ # Phase 2: Validation
│ │ ├── SKILL.md
│ │ └── references/validation-criteria.md
│ ├── assessing-knowledge-state/ # Phase 3: Adaptive assessment
│ │ ├── SKILL.md
│ │ └── references/blim-polim-models.md
│ ├── generating-learning-materials/ # Phase 3: Learning materials
│ │ ├── SKILL.md
│ │ └── references/udl-scaffolding.md
│ ├── planning-adaptive-instruction/ # Phase 3: Lecture planning
│ │ ├── SKILL.md
│ │ └── references/differentiation-strategies.md
│ └── updating-knowledge-domain/ # Phase 4: Maintenance
│ ├── SKILL.md
│ └── references/trace-operations.md
├── schemas/
│ └── knowledge-graph.schema.json # JSON Schema for the graph format
├── scripts/
│ └── kst_utils.py # Python computational utilities
├── references/
│ └── bibliography.md # Consolidated academic bibliography (60+ refs)
└── graphs/ # Output directory for knowledge graphs
This suite implements methods from a mature body of mathematical learning theory spanning 40+ years:
| Area | Key References | Used In |
|---|---|---|
| Knowledge Space Theory | Doignon & Falmagne (1999), Falmagne & Doignon (2011) | All skills |
| Competence-Based KST | Heller & Stefanutti (2024), Stefanutti & de Chiusole (2017) | All skills (CbKST layer) |
| QUERY Algorithm | Koppen & Doignon (1990), Cosyn et al. (2021) | Building Surmise Relations |
| BLIM / PoLIM Assessment | Falmagne et al. (2006), Stefanutti et al. (2020) | Assessing Knowledge State |
| Formal Concept Analysis | Ganter & Wille (1999), Huang et al. (2025) | Mapping Concepts, Building Surmise |
| Bloom's Revised Taxonomy | Anderson & Krathwohl (2001) | Extracting Items, Decomposing Objectives |
| Webb's Depth of Knowledge | Webb (1997), Hess et al. (2009) | Extracting Items, Decomposing Objectives |
| Evidence-Centered Design | Mislevy et al. (2003) | Assessing Knowledge State, Decomposing Objectives |
| Universal Design for Learning | CAST (2024) UDL 3.0 | Generating Materials, Planning Instruction |
| Learning & Forgetting | de Chiusole et al. (2022) | Planning Instruction, Updating Domain |
The complete bibliography with 60+ references is in references/bibliography.md.
Here's what a complete workflow looks like for an Introductory Statistics course:
# Start Claude Code
claude
# 1. Feed in the syllabus
> /extracting-knowledge-items Here is my Intro Stats syllabus: [paste or provide path]
# → Creates graphs/intro-statistics-knowledge-graph.json with ~30-50 items
# 2. Refine with explicit learning objectives
> /decomposing-learning-objectives graphs/intro-statistics-knowledge-graph.json
# "Students will be able to: 1) Calculate descriptive statistics..."
# → Adds/refines items with Bloom's, DOK, SOLO classification
# 3. Build the concept map and identify competences
> /mapping-concepts-and-competences graphs/intro-statistics-knowledge-graph.json
# → Adds competences[], concept relationships, Mermaid diagrams
# 4. Construct the formal prerequisite structure
> /building-surmise-relations graphs/intro-statistics-knowledge-graph.json
# → Adds surmise_relations[] with confidence scores and rationales
# 5. Derive the knowledge space
> /constructing-knowledge-space graphs/intro-statistics-knowledge-graph.json
# → Adds knowledge_states[], learning_paths[], Hasse diagram
# 6. Validate everything
> /validating-knowledge-structure graphs/intro-statistics-knowledge-graph.json
# → Reports PASS/WARN/FAIL for mathematical and educational checks
# 7. Assess a student
> /assessing-knowledge-state graphs/intro-statistics-knowledge-graph.json student-alice
# → Adaptive quiz → determines Alice's knowledge state and outer fringe
# 8. Generate personalized materials for Alice
> /generating-learning-materials graphs/intro-statistics-knowledge-graph.json student-alice
# → Custom explanations, examples, practice problems for her outer fringe
# 9. Plan next lecture using all student states
> /planning-adaptive-instruction graphs/intro-statistics-knowledge-graph.json
# → Session plan: review targets, groupings, peer tutoring pairings

Contributions are welcome! Some areas where help is especially valuable:
- New skills for specialized workflows (e.g., exam generation, curriculum alignment)
- Empirical validation — testing the pipeline against real student data
- Integration with LMS platforms (Canvas, Moodle, Blackboard)
- Additional computational utilities (e.g., IITA implementation, concept lattice computation)
- Translations of skill prompts for non-English instruction
Please open an issue to discuss your idea before submitting a PR.
If you use this in academic work:
@software{knowledge_spaces_2025,
title={Knowledge Spaces: AI-Powered Knowledge Space Theory for Adaptive Education},
author={{Vanderbilt Data Science Institute}},
year={2025},
url={https://github.com/vanderbilt-data-science/knowledge-spaces},
note={A suite of Claude Code skills implementing the full KST pipeline}
}

This project is licensed under the MIT License.
Built at the Vanderbilt Data Science Institute