
# LLM Workflows

This page documents concrete LLM workflows for maintaining and improving the knowledge base. Each workflow is designed to be run in Claude Code and produces specific outputs.


| Workflow | Input | Output | Cost |
| --- | --- | --- | --- |
| Page Improvement | Low-quality page | Q5 page with tables, diagrams, citations | $3-5 (Opus) |
| Research Report | Topic question | Comprehensive report with causal factors | $5-10 (Opus) |
| Causal Diagram | Entity + research | YAML causeEffectGraph | $2-4 (Opus) |
| Quantitative Estimates | Factor list | Documented estimates with reasoning | $3-6 (Opus) |
| Full Topic Pipeline | Topic | Report + diagram + estimates | $10-20 (Opus) |

## Page Improvement

Goal: Upgrade a knowledge base page to quality level 5.

When to Use: The page has quality < 4 and importance > 50, is missing tables, diagrams, or citations, or is bullet-heavy.

Find candidates:

```sh
node scripts/page-improver.mjs --list --max-qual 3 --min-imp 50
```


Prompt:

```
Improve the page at [path/to/page.mdx] to quality level 5.

Requirements for Q5:
- Quick Assessment table with 5+ rows (Dimension, Assessment, Evidence)
- 2+ additional substantive tables with real data
- 1+ Mermaid diagram showing key relationships
- 10+ citations from authoritative sources (with real URLs)
- Replace vague claims ("significant") with quantified claims ("25-40%")
- 800+ words of substantive content

Follow the style guide at /internal/knowledge-base/. Use tables over bullet lists. Add <Aside> components for key insights.
```
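For the "1+ Mermaid diagram" requirement, a minimal sketch of the expected shape (the node labels here are placeholder examples, not prescribed content):

```mermaid
flowchart LR
  A[Compute scaling] --> C[Capability growth]
  B[Algorithmic progress] --> C
  C --> D[Misuse potential]
  E[Access controls] --> D
```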

Validation:

```sh
npm run validate:mdx && npm run validate:style
```

Reference Examples:

- Gold standard: `src/content/docs/knowledge-base/risks/misuse/bioweapons.mdx`
- Good example: `src/content/docs/knowledge-base/risks/structural/racing-dynamics.mdx`

## Research Report

Goal: Create a comprehensive research report that can inform diagram creation.

When to Use: Need deep understanding before building models, filling knowledge gaps, or investigating specific questions.

Output Location: `src/content/docs/internal/research-reports/{topic-id}.mdx`


Prompt:

```
/research-report
Create a research report on [topic].

Focus areas:
- [Specific question 1]
- [Specific question 2]
- How this connects to AI safety / the AI Transition Model

Use web search to find:
- Academic sources (arxiv, Nature, Science)
- Policy sources (RAND, Brookings, government reports)
- Recent developments (2024-2025)

Output format: Follow the research report style guide. Include:
- Executive summary table
- Causal factors tables (organized by strength)
- Open questions table
- Organized sources by type
```

Skill Invocation:

```
Use the research-report skill to investigate [topic].
```

Post-Workflow: The Causal Factors section can be directly used to create a cause-effect diagram.


## Causal Diagram

Goal: Create a cause-effect diagram for an AI Transition Model entity.

When to Use: Entity lacks a causeEffectGraph, research report completed, or modeling a new factor.

File Location: `src/data/entities/ai-transition-model.yaml` (search for `id: tmc-{factor-name}`)


Prompt (From Research):

```
/cause-effect-diagram
Create a cause-effect diagram for [entity-id] based on the research report at [path/to/report.mdx].

Map the causal factors from the report to diagram nodes:
- Primary factors → strong edges
- Secondary factors → medium edges
- Minor factors → weak edges

Use the node type hierarchy:
- leaf: Root inputs, external factors
- cause: Derived from leaves
- intermediate: Direct contributing factors
- effect: The target outcome

Target: 10-15 nodes, max 20. Avoid feedback loops.
```
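For orientation, a hypothetical sketch of what a small causeEffectGraph entry might look like. The field names and node IDs below are illustrative assumptions, not the project's actual schema; check an existing entity in the YAML file for the real structure:

```yaml
# Hypothetical sketch -- field names and IDs are illustrative,
# not necessarily the actual entity schema.
causeEffectGraph:
  nodes:
    - id: compute-access
      type: leaf
      label: "Access to frontier compute"
    - id: capability-growth
      type: intermediate
      label: "Capability growth rate"
    - id: target-outcome
      type: effect
      label: "The factor being modeled"
  edges:
    - from: compute-access
      to: capability-growth
      strength: strong
    - from: capability-growth
      to: target-outcome
      strength: medium
```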

Prompt (From Scratch):

```
/cause-effect-diagram
Create a cause-effect diagram for [entity-id] answering: "[question about what drives this factor]"

First, identify:
1. The target outcome (effect node)
2. Direct factors (intermediate nodes)
3. Upstream causes (cause nodes)
4. Root inputs (leaf nodes)

Then define edges with appropriate strengths based on causal importance.
```

View Result:

- Development: `http://localhost:4321/diagrams?entity={entity-id}`
- Diagram index: `http://localhost:4321/diagrams`

## Quantitative Estimates

Goal: Create documented estimates for AI Transition Model factors.

When to Use: Populating the estimates table, adding quantitative backing to claims, or comparing expert positions.

Common Factors: Timeline estimates, probability estimates, resource estimates, impact magnitudes.


Prompt:

```
Create quantitative estimates for the following factors in the AI Transition Model:

Factors to estimate:
- [Factor 1]
- [Factor 2]
- [Factor 3]

For each factor, provide:
1. Point estimate or range
2. Confidence interval (if applicable)
3. Key assumptions
4. Sources that informed the estimate
5. How the estimate would change under different assumptions

Use web search to find existing estimates from:
- Expert surveys (AI Impacts, Metaculus)
- Research papers
- Policy reports

Format as a table with columns: Factor | Estimate | Confidence | Key Assumptions | Sources
```

Integration: Estimates can be added to entity frontmatter. The table page at `/ai-transition-model/table/` aggregates these.
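As a rough illustration, an estimate in entity frontmatter might look something like the following. This is a hypothetical sketch only; the field names are invented for illustration and the values are placeholders, so consult an existing entity for the real format:

```yaml
# Hypothetical frontmatter sketch -- field names and values are
# placeholders, not the project's actual estimate schema.
estimates:
  - factor: "[Factor 1]"
    estimate: "[range or point value]"
    confidence: medium
    assumptions: "[key assumptions behind the estimate]"
    sources:
      - "[expert survey or paper]"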

### Opinion Fuzzing for Robust Estimates

The Problem: LLM outputs vary based on model, prompt phrasing, and simulated perspective.

The Solution: Sample across three dimensions:

| Dimension | What to Vary | Example |
| --- | --- | --- |
| Models | Different LLM providers | Claude, GPT-4, Gemini |
| Prompts | 4-20 phrasings of the same question | "What's the probability…" vs. "How likely is…" |
| Personas | Simulated expert perspectives | Skeptic, optimist, domain expert |

Opinion Fuzzing Prompt:

```
Generate estimates for [factor] using opinion fuzzing:
1. Create 5 different prompt phrasings for this estimation question
2. For each phrasing, generate estimates from 3 personas:
   - Optimistic AI researcher
   - Skeptical safety researcher
   - Policy analyst focused on near-term risks
3. Present results as a table showing:
   - Prompt variant | Persona | Estimate | Key reasoning
4. Analyze the variance:
   - Where do estimates cluster?
   - Which prompts/personas produce outliers?
   - What drives the disagreements?
5. Provide a final calibrated estimate that accounts for this variance structure
```
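The variance analysis in step 4 can also be done mechanically once the estimates are collected. A minimal Python sketch (the estimate values are invented placeholders, not real survey data) that finds where estimates cluster, flags outliers, and produces a robust point estimate:

```python
from statistics import mean, median, stdev

# Hypothetical probability estimates (placeholders, not real data),
# keyed by (prompt_variant, persona).
estimates = {
    ("v1", "optimist"): 0.10, ("v1", "skeptic"): 0.35, ("v1", "policy"): 0.20,
    ("v2", "optimist"): 0.12, ("v2", "skeptic"): 0.40, ("v2", "policy"): 0.22,
    ("v3", "optimist"): 0.08, ("v3", "skeptic"): 0.30, ("v3", "policy"): 0.18,
}

values = list(estimates.values())
mu, sigma = mean(values), stdev(values)

# Flag estimates more than 1.5 standard deviations from the mean.
outliers = {k: v for k, v in estimates.items() if abs(v - mu) > 1.5 * sigma}

# Use the median as a simple calibrated point estimate; it is robust
# to the outliers flagged above.
calibrated = median(values)

print(f"mean={mu:.3f} sd={sigma:.3f} calibrated={calibrated:.3f}")
print("outliers:", outliers)
```

The median is a deliberately simple aggregator; with real data you would also want to report the spread per persona, since systematic persona-level disagreement is a finding in itself, not noise to average away.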

When to Use: High-stakes estimates, contentious topics, estimates with high uncertainty.

Reference: *Opinion Fuzzing: A Proposal for Reducing and Exploring Disagreement*


## Full Topic Pipeline

Goal: Complete end-to-end coverage of a new topic.

When to Use: Adding a new factor to the AI Transition Model, creating comprehensive coverage from scratch.

Steps:

1. Research the topic

   ```
   /research-report
   Create a comprehensive research report on [topic].
   Focus on:
   - What is this and why does it matter for AI safety?
   - What are the key causal factors?
   - What do experts disagree about?
   - What quantitative data exists?
   ```

2. Create the cause-effect diagram

   ```
   /cause-effect-diagram
   Based on the research report just created, build a cause-effect diagram for [entity-id].
   Extract causal factors from the report's Causal Factors section and map them to nodes and edges.
   ```

3. Add quantitative estimates

   ```
   Based on the research report and diagram, create quantitative estimates for the key factors identified.
   Focus on the most decision-relevant estimates:
   - Factors with high sensitivity (downstream effects)
   - Factors with high changeability (intervention points)
   - Key uncertainties that would shift the overall picture
   ```

4. Create or update the knowledge base page

   ```
   Create or update the knowledge base page for [topic] incorporating:
   - Findings from the research report
   - The cause-effect diagram (embed or link)
   - Key estimates with confidence levels
   Follow the Q5 page requirements from the Page Improvement workflow.
   ```

Tracking Progress:

```sh
# View queue
node scripts/document-enhancer.mjs list --sort gap --limit 20

# After completing each step, update quality ratings
node scripts/grade-content.mjs --page [page-id] --apply
```

## Cost Management & Model Selection

| Model | Best For | Cost |
| --- | --- | --- |
| Opus 4.5 | Complex research, synthesis, diagrams | $3-5/page |
| Sonnet 4.5 | Page improvement, grading | $0.50-1/page |
| Haiku | Summaries, simple edits | $0.02/page |

Parallelization: When working on multiple pages/topics:

```
Run the following in parallel:
1. Research report on [topic A]
2. Research report on [topic B]
3. Page improvement for [page C]
```
## Validation & Quality Checks

Always run validation after any workflow:

```sh
npm run validate        # All validators
npm run validate:mdx    # MDX syntax
npm run validate:style  # Style compliance
npm run validate:data   # Entity data integrity
```
## Common Issues

| Issue | Cause | Fix |
| --- | --- | --- |
| LaTeX rendering in currency | Unescaped `$` | Use `\$` |
| Diagram won't render | Feedback loop in edges | Remove cyclic edges |
| Validation fails | Schema mismatch | Check that frontmatter dates aren't quoted |
| Missing backlinks | Entity not in registry | Run `npm run build:data` |
## Workflow Integration Map

```
┌─────────────────┐
│ Research Report │
│ /research-report│
└────────┬────────┘
         │ Causal Factors
         ├──────────────────────┐
         ▼                      ▼
┌─────────────────┐   ┌─────────────────┐
│  Cause-Effect   │   │  Quantitative   │
│    Diagram      │   │    Estimates    │
│ /cause-effect-  │   │                 │
│    diagram      │   └────────┬────────┘
└────────┬────────┘            │
         │                     │
         └──────────┬──────────┘
                    ▼
          ┌─────────────────┐
          │ Knowledge Base  │
          │      Page       │
          │  (Q5 quality)   │
          └─────────────────┘
```