Module 3: Prompt Chaining
A single prompt can handle a simple question. It can't handle an 80-page contract analysis. Prompt chaining is how you break complex work into a sequence of focused steps.
The Problem with Single Prompts
You could ask: "Analyze this contract, extract all clauses, compare against our policies, calculate financial exposure, and write a summary."
That prompt will produce mediocre results on every task instead of excellent results on any one. The model tries to do everything at once, loses focus, and cuts corners.
How Prompt Chaining Works
Break the work into steps. Each step has one focused prompt. The output of each step feeds into the next.
Step 1: Extract → "Parse this PDF. Return all clauses as structured JSON."
↓
Step 2: Classify → "Categorize these clauses: payment, liability, termination, IP, other."
↓
Step 3: Analyze → "Compare these liability clauses against our approved templates."
↓
Step 4: Assess → "Based on the deviations found, generate a risk score and recommendations."
Each step does one thing well. Each step produces structured output that the next step can consume.
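The four steps above can be sketched as a plain Python pipeline. `call_model` is a hypothetical stand-in for your LLM client; its outputs are stubbed with canned JSON here so the structured handoffs are visible and the script runs end to end.

```python
import json

# Hypothetical stand-in for an LLM call. A real chain would hit your model
# API here; outputs are stubbed per step so the pipeline is runnable.
STUB_OUTPUTS = {
    "extract":  json.dumps([{"section": "7.2", "text": "Liability is capped at fees paid."}]),
    "classify": json.dumps([{"section": "7.2", "type": "liability"}]),
    "analyze":  json.dumps([{"section": "7.2", "deviation": "cap below policy minimum"}]),
    "assess":   json.dumps({"risk_score": 7, "recommendation": "escalate to legal"}),
}

def call_model(step: str, payload: str) -> str:
    return STUB_OUTPUTS[step]

def run_chain(pdf_text: str) -> dict:
    clauses    = call_model("extract",  pdf_text)    # Step 1: raw text -> clauses JSON
    tagged     = call_model("classify", clauses)     # Step 2: clauses -> tagged clauses
    deviations = call_model("analyze",  tagged)      # Step 3: tagged -> deviation report
    assessment = call_model("assess",   deviations)  # Step 4: deviations -> risk assessment
    return json.loads(assessment)

print(run_chain("...raw contract text...")["risk_score"])  # -> 7
```

Each intermediate value is JSON a later step can parse, which is the whole point of the handoff discipline.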
In Our Contract Workflow
| Step | Input | Prompt Focus | Output |
|---|---|---|---|
| Extract | Raw PDF | Pull out text by section | Structured clauses JSON |
| Classify | Clauses JSON | Categorize by type | Tagged clauses |
| Compare | Tagged clauses + Policy templates | Find deviations | Deviation report |
| Assess | Deviations | Score risk, recommend actions | Risk assessment |
| Summarize | Full analysis chain | Executive-friendly summary | Final report |
Why This Beats a Single Prompt
Debuggability. When the final output is wrong, you check each step's output to find where it went wrong. Was the extraction incomplete? Was the classification off? Was the risk scoring too aggressive? You can pinpoint the problem.
Tunability. Each step has its own prompt. You can tune the classification step without touching extraction. You can make the risk assessment more conservative without affecting anything upstream.
Cost control. Simple steps can use cheaper, faster models. Only the complex reasoning steps need the most capable model. Step 1 (extraction) might work fine with Haiku. Step 4 (risk assessment) needs Sonnet.
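One way to sketch that routing is a per-step lookup table; the step names and model labels below are assumptions for illustration, not a real API.

```python
# Route mechanical steps to a cheap, fast model and reasoning-heavy steps
# to the most capable one. Labels are illustrative tier names.
MODEL_FOR_STEP = {
    "extract":  "haiku",   # mechanical parsing: cheap and fast is enough
    "classify": "haiku",
    "compare":  "sonnet",  # policy comparison needs stronger reasoning
    "assess":   "sonnet",
}

def pick_model(step: str) -> str:
    # Default to the capable model so an unrouted step never gets undersized.
    return MODEL_FOR_STEP.get(step, "sonnet")
```

Keeping the routing in data rather than code makes it one more thing you can tune per step without touching the chain logic.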
Testability. Each step has a defined input and expected output. You can build test cases for each step independently.
Patterns
Sequential chain - Each step feeds the next. Most common for document processing.
Branching chain - Output of one step determines which chain to follow. "If the contract is in a foreign language, insert a translation step."
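A branching chain can be as simple as building the step list conditionally. `detect_language` below is a hypothetical helper, keyed on a single German legal term purely for the demo; in practice it could itself be a cheap classification step.

```python
def detect_language(text: str) -> str:
    # Hypothetical detector for illustration only.
    return "de" if "Haftung" in text else "en"

def build_chain(contract_text: str) -> list[str]:
    steps = ["extract", "classify", "compare", "assess"]
    if detect_language(contract_text) != "en":
        steps.insert(0, "translate")  # branch: prepend a translation step
    return steps
```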
Parallel chain - Multiple steps run simultaneously on different parts of the input. Extract payment clauses and liability clauses at the same time.
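Because the two extractions are independent, they can run in worker threads so their latencies overlap. `extract_clauses` is a hypothetical stand-in for a focused model call.

```python
from concurrent.futures import ThreadPoolExecutor

def extract_clauses(kind: str, text: str) -> dict:
    # Stand-in for one focused model request; in a real chain each call
    # goes over the network, so running them concurrently hides latency.
    return {"kind": kind, "count": text.lower().count(kind)}

def parallel_extract(text: str) -> list[dict]:
    kinds = ["payment", "liability"]
    with ThreadPoolExecutor(max_workers=len(kinds)) as pool:
        # pool.map preserves input order, so results line up with `kinds`.
        return list(pool.map(lambda k: extract_clauses(k, text), kinds))
```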
Validation chain - A separate chain reviews the output of the main chain. The analyzer generates a risk assessment. The validator checks whether every claim in the assessment is grounded in the actual contract text.
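A minimal sketch of that grounding check, with the model calls replaced by literal data: every quote the assessment cites must appear verbatim in the contract. A real validator would be a second model call with a narrow "verify, don't generate" prompt.

```python
def ungrounded_claims(assessment: dict, contract_text: str) -> list[str]:
    # Return every cited quote that does not appear in the source text.
    return [q for q in assessment.get("quotes", []) if q not in contract_text]

contract = "Liability is capped at fees paid in the prior 12 months."
assessment = {"risk_score": 7, "quotes": ["capped at fees paid", "auto-renews annually"]}
print(ungrounded_claims(assessment, contract))  # -> ['auto-renews annually']
```

Anything the validator flags goes back to the analyzer for revision instead of reaching the final report.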
Common Mistakes
- Chains too long - Every step adds latency. If you have 10 steps and each takes 3 seconds, that's 30 seconds before any output. Keep chains under 5-6 steps.
- Unstructured handoffs - If step 1 returns free text and step 2 needs to parse it, you'll lose information. Use JSON between steps.
- No intermediate validation - If step 1 fails silently, every downstream step works on bad data. Validate output at each step before passing it forward.
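A small guard between steps covers the last two mistakes at once: insist on JSON and check it before handing it forward. `checked_handoff` is an illustrative helper, not a library function.

```python
import json

def checked_handoff(raw: str, required_keys: set[str]) -> dict:
    # Validate a step's output before the next step consumes it, so a silent
    # upstream failure stops the chain instead of corrupting everything after it.
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as err:
        raise ValueError(f"step returned non-JSON output: {err}") from err
    missing = required_keys - data.keys()
    if missing:
        raise ValueError(f"step output missing keys: {sorted(missing)}")
    return data
```

Failing loudly at the handoff is far cheaper than debugging a bad final report four steps later.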
What's Next
Prompt chaining handles sequential work within a single agent. In Module 4: Orchestration, we cover how to coordinate multiple agents working in parallel.
Prompt Chaining Lab
Build a 5-step document analysis chain with structured handoffs, branching logic, and mixed model routing (Haiku for simple steps, Sonnet for complex reasoning).