# Module 2: Tool Use
An AI model that only generates text is a chatbot. An AI model that can call functions, query databases, and trigger workflows is an agent. Tool use is what makes the difference.
## The Concept
Tool use (also called function calling) is the ability of an AI model to recognize that it needs external help, emit a structured request to an external function, receive the result, and continue reasoning.
The model doesn't execute the code itself. It generates a structured request that your application intercepts and executes. The result gets fed back to the model for the next step.
## How It Works
```
User Input → Model Reasoning → "I need to call extract_pdf(url)"
                  ↓
      Your code executes extract_pdf()
                  ↓
      Result returned to model
                  ↓
      Model continues reasoning with result
```
The model sees a list of available tools with descriptions and parameter schemas. Based on the task, it decides which tool to call and what arguments to pass.
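The intercept-and-execute loop described above can be sketched in a few lines. Everything here is a hypothetical stand-in (the `TOOLS` registry, the `extract_pdf` stub, the shape of the tool-call dict), shown only to make the control flow concrete:

```python
import json

# Hypothetical tool registry: name -> callable. The model never runs
# these itself; it only emits a structured request naming one of them.
TOOLS = {
    "extract_pdf": lambda url: {"sections": ["Term", "Liability"], "source": url},
}

def handle_tool_call(tool_call):
    """Execute the function the model requested and package the result."""
    name = tool_call["name"]
    args = tool_call["arguments"]
    result = TOOLS[name](**args)            # your code executes the tool
    # Serialized result is appended to the conversation so the model
    # can continue reasoning with it on the next turn.
    return {"tool_name": name, "content": json.dumps(result)}

# A structured request, as the model might emit it:
call = {"name": "extract_pdf", "arguments": {"url": "https://example.com/contract.pdf"}}
reply = handle_tool_call(call)              # fed back to the model
```

In a real agent this sits inside a loop: call the model, check whether it requested a tool, execute, append the result, and call the model again until it produces a final answer.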
## In Our Contract Workflow
The contract agent has these tools available:
| Tool | Description | Parameters | Returns |
|---|---|---|---|
| `extract_pdf` | Parse PDF and extract text | `url: string` | Structured text by section |
| `lookup_vendor` | Get vendor history from CRM | `vendor_name: string` | Past contracts, disputes, ratings |
| `check_policy` | Compare clause against approved templates | `clause_text: string`, `clause_type: string` | Match score, deviations |
| `calculate_exposure` | Compute financial risk | `terms: object` | Total liability, penalty scenarios |
| `generate_report` | Create formatted PDF report | `analysis: object` | Report URL |
The model doesn't use all tools on every contract. A straightforward renewal from a trusted vendor might only need extract_pdf and generate_report. A new vendor with unusual terms triggers all five.
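As a sketch of how one of these tools might be declared, here is `extract_pdf` in the `toolSpec` shape used by Bedrock's Converse API (the description text and schema details are illustrative, not taken from a real deployment):

```python
# One entry from a hypothetical toolConfig for the Converse API.
# The model sees the name, description, and JSON Schema, and uses
# them to decide when to call the tool and how to fill in arguments.
extract_pdf_tool = {
    "toolSpec": {
        "name": "extract_pdf",
        "description": "Parse the PDF at the given URL and return its text organized by section.",
        "inputSchema": {
            "json": {
                "type": "object",
                "properties": {
                    "url": {
                        "type": "string",
                        "description": "Publicly reachable URL of the PDF to parse",
                    }
                },
                "required": ["url"],
            }
        },
    }
}

tool_config = {"tools": [extract_pdf_tool]}
```

Note how much weight the `description` carries: it is the main signal the model has for deciding whether this tool fits the current step.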
## Tool Design Principles
**Keep tools focused.** One tool, one job. A tool called `process_contract` that does everything is impossible to debug. Five focused tools that each do one thing are easy to test and maintain.

**Handle bad inputs.** The model will sometimes send unexpected formats. A calculator tool should handle both `2^3` and `2**3`. A date parser should handle "March 5" and "2026-03-05" and "03/05/2026".
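A minimal sketch of that date-parser idea, using only the standard library. The function name, the format list, and the `default_year` fallback are all assumptions made for this example:

```python
from datetime import datetime

def parse_date(text, default_year=2026):
    """Normalize date strings the model might send into ISO 8601.

    Hypothetical helper: tries each accepted format in turn instead of
    rejecting anything that isn't already in canonical form.
    """
    text = text.strip()
    for fmt in ("%Y-%m-%d", "%m/%d/%Y", "%B %d"):
        try:
            dt = datetime.strptime(text, fmt)
        except ValueError:
            continue                        # try the next format
        if "%Y" not in fmt:                 # "March 5" carries no year
            dt = dt.replace(year=default_year)
        return dt.strftime("%Y-%m-%d")
    raise ValueError(f"unrecognized date format: {text!r}")
```

All three formats from the paragraph above normalize to the same value: `parse_date("March 5")`, `parse_date("2026-03-05")`, and `parse_date("03/05/2026")` each return `"2026-03-05"`.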
**Return structured data.** Tools that return plain text force the model to parse it. Tools that return JSON let the model work with the data directly.

**Include error messages.** When a tool fails, return a clear error message. The model will often self-correct if it understands what went wrong.
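The last two principles combine naturally in a wrapper that guarantees every tool returns structured data, even when it fails. The decorator, the `status`/`message` field names, and the `check_policy` stub are illustrative choices, not a real API:

```python
def safe_tool(fn):
    """Wrap a tool so it always returns a structured result, never raises.

    A clear error message gives the model a chance to self-correct
    instead of crashing the agent loop with an unhandled exception.
    """
    def wrapper(**kwargs):
        try:
            return {"status": "ok", "data": fn(**kwargs)}
        except Exception as exc:
            return {"status": "error", "message": f"{type(exc).__name__}: {exc}"}
    return wrapper

@safe_tool
def check_policy(clause_text, clause_type):
    # Hypothetical stub: a real version would compare against templates.
    if not clause_text:
        raise ValueError("clause_text must not be empty")
    return {"match_score": 0.92, "deviations": []}
```

A call with an empty clause returns `{"status": "error", "message": "ValueError: clause_text must not be empty"}`, which the model can read and act on, rather than an exception your loop has to survive.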
## Common Mistakes
- **Too many tools.** Models get slower and less accurate with more than 10-15 tools. If you need more, split into subagents (Module 5).
- **Vague descriptions.** "Handles data processing" tells the model nothing. "Extracts clause text from a PDF and returns structured sections" tells it exactly when to use this tool.
- **No error handling.** A tool that throws an unhandled exception crashes the agent loop. Always return a result, even on failure.
## What's Next
Tool use gives the agent the ability to act. In Module 3: Prompt Chaining, we cover how to structure multi-step reasoning so the agent uses its tools effectively.
## Tool Use Implementation Lab
Build five production tools for document processing with input validation, error handling, and structured output. Integrate with Bedrock's Converse API.