Describing tools to an LLM
As soon as an agent can reason using an LLM, it also needs a way to act. Tools are how an LLM-powered agent affects the outside world. This lesson shows how to make tools visible and understandable to a model while keeping full control inside our Python program.
Tools in an LLM-powered agent
In this context, a tool is a concrete capability the agent can invoke. It might write a file, generate a page, load data, or perform a calculation.
From the LLM’s perspective, tools are not functions or code. They are options. The model does not execute anything itself. It only chooses which capability should be used next.
This separation is important. The LLM reasons and decides. Our program executes.
Describing tools in a model-readable format
An LLM cannot inspect Python objects. It can only work with text or structured data we provide.
That means each tool must be described explicitly. A description usually includes:
- a tool name
- a short explanation of what it does
- the inputs it expects
- the shape of the result it produces
These descriptions are typically represented as dictionaries or JSON-like structures that are easy to serialize and send with a request.
tools = [
    {
        "name": "generate_planet_page",
        "description": "Generate an HTML page for a single planet",
        "inputs": {
            "planet_name": "string",
            "moons": "list of strings"
        }
    }
]
The model never sees the function itself. It only sees this description.
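To make that separation concrete, here is a minimal sketch of what stays on our side. The function body and the registry below are illustrative assumptions, not part of any library; only the description dictionary above ever travels to the model.

# Illustrative implementation; the model never sees this code.
def generate_planet_page(planet_name: str, moons: list[str]) -> str:
    moon_items = "".join(f"<li>{m}</li>" for m in moons)
    return f"<h1>{planet_name}</h1><ul>{moon_items}</ul>"

# Our program maps tool names to real functions. The model only ever
# refers to the name; execution stays entirely on our side.
TOOL_IMPLEMENTATIONS = {
    "generate_planet_page": generate_planet_page,
}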
Communicating tool capabilities and constraints
Tool descriptions should communicate what the tool can do, not how it works internally.
We want the model to understand the boundaries of each option. A tool that generates a single page is different from one that generates an entire site. That distinction should be clear from the description.
Constraints are just as important. If a tool only works on one planet at a time, that limitation belongs in the description. This helps the model make realistic choices.
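As a sketch of how such boundaries might read in practice, the list below describes two tools with their limits stated explicitly. The second tool, generate_site_index, is a hypothetical addition for illustration; the point is that each description names what the tool does not do as well as what it does.

tools = [
    {
        "name": "generate_planet_page",
        "description": (
            "Generate an HTML page for a single planet. "
            "Handles exactly one planet per call."
        ),
        "inputs": {
            "planet_name": "string",
            "moons": "list of strings"
        }
    },
    {
        "name": "generate_site_index",
        "description": (
            "Generate an index page linking to existing planet pages. "
            "Does not generate the planet pages themselves."
        ),
        "inputs": {
            "planet_names": "list of strings"
        }
    }
]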
Including tool descriptions in an LLM request
Tool descriptions are included alongside the prompt and state when calling the model.
Conceptually, we are saying: “Here is the current situation, and here are the actions you are allowed to choose from.”
from openai import OpenAI

client = OpenAI()

# Tool descriptions travel with the request alongside the prompt.
response = client.responses.create(
    model="gpt-4.1",
    input="Decide what to do next",
    tools=tools
)
The exact API shape may vary, but the idea stays the same. Tool descriptions travel with the request so the model can reason about them.
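For reference, most tool-calling APIs expect inputs declared as a JSON Schema rather than the informal shape used so far. A sketch of the same tool in that style follows; the exact field names vary by provider and version, so treat this as an assumption to check against your API's documentation.

# The same tool with a JSON Schema "parameters" block, the shape most
# tool-calling APIs expect. Field names vary by provider and version;
# verify against your API's documentation before relying on them.
schema_style_tool = {
    "type": "function",
    "name": "generate_planet_page",
    "description": "Generate an HTML page for a single planet",
    "parameters": {
        "type": "object",
        "properties": {
            "planet_name": {"type": "string"},
            "moons": {"type": "array", "items": {"type": "string"}}
        },
        "required": ["planet_name", "moons"]
    }
}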
Framing tool choice as a decision problem
Once tools are described, choosing a tool becomes a decision-making task.
We are no longer asking the model to generate prose. We are asking it to select an action based on available capabilities and current state.
This framing keeps responsibility clear. The model proposes what to do. Our program decides whether and how that proposal is executed.
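A minimal sketch of that division of labor, reusing the TOOL_IMPLEMENTATIONS registry from earlier. How the proposed tool name and arguments are extracted from the response depends on the API, so they are assumed here to arrive as a plain pair.

def execute_proposal(name: str, arguments: dict):
    # The model only proposed this call; our program decides whether to run it.
    implementation = TOOL_IMPLEMENTATIONS.get(name)
    if implementation is None:
        # The model asked for a capability we never offered; refuse it.
        raise ValueError(f"Unknown tool: {name}")
    return implementation(**arguments)

# Example: executing a proposal the model might make.
html = execute_proposal(
    "generate_planet_page",
    {"planet_name": "Mars", "moons": ["Phobos", "Deimos"]},
)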
Conclusion
At this point, we know how to represent tools as explicit options an LLM can reason about. We can describe their purpose, inputs, and limits, and include those descriptions directly in a model request. With this foundation in place, an LLM can make informed, bounded decisions without ever escaping our program’s control.