How the SDK selects and invokes tools
In the previous lesson, we defined tools and registered them with an agent. That gives the agent capabilities, but not behavior. This lesson explains what happens next: how those registered tools are exposed to the model, how a tool is selected, and how execution actually occurs.
Understanding this flow matters because tool use is where reasoning turns into real work. If we know how the SDK selects and invokes tools, we can design tools that are easy for the model to use and easy for us to control.
How the SDK presents available tools to the model
Once tools are registered, the SDK makes them visible to the model as part of the agent’s execution context. Each tool is presented by its name, its declared inputs, and a short description of what it does.
From the model’s point of view, tools are not Python functions. They are structured capabilities it may choose to invoke. The SDK handles the translation between the Python function we wrote and the abstract description the model sees.
This separation lets the model reason about what can be done without knowing how it is implemented.
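As a rough illustration, the description the model sees resembles a small structured record. The field names and layout below are assumptions for teaching purposes, not the SDK’s actual wire format, and they reference the generate_planet_page tool we will meet later in this lesson:

# Illustrative only: a sketch of the kind of structured description a
# registered tool might be reduced to before it is shown to the model.
tool_description = {
    "name": "generate_planet_page",
    "description": "Generate an HTML page for a planet and write it to disk.",
    "parameters": {
        "type": "object",
        "properties": {
            "name": {"type": "string", "description": "Name of the planet."},
            "output_path": {"type": "string", "description": "Where to write the page."},
        },
        "required": ["name", "output_path"],
    },
}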
How the model chooses a tool
When the agent runs, the model receives the current input, the agent’s instructions, and the list of available tools. Based on that information, the model may decide that calling a tool is the best next step.
That decision is part of the model’s reasoning process. The model selects a tool by name and supplies values for the tool’s declared inputs. If no tool is appropriate, the model may instead produce a normal text response.
Tool selection is therefore probabilistic and contextual, not hard-coded. The SDK does not choose the tool; the model does.
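When the model does decide to call a tool, its output takes the form of a structured request rather than prose. The shape below is a sketch with assumed field names, not the SDK’s literal format:

# Hypothetical shape of a model-produced tool call: a tool name plus values
# for each of the tool's declared inputs.
tool_call = {
    "tool_name": "generate_planet_page",
    "arguments": {"name": "Mars", "output_path": "mars.html"},
}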
How the SDK invokes the selected tool
When the model selects a tool, the SDK intercepts that decision before any user-facing output is produced. The SDK then locates the corresponding Python function and prepares to call it.
At this point, control returns briefly to our program. The SDK is responsible for invoking the function safely and in the correct environment. We do not call the function ourselves.
This handoff is what allows tools to produce side effects, such as writing files or generating output, while still being driven by model reasoning.
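Conceptually, the handoff looks like a registry lookup followed by an ordinary function call. The sketch below assumes a simple name-to-function mapping; the SDK’s real internals are more involved, but the shape is the same:

# A minimal sketch of the dispatch the SDK performs on our behalf.
# `registered_tools` maps each declared tool name to the Python function we wrote.
def dispatch(tool_call: dict, registered_tools: dict):
    func = registered_tools[tool_call["tool_name"]]  # locate our Python function by name
    return func(**tool_call["arguments"])            # the SDK calls it; we never do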
Passing arguments from the model to the tool
The arguments produced by the model are passed into the Python function as its parameter values. The SDK handles basic parsing and type alignment based on how the tool was declared.
From the function’s perspective, it looks like a normal Python call. Inputs arrive as ordinary values, and the function executes synchronously.
For example, a tool that generates an HTML page might receive a planet name and a filename:
def generate_planet_page(name: str, output_path: str) -> str:
    html = f"<h1>{name}</h1>"
    with open(output_path, "w") as file:
        file.write(html)
    return output_path
The model does not know how this function works. It only knows that calling it produces a result.
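To make the argument handoff concrete, here is a sketch of how model-supplied arguments could be unpacked into the call above. The JSON string is a hypothetical stand-in for whatever representation the model actually produces:

import json

# Hypothetical model output for this tool call (illustrative only).
raw_arguments = '{"name": "Mars", "output_path": "mars.html"}'

# From the function's perspective this is an ordinary call with keyword arguments.
result = generate_planet_page(**json.loads(raw_arguments))
print(result)  # "mars.html"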
Returning tool results to the agent
When the function completes, its return value is captured by the SDK. That value is then fed back into the agent as tool output.
The model can use this result as new context. It may reference the returned data, make another tool call, or decide that the task is complete.
Tool results are therefore part of the agent’s evolving state, even though they originate from deterministic Python code.
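One way to picture this is as the tool result being appended to the agent’s running context. The message shapes below are illustrative assumptions, not the SDK’s actual data structures:

# Conceptual sketch: the captured return value re-enters the conversation as
# tool output, which the model can read on its next turn.
conversation = [
    {"role": "user", "content": "Make a page about Mars."},
    {"role": "assistant", "tool_call": {"tool_name": "generate_planet_page",
                                        "arguments": {"name": "Mars",
                                                      "output_path": "mars.html"}}},
    {"role": "tool", "tool_name": "generate_planet_page", "output": "mars.html"},
]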
Conclusion
By the end of this lesson, we have a clear picture of how tool use works inside the SDK. Tools are described to the model, selected through reasoning, invoked by the SDK, and then folded back into the agent’s execution flow.
With this mental model in place, we can reason about tool behavior confidently and design tools that work smoothly with model-driven decisions.