Interpreting tool instructions

Once an LLM is allowed to choose tools, our program must be able to understand what the model is asking us to do. The model does not execute code; it emits instructions, and our responsibility is to turn those instructions into safe, concrete Python function calls.

At this point in the syllabus, we already have tools defined as Python functions. Now we focus on the narrow but critical step of interpreting the model’s output correctly.

Receiving tool selection instructions from an LLM

When an LLM selects a tool, it typically responds with structured data rather than free-form prose. This structure is part of the contract we establish when we describe available tools.

A common pattern is for the model to return a small object that names the tool and supplies arguments. We treat this response as data to be inspected, not text to be displayed.

model_output = {
    "tool": "render_planet_page",
    "arguments": {
        "name": "Mars",
        "moons": ["Phobos", "Deimos"]
    }
}

At this stage, nothing has happened yet. We have only received instructions.
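
In practice, many APIs deliver this structure as a JSON string rather than a ready-made Python dictionary, so a parsing step usually comes first. A minimal sketch, assuming the raw text is stored in a hypothetical variable called raw_response:

import json

# Hypothetical raw text as it might arrive from the model
raw_response = '{"tool": "render_planet_page", "arguments": {"name": "Mars", "moons": ["Phobos", "Deimos"]}}'

# Parse the string into a dictionary we can inspect
model_output = json.loads(raw_response)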

Extracting the selected tool name

The first piece of information we need is the tool name itself. This tells us which capability the model intends to use.

We typically extract this value directly from the parsed model output and store it in a variable.

tool_name = model_output["tool"]

This name is just a string. It has no power until we explicitly map it to a real function.

Extracting tool arguments from model output

Most tools require inputs. The model supplies these as arguments, usually grouped together in a dictionary-like structure.

We extract these arguments as a unit and pass them along unchanged, unless validation requires otherwise.

tool_args = model_output["arguments"]

At this point, tool_args should line up with the parameters expected by the target function.
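
For the arguments to line up, the tool function's parameter names must match the keys the model supplies. A hypothetical definition of render_planet_page, consistent with the output above but not part of the lesson's required code, might look like this:

def render_planet_page(name, moons):
    # The parameters "name" and "moons" correspond to the keys in tool_args
    return f"<h1>{name}</h1><p>Moons: {', '.join(moons)}</p>"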

Mapping model instructions to concrete function calls

To turn instructions into action, we map tool names to Python callables. This is usually done with a dictionary that defines the allowed tools.

TOOLS = {
    "render_planet_page": render_planet_page,
    "write_index_page": write_index_page
}

Once we have the tool name and arguments, we look up the function and invoke it explicitly.

tool_function = TOOLS[tool_name]
result = tool_function(**tool_args)

This is the moment where model intent becomes real program behavior.
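
Because the model can name a tool that was never defined, it is safer to treat the lookup itself as something that can fail. A small sketch using dict.get instead of direct indexing:

tool_function = TOOLS.get(tool_name)
if tool_function is None:
    # The model asked for a capability we never exposed; refuse rather than guess
    raise ValueError(f"Unknown tool: {tool_name}")
result = tool_function(**tool_args)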

Handling incomplete or ambiguous tool instructions

Model output is not guaranteed to be complete. A tool name may be present without arguments, or arguments may be missing required values.

Our job here is not to guess. If required information is absent, we treat the instruction as incomplete and stop before execution.

if "tool" not in model_output or "arguments" not in model_output:
    raise ValueError("Incomplete tool instruction")

By handling ambiguity explicitly, we keep control of the program and prevent unintended behavior.
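
The same caution extends to the arguments themselves. One way to confirm that every required parameter is present, assuming the tool functions use ordinary keyword parameters, is to compare tool_args against the function's signature with the standard inspect module; a sketch, not the only valid approach:

import inspect

def validate_arguments(tool_function, tool_args):
    # Parameters without default values are required
    signature = inspect.signature(tool_function)
    required = {
        name
        for name, parameter in signature.parameters.items()
        if parameter.default is inspect.Parameter.empty
        and parameter.kind in (
            inspect.Parameter.POSITIONAL_OR_KEYWORD,
            inspect.Parameter.KEYWORD_ONLY,
        )
    }
    missing = required - set(tool_args)
    if missing:
        raise ValueError(f"Missing required arguments: {sorted(missing)}")

validate_arguments(TOOLS[tool_name], tool_args)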

Conclusion

At this point, we can reliably translate an LLM’s tool instructions into concrete Python function calls. We know how to extract the tool name, retrieve arguments, and invoke the correct function deliberately.

This completes the bridge between model reasoning and real execution, while keeping control firmly in our code.