Guarding tool calls

Once an LLM is allowed to select and invoke tools, it becomes part of your program’s control flow. That is powerful, but it also changes where mistakes can come from. This lesson shows how to keep control on the Python side even while the model makes suggestions. The goal is not to distrust the model, but to ensure that only safe, intended actions ever run.

Validating tool names against allowed tools

An LLM may suggest a tool name that does not exist or that your program should not expose. The first guard is to check that the requested tool is one you explicitly allow.

A common pattern is to keep allowed tools in a dictionary, keyed by name. Any request outside that set is rejected before execution.

TOOLS = {
    "generate_planet_page": generate_planet_page,
    "write_index_page": write_index_page,
}

tool_name = model_output["tool"]

if tool_name not in TOOLS:
    raise ValueError("Requested tool is not allowed")

This simple check keeps the model from invoking arbitrary functions or reaching outside the intended surface area of your program.
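
Throughout this lesson, model_output is a plain dictionary with "tool" and "args" keys. How it is produced depends on your setup; as one hedged sketch, assume the model has been instructed to reply with a JSON object in that shape, and treat a malformed reply just like an unknown tool:

import json

def parse_tool_request(raw_reply):
    # A reply that is not valid JSON is rejected rather than guessed at.
    try:
        model_output = json.loads(raw_reply)
    except json.JSONDecodeError:
        raise ValueError("Model reply is not valid JSON")

    if not isinstance(model_output, dict):
        raise ValueError("Model reply must be a JSON object")

    if "tool" not in model_output or "args" not in model_output:
        raise ValueError("Model reply is missing 'tool' or 'args'")

    return model_output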

Validating tool arguments before execution

Even when the tool name is valid, the arguments may not be. Arguments should be checked for shape, type, and basic constraints before the tool runs.

This validation happens in your code, not in the prompt.

args = model_output["args"]

if not isinstance(args.get("planet_name"), str):
    raise ValueError("planet_name must be a string")

if not isinstance(args.get("moons"), list):
    raise ValueError("moons must be a list")

The intent is not to exhaustively sanitize input, but to ensure the tool receives data it can safely work with.
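
Simple constraint checks fit the same style. As an illustration, if generate_planet_page writes a file named after the planet (an assumption about the tool, not something defined in this lesson), an empty or path-like name is worth rejecting before the tool runs:

planet_name = args["planet_name"]

if not planet_name.strip():
    raise ValueError("planet_name must not be empty")

# Guard against names that could escape the intended output directory.
if "/" in planet_name or "\\" in planet_name:
    raise ValueError("planet_name must not contain path separators")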

Rejecting or correcting invalid tool requests

When validation fails, you can choose to reject the request outright or correct it into a safe form. Rejection is often the simplest and clearest option.

A rejected request can be reported back to the model as feedback or logged for inspection.

if tool_name not in TOOLS:
    result = {
        "status": "error",
        "reason": "Unknown tool requested"
    }

This keeps the program’s behavior predictable and avoids partial or unintended execution.
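
What happens with that error result depends on your agent loop. One hedged sketch, which also suggests a possible shape for the handle_invalid_request helper used in the dispatch example later in this lesson, is to log the rejection and hand the error object back to the model as the tool result:

import logging

logger = logging.getLogger("tool_guard")

def handle_invalid_request(tool_name=None):
    # Record what was asked for, so rejected requests can be inspected later.
    logger.warning("Rejected tool request: %r", tool_name)

    # Returning the error as the tool result lets the model revise its request
    # on the next turn, without anything having been executed.
    return {"status": "error", "reason": "Unknown tool requested"}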

Preventing unintended side effects

Tools often perform side effects, such as writing files or modifying state. Guarding tool calls means ensuring those side effects happen only when inputs are valid and intentional.

One practical approach is to separate validation from execution.

def safe_call(tool_func, args):
    # Validation raises on bad input, so the tool body never runs with it.
    validate_args(args)
    return tool_func(**args)

By structuring calls this way, side effects only occur after all checks have passed.
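
The validate_args helper is not defined in this snippet; a minimal sketch simply bundles the checks from the earlier sections and raises before any side effect can occur. In practice you would likely dispatch to per-tool rules, but the principle is the same:

def validate_args(args):
    # Raise early: safe_call only reaches the tool if nothing is raised here.
    if not isinstance(args.get("planet_name"), str):
        raise ValueError("planet_name must be a string")
    if not isinstance(args.get("moons"), list):
        raise ValueError("moons must be a list")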

Maintaining program control over execution

The most important principle is that the LLM never “runs” anything by itself. It proposes actions, and your program decides whether and how they happen.

The control flow always remains in Python.

tool_name = model_output["tool"]
args = model_output["args"]

tool = TOOLS.get(tool_name)

if tool is None:
    handle_invalid_request()
else:
    result = safe_call(tool, args)

This pattern makes it clear that the model is an advisor, not an executor.

Conclusion

In this lesson, we established how to guard tool execution when an LLM is involved. By validating tool names, checking arguments, rejecting invalid requests, and isolating side effects, we keep full control over what the program actually does. With these guards in place, LLM-driven tool selection becomes a safe extension of your existing agent architecture, not a loss of control.