Handling tool results and errors

As soon as an agent can use tools, those tools start influencing its behavior. Some calls succeed; others fail. Either way, the agent needs to respond deliberately. This lesson shows how tool outcomes are handled when using the OpenAI Agents SDK, so we stay in control even when execution does not go as planned.

Receiving tool execution results

When the SDK invokes a tool, it captures the result and feeds it back into the agent’s execution flow. A successful tool call produces a structured result that the agent can reason about and incorporate into its next step.

The important point is that tool results are data, not side effects. They are surfaced explicitly, rather than being hidden inside function calls.

A simple tool might return a value like this:

def render_planet_page(planet_name):
    return {
        "status": "ok",
        "page": f"<h1>{planet_name}</h1>"
    }

The SDK delivers this result back to the agent so it can decide what to do next.
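This flow can be sketched without any SDK specifics. The `run_tool` helper below is a hypothetical stand-in for the SDK's dispatch step, and `render_planet_page` is repeated so the sketch is self-contained: the tool runs, and its structured result comes back as data the caller can inspect.

```python
def render_planet_page(planet_name):
    return {
        "status": "ok",
        "page": f"<h1>{planet_name}</h1>",
    }

def run_tool(tool, **kwargs):
    # Hypothetical dispatch step: invoke the tool and hand its
    # structured result back to the caller as plain data.
    result = tool(**kwargs)
    return {"tool": tool.__name__, "result": result}

outcome = run_tool(render_planet_page, planet_name="Mars")
# outcome["result"]["page"] == "<h1>Mars</h1>"
```

Nothing here is hidden: the result is an ordinary dictionary, visible to whatever logic decides the agent's next step.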

Handling tool failures reported by the SDK

Not every tool call succeeds. A tool might raise an exception or return a failure result. The SDK captures these failures and reports them as part of tool execution, instead of crashing the agent outright.

This means tool failure is treated as an expected outcome, not a fatal error. The agent remains running, with clear visibility into what went wrong.

A tool might report failure explicitly like this:

def write_page_to_disk(html, path="planet.html"):
    if not html:
        return {"status": "error", "reason": "No content to write"}
    with open(path, "w") as f:
        f.write(html)
    return {"status": "ok", "path": path}

The SDK passes this failure back to the agent rather than stopping execution.
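Tools that raise exceptions can be handled the same way. The `safe_call` wrapper below is a hypothetical sketch of this capture step, with `fetch_planet_data` as a made-up tool that raises on bad input: the exception is converted into a structured failure result rather than propagating up and killing the run.

```python
def fetch_planet_data(planet_name):
    # Simulated tool that raises when the planet is unknown.
    if planet_name not in {"Mercury", "Venus", "Earth", "Mars"}:
        raise LookupError(f"Unknown planet: {planet_name}")
    return {"status": "ok", "planet": planet_name}

def safe_call(tool, **kwargs):
    # Hypothetical wrapper: an exception becomes a structured
    # failure result instead of crashing the agent.
    try:
        return tool(**kwargs)
    except Exception as exc:
        return {"status": "error", "reason": str(exc)}

result = safe_call(fetch_planet_data, planet_name="Pluto")
# result["status"] == "error"
```

Either way the failure arrives, the agent sees the same shape of data: a status it can branch on and a reason it can act on.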

Distinguishing tool errors from model errors

Tool errors and model errors are different problems, and the SDK keeps them separate. A tool error means something went wrong while executing code. A model error means the model produced unusable or invalid output.

This distinction matters because the responses are different. Tool errors often call for retrying, skipping, or adjusting behavior. Model errors usually require revising instructions or constraining output.

By separating these cases, the SDK makes it easier to respond appropriately instead of treating all failures the same.
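One way to picture that separation is a small triage step. The `classify_failure` helper and the `tool_error` / `model_error` flags below are illustrative assumptions, not SDK API, but they show why keeping the two cases apart pays off: each one routes to a different response.

```python
def classify_failure(outcome):
    # Hypothetical triage helper: tool errors and model errors
    # call for different fixes, so route them differently.
    if outcome.get("tool_error"):
        return "retry_or_skip"        # execution problem: retry, skip, adjust
    if outcome.get("model_error"):
        return "revise_instructions"  # output problem: tighten the prompt
    return "proceed"

classify_failure({"tool_error": True})   # "retry_or_skip"
classify_failure({"model_error": True})  # "revise_instructions"
```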

Updating agent behavior based on tool outcomes

Once a tool result is available, the agent can adapt its behavior. Success might allow the agent to move forward. Failure might cause it to choose a different tool, retry with different inputs, or stop the current task.

The key is that tool outcomes become part of the agent’s state and decision-making process. They are not ignored or swallowed.

A successful page render might lead to writing the page. A failed write might lead to choosing a fallback location or halting generation.
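That render-then-write chain can be sketched as outcome-driven control flow. `generate_site` below is a minimal, SDK-free illustration in which each step inspects the previous result before deciding what to do next; the stub tools passed in are stand-ins for real ones.

```python
def generate_site(planet_name, render, write):
    # Each step inspects the previous result before proceeding.
    rendered = render(planet_name)
    if rendered["status"] != "ok":
        return {"status": "error", "reason": "render failed"}
    written = write(rendered["page"])
    if written["status"] != "ok":
        # Fallback: surface the page inline rather than halting outright.
        return {"status": "partial", "page": rendered["page"]}
    return {"status": "ok"}

# Stub tools standing in for the real render and write steps.
def ok_render(name):
    return {"status": "ok", "page": f"<h1>{name}</h1>"}

def failing_write(html):
    return {"status": "error", "reason": "disk full"}

outcome = generate_site("Mars", ok_render, failing_write)
# outcome["status"] == "partial"
```

The fallback branch is a design choice: instead of one undifferentiated failure, the caller learns exactly how far the task got.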

Preserving control and safety during tool execution

Even with SDK-managed execution, control remains with the developer. Tools do not act autonomously. They run only when selected, and their results are inspected before influencing behavior.

By treating tool results as structured signals, we keep execution safe and predictable. The agent reacts to what happened, rather than assuming everything worked.

At this point, we are no longer just calling functions. We are managing outcomes. That shift is what allows agents to behave robustly when interacting with the real world.