Parsing model output
Calling a language model is only useful if we can use what it returns. In an agent or AI-enabled program, model output is not something we read; it is something our code must handle, validate, and act on. This lesson focuses on treating model responses as program input that feeds directly into logic, decisions, and tools.
Treating model output as data, not prose
When we call the Responses API, the model returns text, but we should not think of that text as narrative. We should think of it as data produced by another component in our system.
That shift matters. It encourages us to ask the model for output that is easy to inspect and process, rather than output that merely sounds good to a human reader.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.responses.create(
    model="gpt-4.1-mini",
    input="Return the planet name as JSON with a single 'name' field.",
)
Here, we are already framing the output as something our program will consume.
Extracting structured information from responses
The Responses API returns a structured object, not just a string. Our first task is usually to extract the actual text content we care about.
Once extracted, we often treat that text as structured data (JSON, a keyword, or a small instruction set) rather than free-form language.
import json

raw_text = response.output_text  # aggregated text content of the response
data = json.loads(raw_text)
planet_name = data["name"]
This pattern makes the model feel less like a chatbot and more like a function that returns a value.
Validating model output before use
Model output should never be trusted blindly. Even when we ask for structured data, we must confirm that it matches our expectations before using it.
Validation can be simple. We check that required fields exist, that values have the right type, and that they fall within acceptable bounds.
if "name" not in data:
    raise ValueError("Missing planet name in model output")
This step keeps control in our program, not in the model.
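A slightly fuller check, covering field presence, value type, and a simple bound, might look like the helper below. The field name matches the earlier example, but the length bound is an illustrative choice, not a requirement of any API:

```python
def validate_planet(data: dict) -> str:
    """Validate parsed model output and return the planet name.

    The length bound here is illustrative, not part of the Responses API.
    """
    if "name" not in data:
        raise ValueError("Missing planet name in model output")
    name = data["name"]
    if not isinstance(name, str):
        raise TypeError(f"Expected 'name' to be a string, got {type(name).__name__}")
    if not 1 <= len(name) <= 100:
        raise ValueError("Planet name has an implausible length")
    return name
```

Because the helper raises on anything unexpected, downstream code only ever sees values that passed every check.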
Handling unexpected or malformed responses
Sometimes the model output will not match what we asked for. When that happens, our program must decide how to respond.
The important idea is that unexpected output is a normal condition, not a special case. We plan for it by detecting it and handling it explicitly.
try:
    data = json.loads(raw_text)
except json.JSONDecodeError:
    data = {"name": "unknown"}  # fall back to a safe default
This keeps the rest of the program running in a known state.
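Instead of falling back immediately, we can also retry. The sketch below uses a hypothetical `call_model` function standing in for the real API call; it re-asks a bounded number of times before settling on a default:

```python
import json

def parse_with_retry(call_model, max_attempts=3):
    """Ask for output repeatedly until it parses as JSON.

    `call_model` is a hypothetical zero-argument function that returns
    the model's raw text; in a real program it would wrap the
    Responses API call shown earlier.
    """
    for _ in range(max_attempts):
        raw_text = call_model()
        try:
            return json.loads(raw_text)
        except json.JSONDecodeError:
            continue  # malformed output is a normal condition; try again
    return {"name": "unknown"}  # known fallback state after exhausting retries
```

Bounding the attempts matters: without a limit, a persistently misbehaving model could keep the program looping (and paying for API calls) forever.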
Feeding parsed output into program logic
Once parsed and validated, model output becomes just another input to our system. We can pass it into functions, use it in conditionals, or store it as part of agent state.
html = f"<h1>{planet_name}</h1>"
with open("planet.html", "w") as file:
    file.write(html)
At this point, the model’s role is complete. Its output has been converted into ordinary program data, and the rest of the system proceeds deterministically.
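Putting the steps together, the extract-parse-validate-use pipeline can be sketched as a single function. `fetch_planet_text` is a hypothetical stand-in for the API call (in practice it would return `response.output_text`); everything after it is ordinary, deterministic Python:

```python
import json

def planet_heading(fetch_planet_text) -> str:
    """Turn raw model output into an HTML heading.

    `fetch_planet_text` is a hypothetical zero-argument function that
    returns the model's raw text.
    """
    raw_text = fetch_planet_text()          # 1. extract the text content
    try:
        data = json.loads(raw_text)         # 2. parse it as structured data
    except json.JSONDecodeError:
        data = {"name": "unknown"}          #    handle malformed output explicitly
    if "name" not in data or not isinstance(data["name"], str):
        raise ValueError("Model output missing a string 'name' field")  # 3. validate
    return f"<h1>{data['name']}</h1>"       # 4. feed the value into program logic
```

With the model call isolated behind a function argument, the rest of the pipeline can be tested without touching the network at all.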
Conclusion
We now have a clear mental model for handling model output. Instead of reading responses, we extract, validate, and interpret them as data. With this approach, language models fit cleanly into Python programs as controlled, predictable components rather than mysterious sources of text.