Updating state from model output
This lesson exists because an LLM-driven agent is only useful if its decisions actually change what the program knows and does next. Calling a model is not enough. We need a clear way to turn model output into concrete updates to agent state so the rest of the system can keep operating predictably.
Interpreting model output as state-changing instructions
When we call an LLM, we treat its response as a description of what should happen next, not as conversational text. That description usually implies a change to state: selecting an action, updating a value, or recording a result.
A common pattern is to ask the model to respond using a simple, structured format that signals intent clearly.
model_output = {
    "action": "generate_page",
    "target": "mars.html"
}
At this point, the model is not changing state by itself. It is proposing a state change that our program can interpret.
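In practice, the model returns raw text, so the program has to parse it before treating it as a proposal. A minimal sketch, assuming the model was asked to reply with a single JSON object; the function name is illustrative:

import json

def parse_model_output(raw_text):
    # Turn the model's reply into a dict, or None if it is not valid JSON.
    try:
        return json.loads(raw_text)
    except json.JSONDecodeError:
        return None  # The caller decides how to handle an unparseable reply.

If parsing fails, the program can re-prompt the model rather than guessing at its intent.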
Mapping model decisions to concrete state updates
Once we have interpreted the model’s intent, we translate it into explicit updates to our agent’s state. This keeps the boundary between reasoning and execution clear.
agent_state["last_action"] = model_output["action"]
agent_state["current_target"] = model_output["target"]
The important idea is that the model never mutates state directly. The program decides how a model’s output maps onto real variables and data structures.
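One way to keep that boundary explicit is to route every proposed action through a single function that performs the corresponding updates. A minimal sketch, assuming the agent_state dict and the action names used above:

def apply_model_output(agent_state, model_output):
    # The program, not the model, decides which fields change.
    agent_state["last_action"] = model_output["action"]
    if model_output["action"] == "generate_page":
        agent_state["current_target"] = model_output["target"]

Centralizing updates in one place makes it easy to see, in one screen of code, everything the model can cause to change.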
Validating state changes before applying them
Before applying any update, we check that the proposed change makes sense in the current context. This protects the agent from drifting into an invalid or inconsistent state.
allowed_actions = {"generate_page", "skip_page"}
if model_output["action"] in allowed_actions:
    agent_state["last_action"] = model_output["action"]
Validation keeps control in the program, not in the model. The LLM suggests. The code decides.
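It also helps to record when a proposal is rejected, so the loop can recover instead of silently dropping the decision. A sketch extending the check above; the rejected_proposals field is illustrative, not part of the earlier examples:

if model_output["action"] in allowed_actions:
    agent_state["last_action"] = model_output["action"]
else:
    # Keep the invalid proposal for debugging and for re-prompting the model.
    agent_state.setdefault("rejected_proposals", []).append(model_output)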
Recording outcomes of LLM-driven actions
After an LLM-driven action is carried out, we record what actually happened. This gives the agent a reliable history of outcomes, not just intentions.
agent_state["completed_pages"].append("mars.html")
This recorded outcome can later be supplied back to the model as context, grounding future decisions in what has already been done.
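For a richer history, each outcome can be stored as a small record rather than a bare filename. A sketch, assuming a history list in agent_state; the field names are illustrative:

agent_state.setdefault("history", []).append({
    "action": "generate_page",
    "target": "mars.html",
    "succeeded": True,  # What actually happened, not what was intended.
})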
Keeping agent state consistent over time
As an agent runs over many iterations, state updates must accumulate without contradiction. Each update should move the agent forward, not rewrite or obscure what came before.
This means using clear, well-named state fields and updating them deliberately, one decision at a time. Consistency comes from discipline, not from model intelligence.
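Concretely, that discipline starts with declaring every state field up front, so no update invents a field on the fly. A minimal sketch of the initial state implied by the examples in this lesson:

agent_state = {
    "last_action": None,       # Most recent validated action name.
    "current_target": None,    # Page the agent is currently working on.
    "completed_pages": [],     # Outcomes that actually happened.
}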
Conclusion
By interpreting model output as instructions, mapping those instructions to explicit state updates, validating changes, and recording outcomes, we turn LLM reasoning into durable agent behavior. At this point, the agent is no longer just calling a model. It is maintaining a coherent internal state that evolves reliably over time.