Representing decision state
As soon as we start building agents that decide what to do, raw program data stops being enough. Decision-making requires a clear view of what the agent knows, what matters right now, and which actions are possible. This lesson shows how to shape program state into a form that supports predictable, deterministic decisions in long-running agent programs.
Identifying which parts of program state matter for decisions
Not all program data is relevant when choosing an action. Some values exist only to support implementation details, while others directly influence behavior.
For a simple agent that generates pages about planets, decision-relevant state might include which planet is currently being processed, whether output files already exist, or whether generation is complete. Other details, such as temporary strings or loop counters, usually do not matter for decisions.
The first step is recognizing which pieces of state influence what the agent should do next.
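As a sketch of this distinction, consider a minimal planet-page loop (names such as `remaining` and `page_text` are illustrative, not from the original): some values track progress and would drive decisions, while others exist only to produce output.

```python
# Hypothetical planet-page agent: only some values drive decisions.
planets = ["Mercury", "Venus", "Mars"]

# Decision-relevant: which planets remain, how many pages exist.
remaining = list(planets)
pages_generated = 0

while remaining:
    planet = remaining.pop(0)
    # Incidental: this formatted string supports output, not decisions.
    page_text = f"# {planet}\n\nA page about {planet}."
    pages_generated += 1

# Decision-relevant: derived completion flag.
generation_complete = not remaining
```

Here `remaining`, `pages_generated`, and `generation_complete` influence what happens next; `page_text` never does.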
Representing decision-relevant state explicitly
Once decision-relevant information is identified, it should be represented clearly and directly. This often means grouping it together in a dictionary or small data structure instead of letting it remain scattered across variables.
Explicit state makes decisions easier to reason about because all relevant signals are visible in one place.
agent_state = {
    "current_planet": "Mars",
    "pages_generated": 3,
    "generation_complete": False,
}
This structure communicates intent: these values exist to drive decisions, not just to hold data.
Separating state used for decisions from incidental data
Programs often accumulate incidental data as they run. Mixing that data with decision-relevant state makes logic harder to follow and easier to break.
A useful practice is to keep decision state separate from working data such as cached strings, temporary lists, or intermediate results. This separation keeps decision logic focused and predictable.
Decision code should depend on state, not on side effects or transient variables.
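One way to sketch this separation (the container names here are assumptions for illustration) is to keep two distinct structures and let decision logic read only one of them:

```python
# Decision state: the only structure decision logic may read.
decision_state = {
    "current_planet": "Mars",
    "pages_generated": 3,
    "generation_complete": False,
}

# Incidental working data: caches and intermediates, never consulted
# when choosing an action.
working_data = {
    "template_cache": "# {name}\n\n...",
    "pending_lines": [],
}

def choose_action(state):
    # Depends on explicit state, not on side effects or transient variables.
    return "stop" if state["generation_complete"] else "generate_page"
```

Because `choose_action` takes decision state as its only input, it stays predictable even as `working_data` churns during a run.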
Reading state to determine available actions
Once state is explicit, the agent can inspect it to determine what actions make sense. Decisions become simple checks against known values.
if not agent_state["generation_complete"]:
    action = "generate_page"
else:
    action = "stop"
Here, the available actions are derived directly from state, not inferred indirectly from program flow.
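The same idea extends to computing the whole set of valid actions from state. A minimal sketch, assuming the field names from the earlier `agent_state` example (the `"review_pages"` action is hypothetical):

```python
def available_actions(state):
    """Derive every valid action directly from explicit state."""
    if state["generation_complete"]:
        return ["stop"]
    actions = ["generate_page"]
    if state["pages_generated"] > 0:
        actions.append("review_pages")  # hypothetical follow-up action
    return actions
```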
Preparing state as input to decision logic
Before decision logic runs, state should already be in a usable form. This may involve normalizing values, setting defaults, or ensuring required fields are present.
The goal is for decision logic to consume state, not clean it up. When state is prepared ahead of time, decision-making becomes a straightforward, readable step in the agent loop.
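This preparation step can be sketched as a small normalization function (field names and defaults are assumptions matching the earlier examples):

```python
# Defaults guarantee that every field decision logic reads is present.
DEFAULTS = {
    "current_planet": None,
    "pages_generated": 0,
    "generation_complete": False,
}

def prepare_state(raw):
    """Fill defaults and normalize types before decision logic runs."""
    state = {**DEFAULTS, **raw}
    # Normalize: coerce the counter and flag to the types decisions expect.
    state["pages_generated"] = int(state["pages_generated"])
    state["generation_complete"] = bool(state["generation_complete"])
    return state
```

After `prepare_state`, decision logic can index fields directly without defensive checks, keeping the decision step itself short and readable.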
Conclusion
We now know how to identify decision-relevant state, represent it explicitly, and separate it from incidental program data. With state prepared in this way, deterministic decision logic becomes easier to write, easier to reason about, and easier to extend. At this point, the agent has a clear picture of its situation—and that is exactly what decision-making requires.