Providing state to an LLM

Once an LLM becomes part of an agent, it no longer reasons in a vacuum. Its decisions depend on what it knows about the current situation. This lesson orients us to how agent state is selected, shaped, and passed into an LLM so that its reasoning stays grounded in the program we are actually running.

Identifying relevant state for reasoning

Not all agent state is useful to an LLM. Some values exist only to support internal bookkeeping, while others directly affect what decision should be made next.

The goal is to identify the pieces of state that influence reasoning. These are typically things like the current task, known facts, recent outcomes, or constraints that limit what actions make sense.

Keeping this boundary clear helps prevent accidental leakage of irrelevant details into the model’s reasoning process.
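
As a minimal sketch, suppose the agent's full internal state mixes bookkeeping values with decision-relevant ones. A small helper can pull out only the slice the model should reason over; the names below, such as full_state, REASONING_KEYS, and select_reasoning_state, are illustrative rather than part of any required scheme.

# Full internal state: bookkeeping mixed with decision-relevant values.
# All names here are illustrative, not a required schema.
full_state = {
    "retry_count": 2,               # bookkeeping: does not affect the next decision
    "db_connection_id": "conn-7",   # bookkeeping: internal plumbing
    "current_planet": "Mars",       # decision-relevant
    "known_moons": ["Phobos", "Deimos"],   # decision-relevant
    "task": "generate_summary_page",       # decision-relevant
}

# The keys that actually influence what the agent should do next.
REASONING_KEYS = {"current_planet", "known_moons", "task"}

def select_reasoning_state(state: dict) -> dict:
    """Return only the values the LLM needs to reason about."""
    return {k: v for k, v in state.items() if k in REASONING_KEYS}

reasoning_state = select_reasoning_state(full_state)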

Representing state in a compact structured form

State should be represented in a form that is easy for both programs and models to work with. In practice, this usually means simple structures like dictionaries and lists.

Compactness matters. A small, clearly named structure communicates intent far better than a large, loosely organized block of data. The structure should reflect meaning, not internal implementation details.

# Decision-relevant state only: small, clearly named, and free of internal
# bookkeeping details.
agent_state = {
    "current_planet": "Mars",
    "known_moons": ["Phobos", "Deimos"],
    "task": "generate_summary_page",
}

Supplying state as part of an LLM request

Once state is represented clearly, it can be included directly in the request sent to the LLM. The model does not infer state automatically; it reasons only over what we provide.

State is typically embedded alongside instructions or questions, either as structured data or as a clearly labeled section of input. What matters is that the model can see and use it when forming a response.

# The model reasons only over what this request contains: the instructions
# and the state we explicitly attach.
prompt = {
    "instructions": "Decide which page to generate next.",
    "state": agent_state,
}
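
As a sketch of what that request might look like in practice, we can render the instructions and state into a single labeled block of text. The build_request_text helper and the exact formatting are assumptions here; the actual call you make depends on whichever LLM client you use.

import json

# Hypothetical helper: turn the structured prompt into the text actually sent
# to the model. Labeling the state as JSON is one reasonable choice, not a standard.
def build_request_text(prompt: dict) -> str:
    return (
        f"{prompt['instructions']}\n\n"
        "Current agent state (JSON):\n"
        f"{json.dumps(prompt['state'], indent=2)}"
    )

# request_text is what gets passed to the LLM client of your choice.
request_text = build_request_text(prompt)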

Balancing completeness versus conciseness

There is a constant tradeoff between giving the model enough information and giving it too much. Excessive state can distract the model or dilute what actually matters.

The guiding principle is relevance. If a piece of state does not affect the current decision, it likely does not belong in the request. Concise state improves reasoning reliability and reduces cost.
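
One way to act on that principle is to decide, per task, which keys matter and drop everything else before building the request. The RELEVANT_KEYS_BY_TASK mapping below is a hypothetical example of such a policy, not a fixed schema.

# Hypothetical mapping from task to the state keys that decision depends on.
RELEVANT_KEYS_BY_TASK = {
    "generate_summary_page": {"current_planet", "known_moons"},
    "choose_next_planet": {"current_planet", "visited_planets"},
}

def state_for_task(state: dict, task: str) -> dict:
    """Keep only the keys that affect the current decision."""
    keys = RELEVANT_KEYS_BY_TASK.get(task, set())
    return {k: v for k, v in state.items() if k in keys}

# Only the relevant slice is included in the request for this task.
concise_state = state_for_task(agent_state, agent_state["task"])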

Using state to ground model decisions

Supplying state is what anchors LLM reasoning to reality. Instead of producing generic answers, the model can make decisions that align with the agent’s current situation.

When state is clear and relevant, the LLM’s output becomes easier to interpret, validate, and apply. This grounding is what allows LLM-driven agents to behave consistently over time.
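
Grounding also makes the output checkable. The sketch below assumes the model returns a decision with a "page" field; that response format and the is_grounded helper are illustrative, but they show how the supplied state doubles as a reference for validating what the model proposes.

# Illustrative check: accept only decisions that refer to entities present
# in the state we supplied. The "page" field is an assumed response format.
def is_grounded(decision: dict, state: dict) -> bool:
    known_subjects = {state["current_planet"], *state["known_moons"]}
    return decision.get("page") in known_subjects

proposed = {"page": "Phobos"}   # e.g. parsed from the model's response
assert is_grounded(proposed, agent_state)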

Conclusion

We now have a clear mental model for providing state to an LLM. We know how to identify decision-relevant state, represent it compactly, and supply it as part of a request without overwhelming the model. With this orientation, we are ready to treat the LLM as a grounded reasoning component rather than an isolated text generator.