Using an LLM to decide what to do next
In earlier agent designs, we relied on explicit rules to decide what happens next. That works, but it becomes rigid as situations grow more varied. This lesson shows how an LLM can take over that decision point: instead of encoding every branch ourselves, we let the model reason over context and state, then tell the program which action to take next.
Framing a decision problem for an LLM
An LLM does not “decide” in the abstract. It responds to a clearly framed question. In an agent, that question usually sounds like: given the current situation, what should happen next?
Framing the decision means expressing the choice space in plain terms. We describe the situation, make it clear that a choice is required, and avoid mixing in unrelated detail. The model’s job is to reason, not to guess what we want.
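To make this concrete, here is a minimal sketch of one way to express a choice space in plain terms. The action names, descriptions, and the describe_choices helper are illustrative, not part of this lesson's code.

# Illustrative choice space: each action name paired with a short, plain description.
AVAILABLE_ACTIONS = {
    "generate_index": "Create the index page listing all planets.",
    "generate_planet_pages": "Create one page per planet from the loaded data.",
}

def describe_choices(actions: dict[str, str]) -> str:
    # Render the choice space as plain text the model can reason over.
    return "\n".join(f"- {name}: {desc}" for name, desc in actions.items())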
Supplying context and state to the model
An LLM can only reason over what we give it. Context is the surrounding information, while state is the current snapshot of what the agent knows or has already done.
In practice, we often supply a compact summary of state alongside the decision prompt. This might include progress so far, available actions, or relevant data already loaded. The goal is grounding the model’s reasoning in the same facts our program is using.
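A hedged sketch of such a summary for a site-generator agent; the state keys and the summarize_state helper are assumptions made for illustration.

# Illustrative state snapshot: what the agent knows and has already done.
state = {
    "data_loaded": True,
    "pages_generated": [],
    "available_actions": ["generate_index", "generate_planet_pages"],
}

def summarize_state(state: dict) -> str:
    # A compact, factual summary grounded in the same facts the program uses.
    return (
        f"Planet data loaded: {state['data_loaded']}. "
        f"Pages generated so far: {len(state['pages_generated'])}. "
        f"Available actions: {', '.join(state['available_actions'])}."
    )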
Asking the model to select an action
Once context and state are present, we explicitly ask the model to choose. We are not asking for prose or explanation, but for a decision.
This often means asking the model to return a small, structured answer, such as the name of an action to perform next. Keeping the output narrow makes it easier to integrate with code.
# A framed decision prompt: the current state, the available actions,
# and an explicit request to choose.
decision_prompt = """
The site generator has loaded planet data.
No pages have been generated yet.
Available actions: generate_index, generate_planet_pages.
Select the next action.
"""
Interpreting the model’s decision output
The model’s response is treated as data, not narrative. We read it, extract the selected action, and ignore anything else.
At this stage, we are not trusting the model to do the work. We are only using it to point to the next step. Interpretation is usually a simple mapping from the model’s output to a known action identifier.
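A minimal sketch of that mapping, with the set of known actions and the interpret_decision helper introduced here for illustration:

# Map the model's raw output to a known action identifier; reject anything else.
KNOWN_ACTIONS = {"generate_index", "generate_planet_pages"}

def interpret_decision(raw: str) -> str:
    action = raw.strip().strip('"').lower()
    if action not in KNOWN_ACTIONS:
        raise ValueError(f"Model returned an unknown action: {raw!r}")
    return action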
Integrating LLM decisions into an agent loop
With an interpreted decision in hand, the agent loop continues as before. The program executes the chosen action, updates state, and then returns to the decision step.
The key difference is that the branching logic no longer lives in hand-written rules but in the model's reasoning. The loop remains deterministic in structure, while the decision point becomes flexible and context-sensitive.
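Sketching the whole loop under the same illustrative assumptions as above (summarize_state and interpret_decision from the earlier sketches, plus an ask_model callable standing in for whatever LLM call you use):

def run_agent(state: dict, actions: dict, ask_model, max_steps: int = 10) -> None:
    # The loop structure is ordinary, deterministic code; only the branch
    # taken at each step comes from the model's decision.
    for _ in range(max_steps):
        if not state["available_actions"]:
            break  # nothing left to do
        prompt = summarize_state(state) + "\nSelect the next action."
        raw = ask_model(prompt)                 # any LLM call, e.g. the sketch above
        action_name = interpret_decision(raw)   # treat the output as data
        actions[action_name](state)             # execute the chosen action
        state["available_actions"].remove(action_name)  # update state

Passing the model call in as a callable keeps the loop testable: a stub that always returns a fixed action name exercises the same control flow without a network call.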
Conclusion
By framing decisions clearly, supplying the right context, and treating outputs as structured signals, we can use an LLM to decide what an agent should do next. At this point, we are not replacing the agent loop, only enhancing its decision-making core. The result is an agent that stays under program control while gaining far more adaptable reasoning.