What an LLM contributes to an agent

Up to this point, our agents have relied on deterministic logic. We wrote rules, checked conditions, and selected actions in ways that were predictable and explicit. That approach works well, but it starts to strain when decisions depend on interpretation, nuance, or incomplete information. This lesson introduces large language models as a new kind of reasoning component that can sit inside an agent loop and handle those harder decisions.

What an LLM is at a high level

A large language model, or LLM, is a program trained on vast amounts of text to predict and generate language. It does not store facts the way a database does, and it does not execute rules like traditional code. Instead, it generates outputs by applying patterns learned during training to the input it is given.

In an agent context, we treat an LLM as a callable component. We give it input, it produces an output, and our program decides what to do with that result.
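
As a minimal sketch, calling the model might look like the snippet below. The llm_client module and its complete function are hypothetical stand-ins for whatever client library you actually use.

# Minimal sketch: the LLM as a callable component.
# `llm_client.complete` is a hypothetical stand-in for a real client library.
from llm_client import complete

request_text = "My package arrived damaged and I want a refund."
prompt = "Summarize this customer message in one sentence: " + request_text

summary = complete(prompt)   # text in, text out
print(summary)               # our program decides what to do with the result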

How LLM reasoning differs from rule-based logic

Rule-based logic works by following explicit instructions we write ahead of time. Every decision path must be anticipated and encoded. An LLM reasons more flexibly, using language and context to arrive at an answer even when the situation was not precisely predefined.

This difference matters when decisions involve judgment, interpretation, or synthesis. Instead of checking dozens of conditional branches, we can ask the model to evaluate the situation and propose an action.
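
To make the contrast concrete, here is a sketch of both approaches applied to the same decision. The rule-based version must enumerate every case in advance; the model-based version passes the situation to the LLM and receives a proposed action back. The ask_model helper is a hypothetical wrapper around an LLM call.

# Rule-based: every decision path is written out ahead of time.
def choose_action_rules(message):
    if "refund" in message.lower():
        return "open_refund_ticket"
    elif "password" in message.lower():
        return "send_reset_link"
    else:
        return "escalate_to_human"

# LLM-based: the model interprets the message and proposes an action.
def choose_action_llm(message, available_actions):
    prompt = (
        "Choose the best action for this customer message.\n"
        f"Message: {message}\n"
        f"Actions: {', '.join(available_actions)}\n"
        "Reply with exactly one action name."
    )
    return ask_model(prompt)   # hypothetical LLM call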

The role of an LLM inside an agent loop

An agent loop still senses, decides, acts, and updates state. The key change is where the decision happens. Rather than selecting an action through fixed rules, we delegate that step to the LLM.

The surrounding loop remains ordinary Python code. We gather state, pass relevant context to the model, receive a response, and then continue execution based on that response.
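
A skeletal version of such a loop, assuming placeholder sense, perform, and update_state functions alongside the hypothetical ask_model helper, might read like this:

# A skeletal agent loop: sense, decide (via the LLM), act, update state.
# `sense`, `ask_model`, `perform`, and `update_state` are placeholders.
state = {"done": False}
while not state["done"]:
    observation = sense()                              # gather state
    decision = ask_model(state, observation)           # delegate the decision to the model
    result = perform(decision)                         # ordinary Python carries out the action
    state = update_state(state, observation, result)   # continue based on the response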

Using an LLM as a decision-making component

When we use an LLM for decisions, we are not asking it to run the program. We are asking it to choose or recommend what should happen next. The program remains in control.

At a high level, this looks like calling a function that returns a decision.

decision = ask_model(agent_state, available_actions)

The returned value is treated as data. We inspect it, validate it, and then decide how to act.
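
One way to do that, assuming the model was asked to reply with a single action name, is to check the response against the set of allowed actions before acting. The helpers here are the same hypothetical ones used above.

# Treat the model's response as data: inspect and validate before acting.
available_actions = ["open_refund_ticket", "send_reset_link", "escalate_to_human"]

decision = ask_model(agent_state, available_actions)   # hypothetical call, as above
decision = decision.strip()

if decision in available_actions:
    perform(decision)              # act only on a recognized action
else:
    perform("escalate_to_human")   # fall back safely when the output is unexpected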

Situations where LLM reasoning is beneficial

LLM-based reasoning is most useful when decisions depend on context rather than strict rules. This includes choosing between multiple plausible actions, interpreting user intent, or deciding what information matters most right now.

In these situations, an LLM can reduce complexity and improve flexibility. It allows us to replace large rule sets with a single reasoning step, while still keeping the rest of the agent deterministic and controlled.
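
For example, classifying a free-form user message into an intent would normally require many keyword rules; with a model, it can become one reasoning step followed by a deterministic check. A sketch, reusing the hypothetical ask_model helper:

# Replace a large keyword rule set with a single reasoning step.
intents = ["refund_request", "order_status", "technical_support", "other"]

def classify_intent(message):
    prompt = (
        "Classify the user's intent.\n"
        f"Message: {message}\n"
        f"Intents: {', '.join(intents)}\n"
        "Reply with exactly one intent name."
    )
    label = ask_model(prompt).strip()                # hypothetical LLM call
    return label if label in intents else "other"    # deterministic fallback keeps the program in control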

Conclusion

We introduced large language models as reasoning engines that complement, rather than replace, traditional agent logic. We saw how LLMs differ from rule-based decisions, where they fit inside an agent loop, and why they are valuable for complex or ambiguous choices. With this foundation, we are ready to use an LLM as a decision-making component inside an agent.