Determining completion
LLM-guided workflows only make sense if we know when to stop. Without a clear notion of completion, an agent can loop forever, keep reworking a finished task, or declare success too early. This lesson is about ending an LLM-driven task cleanly and deliberately as part of a real Python program.
Defining completion criteria for a task
Every workflow needs an explicit definition of what “done” means. Completion criteria describe the observable conditions that must be true before a task can end.
In practice, these criteria usually live in program state. For example, a page-generation task might be complete only when all expected HTML files exist and have been written successfully.
The important idea is that completion is defined outside the model. The model can reason about progress, but the program decides what counts as finished.
task_state = {
    "pages_expected": 3,
    "pages_written": 3,
}
is_complete = task_state["pages_written"] == task_state["pages_expected"]
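The same idea extends to completion criteria that live on disk, as in the page-generation example above. A minimal sketch, assuming hypothetical output paths under a `site/` directory:

```python
from pathlib import Path

def pages_complete(paths: list[Path]) -> bool:
    """Completion as a property of observable state:
    every expected file exists and is non-empty."""
    return all(p.exists() and p.stat().st_size > 0 for p in paths)

# Hypothetical output files for the planet page-generation task.
expected_pages = [Path(f"site/{name}.html") for name in ("mercury", "venus", "earth")]
```

Either way, the check reads state the program can observe directly; nothing here depends on what the model believes.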
Asking an LLM to assess task completion
An LLM can be useful for assessing progress, especially when completion depends on qualitative judgment. We can ask the model whether a goal appears satisfied based on the current state.
This assessment should be framed as a question, not a command. The model provides an opinion, not authority.
prompt = """
Goal: Generate HTML pages for all planets.
State: 3 pages written out of 3 expected.
Is the task complete? Answer yes or no.
"""
The model’s response becomes an input to our logic, not the final decision.
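Turning that answer into a boolean is a small parsing step. A sketch, assuming the raw reply text comes from some `ask_llm(prompt)` call not shown here:

```python
def parse_completion_answer(raw_reply: str) -> bool:
    """Interpret a yes/no completion assessment from the model.

    Anything other than an explicit "yes" counts as "not done",
    so an ambiguous reply never ends the workflow early.
    """
    return raw_reply.strip().lower().startswith("yes")

# In a real program: model_says_done = parse_completion_answer(ask_llm(prompt))
model_says_done = parse_completion_answer("Yes, all pages are written.")  # True
```

Defaulting to "not done" is deliberate: a misread reply costs one extra iteration, not a falsely finished task.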
Verifying completion using program state
Model output must always be checked against concrete program state. This verification step prevents premature or incorrect completion.
We treat the model’s answer as a signal, then confirm it using our own data structures, files, or counters.
model_says_done = True  # e.g. parsed from the model's "yes" answer
state_says_done = task_state["pages_written"] == task_state["pages_expected"]
task_complete = model_says_done and state_says_done
Completion only occurs when both agree.
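Wrapped up as a small function (the function name is illustrative, not from the lesson):

```python
def verify_completion(task_state: dict, model_says_done: bool) -> bool:
    """The program, not the model, has the final say on completion."""
    state_says_done = (
        task_state["pages_written"] == task_state["pages_expected"]
    )
    # The model's opinion can never override concrete state.
    return model_says_done and state_says_done

verify_completion({"pages_expected": 3, "pages_written": 3}, True)   # True
verify_completion({"pages_expected": 3, "pages_written": 2}, True)   # False
```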
Preventing premature or incorrect completion
Premature completion usually comes from vague goals or weak checks. The safeguard is simple: never rely on the model alone.
If state does not meet the defined criteria, the workflow continues, even if the model claims success. This keeps control firmly inside the program.
The agent may replan, retry, or request clarification, but it does not stop.
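One way to keep that guarantee is a bounded loop: the workflow stops only when state meets the criteria, or when a retry budget runs out. This is a sketch under stated assumptions; the step budget and the simulated work inside the loop are placeholders for real planning and execution:

```python
MAX_STEPS = 10  # retry budget: an unconvinced agent cannot loop forever

def run_until_complete(task_state: dict) -> bool:
    for _ in range(MAX_STEPS):
        # A real agent would plan and execute one unit of work here;
        # we simulate progress by writing one page per iteration.
        task_state["pages_written"] += 1
        if task_state["pages_written"] == task_state["pages_expected"]:
            return True  # state, not the model, confirmed completion
    return False  # budget exhausted; surface the failure rather than stop silently

run_until_complete({"pages_expected": 3, "pages_written": 0})  # True
```

Exhausting the budget is itself an explicit outcome, so even a failed run ends deliberately.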
Ending workflows cleanly
Once completion is verified, the workflow should exit in a predictable way. This usually means stopping the planning loop, recording final state, and returning a result.
Clean endings make systems easier to reason about, restart, and debug.
if task_complete:
    print("Task completed successfully.")
    break  # exit the planning loop; a bare return is invalid outside a function
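Put together, a clean ending might look like this sketch; the result shape and field names are assumptions, not part of the lesson:

```python
def finish(task_state: dict) -> dict:
    """Record final state and return a structured result to the caller."""
    result = {
        "status": "complete",
        "pages_written": task_state["pages_written"],
        "pages_expected": task_state["pages_expected"],
    }
    # A real system might also persist this result to a log or file
    # so the run can be audited or resumed later.
    return result

finish({"pages_expected": 3, "pages_written": 3})
# → {'status': 'complete', 'pages_written': 3, 'pages_expected': 3}
```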
Conclusion
At this point, we know how to decide when an LLM-guided task is finished. We define completion explicitly, allow the model to assess progress, verify results using program state, and end workflows cleanly. With this in place, LLM-driven systems remain purposeful, bounded, and under control.