Supplying context and instructions

Calling the Responses API is only useful if the model understands what it is meant to do and what situation it is reasoning about. In real programs, especially agent-style systems, we rarely ask a model to respond to a single sentence in isolation. Instead, we supply instructions, context, and state so the model can make decisions that fit our program’s goals.

This lesson introduces the ways we shape model behavior by what we send in a request.

Providing instructions to guide model behavior

Instructions tell the model how it should behave, not what it should think about. They define role, tone, and scope.

In practice, instructions are just part of the input we send, written clearly and directly.

from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="gpt-4.1-mini",
    input="You are an assistant that writes short HTML fragments.\nGenerate a <ul> list of the inner planets."
)

print(response.output_text)

This instruction narrows the model’s behavior without constraining the content too tightly.
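
Depending on the SDK version, the Responses API also accepts a dedicated instructions parameter, which keeps the behavioral guidance separate from the task text. A minimal sketch of that variant, reusing the client created above:

# The instructions parameter carries role and scope; input carries the task.
response = client.responses.create(
    model="gpt-4.1-mini",
    instructions="You are an assistant that writes short HTML fragments.",
    input="Generate a <ul> list of the inner planets."
)

print(response.output_text)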

Supplying contextual information as input

Context provides the situation the model should reason about. This might include background facts, recent events, or domain-specific details.

Context is typically supplied as plain text alongside instructions.

context = """
We are generating a static website about the solar system.
Each page describes a single planet.
"""

prompt = context + "\nWrite a short HTML paragraph describing Mars."

response = client.responses.create(
    model="gpt-4.1-mini",
    input=prompt
)

print(response.output_text)

The model now has enough background to stay aligned with the program’s purpose.

Including structured state in a request

Programs often have internal state that affects decisions. Instead of translating state into prose, we can include it in a structured form.

A common approach is to serialize the state as JSON and embed it in the prompt.

import json

state = {
    "planet": "Jupiter",
    "include_moons": True,
    "max_moons": 4
}

# Serializing with json.dumps keeps the embedded state valid JSON
# rather than a Python repr.
prompt = f"""
You are generating HTML content.
Current state:
{json.dumps(state, indent=2)}
Produce an unordered list of moons.
"""

response = client.responses.create(
    model="gpt-4.1-mini",
    input=prompt
)

print(response.output_text)

This makes the relevant state explicit and visible to the model.

Controlling response length and format

We can influence how long and how structured a response should be by stating expectations directly.

This is often done through clear constraints in the instructions.

prompt = """
Generate a single HTML <li> element.
Limit output to one sentence.
Do not include explanations.
"""

response = client.responses.create(
    model="gpt-4.1-mini",
    input=prompt
)

print(response.output_text)

Explicit constraints reduce ambiguity and make responses easier to integrate into programs.
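
Because the output is constrained to a single <li> element, the calling code can splice it directly into a larger document. A minimal sketch, reusing the response from the call above:

# Embed the returned fragment in a simple page template.
li_fragment = response.output_text.strip()
page = f"<ul>\n  {li_fragment}\n</ul>"

print(page)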

Using context to influence model decisions

Context is not just descriptive; it actively shapes decisions the model makes. By selecting what context to include, we decide what the model should consider relevant.

In agent systems, this is how past actions, goals, and environmental facts influence the next step.

prompt = """
Goal: generate a page index for the site.
Available pages: Mercury, Venus, Earth, Mars.
Exclude Earth.
Produce an HTML <ul> list.
"""

response = client.responses.create(
    model="gpt-4.1-mini",
    input=prompt
)

print(response.output_text)

The model’s output reflects the constraints implied by the supplied context.
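
The same prompt can be assembled from program data rather than written by hand, which is how an agent-style system selects the context for each step. A minimal sketch, assuming the page list and exclusions live in ordinary Python variables:

# Program data that determines which context the model sees.
available_pages = ["Mercury", "Venus", "Earth", "Mars"]
excluded = ["Earth"]

prompt = (
    "Goal: generate a page index for the site.\n"
    f"Available pages: {', '.join(available_pages)}.\n"
    f"Exclude {', '.join(excluded)}.\n"
    "Produce an HTML <ul> list."
)

response = client.responses.create(
    model="gpt-4.1-mini",
    input=prompt
)

print(response.output_text)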

Conclusion

At this point, we can reliably guide a model by supplying instructions, context, and structured state in a single request. We are no longer asking isolated questions, but framing decisions within a program’s broader situation.

This orientation is enough to start treating the model as a reasoning component that responds to carefully shaped inputs.
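
As a closing sketch, the pieces can come together in a single request. The helper name build_prompt and the exact state fields below are illustrative choices, not a fixed convention:

import json

from openai import OpenAI

client = OpenAI()

def build_prompt(instructions: str, context: str, state: dict, task: str) -> str:
    # Combine instructions, background context, structured state, and the task
    # into one input string for the Responses API.
    return (
        f"{instructions}\n\n"
        f"Context:\n{context}\n\n"
        f"Current state:\n{json.dumps(state, indent=2)}\n\n"
        f"Task: {task}"
    )

prompt = build_prompt(
    instructions="You are an assistant that writes short HTML fragments.",
    context="We are generating a static website about the solar system.",
    state={"planet": "Saturn", "include_moons": True, "max_moons": 3},
    task="Produce an unordered list of Saturn's largest moons.",
)

response = client.responses.create(
    model="gpt-4.1-mini",
    input=prompt
)

print(response.output_text)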