/v1/responses
input
# Input as simple text
curl https://api.openai.com/v1/responses \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
"model": "gpt-5-nano",
"input": "Tell me a joke."
}'
# Multimodal input
curl https://api.openai.com/v1/responses \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4.1",
"input": [
{
"role": "user",
"content": [
{
"type": "input_text",
"text": "What is in this image?"
},
{
"type": "input_image",
"image_url": "https://example.com/cat.jpg"
}
]
}
]
}'
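# The same calls with the official openai Python SDK (a minimal sketch; assumes OPENAI_API_KEY is set in the environment)
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Simple text input
response = client.responses.create(
    model="gpt-5-nano",
    input="Tell me a joke.",
)
print(response.output_text)  # convenience property with the concatenated output text

# Multimodal input (text + image URL)
response = client.responses.create(
    model="gpt-4.1",
    input=[{
        "role": "user",
        "content": [
            {"type": "input_text", "text": "What is in this image?"},
            {"type": "input_image", "image_url": "https://example.com/cat.jpg"},
        ],
    }],
)
print(response.output_text)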
tools
- You can provide a list of tools that the model can decide to use
- This is especially useful for agents
- https://developers.openai.com/api/docs/guides/function-calling
- https://openai.com/index/function-calling-and-other-api-updates/ (released on June 13, 2023)
curl https://api.openai.com/v1/responses \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
"model": "gpt-5",
"input": [
{
"role": "user",
"content": "What is my horoscope? I am an Aquarius."
}
],
"tools": [
{
"type": "function",
"name": "get_horoscope",
"description": "Get todays horoscope for an astrological sign.",
"parameters": {
"type": "object",
"properties": {
"sign": {
"type": "string",
"description": "An astrological sign like Taurus or Aquarius"
}
},
"required": ["sign"]
}
}
]
}'
// The model responds with which tool to call and with which arguments
{
"output": [
{
"type": "function_call",
"name": "get_horoscope",
"call_id": "call_abc123",
"arguments": "{\"sign\":\"Aquarius\"}"
}
]
}
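# Your agent code then runs the requested function and appends the result to the conversation.
# Minimal Python sketch, assuming a local get_horoscope implementation and that
# response / input_messages come from the previous request: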
import json

# Map the tool name chosen by the model to your own implementation
def call_function(name, args):
    if name == "get_horoscope":
        return get_horoscope(**args)
    raise ValueError(f"Unknown tool: {name}")

# Execute each function call in the response and record its output
for tool_call in response.output:
    if tool_call.type != "function_call":
        continue
    name = tool_call.name
    args = json.loads(tool_call.arguments)
    result = call_function(name, args)
    input_messages.append(tool_call)  # echo the model's function call back into the input
    input_messages.append({
        "type": "function_call_output",
        "call_id": tool_call.call_id,
        "output": str(result)
    })
# The agent then sends back the result of the function call
curl https://api.openai.com/v1/responses \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-5",
"instructions": "Respond only with a horoscope generated by a tool.",
"input": [
{
"role": "user",
"content": "What is my horoscope? I am an Aquarius."
},
{
"type": "function_call",
"name": "get_horoscope",
"call_id": "call_abc123",
"arguments": "{\"sign\":\"Aquarius\"}"
},
{
"type": "function_call_output",
"call_id": "call_abc123",
"output": "{\"horoscope\":\"Aquarius: Next Tuesday you will befriend a baby otter.\"}"
}
],
"tools": [
{
"type": "function",
"name": "get_horoscope",
"description": "Get today'\''s horoscope for an astrological sign.",
"parameters": {
"type": "object",
"properties": {
"sign": { "type": "string" }
},
"required": ["sign"]
}
}
]
}'
// And the model responds with the final answer
{
"output_text": "Aquarius: Next Tuesday you will befriend a baby otter."
}
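# The same follow-up request from Python, continuing the dispatch sketch above
# (client, input_messages and a tools list matching the curl definitions are assumed to exist already)
final = client.responses.create(
    model="gpt-5",
    instructions="Respond only with a horoscope generated by a tool.",
    input=input_messages,  # now contains the function_call and its function_call_output
    tools=tools,           # same function definitions as in the first request
)
print(final.output_text)   # e.g. "Aquarius: Next Tuesday you will befriend a baby otter."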
text.format
- https://developers.openai.com/api/docs/guides/structured-outputs
- Lets you control the shape of the model's output (the Responses API equivalent of response_format in Chat Completions)
- Good for getting reliable JSON to parse in your agent code
curl https://api.openai.com/v1/responses \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4.1",
"input": "Extract the user info: Hi, I'\''m Sarah. I'\''m 27 and live in Berlin. I like painting and cycling.",
"text": {
"format": {
"type": "json_schema",
"name": "user_profile",
"strict": true,
"schema": {
"type": "object",
"properties": {
"name": { "type": "string" },
"age": { "type": "number" },
"city": { "type": "string" },
"hobbies": {
"type": "array",
"items": { "type": "string" }
}
},
"required": ["name","age","city","hobbies"],
"additionalProperties": false
}
}
}
}'
// Model response
{
"name": "Sarah",
"age": 27,
"city": "Berlin",
"hobbies": ["painting","cycling"]
}
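# Parsing the structured output in agent code (sketch with the Python SDK; the text dict mirrors the curl example above)
import json
from openai import OpenAI

client = OpenAI()

user_profile_format = {
    "format": {
        "type": "json_schema",
        "name": "user_profile",
        "strict": True,
        "schema": {
            "type": "object",
            "properties": {
                "name": {"type": "string"},
                "age": {"type": "number"},
                "city": {"type": "string"},
                "hobbies": {"type": "array", "items": {"type": "string"}},
            },
            "required": ["name", "age", "city", "hobbies"],
            "additionalProperties": False,
        },
    }
}

response = client.responses.create(
    model="gpt-4.1",
    input="Extract the user info: Hi, I'm Sarah. I'm 27 and live in Berlin. I like painting and cycling.",
    text=user_profile_format,
)

profile = json.loads(response.output_text)  # safe to parse: the output conforms to the schema
print(profile["name"], profile["age"], profile["hobbies"])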
instructions
- Good for setting the tone, role, and guardrails
- It acts as the "system prompt"
curl https://api.openai.com/v1/responses \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4.1",
"instructions": "You are a concise assistant. Reply in under 20 words.",
"input": "Explain photosynthesis."
}'
max_output_tokens
- Prevents long answers and controls cost
curl https://api.openai.com/v1/responses \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4.1",
"input": "Explain quantum computing simply.",
"max_output_tokens": 50
}'
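# When the cap is hit, the response comes back with status "incomplete", which is worth
# checking before using the text (a sketch with the Python SDK)
from openai import OpenAI

client = OpenAI()
response = client.responses.create(
    model="gpt-4.1",
    input="Explain quantum computing simply.",
    max_output_tokens=50,
)

# A truncated answer is reported as status "incomplete" with reason "max_output_tokens"
if response.status == "incomplete" and response.incomplete_details.reason == "max_output_tokens":
    print("[answer was cut off at the token cap]")
print(response.output_text)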
stream
- Tokens arrive as they are generated (the response is a server-sent event stream)
curl https://api.openai.com/v1/responses \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4.1",
"input": "Tell a short story about space.",
"stream": true
}'
data: { "delta": "Once" }
data: { "delta": " upon" }
data: { "delta": " a time..." }
previous_response_id
- Reference a previous response
- No need to resend the full history
curl https://api.openai.com/v1/responses \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4.1",
"previous_response_id": "resp_abc123",
"input": "Explain that more simply."
}'
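# Chaining two turns with the Python SDK (assumes the first response was stored server-side, which is the default)
from openai import OpenAI

client = OpenAI()

first = client.responses.create(
    model="gpt-4.1",
    input="Explain quantum computing.",
)

followup = client.responses.create(
    model="gpt-4.1",
    previous_response_id=first.id,  # the server supplies the earlier context
    input="Explain that more simply.",
)
print(followup.output_text)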
temperature
- Creativity control: lower = predictable, higher = creative
- 0.2 factual/consistent
- 0.7 balanced
- 1.0+ creative
curl https://api.openai.com/v1/responses \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4.1",
"input": "Write a tagline for a coffee shop.",
"temperature": 1.2
}'
top_p
- Nucleus sampling, an alternative to temperature
- Lower value = narrower, more predictable outputs
- Adjust top_p or temperature, not both
curl https://api.openai.com/v1/responses \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4.1",
"input": "Write a tweet about AI.",
"top_p": 0.8
}'
stop
- Force generation to stop once a specific sequence is produced (here the output ends before item 4)
curl https://api.openai.com/v1/responses \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4.1",
"input": "List 5 fruits:",
"stop": ["4."] # stops when it hits "4."
}'