Structured output using GPT-4o models#

This cookbook demonstrates how to obtain structured output using GPT-4o models. The OpenAI beta client SDK provides a parse helper that lets you supply your own Pydantic model directly, so you do not have to hand-write a JSON schema. This approach is recommended for supported models.

Currently, this feature is supported for:

  • gpt-4o-mini on OpenAI

  • gpt-4o-2024-08-06 on OpenAI

  • gpt-4o-2024-08-06 on Azure
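For the non-Azure entries above, a client can be constructed with OpenAIChatCompletionClient instead of the Azure client used later in this cookbook; a minimal configuration sketch, assuming an OPENAI_API_KEY environment variable is set:

```python
from autogen_ext.models.openai import OpenAIChatCompletionClient

# Sketch: client for the hosted OpenAI gpt-4o-mini model.
# Assumes OPENAI_API_KEY is set in the environment; no request is made here.
client = OpenAIChatCompletionClient(model="gpt-4o-mini")
```

The rest of the cookbook uses the Azure client, but the structured-output call pattern is the same for both.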

Let's define a simple message type that carries an explanation and an output for each step of a math problem:

from pydantic import BaseModel


class MathReasoning(BaseModel):
    class Step(BaseModel):
        explanation: str
        output: str

    steps: list[Step]
    final_answer: str
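Because the parse helper derives the JSON schema from this model, it can be useful to inspect what Pydantic will emit; a quick self-contained check (the model is repeated here so the snippet runs on its own):

```python
from pydantic import BaseModel


class MathReasoning(BaseModel):
    class Step(BaseModel):
        explanation: str
        output: str

    steps: list[Step]
    final_answer: str


# Inspect the JSON schema Pydantic derives from the model; this is
# roughly the schema the parse helper passes along as the response format.
schema = MathReasoning.model_json_schema()
print(schema["required"])     # ['steps', 'final_answer']
print(list(schema["$defs"]))  # ['Step']
```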
import os

# Set the environment variable
os.environ["AZURE_OPENAI_ENDPOINT"] = "https://YOUR_ENDPOINT_DETAILS.openai.azure.com/"
os.environ["AZURE_OPENAI_API_KEY"] = "YOUR_API_KEY"
os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"] = "gpt-4o-2024-08-06"
os.environ["AZURE_OPENAI_MODEL"] = "gpt-4o-2024-08-06"
os.environ["AZURE_OPENAI_API_VERSION"] = "2024-08-01-preview"
import json
import os
from typing import Optional

from autogen_core.models import UserMessage
from autogen_ext.models.openai import AzureOpenAIChatCompletionClient


# Function to get environment variable and ensure it is not None
def get_env_variable(name: str) -> str:
    value = os.getenv(name)
    if value is None:
        raise ValueError(f"Environment variable {name} is not set")
    return value


# Create the client with type-checked environment variables
client = AzureOpenAIChatCompletionClient(
    azure_deployment=get_env_variable("AZURE_OPENAI_DEPLOYMENT_NAME"),
    model=get_env_variable("AZURE_OPENAI_MODEL"),
    api_version=get_env_variable("AZURE_OPENAI_API_VERSION"),
    azure_endpoint=get_env_variable("AZURE_OPENAI_ENDPOINT"),
    api_key=get_env_variable("AZURE_OPENAI_API_KEY"),
)
# Define the user message
messages = [
    UserMessage(content="What is 16 + 32?", source="user"),
]

# Call the create method on the client, passing the messages and additional arguments.
# The extra_create_args dictionary sets response_format to the MathReasoning model defined above;
# supplying a Pydantic model this way makes the client use the beta SDK's parse method.
response = await client.create(messages=messages, extra_create_args={"response_format": MathReasoning})

# Ensure the response content is a valid JSON string before loading it
response_content: Optional[str] = response.content if isinstance(response.content, str) else None
if response_content is None:
    raise ValueError("Response content is not a valid JSON string")

# Print the response content after loading it as JSON
print(json.loads(response_content))

# Validate the response content with the MathReasoning model
MathReasoning.model_validate(json.loads(response_content))
{'steps': [{'explanation': 'Start by aligning the numbers vertically.', 'output': '\n  16\n+ 32'}, {'explanation': 'Add the units digits: 6 + 2 = 8.', 'output': '\n  16\n+ 32\n   8'}, {'explanation': 'Add the tens digits: 1 + 3 = 4.', 'output': '\n  16\n+ 32\n  48'}], 'final_answer': '48'}
MathReasoning(steps=[Step(explanation='Start by aligning the numbers vertically.', output='\n  16\n+ 32'), Step(explanation='Add the units digits: 6 + 2 = 8.', output='\n  16\n+ 32\n   8'), Step(explanation='Add the tens digits: 1 + 3 = 4.', output='\n  16\n+ 32\n  48')], final_answer='48')
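Rather than round-tripping through json.loads and model_validate, the response content can also be parsed straight into the typed model with model_validate_json, giving attribute access to each field; a sketch using a payload shaped like the response above:

```python
from pydantic import BaseModel


class MathReasoning(BaseModel):
    class Step(BaseModel):
        explanation: str
        output: str

    steps: list[Step]
    final_answer: str


# Parse a structured-output JSON payload (shaped like the response above)
# directly into the typed model in one step.
payload = (
    '{"steps": [{"explanation": "Add the units digits: 6 + 2 = 8.", '
    '"output": "8"}], "final_answer": "48"}'
)
reasoning = MathReasoning.model_validate_json(payload)
print(reasoning.final_answer)     # 48
print(reasoning.steps[0].output)  # 8
```

Typed access means typos in field names fail loudly, and editors can autocomplete the step fields.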