autogen_core.models#
- pydantic model AssistantMessage[source]#
Bases:
BaseModel
Assistant messages are sampled from the language model.
Show JSON schema
{ "title": "AssistantMessage", "description": "Assistant message are sampled from the language model.", "type": "object", "properties": { "content": { "anyOf": [ { "type": "string" }, { "items": { "$ref": "#/$defs/FunctionCall" }, "type": "array" } ], "title": "Content" }, "source": { "title": "Source", "type": "string" }, "type": { "const": "AssistantMessage", "default": "AssistantMessage", "title": "Type", "type": "string" } }, "$defs": { "FunctionCall": { "properties": { "id": { "title": "Id", "type": "string" }, "arguments": { "title": "Arguments", "type": "string" }, "name": { "title": "Name", "type": "string" } }, "required": [ "id", "arguments", "name" ], "title": "FunctionCall", "type": "object" } }, "required": [ "content", "source" ] }
- Fields:
content (str | List[autogen_core._types.FunctionCall])
source (str)
type (Literal['AssistantMessage'])
- field content: str | List[FunctionCall] [Required]#
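The `content` field is either plain text or a list of function calls, so callers typically branch on its type. A minimal sketch of handling both shapes, using a stdlib dataclass as a stand-in for `autogen_core.FunctionCall` (the real class is a pydantic model with the same `id`/`arguments`/`name` fields):

```python
import json
from dataclasses import dataclass
from typing import List, Union


@dataclass
class FunctionCall:
    # Stand-in mirroring autogen_core.FunctionCall: `arguments` is a JSON string.
    id: str
    arguments: str
    name: str


def describe_assistant_content(content: Union[str, List[FunctionCall]]) -> str:
    """Summarize an AssistantMessage-style content value."""
    if isinstance(content, str):
        return f"text: {content}"
    names = [call.name for call in content]
    return f"function_calls: {', '.join(names)}"


calls = [FunctionCall(id="call_1", arguments=json.dumps({"city": "Paris"}), name="get_weather")]
print(describe_assistant_content("Hello!"))  # text: Hello!
print(describe_assistant_content(calls))     # function_calls: get_weather
```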
- class ChatCompletionClient[source]#
Bases:
ComponentBase[BaseModel], ABC
- abstract actual_usage() RequestUsage [source]#
- abstract property capabilities: ModelCapabilities#
- abstract count_tokens(messages: Sequence[Annotated[SystemMessage | UserMessage | AssistantMessage | FunctionExecutionResultMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')]], *, tools: Sequence[Tool | ToolSchema] = []) int [source]#
- abstract async create(messages: Sequence[Annotated[SystemMessage | UserMessage | AssistantMessage | FunctionExecutionResultMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')]], *, tools: Sequence[Tool | ToolSchema] = [], json_output: bool | None = None, extra_create_args: Mapping[str, Any] = {}, cancellation_token: CancellationToken | None = None) CreateResult [source]#
- abstract create_stream(messages: Sequence[Annotated[SystemMessage | UserMessage | AssistantMessage | FunctionExecutionResultMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')]], *, tools: Sequence[Tool | ToolSchema] = [], json_output: bool | None = None, extra_create_args: Mapping[str, Any] = {}, cancellation_token: CancellationToken | None = None) AsyncGenerator[str | CreateResult, None] [source]#
- abstract remaining_tokens(messages: Sequence[Annotated[SystemMessage | UserMessage | AssistantMessage | FunctionExecutionResultMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')]], *, tools: Sequence[Tool | ToolSchema] = []) int [source]#
- abstract total_usage() RequestUsage [source]#
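The abstract interface above can be exercised with a minimal fake implementation, which is also a common pattern for testing agents without calling a real model. This sketch does not import autogen_core; `UserMessage` and `CreateResult` are simplified stand-ins, and `count_tokens` uses a whitespace heuristic rather than a real tokenizer:

```python
import asyncio
from dataclasses import dataclass
from typing import List


@dataclass
class UserMessage:
    # Stand-in for autogen_core.models.UserMessage (content + source).
    content: str
    source: str


@dataclass
class CreateResult:
    # Stand-in for autogen_core.models.CreateResult (subset of fields).
    finish_reason: str
    content: str
    cached: bool


class EchoChatCompletionClient:
    """Toy client mirroring the ChatCompletionClient surface: create() is async."""

    async def create(self, messages: List[UserMessage]) -> CreateResult:
        last = messages[-1].content
        return CreateResult(finish_reason="stop", content=f"echo: {last}", cached=False)

    def count_tokens(self, messages: List[UserMessage]) -> int:
        # Crude heuristic: whitespace-separated words, not a real tokenizer.
        return sum(len(m.content.split()) for m in messages)


client = EchoChatCompletionClient()
result = asyncio.run(client.create([UserMessage(content="hi there", source="user")]))
print(result.content)  # echo: hi there
```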
- pydantic model ChatCompletionTokenLogprob[source]#
Bases:
BaseModel
Show JSON schema
{ "title": "ChatCompletionTokenLogprob", "type": "object", "properties": { "token": { "title": "Token", "type": "string" }, "logprob": { "title": "Logprob", "type": "number" }, "top_logprobs": { "anyOf": [ { "items": { "$ref": "#/$defs/TopLogprob" }, "type": "array" }, { "type": "null" } ], "default": null, "title": "Top Logprobs" }, "bytes": { "anyOf": [ { "items": { "type": "integer" }, "type": "array" }, { "type": "null" } ], "default": null, "title": "Bytes" } }, "$defs": { "TopLogprob": { "properties": { "logprob": { "title": "Logprob", "type": "number" }, "bytes": { "anyOf": [ { "items": { "type": "integer" }, "type": "array" }, { "type": "null" } ], "default": null, "title": "Bytes" } }, "required": [ "logprob" ], "title": "TopLogprob", "type": "object" } }, "required": [ "token", "logprob" ] }
- Fields:
bytes (List[int] | None)
logprob (float)
token (str)
top_logprobs (List[autogen_core.models._types.TopLogprob] | None)
- field top_logprobs: List[TopLogprob] | None = None#
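A logprob is the natural log of a token's probability, so `math.exp` recovers the probability itself. A small illustration (the token strings and values here are made up, not output from any real model):

```python
import math

# Hypothetical top_logprobs-style entries for one sampled position.
top = [("Paris", -0.1054), ("Lyon", -2.9957)]

for token, logprob in top:
    prob = math.exp(logprob)  # probability = e ** logprob
    print(f"{token}: p = {prob:.3f}")
```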
- pydantic model CreateResult[source]#
Bases:
BaseModel
Create result contains the output of a model completion.
Show JSON schema
{ "title": "CreateResult", "description": "Create result contains the output of a model completion.", "type": "object", "properties": { "finish_reason": { "enum": [ "stop", "length", "function_calls", "content_filter", "unknown" ], "title": "Finish Reason", "type": "string" }, "content": { "anyOf": [ { "type": "string" }, { "items": { "$ref": "#/$defs/FunctionCall" }, "type": "array" } ], "title": "Content" }, "usage": { "$ref": "#/$defs/RequestUsage" }, "cached": { "title": "Cached", "type": "boolean" }, "logprobs": { "anyOf": [ { "items": { "$ref": "#/$defs/ChatCompletionTokenLogprob" }, "type": "array" }, { "type": "null" } ], "default": null, "title": "Logprobs" }, "thought": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "default": null, "title": "Thought" } }, "$defs": { "ChatCompletionTokenLogprob": { "properties": { "token": { "title": "Token", "type": "string" }, "logprob": { "title": "Logprob", "type": "number" }, "top_logprobs": { "anyOf": [ { "items": { "$ref": "#/$defs/TopLogprob" }, "type": "array" }, { "type": "null" } ], "default": null, "title": "Top Logprobs" }, "bytes": { "anyOf": [ { "items": { "type": "integer" }, "type": "array" }, { "type": "null" } ], "default": null, "title": "Bytes" } }, "required": [ "token", "logprob" ], "title": "ChatCompletionTokenLogprob", "type": "object" }, "FunctionCall": { "properties": { "id": { "title": "Id", "type": "string" }, "arguments": { "title": "Arguments", "type": "string" }, "name": { "title": "Name", "type": "string" } }, "required": [ "id", "arguments", "name" ], "title": "FunctionCall", "type": "object" }, "RequestUsage": { "properties": { "prompt_tokens": { "title": "Prompt Tokens", "type": "integer" }, "completion_tokens": { "title": "Completion Tokens", "type": "integer" } }, "required": [ "prompt_tokens", "completion_tokens" ], "title": "RequestUsage", "type": "object" }, "TopLogprob": { "properties": { "logprob": { "title": "Logprob", "type": "number" }, "bytes": { "anyOf": [ { 
"items": { "type": "integer" }, "type": "array" }, { "type": "null" } ], "default": null, "title": "Bytes" } }, "required": [ "logprob" ], "title": "TopLogprob", "type": "object" } }, "required": [ "finish_reason", "content", "usage", "cached" ] }
- Fields:
cached (bool)
content (str | List[autogen_core._types.FunctionCall])
finish_reason (Literal['stop', 'length', 'function_calls', 'content_filter', 'unknown'])
logprobs (List[autogen_core.models._types.ChatCompletionTokenLogprob] | None)
thought (str | None)
usage (autogen_core.models._types.RequestUsage)
- field content: str | List[FunctionCall] [Required]#
The output of the model completion.
- field finish_reason: Literal['stop', 'length', 'function_calls', 'content_filter', 'unknown'] [Required]#
The reason the model finished generating the completion.
- field logprobs: List[ChatCompletionTokenLogprob] | None = None#
The logprobs of the tokens in the completion.
- field thought: str | None = None#
The reasoning text for the completion if available. Used for reasoning models and additional text content besides function calls.
- field usage: RequestUsage [Required]#
The usage of tokens in the prompt and completion.
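A common pattern is to branch on `finish_reason` before touching `content`: `'function_calls'` means `content` is a list of calls to execute, while `'stop'` or `'length'` means text. A hedged sketch of that dispatch (the action strings are illustrative, not part of the API):

```python
from typing import List, Union


def next_action(finish_reason: str, content: Union[str, list]) -> str:
    """Decide what a caller should do with a CreateResult-like pair."""
    if finish_reason == "function_calls":
        return f"execute {len(content)} tool call(s)"
    if finish_reason == "length":
        return "truncated: consider raising the token limit"
    if finish_reason == "content_filter":
        return "filtered: handle the refusal"
    return "reply with text"


print(next_action("stop", "Hello"))               # reply with text
print(next_action("function_calls", [object()]))  # execute 1 tool call(s)
```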
- pydantic model FunctionExecutionResult[source]#
Bases:
BaseModel
Function execution result contains the output of a function call.
Show JSON schema
{ "title": "FunctionExecutionResult", "description": "Function execution result contains the output of a function call.", "type": "object", "properties": { "content": { "title": "Content", "type": "string" }, "call_id": { "title": "Call Id", "type": "string" } }, "required": [ "content", "call_id" ] }
- Fields:
call_id (str)
content (str)
- pydantic model FunctionExecutionResultMessage[source]#
Bases:
BaseModel
Function execution result message contains the output of multiple function calls.
Show JSON schema
{ "title": "FunctionExecutionResultMessage", "description": "Function execution result message contains the output of multiple function calls.", "type": "object", "properties": { "content": { "items": { "$ref": "#/$defs/FunctionExecutionResult" }, "title": "Content", "type": "array" }, "type": { "const": "FunctionExecutionResultMessage", "default": "FunctionExecutionResultMessage", "title": "Type", "type": "string" } }, "$defs": { "FunctionExecutionResult": { "description": "Function execution result contains the output of a function call.", "properties": { "content": { "title": "Content", "type": "string" }, "call_id": { "title": "Call Id", "type": "string" } }, "required": [ "content", "call_id" ], "title": "FunctionExecutionResult", "type": "object" } }, "required": [ "content" ] }
- Fields:
content (List[autogen_core.models._types.FunctionExecutionResult])
type (Literal['FunctionExecutionResultMessage'])
- field content: List[FunctionExecutionResult] [Required]#
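Each `FunctionExecutionResult` carries the `call_id` of the `FunctionCall` it answers, and results are matched back to their calls by that id. A stdlib sketch of the pairing, with dataclass stand-ins for the two pydantic models:

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class FunctionCall:
    # Stand-in for autogen_core.FunctionCall.
    id: str
    arguments: str
    name: str


@dataclass
class FunctionExecutionResult:
    # Stand-in: content is the stringified tool output, call_id links back.
    content: str
    call_id: str


def pair_results(calls: List[FunctionCall],
                 results: List[FunctionExecutionResult]) -> Dict[str, str]:
    """Map function name -> result content by joining on call_id."""
    by_id = {r.call_id: r.content for r in results}
    return {c.name: by_id[c.id] for c in calls if c.id in by_id}


calls = [FunctionCall(id="call_1", arguments="{}", name="get_time")]
results = [FunctionExecutionResult(content="12:00", call_id="call_1")]
print(pair_results(calls, results))  # {'get_time': '12:00'}
```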
- class ModelFamily(*args: Any, **kwargs: Any)[source]#
Bases:
object
A model family is a group of models that share similar characteristics from a capabilities perspective. This is different from discrete supported features such as vision, function calling, and JSON output.
This namespace class holds constants for the model families that AutoGen understands. Other families certainly exist and can be represented by a string; however, AutoGen will treat them as unknown.
- ANY#
alias of
Literal['gpt-4o', 'o1', 'o3', 'gpt-4', 'gpt-35', 'r1', 'gemini-1.5-flash', 'gemini-1.5-pro', 'gemini-2.0-flash', 'unknown']
- GEMINI_1_5_FLASH = 'gemini-1.5-flash'#
- GEMINI_1_5_PRO = 'gemini-1.5-pro'#
- GEMINI_2_0_FLASH = 'gemini-2.0-flash'#
- GPT_35 = 'gpt-35'#
- GPT_4 = 'gpt-4'#
- GPT_4O = 'gpt-4o'#
- O1 = 'o1'#
- O3 = 'o3'#
- R1 = 'r1'#
- UNKNOWN = 'unknown'#
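Because `ANY` is a closed Literal, family strings outside it are treated as `'unknown'`. A sketch of that normalization, with a plain set standing in for the `ModelFamily` constants listed above:

```python
# Known family strings, mirroring the ModelFamily constants above.
KNOWN_FAMILIES = {
    "gpt-4o", "o1", "o3", "gpt-4", "gpt-35", "r1",
    "gemini-1.5-flash", "gemini-1.5-pro", "gemini-2.0-flash",
}


def normalize_family(family: str) -> str:
    """Collapse unrecognized family strings to 'unknown'."""
    return family if family in KNOWN_FAMILIES else "unknown"


print(normalize_family("gpt-4o"))        # gpt-4o
print(normalize_family("my-local-llm"))  # unknown
```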
- class ModelInfo[source]#
Bases:
TypedDict
- family: Required[Literal['gpt-4o', 'o1', 'o3', 'gpt-4', 'gpt-35', 'r1', 'gemini-1.5-flash', 'gemini-1.5-pro', 'gemini-2.0-flash', 'unknown'] | str]#
Model family should be one of the constants from
ModelFamily
or a string representing an unknown model family.
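Since `ModelInfo` is a `TypedDict`, it is constructed as a plain dict. A sketch using a local TypedDict that shows only the `family` key documented in this excerpt (the real `ModelInfo` defines additional keys beyond `family`, which are omitted here):

```python
from typing import TypedDict


class ModelInfoSketch(TypedDict):
    # Stand-in showing only the `family` key documented above; the real
    # ModelInfo TypedDict defines further keys not covered in this excerpt.
    family: str


info: ModelInfoSketch = {"family": "gpt-4o"}          # a known family constant
custom: ModelInfoSketch = {"family": "my-fine-tune"}  # any string is allowed
print(info["family"], custom["family"])
```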
- pydantic model SystemMessage[source]#
Bases:
BaseModel
System message contains instructions for the model coming from the developer.
Note
OpenAI is moving away from the 'system' role in favor of the 'developer' role. See the Model Spec for more details. However, the 'system' role is still allowed in their API and will be automatically converted to the 'developer' role on the server side, so you can use SystemMessage for developer messages.
Show JSON schema
{ "title": "SystemMessage", "description": "System message contains instructions for the model coming from the developer.\n\n.. note::\n\n Open AI is moving away from using 'system' role in favor of 'developer' role.\n See `Model Spec <https://cdn.openai.com/spec/model-spec-2024-05-08.html#definitions>`_ for more details.\n However, the 'system' role is still allowed in their API and will be automatically converted to 'developer' role\n on the server side.\n So, you can use `SystemMessage` for developer messages.", "type": "object", "properties": { "content": { "title": "Content", "type": "string" }, "type": { "const": "SystemMessage", "default": "SystemMessage", "title": "Type", "type": "string" } }, "required": [ "content" ] }
- Fields:
content (str)
type (Literal['SystemMessage'])
- pydantic model UserMessage[source]#
Bases:
BaseModel
User message contains input from end users, or a catch-all for data provided to the model.
Show JSON schema
{ "title": "UserMessage", "description": "User message contains input from end users, or a catch-all for data provided to the model.", "type": "object", "properties": { "content": { "anyOf": [ { "type": "string" }, { "items": { "anyOf": [ { "type": "string" }, {} ] }, "type": "array" } ], "title": "Content" }, "source": { "title": "Source", "type": "string" }, "type": { "const": "UserMessage", "default": "UserMessage", "title": "Type", "type": "string" } }, "required": [ "content", "source" ] }
- Fields:
content (str | List[str | autogen_core._image.Image])
source (str)
type (Literal['UserMessage'])
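`UserMessage` content may be a plain string or a mixed list of strings and `Image` objects. A sketch of flattening such a list for logging, with a placeholder class standing in for `autogen_core.Image` (the real class wraps actual image data):

```python
from typing import List, Union


class Image:
    # Placeholder for autogen_core.Image; the real class wraps image data.
    def __repr__(self) -> str:
        return "<image>"


def render_user_content(content: Union[str, List[Union[str, Image]]]) -> str:
    """Collapse UserMessage-style content into a single log line."""
    if isinstance(content, str):
        return content
    return " ".join(part if isinstance(part, str) else "<image>" for part in content)


print(render_user_content("hello"))                     # hello
print(render_user_content(["look at this:", Image()]))  # look at this: <image>
```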