autogen_ext.models.replay#
- class ReplayChatCompletionClient(chat_completions: Sequence[str | CreateResult], model_info: ModelInfo | None = None)[source]#
Bases: ChatCompletionClient, Component[ReplayChatCompletionClientConfig]

A mock chat completion client that replays predefined responses using an index-based approach.
This class simulates a chat completion client by replaying a predefined list of responses. It supports both single-completion and streaming responses, and the responses can be either strings or CreateResult objects. The client uses an index-based approach to access the responses, which allows the state to be reset and the responses replayed from the beginning.
Note
The responses can be either strings or CreateResult objects.
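The index-based replay described above can be illustrated with a minimal, self-contained sketch. The `MiniReplayClient` below is a hypothetical analogue written for this page, not the library's actual implementation: it keeps a cursor into the predefined responses, raises once they are exhausted, and rewinds on `reset()`.

```python
import asyncio
from typing import List


class MiniReplayClient:
    """Hypothetical, simplified analogue of the index-based replay approach."""

    def __init__(self, chat_completions: List[str]) -> None:
        self._responses = list(chat_completions)
        self._index = 0  # current position in the predefined responses

    async def create(self, messages) -> str:
        if self._index >= len(self._responses):
            raise ValueError("No more mock responses available")
        response = self._responses[self._index]
        self._index += 1  # advance to the next predefined response
        return response

    def reset(self) -> None:
        self._index = 0  # rewind so the responses replay from the start


async def demo() -> List[str]:
    client = MiniReplayClient(["first", "second"])
    out = [await client.create([]), await client.create([])]
    client.reset()
    out.append(await client.create([]))
    return out


print(asyncio.run(demo()))  # ['first', 'second', 'first']
```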
- Parameters:
chat_completions (Sequence[Union[str, CreateResult]]) – A list of predefined responses to replay.
- Raises:
ValueError("No more mock responses available") – Raised if the list of provided responses is exhausted.
Examples:
Simple chat completion client to return pre-defined responses.
```python
import asyncio

from autogen_core.models import UserMessage
from autogen_ext.models.replay import ReplayChatCompletionClient


async def example():
    chat_completions = [
        "Hello, how can I assist you today?",
        "I'm happy to help with any questions you have.",
        "Is there anything else I can assist you with?",
    ]
    client = ReplayChatCompletionClient(chat_completions)
    messages = [UserMessage(content="What can you do?", source="user")]
    response = await client.create(messages)
    print(response.content)  # Output: "Hello, how can I assist you today?"


asyncio.run(example())
```
Simple streaming chat completion client to return pre-defined responses.
```python
import asyncio

from autogen_core.models import UserMessage
from autogen_ext.models.replay import ReplayChatCompletionClient


async def example():
    chat_completions = [
        "Hello, how can I assist you today?",
        "I'm happy to help with any questions you have.",
        "Is there anything else I can assist you with?",
    ]
    client = ReplayChatCompletionClient(chat_completions)
    messages = [UserMessage(content="What can you do?", source="user")]

    async for token in client.create_stream(messages):
        print(token, end="")  # Output: "Hello, how can I assist you today?"

    async for token in client.create_stream(messages):
        print(token, end="")  # Output: "I'm happy to help with any questions you have."


asyncio.run(example())
```
Using .reset to reset the chat client state
```python
import asyncio

from autogen_core.models import UserMessage
from autogen_ext.models.replay import ReplayChatCompletionClient


async def example():
    chat_completions = [
        "Hello, how can I assist you today?",
    ]
    client = ReplayChatCompletionClient(chat_completions)
    messages = [UserMessage(content="What can you do?", source="user")]

    response = await client.create(messages)
    print(response.content)  # Output: "Hello, how can I assist you today?"

    try:
        await client.create(messages)
    except ValueError:
        pass  # Raises ValueError("No more mock responses available")

    client.reset()  # Reset the client state (current index and token usages)
    response = await client.create(messages)
    print(response.content)  # Output: "Hello, how can I assist you today?" again


asyncio.run(example())
```
- classmethod _from_config(config: ReplayChatCompletionClientConfig) Self [source]#
Create a new instance of the component from a configuration object.
- Parameters:
config (T) – The configuration object.
- Returns:
Self – The new instance of the component.
- _to_config() ReplayChatCompletionClientConfig [source]#
Dump the configuration that would be required to create a new instance of a component matching the configuration of this instance.
- Returns:
T – The configuration of the component.
- actual_usage() RequestUsage [source]#
- property capabilities: ModelCapabilities#
Return mock capabilities.
- component_config_schema#
alias of ReplayChatCompletionClientConfig
- component_provider_override: ClassVar[str | None] = 'autogen_ext.models.replay.ReplayChatCompletionClient'#
Override the provider string for the component. This should be used to prevent internal module names from becoming part of the provider string.
- component_type: ClassVar[ComponentType] = 'replay_chat_completion_client'#
The logical type of the component.
- count_tokens(messages: Sequence[Annotated[SystemMessage | UserMessage | AssistantMessage | FunctionExecutionResultMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')]], *, tools: Sequence[Tool | ToolSchema] = []) int [source]#
- async create(messages: Sequence[Annotated[SystemMessage | UserMessage | AssistantMessage | FunctionExecutionResultMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')]], *, tools: Sequence[Tool | ToolSchema] = [], json_output: bool | None = None, extra_create_args: Mapping[str, Any] = {}, cancellation_token: CancellationToken | None = None) CreateResult [source]#
Return the next completion from the list.
- property create_calls: List[Dict[str, Any]]#
Return the arguments of the calls made to the create method.
- async create_stream(messages: Sequence[Annotated[SystemMessage | UserMessage | AssistantMessage | FunctionExecutionResultMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')]], *, tools: Sequence[Tool | ToolSchema] = [], json_output: bool | None = None, extra_create_args: Mapping[str, Any] = {}, cancellation_token: CancellationToken | None = None) AsyncGenerator[str | CreateResult, None] [source]#
Return the next completion as a stream.
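The `AsyncGenerator[str | CreateResult, None]` return type suggests the stream yields string chunks of the next predefined response (typically followed by a final CreateResult). The chunking idea can be sketched with a hypothetical helper, not the library's code, that splits a response on whitespace:

```python
import asyncio
from typing import AsyncGenerator, List


async def stream_tokens(response: str) -> AsyncGenerator[str, None]:
    # Hypothetical sketch: yield a predefined response one whitespace-
    # delimited token at a time, preserving the separating spaces.
    for i, token in enumerate(response.split()):
        yield token if i == 0 else " " + token


async def collect() -> str:
    chunks: List[str] = []
    async for chunk in stream_tokens("Hello, how can I assist you today?"):
        chunks.append(chunk)
    return "".join(chunks)  # reassembles the original response


print(asyncio.run(collect()))  # Hello, how can I assist you today?
```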
- remaining_tokens(messages: Sequence[Annotated[SystemMessage | UserMessage | AssistantMessage | FunctionExecutionResultMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')]], *, tools: Sequence[Tool | ToolSchema] = []) int [source]#
- total_usage() RequestUsage [source]#