autogen_ext.models.replay#
- class ReplayChatCompletionClient(chat_completions: Sequence[str | CreateResult])[source]#
- Bases: ChatCompletionClient

A mock chat completion client that replays predefined responses using an index-based approach.

This class simulates a chat completion client by replaying a predefined list of responses. It supports both single-completion and streaming responses. The responses can be either strings or CreateResult objects. The client steps through the responses with an internal index, which allows its state to be reset.

Note

The responses can be either strings or CreateResult objects.

Parameters:
- chat_completions (Sequence[Union[str, CreateResult]]) – A list of predefined responses to replay. 
- Raises:
- ValueError("No more mock responses available") – If the list of provided responses is exhausted. 
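The index-based replay behavior described above (serve responses in order, raise when exhausted, rewind on reset) can be sketched in plain Python. This is a hypothetical minimal re-implementation for illustration only, not the library's actual code:

```python
# Hypothetical sketch of the index-based replay idea: responses are served
# in order, exhaustion raises ValueError, and reset() rewinds the index so
# the same sequence can be replayed.
class MiniReplayClient:
    def __init__(self, responses):
        self._responses = list(responses)
        self._index = 0

    def create(self, messages):
        if self._index >= len(self._responses):
            raise ValueError("No more mock responses available")
        response = self._responses[self._index]
        self._index += 1
        return response

    def reset(self):
        # Rewind to the first response.
        self._index = 0


client = MiniReplayClient(["first", "second"])
print(client.create([]))  # first
print(client.create([]))  # second
client.reset()
print(client.create([]))  # first
```

The real client adds async methods, streaming, and usage tracking on top of the same index bookkeeping.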
Examples:

Simple chat completion client returning pre-defined responses:

```python
import asyncio

from autogen_ext.models.replay import ReplayChatCompletionClient
from autogen_core.models import UserMessage


async def example():
    chat_completions = [
        "Hello, how can I assist you today?",
        "I'm happy to help with any questions you have.",
        "Is there anything else I can assist you with?",
    ]
    client = ReplayChatCompletionClient(chat_completions)
    messages = [UserMessage(content="What can you do?", source="user")]
    response = await client.create(messages)
    print(response.content)  # Output: "Hello, how can I assist you today?"


asyncio.run(example())
```

Simple streaming chat completion client returning pre-defined responses:

```python
import asyncio

from autogen_ext.models.replay import ReplayChatCompletionClient
from autogen_core.models import UserMessage


async def example():
    chat_completions = [
        "Hello, how can I assist you today?",
        "I'm happy to help with any questions you have.",
        "Is there anything else I can assist you with?",
    ]
    client = ReplayChatCompletionClient(chat_completions)
    messages = [UserMessage(content="What can you do?", source="user")]

    async for token in client.create_stream(messages):
        print(token, end="")  # Output: "Hello, how can I assist you today?"

    async for token in client.create_stream(messages):
        print(token, end="")  # Output: "I'm happy to help with any questions you have."


asyncio.run(example())
```

Using .reset to reset the chat client state:

```python
import asyncio

from autogen_ext.models.replay import ReplayChatCompletionClient
from autogen_core.models import UserMessage


async def example():
    chat_completions = [
        "Hello, how can I assist you today?",
    ]
    client = ReplayChatCompletionClient(chat_completions)
    messages = [UserMessage(content="What can you do?", source="user")]
    response = await client.create(messages)
    print(response.content)  # Output: "Hello, how can I assist you today?"

    try:
        response = await client.create(messages)
    except ValueError:
        # Raises ValueError("No more mock responses available")
        pass

    client.reset()  # Reset the client state (current index of message and token usages)
    response = await client.create(messages)
    print(response.content)  # Output: "Hello, how can I assist you today?" again


asyncio.run(example())
```

- actual_usage() → RequestUsage[source]#
 - property capabilities: ModelCapabilities#
- Return mock capabilities. 
 - count_tokens(messages: Sequence[Annotated[SystemMessage | UserMessage | AssistantMessage | FunctionExecutionResultMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')]], *, tools: Sequence[Tool | ToolSchema] = []) → int[source]#
 - async create(messages: Sequence[Annotated[SystemMessage | UserMessage | AssistantMessage | FunctionExecutionResultMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')]], *, tools: Sequence[Tool | ToolSchema] = [], json_output: bool | None = None, extra_create_args: Mapping[str, Any] = {}, cancellation_token: CancellationToken | None = None) → CreateResult[source]#
- Return the next completion from the list. 
 - async create_stream(messages: Sequence[Annotated[SystemMessage | UserMessage | AssistantMessage | FunctionExecutionResultMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')]], *, tools: Sequence[Tool | ToolSchema] = [], json_output: bool | None = None, extra_create_args: Mapping[str, Any] = {}, cancellation_token: CancellationToken | None = None) → AsyncGenerator[str | CreateResult, None][source]#
- Return the next completion as a stream. 
 - remaining_tokens(messages: Sequence[Annotated[SystemMessage | UserMessage | AssistantMessage | FunctionExecutionResultMessage, FieldInfo(annotation=NoneType, required=True, discriminator='type')]], *, tools: Sequence[Tool | ToolSchema] = []) → int[source]#
 - total_usage() → RequestUsage[source]#
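The distinction between actual_usage and total_usage can be sketched in plain Python, assuming (as in the ChatCompletionClient interface) that actual_usage() reports the most recent request's token usage while total_usage() reports the running totals. This is an illustrative sketch, not the library's actual bookkeeping code:

```python
from dataclasses import dataclass


@dataclass
class Usage:
    # Mirrors the shape of RequestUsage (prompt and completion token counts).
    prompt_tokens: int = 0
    completion_tokens: int = 0


class UsageTracker:
    """Hypothetical usage bookkeeping: last request vs. cumulative totals."""

    def __init__(self):
        self._last = Usage()
        self._total = Usage()

    def record(self, prompt_tokens, completion_tokens):
        # Each request overwrites the "last" usage and adds to the totals.
        self._last = Usage(prompt_tokens, completion_tokens)
        self._total.prompt_tokens += prompt_tokens
        self._total.completion_tokens += completion_tokens

    def actual_usage(self):
        return self._last

    def total_usage(self):
        return self._total


tracker = UsageTracker()
tracker.record(5, 7)
tracker.record(3, 2)
print(tracker.actual_usage())  # Usage(prompt_tokens=3, completion_tokens=2)
print(tracker.total_usage())   # Usage(prompt_tokens=8, completion_tokens=9)
```

Note that reset() on the replay client clears this state along with the response index, so usage figures start from zero after a reset.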