opto.utils.llm.AutoGenLLM.create#
- AutoGenLLM.create(**config: Any) → autogen.ModelClient.ModelClientResponseProtocol [source]#
Make a completion for a given config using the available clients. Besides the kwargs allowed in OpenAI's (or another provider's) client, the following additional kwargs are supported. The base config in each client is overridden by the config passed here.
- Parameters:
context (-) – The context used to instantiate the prompt or messages. Defaults to None. It must contain the keys used by the prompt template or the filter function. E.g., with prompt="Complete the following sentence: {prefix}" and context={"prefix": "Today I feel"}, the actual prompt will be: "Complete the following sentence: Today I feel". More examples can be found at [templating](/docs/Use-Cases/enhanced_inference#templating).
cache (-) – A Cache object to use for response caching. Defaults to None. Note that the cache argument overrides the legacy cache_seed argument: if cache is provided, cache_seed is ignored; if cache is not provided or is None, cache_seed is used.
agent (-) – The agent responsible for creating the completion, if the completion is requested by an agent.
- (Legacy) cache_seed (int | None) – An integer cache_seed is useful when implementing "controlled randomness" for the completion; use None for no caching. Note: this is a legacy argument, used only when the cache argument is not provided.
filter_func (-) – A function that takes in the context and the response and returns a boolean indicating whether the response is valid. See the example below.
allow_format_str_template (-) – Whether to allow format-string templates in the config. Defaults to False.
api_version (-) – The API version to use. Defaults to None. E.g., "2024-02-01".
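To illustrate the interplay between prompt and context described above: when allow_format_str_template is enabled, the substitution is equivalent to Python's str.format over the context keys. This is a minimal sketch of that behavior, not a call to the client itself; the values mirror the example in the parameter description.

```python
# Template and context from the `context` parameter description above.
prompt = "Complete the following sentence: {prefix}"
context = {"prefix": "Today I feel"}

# The actual prompt sent to the model is produced by standard
# str.format substitution over the context keys.
actual_prompt = prompt.format(**context)
print(actual_prompt)  # Complete the following sentence: Today I feel
```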
Example
>>> # filter_func example:
>>> def yes_or_no_filter(context, response):
...     return context.get("yes_or_no_choice", False) is False or any(
...         text in ["Yes.", "No."] for text in client.extract_text_or_completion_object(response)
...     )
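The filter above can be exercised on its own. In this self-contained sketch, extract_texts is a hypothetical stand-in for client.extract_text_or_completion_object, treating a response as a plain list of candidate strings:

```python
def extract_texts(response):
    # Hypothetical stand-in for client.extract_text_or_completion_object;
    # here a "response" is simply a list of candidate strings.
    return response

def yes_or_no_filter(context, response):
    # Valid when the caller did not request a yes/no answer, or when
    # at least one returned text is exactly "Yes." or "No.".
    return context.get("yes_or_no_choice", False) is False or any(
        text in ["Yes.", "No."] for text in extract_texts(response)
    )

print(yes_or_no_filter({"yes_or_no_choice": True}, ["Yes."]))   # True
print(yes_or_no_filter({"yes_or_no_choice": True}, ["Maybe"]))  # False
print(yes_or_no_filter({}, ["Maybe"]))                          # True
```

When the filter returns False for a response, create retries with the next configured client, which is why the filter must also accept contexts that never set yes_or_no_choice.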
- Raises:
- RuntimeError – If all declared custom model clients are not registered
- APIError – If any model client create call raises an APIError