autogen.agentchat.contrib.retrieve_user_proxy_agent
RetrieveUserProxyAgent Objects
class RetrieveUserProxyAgent(UserProxyAgent)
__init__
def __init__(name="RetrieveChatAgent",
             is_termination_msg: Optional[Callable[[Dict], bool]] = _is_termination_msg_retrievechat,
             human_input_mode: Optional[str] = "ALWAYS",
             retrieve_config: Optional[Dict] = None,
             **kwargs)
Arguments:

- name (str) - name of the agent.
- human_input_mode (str) - whether to ask for human inputs every time a message is received. Possible values are "ALWAYS", "TERMINATE", "NEVER". (1) When "ALWAYS", the agent prompts for human input every time a message is received. Under this mode, the conversation stops when the human input is "exit", or when is_termination_msg is True and there is no human input. (2) When "TERMINATE", the agent only prompts for human input when a termination message is received or the number of auto replies reaches max_consecutive_auto_reply. (3) When "NEVER", the agent will never prompt for human input. Under this mode, the conversation stops when the number of auto replies reaches max_consecutive_auto_reply or when is_termination_msg is True.
- retrieve_config (dict or None) - config for the retrieve agent. To use the default config, set to None. Otherwise, set to a dictionary with the following keys:
  - task (Optional, str): the task of the retrieve chat. Possible values are "code", "qa" and "default". The system prompt will be different for different tasks. The default value is "default", which supports both code and qa.
  - client (Optional, chromadb.Client): the chromadb client. If key not provided, a default client `chromadb.Client()` will be used.
  - docs_path (Optional, str): the path to the docs directory. It can also be the path to a single file, or the URL to a single file. If key not provided, a default path `./docs` will be used.
  - collection_name (Optional, str): the name of the collection. If key not provided, a default name `flaml-docs` will be used.
  - model (Optional, str): the model to use for the retrieve chat. If key not provided, a default model `gpt-4` will be used.
  - chunk_token_size (Optional, int): the chunk token size for the retrieve chat. If key not provided, a default size `max_tokens * 0.4` will be used.
  - context_max_tokens (Optional, int): the context max token size for the retrieve chat. If key not provided, a default size `max_tokens * 0.8` will be used.
  - chunk_mode (Optional, str): the chunk mode for the retrieve chat. Possible values are "multi_lines" and "one_line". If key not provided, a default mode `multi_lines` will be used.
  - must_break_at_empty_line (Optional, bool): chunks will only break at empty lines if True. Default is True. If chunk_mode is "one_line", this parameter will be ignored.
  - embedding_model (Optional, str): the embedding model to use for the retrieve chat. If key not provided, a default model `all-MiniLM-L6-v2` will be used. All available models can be found at https://www.sbert.net/docs/pretrained_models.html. The default model is a fast one; if you want a higher-performance model, `all-mpnet-base-v2` is recommended.
  - customized_prompt (Optional, str): the customized prompt for the retrieve chat. Default is None.
- **kwargs (dict) - other kwargs in UserProxyAgent.
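For illustration, a minimal sketch of constructing the agent with an explicit retrieve_config, using the keys documented above. It assumes chromadb is installed and that `./docs` contains the files to index; the task, docs path, and collection name shown are placeholder choices, not required values:

```python
import chromadb
from autogen.agentchat.contrib.retrieve_user_proxy_agent import RetrieveUserProxyAgent

# A minimal sketch; all retrieve_config values below are illustrative.
ragproxyagent = RetrieveUserProxyAgent(
    name="ragproxyagent",
    human_input_mode="NEVER",
    retrieve_config={
        "task": "qa",                          # "code", "qa" or "default"
        "client": chromadb.Client(),           # in-memory client, same as the documented default
        "docs_path": "./docs",                 # directory, single file, or URL to a single file
        "collection_name": "flaml-docs",       # the documented default collection name
        "chunk_mode": "multi_lines",           # or "one_line"
        "embedding_model": "all-MiniLM-L6-v2", # the documented default embedding model
    },
)
```

Omitting a key falls back to the documented default, so passing `retrieve_config=None` is equivalent to accepting every default listed above.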
generate_init_message
def generate_init_message(problem: str,
                          n_results: int = 20,
                          search_string: str = "")
Generate an initial message with the given problem and prompt.
Arguments:
- problem (str) - the problem to be solved.
- n_results (int) - the number of results to be retrieved.
- search_string (str) - only docs containing this string will be retrieved.
Returns:
str - the generated prompt ready to be sent to the assistant agent.
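A usage sketch, assuming the `ragproxyagent` constructed above and a configured `autogen.AssistantAgent` named `assistant` (both assumptions, not part of this API). You can build the prompt yourself with generate_init_message, or pass the same keyword arguments to initiate_chat, which forwards them to generate_init_message:

```python
# Build the retrieval-augmented prompt directly.
prompt = ragproxyagent.generate_init_message(
    problem="How can I use FLAML to perform a classification task?",
    n_results=10,                    # retrieve the 10 most similar chunks
    search_string="classification",  # only consider docs containing this string
)

# Or let initiate_chat call generate_init_message and send the result
# to the assistant in one step.
ragproxyagent.initiate_chat(
    assistant,
    problem="How can I use FLAML to perform a classification task?",
    n_results=10,
)
```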