# autogen.agentchat.contrib.retrieve_user_proxy_agent

## RetrieveUserProxyAgent Objects

```python
class RetrieveUserProxyAgent(UserProxyAgent)
```
### __init__

```python
def __init__(name="RetrieveChatAgent",
             is_termination_msg: Optional[Callable[[Dict], bool]] = _is_termination_msg_retrievechat,
             human_input_mode: Optional[str] = "ALWAYS",
             retrieve_config: Optional[Dict] = None,
             **kwargs)
```
**Arguments**:

- `name` _str_ - name of the agent.
- `human_input_mode` _str_ - whether to ask for human input every time a message is received. Possible values are "ALWAYS", "TERMINATE", "NEVER". (1) When "ALWAYS", the agent prompts for human input every time a message is received. Under this mode, the conversation stops when the human input is "exit", or when `is_termination_msg` is True and there is no human input. (2) When "TERMINATE", the agent only prompts for human input when a termination message is received or the number of auto replies reaches `max_consecutive_auto_reply`. (3) When "NEVER", the agent will never prompt for human input. Under this mode, the conversation stops when the number of auto replies reaches `max_consecutive_auto_reply` or when `is_termination_msg` is True.
- `retrieve_config` _dict or None_ - config for the retrieve agent. To use the default config, set it to None. Otherwise, set it to a dictionary with the following keys (see the example after this list):
  - `task` (Optional, str): the task of the retrieve chat. Possible values are "code", "qa" and "default". The system prompt will be different for different tasks. The default value is `default`, which supports both code and qa.
  - `client` (Optional, chromadb.Client): the chromadb client. If the key is not provided, a default client `chromadb.Client()` will be used.
  - `docs_path` (Optional, str): the path to the docs directory. It can also be the path to a single file, or the url to a single file. If the key is not provided, a default path `./docs` will be used.
  - `collection_name` (Optional, str): the name of the collection. If the key is not provided, a default name `flaml-docs` will be used.
  - `model` (Optional, str): the model to use for the retrieve chat. If the key is not provided, a default model `gpt-4` will be used.
  - `chunk_token_size` (Optional, int): the chunk token size for the retrieve chat. If the key is not provided, a default size `max_tokens * 0.4` will be used.
  - `context_max_tokens` (Optional, int): the context max token size for the retrieve chat. If the key is not provided, a default size `max_tokens * 0.8` will be used.
  - `chunk_mode` (Optional, str): the chunk mode for the retrieve chat. Possible values are "multi_lines" and "one_line". If the key is not provided, a default mode `multi_lines` will be used.
  - `must_break_at_empty_line` (Optional, bool): chunks will only break at empty lines if True. Default is True. If `chunk_mode` is "one_line", this parameter will be ignored.
  - `embedding_model` (Optional, str): the embedding model to use for the retrieve chat. If the key is not provided, a default model `all-MiniLM-L6-v2` will be used. All available models can be found at https://www.sbert.net/docs/pretrained_models.html. The default model is a fast model; if you want a higher-performance model, `all-mpnet-base-v2` is recommended.
  - `customized_prompt` (Optional, str): the customized prompt for the retrieve chat. Default is None.
- `**kwargs` _dict_ - other kwargs in UserProxyAgent.
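For illustration, here is a minimal construction sketch. Every `retrieve_config` key is optional and falls back to the defaults documented above; the specific values shown (the question-answering task, the chunk size, and so on) are assumptions for the example, not additional defaults.

```python
import chromadb

from autogen.agentchat.contrib.retrieve_user_proxy_agent import RetrieveUserProxyAgent

# A minimal sketch of constructing the agent. All retrieve_config values
# below are illustrative; omit any key to get the documented default.
ragproxyagent = RetrieveUserProxyAgent(
    name="RetrieveChatAgent",
    human_input_mode="NEVER",            # never prompt for human input
    retrieve_config={
        "task": "qa",                     # question answering over the docs
        "client": chromadb.Client(),      # in-memory chromadb client
        "docs_path": "./docs",            # directory (or single file/url) to index
        "collection_name": "flaml-docs",
        "model": "gpt-4",
        "chunk_mode": "multi_lines",
        "must_break_at_empty_line": True,
        "embedding_model": "all-MiniLM-L6-v2",
    },
)
```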
### generate_init_message

```python
def generate_init_message(problem: str, n_results: int = 20, search_string: str = "")
```
Generate an initial message with the given problem and prompt.
**Arguments**:

- `problem` _str_ - the problem to be solved.
- `n_results` _int_ - the number of results to be retrieved.
- `search_string` _str_ - only docs containing this string will be retrieved.
**Returns**:

- `str` - the generated prompt ready to be sent to the assistant agent.
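As a usage sketch, assuming `ragproxyagent` was constructed as in the earlier example; the question and search string below are placeholders, not part of this API:

```python
# generate_init_message retrieves up to n_results document chunks that
# contain search_string and embeds them in the first prompt.
prompt = ragproxyagent.generate_init_message(
    problem="How does FLAML perform hyperparameter tuning?",  # placeholder question
    n_results=20,
    search_string="hyperparameter",
)

# The returned prompt can then be sent as the opening message of a chat
# with a separately configured assistant agent, e.g. (assuming `assistant`
# is such an agent):
# ragproxyagent.initiate_chat(assistant, message=prompt)
```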