AutoGen enables collaboration among multiple ChatGPTs for complex tasks.
TL;DR
OpenAI assistants are now integrated into AutoGen via GPTAssistantAgent.
This enables multiple OpenAI assistants, which form the backend of the now popular GPTs, to collaborate and tackle complex tasks.
Check out the example notebooks for reference.
Introduction
Last week, OpenAI introduced GPTs, giving users the ability to create custom ChatGPTs tailored to their needs. But what if these individual GPTs could collaborate to do even more? Fortunately, thanks to AutoGen, this is now a reality! AutoGen has been pioneering agents and supporting multi-agent workflows since earlier this year, and now (starting with version 0.2.0b5) we are introducing compatibility with the Assistants API, which is currently in beta preview.
To accomplish this, we've added a new (experimental) agent called the GPTAssistantAgent that lets you seamlessly add these new OpenAI assistants into AutoGen-based multi-agent workflows. This integration shows great potential and synergy, and we plan to continue enhancing it.
Installation
pip install autogen-agentchat~=0.2
Basic Example
Here's a basic example that uses a UserProxyAgent to interface with the GPTAssistantAgent.
First, import the new agent and set up the config_list:
from autogen import config_list_from_json
from autogen.agentchat.contrib.gpt_assistant_agent import GPTAssistantAgent
from autogen import UserProxyAgent
config_list = config_list_from_json("OAI_CONFIG_LIST")
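config_list_from_json reads the OAI_CONFIG_LIST environment variable or a file of that name, containing a JSON list of model configurations. A minimal sketch of such a file, assuming a single GPT-4 entry (the API key shown is a placeholder):

```json
[
    {
        "model": "gpt-4",
        "api_key": "sk-placeholder-replace-with-your-key"
    }
]
```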
Then simply define the OpenAI assistant agent and give it the task!
# Creates a new assistant using the Assistants API
gpt_assistant = GPTAssistantAgent(
    name="assistant",
    llm_config={
        "config_list": config_list,
        "assistant_id": None,
    },
)
user_proxy = UserProxyAgent(
    name="user_proxy",
    code_execution_config={"work_dir": "coding"},
    human_input_mode="NEVER",
)
user_proxy.initiate_chat(gpt_assistant, message="Print hello world")
GPTAssistantAgent supports both creating new OpenAI assistants and reusing existing ones (e.g., by providing an assistant_id).
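For example, reusing a previously created assistant only changes the llm_config. A minimal sketch, where "asst_abc123" is a hypothetical placeholder ID (in practice you would copy the real ID from the OpenAI dashboard or a previous run):

```python
# Sketch: reuse an existing OpenAI assistant instead of creating a new one.
# "asst_abc123" and the API key below are hypothetical placeholders.
config_list = [{"model": "gpt-4", "api_key": "sk-placeholder"}]

llm_config = {
    "config_list": config_list,
    # A non-None assistant_id attaches to that existing assistant;
    # assistant_id=None (as in the example above) creates a fresh one.
    "assistant_id": "asst_abc123",
}
```

Passing this llm_config to the GPTAssistantAgent constructor would then attach the agent to the existing assistant rather than creating a new one.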
Code Interpreter Example
GPTAssistantAgent allows you to specify OpenAI tools (e.g., function calls, code interpreter, etc.). The example below enables an assistant that can use the OpenAI code interpreter to solve tasks.
# Creates a new assistant with the code interpreter tool enabled
gpt_assistant = GPTAssistantAgent(
    name="assistant",
    llm_config={
        "config_list": config_list,
        "assistant_id": None,
        "tools": [
            {
                "type": "code_interpreter"
            }
        ],
    },
)
user_proxy = UserProxyAgent(
    name="user_proxy",
    code_execution_config={"work_dir": "coding"},
    human_input_mode="NEVER",
)
user_proxy.initiate_chat(gpt_assistant, message="Print hello world")
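Besides code_interpreter, the "tools" list also accepts function-calling specs in the OpenAI tools format. A minimal sketch of such a spec, where get_weather and its parameters are purely illustrative names (not part of AutoGen or OpenAI):

```python
# Illustrative function-calling tool spec in the OpenAI tools format.
# "get_weather" and its parameters are hypothetical examples.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Return the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            },
        },
    }
]
```

A list like this can be passed as the "tools" entry of llm_config; the corresponding Python callable would also need to be registered with the agent so it can run when the assistant requests it.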
Check out more examples here.
Limitations and Future Work
- Support for group chat managers backed by GPT assistants is pending.
- GPT assistants with multimodal capabilities haven't been released yet, but we are committed to supporting them.
Acknowledgements
GPTAssistantAgent was made possible through collaboration with @IANTHEREAL, Jiale Liu, Yiran Wu, Qingyun Wu, Chi Wang, and many other AutoGen maintainers.