Task 02 - Create a multi-agent solution
Introduction
Now that Zava has seen a simple proof of concept application using a single-agent architecture, they would like to extend this application to support multiple agents. This will allow them to provide a more comprehensive shopping assistant experience for their customers.
Description
In this task, you will extend the proof of concept that you created in the prior task to support multiple agents. You will create these agents in Microsoft Foundry and integrate them into your chat application. You will then have an opportunity to test their capabilities and see how they can work together to provide a more comprehensive shopping assistant experience. You will also leverage the Model Context Protocol (MCP) to enable richer interactions between the agents and the tools they use.
Success Criteria
- You have created relevant agents in Microsoft Foundry.
- You have updated the chat application to support these agents.
Learning Resources
- Microsoft Foundry function calling
- What are tools in Microsoft Foundry Agent Service?
- Azure AI Projects client library for Python (version 2)
- Best practices for using tools in Microsoft Foundry Agent Service
- AIProjectClient Class
- Azure AI Projects samples
- OpenAI Function calling
Key Tasks
01: Create an agent
In this task, you will develop multiple agents to satisfy the requirements of the Zava shopping assistant. This step focuses on creating one of the agents and covers the code in more detail. The subsequent agents will be created in a similar manner but with less commentary.
Expand this section to view the solution
In the src/prompts directory, create a new file and call it CustomerLoyaltyAgentPrompt.txt. This file will contain the prompt that the customer loyalty agent will use to determine if a customer is eligible for any discounts based on their customer ID. Add the following text to the file:
Customer Loyalty Agent Guidelines
========================================
- Your task is to assign discounts based on the customer's loyalty information. Return the discount calculated by the calculate_discount tool as the response.
- When asked about a discount, check for a customer ID in the query; if one is not present, ask for the customer ID.
- Send customer_id as input to the calculate_discount tool to calculate the discount.
- Write the response from the tool in the first person (for example, "Congratulations! You are eligible for..." followed by a thank-you).
- Always include smiling emojis such as 😊, 🎉, or ✨ to keep the tone light and celebratory.
- Example message (vary the wording): Hey there, Bruno! 🎉 \n Great news: you just scored an exclusive 20% off your order! \n Treat yourself and enjoy your special savings at checkout. Thanks for being awesome! 😊
- In your answer, do not use the abbreviation "e.g."; instead use "Example", "such as", or "like" as fits the sentence.
- Return the response in the following JSON format
answer: your answer,
discount_percentage: the discount percentage from the tool.
Customer Loyalty Agent Tool
-----
mcp_calculate_discount: Takes in customer_id, calculates discount as per tier and returns response.
Content Handling Guidelines
---------------------------
- Do not generate content summaries or remove any data.
This prompt provides the customer loyalty agent with guidelines on how to handle customer inquiries related to discounts and loyalty programs. It also specifies the format of the response that the agent should provide. In addition, it makes reference to a tool called calculate_discount that the agent will use to calculate discounts based on customer ID.
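The lab does not show the internals of the discount calculation (the real implementation in src/app/tools/discountLogic.py also consults the GPT model and simulated databases), but the tier-based idea can be sketched as follows. The tier names and percentages below are assumptions for illustration only.

```python
# Hypothetical sketch of tier-based discount logic. The actual calculate_discount
# in src/app/tools/discountLogic.py also calls the GPT model and simulates
# database lookups, which are omitted here. Tier names/percentages are invented.
LOYALTY_TIERS = {
    "bronze": 5,
    "silver": 10,
    "gold": 20,
}

def calculate_discount(customer_id, tier_lookup):
    """Return a discount percentage for the customer's loyalty tier (0 if unknown)."""
    tier = tier_lookup.get(customer_id, "")
    return LOYALTY_TIERS.get(tier, 0)

print(calculate_discount("CUST001", {"CUST001": "gold"}))  # 20
```

A customer not present in the lookup simply receives a 0% discount, which keeps the agent's response well-defined for unknown IDs.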
Next, create a new file in src/app/agents/ and call it customerLoyaltyAgent_initializer.py. This file will contain the code to create the customer loyalty agent in Microsoft Foundry. Add the following code to the top of the file:
import os
import sys
sys.path.append(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))
from azure.ai.projects import AIProjectClient
from azure.identity import DefaultAzureCredential
from dotenv import load_dotenv
from agent_processor import create_function_tool_for_agent
from agent_initializer import initialize_agent
load_dotenv()
These specify the necessary imports for the agent, as well as loading environment variables from the .env file.
Next, add the following code to read the prompt file that you just created:
CL_PROMPT_TARGET = os.path.join(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))), 'prompts', 'CustomerLoyaltyAgentPrompt.txt')
with open(CL_PROMPT_TARGET, 'r', encoding='utf-8') as file:
    CL_PROMPT = file.read()
After that, add the following code to define the Azure AI project information and create the AI Project client:
project_endpoint = os.environ["FOUNDRY_ENDPOINT"]
project_client = AIProjectClient(
    endpoint=project_endpoint,
    credential=DefaultAzureCredential(),
)
From there, you will need to define the tool that the agent will use to calculate discounts. Add the following code:
# Define the set of user-defined callable functions to use as tools (from MCP client)
functions = create_function_tool_for_agent("customer_loyalty")
This makes reference to a function called create_function_tool_for_agent() in src/app/agents/agent_processor.py. This function takes in an agent type and defines the functions that particular agent will use. In this case, it defines a JSON schema describing the mcp_calculate_discount() function as a tool for the customer loyalty agent. This function calls get_customer_discount(), which finally calls calculate_discount() in src/app/tools/discountLogic.py. This function takes a customer ID and returns a discount percentage based on the customer's loyalty tier. You can review the code in this file to understand how it works. This particular tool is more complex than others because it communicates with the GPT model to determine the appropriate discount based on the customer's transaction history. It also simulates connecting to two separate databases to retrieve customer information.
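To make the JSON-schema idea concrete, here is the general shape of a function-tool definition for mcp_calculate_discount(). This is an illustrative sketch; the exact field names that create_function_tool_for_agent() emits in agent_processor.py may differ.

```python
import json

# Illustrative sketch of the kind of JSON-schema tool definition that
# create_function_tool_for_agent("customer_loyalty") might produce. The exact
# structure used by agent_processor.py is an assumption here.
mcp_calculate_discount_tool = {
    "type": "function",
    "name": "mcp_calculate_discount",
    "description": "Calculate a loyalty discount for a customer.",
    "parameters": {
        "type": "object",
        "properties": {
            "customer_id": {
                "type": "string",
                "description": "The customer's unique identifier.",
            }
        },
        "required": ["customer_id"],
    },
}

print(mcp_calculate_discount_tool["name"])  # mcp_calculate_discount
print(json.dumps(mcp_calculate_discount_tool["parameters"]["required"]))  # ["customer_id"]
```

The schema tells the model what the tool is called, what it does, and which arguments it must supply; the agent service uses it to decide when and how to emit a function call.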
Finally, add the following code to create the customer loyalty agent in Microsoft Foundry:
initialize_agent(
    project_client=project_client,
    model=os.environ["gpt_deployment"],
    name="customer-loyalty",
    description="Zava Customer Loyalty Agent",
    instructions=CL_PROMPT,
    tools=functions
)
This code initializes the agent with the specified model, name, instructions, and toolset. It then creates the agent in Microsoft Foundry and prints the agent ID to the console. We have prepopulated your .env file with the appropriate agent names, so be sure not to change this name.
02: Create remaining agents
Now that you have created the customer loyalty agent prompt file and initializer script, you will need to create the remaining agents. The process for creating these agents is similar to the one you just completed, but the prompts and tools will be different. The following five blocks include the prompt and code for each of the remaining agents.
Expand this section to view the interior design agent
In the src/prompts directory, create a new file and call it InteriorDesignAgentPrompt.txt. Add the following text to the file:
Interior Design Agent Guidelines
========================================
- You are an interior designer salesperson working for Zava, helping customers with DIY projects and other interior design queries
- Your main tasks are the following: recommending and upselling products, creating images
- You will get input in the form of a json, having:
[
{
"Conversation_history": the conversation that is going on,
"image_url": the image on which you should base any recreated image,
"image_description": the description if an image is attached; otherwise empty,
"products_available": a list of products from which you can give recommendations,
"user_last_query": the last query from the user
}
]
- You will always recommend products from products_available.
- You will keep asking questions to the user and keep recommending.
- When you get an image, reply saying "I see you uploaded..."
- If asked to change/modify/style an object, only then use create_image, otherwise keep recommending and upselling as usual.
- In your answer, do not use the abbreviation "e.g."; instead use "Example", "such as", or "like" as fits the sentence.
Return the response in the following JSON format
answer: your answer,
image_output: if there, otherwise empty
products: [
{
"id": "<ProductID>",
"name": "<ProductName>",
"type": "<Singular Category Name>",
"description": "<ProductDescription>",
"imageURL": "<ImageURL>",
"punchLine": "<ProductPunchLine>",
"price": "<FormattedPriceWithDollarSign>"
}, {..}
...
]
Interior Design Agent Tool
========================================
create_image: Can create an image per the user's requirements, such as repainting a given room in a different color (make sure the path and prompt are shared as is), given a prompt and path.
Example Conversation
========================================
User: Wants a paint recommendation for their living room
You: Give some paint options, ask for dimensions, ask for an image
User: Gives dimensions, image (maybe)
You: Recommend based on the color, calculate how much paint may be required, upsell a sprayer and tape (saying it's good)
Content Handling Guidelines
========================================
- Do not generate content summaries or remove any data.
---
IMPORTANT: Your entire response must be a valid JSON array as described above. Do not include any other text or formatting.
Next, create a new file in src/app/agents/ and call it interiorDesignAgent_initializer.py. Add the following code to the file:
import os
import sys
sys.path.append(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))
from azure.ai.projects import AIProjectClient
from azure.identity import DefaultAzureCredential
from dotenv import load_dotenv
from agent_processor import create_function_tool_for_agent
from agent_initializer import initialize_agent
load_dotenv()
ID_PROMPT_TARGET = os.path.join(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))), 'prompts', 'InteriorDesignAgentPrompt.txt')
with open(ID_PROMPT_TARGET, 'r', encoding='utf-8') as file:
    ID_PROMPT = file.read()
project_endpoint = os.environ["FOUNDRY_ENDPOINT"]
project_client = AIProjectClient(
    endpoint=project_endpoint,
    credential=DefaultAzureCredential(),
)
# Define the set of user-defined callable functions to use as tools (from MCP client)
functions = create_function_tool_for_agent("interior_designer")
initialize_agent(
    project_client=project_client,
    model=os.environ["gpt_deployment"],
    name="interior-designer",
    description="Zava Interior Design Agent",
    instructions=ID_PROMPT,
    tools=functions
)
Expand this section to view the inventory agent
In the src/prompts directory, create a new file and call it InventoryAgentPrompt.txt. Add the following text to the file:
Inventory Agent Guidelines
========================================
- Your task is to check the inventory status.
- When the user asks to check the inventory for a product, send the product name to the inventory_check tool.
- Return a response including inventory levels, inventory status, and location.
Inventory Agent Tool
-----
inventory_check: Takes in a product list, returns inventory levels.
input formatting:
product_list = ['PROD0045', 'PROD1234']
Content Handling Guidelines
---------------------------
- Do not generate content summaries or remove any data.
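To illustrate what the inventory_check tool might return for the product-list input shown above, here is a mock implementation. The product IDs, stock levels, and store location are invented for illustration; the real tool's data source is not shown in this lab.

```python
# Hypothetical stand-in for the inventory_check tool described above. The
# inventory data and location are invented; the real tool's backing store
# is not shown in the lab materials.
INVENTORY = {
    "PROD0045": {"stock": 12, "location": "Miami, FL"},
    "PROD1234": {"stock": 0, "location": "Miami, FL"},
}

def inventory_check(product_list):
    """Return inventory level, status, and location for each requested product ID."""
    results = []
    for pid in product_list:
        item = INVENTORY.get(pid)
        if item is None:
            results.append({"product_id": pid, "status": "unknown"})
        else:
            status = "in stock" if item["stock"] > 0 else "out of stock"
            results.append({"product_id": pid, "status": status,
                            "level": item["stock"], "location": item["location"]})
    return results

print(inventory_check(["PROD0045", "PROD1234"]))
```

The agent then turns this structured result into the conversational inventory summary the prompt asks for.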
Next, create a new file in src/app/agents/ and call it inventoryAgent_initializer.py. Add the following code to the file:
import os
import sys
sys.path.append(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))
from azure.ai.projects import AIProjectClient
from azure.identity import DefaultAzureCredential
from dotenv import load_dotenv
from agent_processor import create_function_tool_for_agent
from agent_initializer import initialize_agent
load_dotenv()
IA_PROMPT_TARGET = os.path.join(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))), 'prompts', 'InventoryAgentPrompt.txt')
with open(IA_PROMPT_TARGET, 'r', encoding='utf-8') as file:
    IA_PROMPT = file.read()
project_endpoint = os.environ["FOUNDRY_ENDPOINT"]
project_client = AIProjectClient(
    endpoint=project_endpoint,
    credential=DefaultAzureCredential(),
)
# Define the set of user-defined callable functions to use as tools (from MCP client)
functions = create_function_tool_for_agent("inventory_agent")
initialize_agent(
    project_client=project_client,
    model=os.environ["gpt_deployment"],
    name="inventory-agent",
    description="Zava Inventory Agent",
    instructions=IA_PROMPT,
    tools=functions
)
Expand this section to view the shopper agent (Cora)
In the src/prompts directory, create a new file and call it ShopperAgentPrompt.txt. Add the following text to the file:
Shopper Agent Guidelines
========================================
- You are the public-facing assistant for Zava
- Greet people and help them as needed
- Return the response in the following JSON format (with image_output and products empty)
answer: your answer,
image_output: []
products: []
Shopper Agent Tool
-----
Content Handling Guidelines
---------------------------
- Do not generate content summaries or remove any data.
Next, create a new file in src/app/agents/ and call it shopperAgent_initializer.py. Add the following code to the file:
import os
import sys
sys.path.append(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))
from azure.ai.projects import AIProjectClient
from azure.identity import DefaultAzureCredential
from dotenv import load_dotenv
from agent_processor import create_function_tool_for_agent
from agent_initializer import initialize_agent
load_dotenv()
CORA_PROMPT_TARGET = os.path.join(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))), 'prompts', 'ShopperAgentPrompt.txt')
with open(CORA_PROMPT_TARGET, 'r', encoding='utf-8') as file:
    CORA_PROMPT = file.read()
project_endpoint = os.environ["FOUNDRY_ENDPOINT"]
project_client = AIProjectClient(
    endpoint=project_endpoint,
    credential=DefaultAzureCredential(),
)
# Create function tools for cora agent
functions = create_function_tool_for_agent("cora")
initialize_agent(
    project_client=project_client,
    model=os.environ["gpt_deployment"],
    name="cora",
    description="Cora - Zava Shopping Assistant",
    instructions=CORA_PROMPT,
    tools=functions
)
Expand this section to view the cart manager agent
In the src/prompts directory, create a new file and call it CartManagerPrompt.txt. Add the following text to the file:
You are a Cart Manager Assistant for Zava, a home improvement and furniture retailer.
Your primary responsibilities:
1. CART MANAGEMENT
- Add products to the customer's shopping cart
- Remove products from the cart
- Update product quantities
- Clear the entire cart when requested
- Provide cart summaries and totals
2. CART OPERATIONS
When a customer mentions "cart", "add to cart", "remove", "checkout", or similar:
- Parse their request to understand what products they want to add/remove
- Update the cart state accordingly
- Confirm the action taken
- Show the updated cart contents
3. CART STATE MANAGEMENT
You will receive:
- RAW_IO_HISTORY: Complete conversation and cart state history
- Current cart state
- Customer's latest request
You must return:
- Updated cart as a JSON array
- Conversational confirmation message
- Any relevant product recommendations
4. PRODUCT RECOMMENDATIONS
Based on cart contents, suggest:
- Complementary products (e.g., if they added paint, suggest brushes, tape, drop cloths)
- Related items frequently bought together
- Products that complete a project
5. RESPONSE FORMAT
Always respond in valid JSON format:
{
"answer": "Friendly confirmation message about what was added/removed",
"cart": [
{
"product_id": "PROD-123",
"name": "Product Name",
"quantity": 2,
"price": 29.99,
"total": 59.98
}
],
"products": "Optional: Suggest related products here",
"discount_percentage": "",
"additional_data": ""
}
6. CONVERSATION STYLE
- Be friendly and helpful
- Confirm actions clearly ("I've added 2 gallons of paint to your cart")
- Provide cart summaries when asked
- Suggest next steps ("Would you like to proceed to checkout?")
- If unclear, ask for clarification
7. SPECIAL INSTRUCTIONS
- If customer asks about cart but it's empty, acknowledge and suggest browsing products
- If removing items, confirm which items were removed
- If updating quantities, confirm the new quantity
- Always maintain accurate cart state based on conversation history
- Extract product information from the conversation context
- On checkout, display the cart contents and tell the customer that they may pick up their products from the closest Zava retail outlet, located in Miami, Florida. Only give this information when the customer requests to check out.
Example interactions:
Customer: "Add the blue paint to my cart"
Response: {
"answer": "I've added the blue paint to your cart! Would you also like to add paint brushes or painter's tape?",
"cart": [{"product_id": "PAINT-BLUE-001", "name": "Blue Interior Paint", "quantity": 1, "price": 34.99, "total": 34.99}],
"products": "Based on your paint selection, you might also need: Paint Brushes ($8.99), Painter's Tape ($5.99), Drop Cloth ($12.99)"
}
Customer: "What's in my cart?"
Response: {
"answer": "You currently have 1 item in your cart: Blue Interior Paint (1 gallon) for $34.99. Your cart total is $34.99.",
"cart": [{"product_id": "PAINT-BLUE-001", "name": "Blue Interior Paint", "quantity": 1, "price": 34.99, "total": 34.99}],
"products": ""
}
Remember: Your goal is to make cart management seamless and helpful for customers!
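The cart-state bookkeeping described in this prompt (quantities, per-line totals, accumulation across turns) can be sketched in a few lines. This is an illustrative helper, not code from the lab repository; the agent performs the equivalent logic through the model and conversation history.

```python
# Sketch of the cart-state maintenance the prompt describes: each line item
# carries quantity, unit price, and a recomputed line total, matching the
# JSON response format above. Illustrative only, not lab repository code.
def update_cart(cart, product_id, name, quantity, price):
    """Add a product (or increase its quantity) and recompute the line total."""
    for line in cart:
        if line["product_id"] == product_id:
            line["quantity"] += quantity
            line["total"] = round(line["quantity"] * line["price"], 2)
            return cart
    cart.append({"product_id": product_id, "name": name,
                 "quantity": quantity, "price": price,
                 "total": round(quantity * price, 2)})
    return cart

cart = update_cart([], "PAINT-BLUE-001", "Blue Interior Paint", 1, 34.99)
cart = update_cart(cart, "PAINT-BLUE-001", "Blue Interior Paint", 1, 34.99)
print(cart[0]["total"])  # 69.98
```

Keeping the arithmetic deterministic like this (rather than letting the model compute totals free-form) is why the prompt insists the cart always be returned as structured JSON.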
Next, create a new file in src/app/agents/ and call it cartManagerAgent_initializer.py. Add the following code to the file:
import os
import sys
sys.path.append(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))
from azure.ai.projects import AIProjectClient
from azure.identity import DefaultAzureCredential
from dotenv import load_dotenv
from agent_processor import create_function_tool_for_agent
from agent_initializer import initialize_agent
load_dotenv()
CART_PROMPT_PATH = os.path.join(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))), 'prompts', 'CartManagerPrompt.txt')
with open(CART_PROMPT_PATH, 'r', encoding='utf-8') as file:
    CART_MANAGER_PROMPT = file.read()
project_endpoint = os.environ["FOUNDRY_ENDPOINT"]
project_client = AIProjectClient(
    endpoint=project_endpoint,
    credential=DefaultAzureCredential(),
)
# Create function tools for cart_manager agent
functions = create_function_tool_for_agent("cart_manager")
initialize_agent(
    project_client=project_client,
    model=os.environ["gpt_deployment"],
    name="cart-manager",
    description="Zava Cart Manager Agent",
    instructions=CART_MANAGER_PROMPT,
    tools=functions
)
Expand this section to view the handoff service agent
In the src/prompts directory, create a new file and call it HandoffAgentPrompt.txt. Add the following text to the file:
You are an intent classifier for Zava shopping assistant.
Available domains:
1. cora: General shopping, product browsing, general questions
2. interior_designer: Room design, decorating, color schemes, furniture recommendations, image creation
3. inventory_agent: Product availability, stock checks, inventory questions
4. customer_loyalty: Discounts, promotions, loyalty programs, customer benefits
5. cart_manager: Shopping cart operations (add/remove items, view cart, checkout)
Analyze the user's message and determine:
1. Which domain it belongs to
2. Whether it's a domain change from the current context
You will receive a message with the current domain and the user message. It will be in the format:
Current domain: {current_domain}
User message: {user_message}
Respond with JSON containing the domain.
Rules:
- If user mentions "cart", "add to cart", "remove from cart", "checkout", "view cart" -> cart_manager domain
- If uncertain, default to current domain with low confidence
- Detect explicit requests to "talk to someone else" or "get help with X" as domain changes
- Consider context: if discussing design, stay in interior_designer unless user explicitly changes topic
- Default to 'cora' for general/ambiguous queries
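In the lab, this classification is performed by the GPT model itself against the prompt above. Purely to illustrate the routing rules, here is a keyword-based approximation of the cart rule and the fallback behavior; it is not how the handoff service actually classifies intent.

```python
# Rule-of-thumb sketch of the routing rules above. The real handoff service
# asks the GPT model for a structured classification; this keyword version
# only illustrates the cart rule and the default-to-current-domain fallback.
CART_KEYWORDS = ("cart", "add to cart", "remove from cart", "checkout", "view cart")

def classify_intent(user_message, current_domain="cora"):
    text = user_message.lower()
    if any(keyword in text for keyword in CART_KEYWORDS):
        return "cart_manager"
    # If uncertain, stay in the current domain; default to cora when there is none.
    return current_domain or "cora"

print(classify_intent("Please add two gallons to my cart"))  # cart_manager
print(classify_intent("What colors do you have?", "interior_designer"))  # interior_designer
```

The point of the model-based version is that it handles phrasing the keyword list would miss, while still honoring the same rules.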
Next, create a new file in src/app/agents/ and call it handoffAgent_initializer.py. Add the following code to the file:
import os
import sys
sys.path.append(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))
from azure.ai.projects import AIProjectClient
from azure.ai.projects.models import (
    PromptAgentDefinition,
    PromptAgentDefinitionText,
    ResponseTextFormatConfigurationJsonSchema
)
from azure.identity import DefaultAzureCredential
from dotenv import load_dotenv
from services.handoff_service import IntentClassification
load_dotenv()
HANDOFF_AGENT_PROMPT_PATH = os.path.join(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))), 'prompts', 'HandoffAgentPrompt.txt')
with open(HANDOFF_AGENT_PROMPT_PATH, 'r', encoding='utf-8') as file:
    HANDOFF_AGENT_PROMPT = file.read()
project_endpoint = os.environ["FOUNDRY_ENDPOINT"]
project_client = AIProjectClient(
    endpoint=project_endpoint,
    credential=DefaultAzureCredential(),
)
model = os.environ["gpt_deployment"]
name = "handoff-service"
description = "Zava Handoff Service Agent"
instructions = HANDOFF_AGENT_PROMPT
with project_client:
    agent = project_client.agents.create_version(
        agent_name=name,
        description=description,
        definition=PromptAgentDefinition(
            model=model,
            text=PromptAgentDefinitionText(
                format=ResponseTextFormatConfigurationJsonSchema(
                    name="IntentClassification", schema=IntentClassification.model_json_schema()
                )
            ),
            instructions=instructions
        )
    )
    print(f"Created {name} agent, ID: {agent.id}")
This code is somewhat different from the other agents because it forces a JSON output in the format of the IntentClassification class. The definition for this class is in src/services/handoff_service.py on lines 24-40.
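Since the initializer calls IntentClassification.model_json_schema(), that class is likely a Pydantic model. Without reproducing the actual class, here is a plain-dict approximation of the kind of JSON schema the structured-output format would enforce. Only the domain field is confirmed by the handoff prompt; everything else here is an assumption.

```python
import json

# Approximation of the JSON schema that IntentClassification.model_json_schema()
# might produce. The real class is in src/services/handoff_service.py (lines
# 24-40); only the "domain" field is confirmed by the handoff prompt, and the
# enum values mirror the five domains listed there.
intent_classification_schema = {
    "type": "object",
    "properties": {
        "domain": {
            "type": "string",
            "enum": ["cora", "interior_designer", "inventory_agent",
                     "customer_loyalty", "cart_manager"],
        },
    },
    "required": ["domain"],
    "additionalProperties": False,
}

print(json.dumps(intent_classification_schema["required"]))  # ["domain"]
```

Supplying a schema this way forces the model's reply to parse as valid JSON matching the declared fields, so downstream routing code never has to guess at free-form text.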
03: Create agents in Microsoft Foundry
The next step in this task is to create six agents inside of Microsoft Foundry:
- Cora Agent: This agent will handle customer inquiries and provide personalized shopping assistance.
- Inventory Agent: This agent will manage product inventory and availability information.
- Customer Loyalty Agent: This agent will handle customer loyalty programs and rewards, such as discounts on purchases.
- Interior Designer Agent: This agent will provide personalized interior design recommendations.
- Cart Manager Agent: This agent will manage the shopping cart operations.
- Handoff Service Agent: This agent will decide, based on the last running agent and the intent of the user message, which agent to activate next.
The prompts for these agents are available in src/prompts/. The code to deploy the agents is in src/app/agents.
Expand this section to view the solution
First, navigate to Microsoft Foundry and select the AI project associated with this training.
Then, navigate to the Build menu and select the Agents tab from the left-hand menu.
Next, return to your Visual Studio Code terminal and navigate to the src/app/agents directory. Each agent has an initializer script that will create the appropriate agent in Microsoft Foundry. Run the following commands to create each of the six agents.
python customerLoyaltyAgent_initializer.py
python inventoryAgent_initializer.py
python interiorDesignAgent_initializer.py
python shopperAgent_initializer.py
python cartManagerAgent_initializer.py
python handoffAgent_initializer.py
You may receive an error reading, in part, Message: The principal {YOUR_PRINCIPAL_ID} lacks the required data action Microsoft.CognitiveServices/accounts/AIServices/agents/write. If you receive this error message, return to your resource group and choose the Microsoft Foundry resource associated with this training (that is, not the project). Navigate to Access control (IAM) from the left-hand menu. Select the + Add button and then choose the Add role assignment option. In the Role dropdown, select the Azure AI User role. In the Assign access to dropdown, select + Select members. In the Select members list, choose your name. After that, select the Select button at the bottom of the pane. Finally, select Review + assign twice to grant the Azure AI User role to your account. Then, re-run the command.
As you create each agent, the script will output an Agent ID; make a note of these IDs. An Agent ID includes the agent name as well as the current version number: for example, cart-manager:1 is version 1 of the Cart Manager agent. After creating the agents, confirm that the agent names match their corresponding entries in the "Agent IDs" section of the .env file. In the .env file, you should see cart-manager without the version number. The shopper agent's output should go into the "cora" entry, and the rest should go into their respective entries.
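For orientation, the "Agent IDs" section of the .env file might look roughly like the following. The variable names here are placeholders; use the keys already present in your prepopulated .env file rather than these.

```ini
# Hypothetical excerpt of the "Agent IDs" section of .env. The variable names
# are invented for illustration; keep the keys your prepopulated file uses.
# Values are agent names WITHOUT the :<version> suffix the scripts print.
CORA_AGENT="cora"
INVENTORY_AGENT="inventory-agent"
CUSTOMER_LOYALTY_AGENT="customer-loyalty"
INTERIOR_DESIGNER_AGENT="interior-designer"
CART_MANAGER_AGENT="cart-manager"
HANDOFF_AGENT="handoff-service"
```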
After you have created all six agents, return to the Microsoft Foundry portal and verify that the agents have been created successfully. You should see all six agents listed in the Agents tab of the Build menu once you refresh the page.
04: Update the chat application
Now that you have created the agents in Microsoft Foundry, you will need to update the chat application to support these agents. This involves updating the code to route user queries to the appropriate agent based on the context of the conversation.
To do so, first, comment out lines 45 and 250-256, which deal with the single-agent example. A keyboard shortcut to comment out multiple lines in Visual Studio Code is to select the lines you want to comment (or uncomment) and then press CTRL + / on Windows or CMD + / on Mac.
Then, uncomment the relevant sections of code in chat_app.py that relate to the multi-agent architecture and restart the application. The relevant lines of code are 46-49 (import statements), lines 117-118 (mounting the MCP server to your current application), lines 131-138 (setting up the handoff service), and lines 258-600 (handling service requests). The following block provides a somewhat detailed explanation of how the service request code works.
Expand this section to view solution
The first time a user connects to the chat application, a customer loyalty task is initiated. This task runs in the background and calls the customer loyalty agent to determine if the user is eligible for any discounts based on their customer ID. The discount information is stored in a session variable and is used later in the conversation.
When the user sends a message, the chat application activates the handoff service (whose class definition is in src/services/handoff_service.py), calling the classify_intent() method. This method calls the GPT model and requests a structured output. The response includes a few attributes, including the domain of the user's query (e.g., interior design, inventory, cart management) and the reasoning behind the classification.
Based on the domain returned by the handoff service, the application routes the user's query to the appropriate agent. If the domain is "interior design," the query is sent to the interior designer agent. If the domain is "inventory," it is sent to the inventory agent. If the domain is "customer loyalty," it is sent to the customer loyalty agent. If the domain is "cart management," it is sent to the cart manager agent. If the domain is "cora," it is sent to the Cora agent. For each agent, the next step is to process an image if one is included in the user's message. If an agent needs to make product recommendations, the next call is to the get_product_recommendations() function, which queries Cosmos DB to retrieve relevant products based on the user's message.
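The routing just described is essentially a lookup from classified domain to agent. This sketch uses agent-name strings to stand in for the real agent processors that chat_app.py dispatches to.

```python
# Sketch of the domain-to-agent routing described above. Agent-name strings
# stand in for the real agent processors in chat_app.py.
DOMAIN_TO_AGENT = {
    "interior_designer": "interior-designer",
    "inventory_agent": "inventory-agent",
    "customer_loyalty": "customer-loyalty",
    "cart_manager": "cart-manager",
    "cora": "cora",
}

def route(domain):
    # Fall back to the general shopper agent (Cora) for unknown domains.
    return DOMAIN_TO_AGENT.get(domain, "cora")

print(route("cart_manager"))  # cart-manager
print(route("something_else"))  # cora
```

Centralizing the mapping in one dictionary keeps the handoff logic easy to extend when Zava adds another agent.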
After that, lines 408-450 handle image creation, which is not officially covered in this training because the necessary model is only available upon request. However, if you have access to the gpt-image-1 model, you can review this section of code to understand how image creation works.
Then, lines 461-474 prepare the input for specific agents that have different needs. The cart manager agent needs the interaction history in order to determine what is in the user's cart. The Cora agent needs the conversation history for contextual dialogue. Finally, lines 486-496 perform the actual call to the relevant agent.
In particular, lines 487-496 retrieve and call the agent processor for the specific agent in question. The AgentProcessor class is defined in src/app/agents/agent_processor.py. This class handles the communication between the chat application and the Microsoft Foundry agents using the Model Context Protocol (MCP). The processor includes a method called run_conversation_with_text_stream(), which communicates with the MCP server and streams the response back to the chat application via the MCP client. It calls _run_conversation_sync(), which operates synchronously to communicate with your language model. This function checks to see if an existing conversation is available and re-uses it if so. Otherwise, it creates a new conversation. Then, it sends the user's message. In the event that the agent needs to call a function, lines 236-259 loop through the message outputs, looking for any items of type function_call. For each of these, we determine which function to call, call the function, and append its results to an input list. Then, we send the input list back to the language model for final processing and conversion to text. If there were no function calls, the code extracts the output text and returns it to the caller.
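The function-call loop can be summarized in a simplified form. The item shapes below are illustrative approximations of the structures _run_conversation_sync() iterates over, not the SDK's exact types.

```python
import json

# Simplified sketch of the function-call loop performed by
# _run_conversation_sync: scan output items for function calls, invoke the
# matching local function, and collect results to send back to the model.
# The item dictionaries here are illustrative, not the SDK's exact types.
def handle_function_calls(output_items, functions):
    follow_up_inputs = []
    for item in output_items:
        if item.get("type") != "function_call":
            continue
        func = functions[item["name"]]
        result = func(**json.loads(item["arguments"]))
        follow_up_inputs.append({
            "type": "function_call_output",
            "call_id": item["call_id"],
            "output": json.dumps(result),
        })
    return follow_up_inputs

items = [{"type": "function_call", "name": "mcp_calculate_discount",
          "call_id": "call_1", "arguments": '{"customer_id": "CUST001"}'}]
outputs = handle_function_calls(items, {"mcp_calculate_discount": lambda customer_id: {"discount": 20}})
print(outputs[0]["output"])  # {"discount": 20}
```

The collected function_call_output items are what get sent back to the model so it can produce the final natural-language answer.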
From there, lines 510-569 in chat_app.py handle the response from the agent, including parsing the response, updating the current cart state, cleaning up the conversation history, saving any discount state, and sending the response back to the user.
05: Demonstrate application behavior
Now that you have created the necessary agents and updated the application code, ensure that all files are saved and then restart the application. From there, you can interact with the chat application and see how the agents work together to provide a comprehensive shopping assistant experience. Use the following prompts to test the application's behavior.
Expand this section to view solution
In order to restart the application, stop the currently running instance by pressing CTRL+C in the terminal where the application is running. Then, ensure that you are in the correct directory (/src) and that your virtual environment is active. Finally, restart the application using the same command you used to start it initially: uvicorn chat_app:app --host 0.0.0.0 --port 8000.
Connect to the chat application (or refresh an existing chat application window) and enter the following prompts to see how the agents interact. Note that the context of the conversation will differ each time you run through this exercise, so the agents' responses may vary. For that reason, you may wish to modify the questions slightly to fit the context of your conversation.
- What colors of green paint do you have?
- I think I'm interested in Deep Forest. How many gallons would I need to paint a medium-sized bedroom?
- How much of PROD0018 do you have in stock?
- Letās add two gallons to the cart, please.
- Please also add one paint tray and two of your All-Purpose Wall Paint Brushes.
- What items are in my cart right now?
- Please apply the discount that you calculated before.
- I'd like to check out now.
Over the course of this chat conversation, you will interact with the interior designer agent, the Cora agent, the customer loyalty agent, and the inventory agent at different points.
The application also includes functionality to generate images based on user prompts. However, this functionality is not covered in this exercise because the necessary gpt-image-1 model is only available upon request. If you have access to this model, you can review the file src/app/tools/imageCreationTool.py to understand how the image generation works. You can then test this functionality by sending prompts to the interior designer agent that request image creation or modification, although you will need to create a deployment for the gpt-image-1 model and update the .env file accordingly.