Task 02 - Create a multi-agent solution
Introduction
Now that Zava has seen a simple proof of concept application using a single-agent architecture, they would like to extend this application to support multiple agents. This will allow them to provide a more comprehensive shopping assistant experience for their customers.
Description
In this task, you will extend the proof of concept that you created in the prior task to support multiple agents. You will create these agents in Microsoft Foundry and integrate them into your chat application. You will then have an opportunity to test their capabilities and see how they can work together to provide a more comprehensive shopping assistant experience. You will also leverage the Model Context Protocol (MCP) to enable richer interactions between the agents and the tools they use.
Success Criteria
- You have created relevant agents in Microsoft Foundry.
- You have updated the chat application to support these agents.
Learning Resources
- Microsoft Foundry function calling
- What are tools in Microsoft Foundry Agent Service?
- Azure AI Projects client library for Python (version 2)
- Best practices for using tools in Microsoft Foundry Agent Service
- AIProjectClient Class
- Azure AI Projects samples
- OpenAI Function calling
Key Tasks
01: Create an agent
In this task, you will develop multiple agents to satisfy the requirements of the Zava shopping assistant. This step focuses on creating one of the agents and covers the code in more detail. The subsequent agents will be created in a similar manner but with less commentary.
Expand this section to view the solution
In the src/prompts directory, create a new file and call it CustomerLoyaltyAgentPrompt.txt. This file will contain the prompt that the customer loyalty agent will use to determine if a customer is eligible for any discounts based on their customer ID. Add the following text to the file:
Customer Loyalty Agent Guidelines
========================================
- Your task is to assign discounts based on the customer's loyalty information. Return the discount calculated by the calculate_discount tool as the response.
- Check for a customer ID in the query when asked about a discount; if one is missing, ask for the customer ID.
- Send customer_id as input to the calculate_discount tool to calculate the discount.
- Write the response from the tool in the first person, such as "Congratulations! You are eligible for... Thank you!"
- Always include smile emojis like 😊, 🎉, or ✨ to keep the tone light and celebratory.
- Example message (vary the wording): Hey there, Bruno! 🎉 \n Great news: you just scored an exclusive 20% off your order! \n Treat yourself and enjoy your special savings at checkout. Thanks for being awesome! 😊
- In your answer, do not use the abbreviation "e.g."; instead use "Example", "such as", or "like" as fits the sentence.
- Return the response in the following JSON format:
answer: your answer,
discount_percentage: the discount percentage from the tool.
Customer Loyalty Agent Tool
-----
mcp_calculate_discount: Takes in a customer_id, calculates the discount according to the customer's loyalty tier, and returns the response.
Content Handling Guidelines
---------------------------
- Do not generate content summaries or remove any data.
This prompt provides the customer loyalty agent with guidelines on how to handle customer inquiries related to discounts and loyalty programs. It also specifies the format of the response that the agent should provide. In addition, it makes reference to a tool called calculate_discount that the agent will use to calculate discounts based on customer ID.
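The prompt delegates the actual discount computation to the tool. As a mental model, the tier-based lookup such a tool performs can be sketched as follows. The tier names and percentages here are made up for illustration; the repository's actual calculate_discount() in src/app/tools/discountLogic.py is more involved, as described later in this task.

```python
# Illustrative sketch only: the real calculate_discount() consults a GPT
# model and simulated databases. Tier names and percentages below are
# hypothetical, not the repository's actual values.
LOYALTY_TIERS = {"bronze": 5, "silver": 10, "gold": 20}

def calculate_discount_sketch(customer_id: str, customer_tiers: dict) -> int:
    """Return a discount percentage for a customer based on loyalty tier."""
    tier = customer_tiers.get(customer_id, "")
    # Unknown customers (or unknown tiers) get no discount
    return LOYALTY_TIERS.get(tier, 0)
```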
Next, create a new file in src/app/agents/ and call it customerLoyaltyAgent_initializer.py. This file will contain the code to create the customer loyalty agent in Microsoft Foundry. Add the following code to the top of the file:
import os
import sys
sys.path.append(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))
from azure.ai.projects import AIProjectClient
from azure.identity import DefaultAzureCredential
from dotenv import load_dotenv
from tool_definitions import get_tools_for_agent
from agent_initializer import initialize_agent
import asyncio
load_dotenv()
These lines specify the necessary imports for the agent and load environment variables from the .env file.
Next, add the following code to read the prompt file that you just created:
CL_PROMPT_TARGET = os.path.join(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))), 'prompts', 'CustomerLoyaltyAgentPrompt.txt')
with open(CL_PROMPT_TARGET, 'r', encoding='utf-8') as file:
CL_PROMPT = file.read()
After that, add the following code to define the Azure AI project information and create the AI Project client:
project_endpoint = os.environ["FOUNDRY_ENDPOINT"]
project_client = AIProjectClient(
endpoint=project_endpoint,
credential=DefaultAzureCredential(),
)
From there, you will need to define the tool that the agent will use to calculate discounts. Add the following code:
# Define the set of user-defined callable functions to use as tools (from MCP client)
functions = asyncio.run(get_tools_for_agent("customer_loyalty"))
This makes reference to a function called get_tools_for_agent() in src/app/agents/tool_definitions.py. This async function discovers the available tools from the MCP server and returns the FunctionTool objects that the specified agent type needs. In this case, it returns the mcp_calculate_discount tool for the customer loyalty agent. When invoked by the agent, mcp_calculate_discount() calls get_customer_discount() on the MCP server, which in turn calls calculate_discount() in src/app/tools/discountLogic.py. This function takes a customer ID and returns a discount percentage based on the customer's loyalty tier. You can review the code in this file to understand how it works. This particular tool is more complex than others because it communicates with the GPT model to determine the appropriate discount based on the customer's transaction history. It also simulates connecting to two separate databases to retrieve customer information.
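As a rough mental model, a function like get_tools_for_agent() needs some mapping from agent type to the tool names that agent should receive. The sketch below is an assumption for illustration only: the actual implementation in src/app/agents/tool_definitions.py discovers tools dynamically from the MCP server, and all tool names here other than mcp_calculate_discount are hypothetical.

```python
# Hypothetical per-agent tool mapping; the real tool_definitions.py
# discovers these from the MCP server rather than hard-coding them.
AGENT_TOOL_MAP = {
    "customer_loyalty": ["mcp_calculate_discount"],
    "inventory_agent": ["mcp_inventory_check"],    # name assumed
    "interior_designer": ["mcp_create_image"],     # name assumed
}

def tool_names_for_agent(agent_type: str) -> list:
    """Return the MCP tool names a given agent type should be given."""
    return AGENT_TOOL_MAP.get(agent_type, [])
```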
Finally, add the following code to create the customer loyalty agent in Microsoft Foundry:
initialize_agent(
project_client=project_client,
model=os.environ["gpt_deployment"],
name="customer-loyalty",
description="Zava Customer Loyalty Agent",
instructions=CL_PROMPT,
tools=functions
)
This code initializes the agent with the specified model, name, instructions, and toolset. It then creates the agent in Microsoft Foundry and prints the agent ID to the console. We have prepopulated your .env file with the appropriate agent names, so be sure not to change this name.
02: Create remaining agents
Now that you have created the customer loyalty agent prompt file and initializer script, you will need to create the remaining agents. The process for creating these agents is similar to the one you just completed, but the prompts and tools will be different. The following five blocks include the prompt and code for each of the remaining agents.
Expand this section to view the interior design agent
In the src/prompts directory, create a new file and call it InteriorDesignAgentPrompt.txt. Add the following text to the file:
Interior Design Agent Guidelines
========================================
- You are an interior design salesperson working for Zava who helps customers with DIY projects and other interior design queries
- Your main tasks are the following: recommending and upselling products, creating images
- You will get input in the form of a json, having:
[
{
"Conversation_history": the conversation so far,
"image_url": the image on which you should base any recreated image,
"image_description": the description if an image is attached; otherwise empty,
"products_available": A list of products, from where you can give recommendations
"user_last_query": The last query from user
}
]
- You will always recommend products from products_available.
- You will keep asking questions of the user and keep recommending.
- When you get an image, reply saying "I see you uploaded..."
- If asked to change/modify/style an object, only then use create_image; otherwise keep recommending and upselling as usual.
- In your answer, do not use the abbreviation "e.g."; instead use "Example", "such as", or "like" as fits the sentence.
Return the response in the following JSON format
answer: your answer,
image_output: if there, otherwise empty
products: [
{
"id": "<ProductID>",
"name": "<ProductName>",
"type": "<Singular Category Name>",
"description": "<ProductDescription>",
"imageURL": "<ImageURL>",
"punchLine": "<ProductPunchLine>",
"price": "<FormattedPriceWithDollarSign>"
}, {..}
...
]
Interior Design Agent Tool
========================================
create_image: Creates an image per the user's requirements, such as repainting a given room in a different color (make sure the path and prompt are shared as-is), given a prompt and path.
Example Conversation
========================================
User: I want paint recommendations for my living room
You: Give some paint options; ask for dimensions; ask for an image
User: Gives dimensions and possibly an image
You: Recommend based on the color, calculate how much paint may be required, and upsell a sprayer and tape (noting why they are useful)
Content Handling Guidelines
========================================
- Do not generate content summaries or remove any data.
---
IMPORTANT: Your entire response must be a valid JSON array as described above. Do not include any other text or formatting.
Next, create a new file in src/app/agents/ and call it interiorDesignAgent_initializer.py. Add the following code to the file:
import os
import sys
sys.path.append(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))
from azure.ai.projects import AIProjectClient
from azure.identity import DefaultAzureCredential
from dotenv import load_dotenv
from tool_definitions import get_tools_for_agent
from agent_initializer import initialize_agent
import asyncio
load_dotenv()
ID_PROMPT_TARGET = os.path.join(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))), 'prompts', 'InteriorDesignAgentPrompt.txt')
with open(ID_PROMPT_TARGET, 'r', encoding='utf-8') as file:
ID_PROMPT = file.read()
project_endpoint = os.environ["FOUNDRY_ENDPOINT"]
project_client = AIProjectClient(
endpoint=project_endpoint,
credential=DefaultAzureCredential(),
)
# Define the set of user-defined callable functions to use as tools (from MCP client)
functions = asyncio.run(get_tools_for_agent("interior_designer"))
initialize_agent(
project_client=project_client,
model=os.environ["gpt_deployment"],
name="interior-designer",
description="Zava Interior Design Agent",
instructions=ID_PROMPT,
tools=functions
)
Expand this section to view the inventory agent
In the src/prompts directory, create a new file and call it InventoryAgentPrompt.txt. Add the following text to the file:
Inventory Agent Guidelines
========================================
- Your task is to check the inventory status
- When the user asks to check the inventory for a product, send the product name to the inventory_check tool.
- Return a response covering inventory levels, inventory status, and location.
Inventory Agent Tool
-----
inventory_check: Takes in a list of product IDs and returns inventory levels.
input formatting:
product_list = ['PROD0045', 'PROD1234']
Content Handling Guidelines
---------------------------
- Do not generate content summaries or remove any data.
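As an aside, the lookup that an inventory_check-style tool performs could be sketched as follows. This code is not part of the prompt file above, and the stock data and response fields are illustrative only; the real tool lives on the MCP server.

```python
# Illustrative sketch of an inventory_check-style tool. The stock data
# and the exact response fields are made up for this example.
STOCK = {
    "PROD0045": {"level": 12, "location": "Miami, FL"},
    "PROD1234": {"level": 0, "location": "Miami, FL"},
}

def inventory_check_sketch(product_list: list) -> list:
    """Return inventory level, status, and location for each product ID."""
    results = []
    for pid in product_list:
        info = STOCK.get(pid)
        if info is None:
            results.append({"product_id": pid, "status": "unknown product"})
        else:
            status = "in stock" if info["level"] > 0 else "out of stock"
            results.append({"product_id": pid, "level": info["level"],
                            "status": status, "location": info["location"]})
    return results
```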
Next, create a new file in src/app/agents/ and call it inventoryAgent_initializer.py. Add the following code to the file:
import os
import sys
sys.path.append(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))
from azure.ai.projects import AIProjectClient
from azure.identity import DefaultAzureCredential
from dotenv import load_dotenv
from tool_definitions import get_tools_for_agent
from agent_initializer import initialize_agent
import asyncio
load_dotenv()
IA_PROMPT_TARGET = os.path.join(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))), 'prompts', 'InventoryAgentPrompt.txt')
with open(IA_PROMPT_TARGET, 'r', encoding='utf-8') as file:
IA_PROMPT = file.read()
project_endpoint = os.environ["FOUNDRY_ENDPOINT"]
project_client = AIProjectClient(
endpoint=project_endpoint,
credential=DefaultAzureCredential(),
)
# Define the set of user-defined callable functions to use as tools (from MCP client)
functions = asyncio.run(get_tools_for_agent("inventory_agent"))
initialize_agent(
project_client=project_client,
model=os.environ["gpt_deployment"],
name="inventory-agent",
description="Zava Inventory Agent",
instructions=IA_PROMPT,
tools=functions
)
Expand this section to view the shopper agent (Cora)
In the src/prompts directory, create a new file and call it ShopperAgentPrompt.txt. Add the following text to the file:
Shopper Agent Guidelines
========================================
- You are the public facing assistant of Zava
- Greet people and help them as needed
- Return the response in the following JSON format (image_output and products empty)
answer: your answer,
image_output: []
products: []
Shopper Agent Tool
-----
Content Handling Guidelines
---------------------------
- Do not generate content summaries or remove any data.
Next, create a new file in src/app/agents/ and call it shopperAgent_initializer.py. Add the following code to the file:
import os
import sys
sys.path.append(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))
from azure.ai.projects import AIProjectClient
from azure.identity import DefaultAzureCredential
from dotenv import load_dotenv
from tool_definitions import get_tools_for_agent
from agent_initializer import initialize_agent
import asyncio
load_dotenv()
CORA_PROMPT_TARGET = os.path.join(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))), 'prompts', 'ShopperAgentPrompt.txt')
with open(CORA_PROMPT_TARGET, 'r', encoding='utf-8') as file:
CORA_PROMPT = file.read()
project_endpoint = os.environ["FOUNDRY_ENDPOINT"]
project_client = AIProjectClient(
endpoint=project_endpoint,
credential=DefaultAzureCredential(),
)
# Create function tools for cora agent
functions = asyncio.run(get_tools_for_agent("cora"))
initialize_agent(
project_client=project_client,
model=os.environ["gpt_deployment"],
name="cora",
description="Cora - Zava Shopping Assistant",
instructions=CORA_PROMPT,
tools=functions
)
Expand this section to view the cart manager agent
In the src/prompts directory, create a new file and call it CartManagerPrompt.txt. Add the following text to the file:
You are a Cart Manager Assistant for Zava, a home improvement and furniture retailer.
Your primary responsibilities:
1. CART MANAGEMENT
- Add products to the customer's shopping cart
- Remove products from the cart
- Update product quantities
- Clear the entire cart when requested
- Provide cart summaries and totals
2. CART OPERATIONS
When a customer mentions "cart", "add to cart", "remove", "checkout", or similar:
- Parse their request to understand what products they want to add/remove
- Update the cart state accordingly
- Confirm the action taken
- Show the updated cart contents
3. CART STATE MANAGEMENT
You will receive:
- RAW_IO_HISTORY: Complete conversation and cart state history
- Current cart state
- Customer's latest request
You must return:
- Updated cart as a JSON array
- Conversational confirmation message
- Any relevant product recommendations
4. PRODUCT RECOMMENDATIONS
Based on cart contents, suggest:
- Complementary products (e.g., if they added paint, suggest brushes, tape, drop cloths)
- Related items frequently bought together
- Products that complete a project
5. RESPONSE FORMAT
Always respond in valid JSON format:
{
"answer": "Friendly confirmation message about what was added/removed",
"cart": [
{
"product_id": "PROD-123",
"name": "Product Name",
"quantity": 2,
"price": 29.99,
"total": 59.98
}
],
"products": "Optional: Suggest related products here",
"discount_percentage": "",
"additional_data": ""
}
6. CONVERSATION STYLE
- Be friendly and helpful
- Confirm actions clearly ("I've added 2 gallons of paint to your cart")
- Provide cart summaries when asked
- Suggest next steps ("Would you like to proceed to checkout?")
- If unclear, ask for clarification
7. SPECIAL INSTRUCTIONS
- If customer asks about cart but it's empty, acknowledge and suggest browsing products
- If removing items, confirm which items were removed
- If updating quantities, confirm the new quantity
- Always maintain accurate cart state based on conversation history
- Extract product information from the conversation context
- On checkout, display the cart contents and tell the customer that they may pick up their products from the closest Zava retail outlet, located in Miami, Florida. Only give this information when the customer requests to check out.
Example interactions:
Customer: "Add the blue paint to my cart"
Response: {
"answer": "I've added the blue paint to your cart! Would you also like to add paint brushes or painter's tape?",
"cart": [{"product_id": "PAINT-BLUE-001", "name": "Blue Interior Paint", "quantity": 1, "price": 34.99, "total": 34.99}],
"products": "Based on your paint selection, you might also need: Paint Brushes ($8.99), Painter's Tape ($5.99), Drop Cloth ($12.99)"
}
Customer: "What's in my cart?"
Response: {
"answer": "You currently have 1 item in your cart: Blue Interior Paint (1 gallon) for $34.99. Your cart total is $34.99.",
"cart": [{"product_id": "PAINT-BLUE-001", "name": "Blue Interior Paint", "quantity": 1, "price": 34.99, "total": 34.99}],
"products": ""
}
Remember: Your goal is to make cart management seamless and helpful for customers!
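The cart arithmetic implied by the response format above (each line item's total is quantity times price) can be sketched as follows. This code is not part of the prompt file; it is only a minimal illustration of the expected cart shape.

```python
# Sketch of the per-line-item arithmetic implied by the cart JSON format:
# total = quantity * price, rounded to cents.
def cart_with_totals(items: list) -> list:
    """Return a copy of the cart items with a computed 'total' field."""
    return [
        {**item, "total": round(item["quantity"] * item["price"], 2)}
        for item in items
    ]
```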
Next, create a new file in src/app/agents/ and call it cartManagerAgent_initializer.py. Add the following code to the file:
import os
import sys
sys.path.append(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))
from azure.ai.projects import AIProjectClient
from azure.identity import DefaultAzureCredential
from dotenv import load_dotenv
from tool_definitions import get_tools_for_agent
from agent_initializer import initialize_agent
import asyncio
load_dotenv()
CART_PROMPT_PATH = os.path.join(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))), 'prompts', 'CartManagerPrompt.txt')
with open(CART_PROMPT_PATH, 'r', encoding='utf-8') as file:
CART_MANAGER_PROMPT = file.read()
project_endpoint = os.environ["FOUNDRY_ENDPOINT"]
project_client = AIProjectClient(
endpoint=project_endpoint,
credential=DefaultAzureCredential(),
)
# Create function tools for cart_manager agent
functions = asyncio.run(get_tools_for_agent("cart_manager"))
initialize_agent(
project_client=project_client,
model=os.environ["gpt_deployment"],
name="cart-manager",
description="Zava Cart Manager Agent",
instructions=CART_MANAGER_PROMPT,
tools=functions
)
Expand this section to view the handoff service agent
In the src/prompts directory, create a new file and call it HandoffAgentPrompt.txt. Add the following text to the file:
You are an intent classifier for Zava shopping assistant.
Available domains:
1. cora: General shopping, product browsing, general questions
2. interior_designer: Room design, decorating, color schemes, furniture recommendations, image creation
3. inventory_agent: Product availability, stock checks, inventory questions
4. customer_loyalty: Discounts, promotions, loyalty programs, customer benefits
5. cart_manager: Shopping cart operations (add/remove items, view cart, checkout)
Analyze the user's message and determine:
1. Which domain it belongs to
2. Whether it's a domain change from the current context
You will receive a message with the current domain and the user message. It will be in the format:
Current domain: {current_domain}
User message: {user_message}
Respond with JSON:
domain
Rules:
- If user mentions "cart", "add to cart", "remove from cart", "checkout", "view cart" -> cart_manager domain
- If uncertain, default to current domain with low confidence
- Detect explicit requests to "talk to someone else" or "get help with X" as domain changes
- Consider context: if discussing design, stay in interior_designer unless user explicitly changes topic
- Default to 'cora' for general/ambiguous queries
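The first routing rule above is essentially a keyword check. The handoff agent performs the classification itself, but as an illustration (not part of the prompt file), a plain-Python fast path for that rule might look like this:

```python
# Illustrative fast-path check mirroring the cart_manager routing rule;
# the real classification is done by the handoff agent.
CART_KEYWORDS = ("cart", "add to cart", "remove from cart", "checkout", "view cart")

def is_cart_request(user_message: str) -> bool:
    """Return True if the message matches the cart_manager keyword rule."""
    message = user_message.lower()
    return any(keyword in message for keyword in CART_KEYWORDS)
```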
Next, create a new file in src/app/agents/ and call it handoffAgent_initializer.py. Add the following code to the file:
import os
import sys
sys.path.append(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))
from azure.ai.projects import AIProjectClient
from azure.ai.projects.models import (
PromptAgentDefinition,
PromptAgentDefinitionTextOptions,
TextResponseFormatJsonSchema
)
from azure.identity import DefaultAzureCredential
from dotenv import load_dotenv
from services.handoff_service import IntentClassification
load_dotenv()
HANDOFF_AGENT_PROMPT_PATH = os.path.join(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))), 'prompts', 'HandoffAgentPrompt.txt')
with open(HANDOFF_AGENT_PROMPT_PATH, 'r', encoding='utf-8') as file:
HANDOFF_AGENT_PROMPT = file.read()
project_endpoint = os.environ["FOUNDRY_ENDPOINT"]
project_client = AIProjectClient(
endpoint=project_endpoint,
credential=DefaultAzureCredential(),
)
model = os.environ["gpt_deployment"]
name = "handoff-service"
description = "Zava Handoff Service Agent"
instructions = HANDOFF_AGENT_PROMPT
with project_client:
agent = project_client.agents.create_version(
agent_name=name,
description=description,
definition=PromptAgentDefinition(
model=model,
text=PromptAgentDefinitionTextOptions(
format=TextResponseFormatJsonSchema(
name="IntentClassification", schema=IntentClassification.model_json_schema()
)
),
instructions=instructions
)
)
print(f"Created {name} agent, ID: {agent.id}")
This code is somewhat different from the other agents because it forces a JSON output in the format of the IntentClassification class. The definition for this class is in src/services/handoff_service.py on lines 24-40.
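As a sketch of what such a structured-output model looks like, the class might resemble the following. The authoritative definition is in src/services/handoff_service.py; the fields beyond domain are assumptions based on the handoff prompt's rules.

```python
from pydantic import BaseModel

# Hypothetical sketch of the IntentClassification model; the real
# definition lives in src/services/handoff_service.py.
class IntentClassification(BaseModel):
    domain: str            # e.g., cora, interior_designer, cart_manager
    confidence: float      # assumed field: how sure the classifier is
    is_domain_change: bool  # assumed field: did the domain switch?

# model_json_schema() produces the JSON schema passed to
# TextResponseFormatJsonSchema in the initializer above.
schema = IntentClassification.model_json_schema()
```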
03: Create agents in Microsoft Foundry
The next step in this task is to create six agents in Microsoft Foundry:
- Cora Agent: This agent will handle customer inquiries and provide personalized shopping assistance.
- Inventory Agent: This agent will manage product inventory and availability information.
- Customer Loyalty Agent: This agent will handle customer loyalty programs and rewards, such as discounts on purchases.
- Interior Designer Agent: This agent will provide personalized interior design recommendations.
- Cart Manager Agent: This agent will manage the shopping cart operations.
- Handoff Service Agent: This agent will decide, based on the last running agent and the intent of the user message, which agent to activate next.
The prompts for these agents are available in src/prompts/. The code to deploy the agents is in src/app/agents.
Expand this section to view the solution
First, navigate to Microsoft Foundry and select the AI project associated with this training.
Then, navigate to the Build menu and select the Agents tab from the left-hand menu.
Next, return to your Visual Studio Code terminal and navigate to the src/app/agents directory. Each agent has an initializer script that will create the appropriate agent in Microsoft Foundry. Run the following commands to create each of the six agents.
python customerLoyaltyAgent_initializer.py
python inventoryAgent_initializer.py
python interiorDesignAgent_initializer.py
python shopperAgent_initializer.py
python cartManagerAgent_initializer.py
python handoffAgent_initializer.py
You may receive an error reading, in part, ERROR:asyncio:an error occurred during closing of asynchronous generator <async_generator object stdio_client at 0x7a7ecf69aa40>. This is a known teardown-ordering problem in the MCP Python SDK and will not be an issue for this training. As long as you receive a message at the bottom that reads something like Created customer-loyalty agent, ID: customer-loyalty:1, you can safely ignore the error and continue.
You may receive an error reading, in part, Message: The principal {YOUR_PRINCIPAL_ID} lacks the required data action Microsoft.CognitiveServices/accounts/AIServices/agents/write. If you receive this error message, return to your resource group and choose the Microsoft Foundry resource associated with this training (that is, not the project). Navigate to Access control (IAM) in the left-hand menu. Select the + Add button and then choose the Add role assignment option. In the Role dropdown, select the Azure AI User role. In the Assign access to dropdown, select + Select members. In the Select members list, choose your name. After that, select the Select button at the bottom of the pane. Finally, select Review + assign twice to grant the Azure AI User role to your account. Then, re-run the command.
As you create each agent, the script will output an Agent ID. Make a note of these IDs. Each ID includes the agent name as well as the current version number. After creating these agents, confirm that the agent names match their corresponding entries in the .env file, specifically in the "Agent IDs" section. An example of an Agent ID is cart-manager:1, which would be version 1 of the Cart Manager agent. In the .env file, you should see cart-manager, without the version number. The shopper agent's output should go into the "cora" entry, and the rest should go into their respective entries.
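For example, the Agent IDs section of a .env file might look like the following. The variable names shown here are assumptions for illustration; use the names already present in your prepopulated .env file rather than these.

```ini
# Agent IDs section (hypothetical variable names; keep the ones in your
# own .env file). Values are the agent names without the :version suffix.
CORA_AGENT=cora
INVENTORY_AGENT=inventory-agent
CUSTOMER_LOYALTY_AGENT=customer-loyalty
INTERIOR_DESIGNER_AGENT=interior-designer
CART_MANAGER_AGENT=cart-manager
HANDOFF_AGENT=handoff-service
```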
After you have created all six agents, return to the Microsoft Foundry portal and verify that the agents have been created successfully. You should see all six agents listed in the Agents tab of the Build menu once you refresh the page.
04: Update the chat application
Now that you have created the agents in Microsoft Foundry, you will need to update the chat application to support these agents. This involves updating the code to route user queries to the appropriate agent based on the context of the conversation.
To do so, first, comment out line 45 (the single-agent import) and line 254 (the handle_single_agent call). A keyboard shortcut to comment out multiple lines in Visual Studio Code is to select the lines you want to comment (or uncomment) and then press CTRL + / on Windows or CMD + / on Mac.
Then, uncomment the relevant sections of code in chat_app.py that relate to the multi-agent architecture and restart the application. The relevant lines of code are 46-50 (import statements for the multi-agent handlers and the handoff service), lines 128-139 (setting up the handoff service), and lines 264-349 (the multi-agent pipeline steps). The following block provides a somewhat detailed explanation of how the multi-agent code works.
Expand this section to view solution
The multi-agent pipeline is implemented across several focused modules. The code you uncomment in chat_app.py calls helper functions from src/handlers/multi_agent_handler.py, which keeps the main file relatively short and concise.
Step 1: Customer loyalty (background task). The first time a user connects, a customer loyalty task runs in the background. It calls the customer loyalty agent to determine if the user is eligible for any discounts based on their customer ID. The discount information is stored in a session variable and used later.
Step 2: Intent classification. When the user sends a message, classify_intent() activates the handoff service (defined in src/services/handoff_service.py). This calls the GPT model with a structured output request. The response includes the domain of the user's query (e.g., interior design, inventory, cart management) and a confidence score. Based on the domain, the application routes the user's query to the appropriate agent.
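The routing decision itself reduces to mapping the classified domain to an agent, falling back to the general shopper agent for anything unrecognized. The following is an illustrative sketch, not the repository's actual routing code:

```python
# Sketch of post-classification routing: unknown or ambiguous domains
# fall back to the general shopper agent ("cora"), per the handoff rules.
KNOWN_DOMAINS = {"cora", "interior_designer", "inventory_agent",
                 "customer_loyalty", "cart_manager"}

def route_domain(domain: str) -> str:
    """Return the agent to activate for a classified domain."""
    return domain if domain in KNOWN_DOMAINS else "cora"
```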
Step 3: Context enrichment and agent execution. The enrich_context() function adds image descriptions and product recommendations to the user's message before it reaches the agent. Then execute_agent() calls the AgentProcessor (defined in src/app/agents/agent_processor.py) for the selected agent. The processor manages the conversation lifecycle with Microsoft Foundry: it creates or continues a conversation thread, sends the message, and handles any function calls the agent requests.
When the agent requests a function call, the processor dispatches it via the MCP tool wrappers in src/app/agents/mcp_tools.py. These wrappers call the corresponding tools on the MCP inventory server (defined in src/app/servers/mcp_inventory_server.py) through a persistent stdio connection managed by the MCP client in src/app/servers/mcp_inventory_client.py. The tool results are sent back to the agent for final processing.
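Conceptually, dispatching a function call means looking the requested tool up by name in a registry of wrappers and invoking it with the agent's JSON arguments. The sketch below illustrates that pattern with a stub tool; the real wrappers in src/app/agents/mcp_tools.py forward the call to the MCP server over stdio instead.

```python
import json

# Sketch of function-call dispatch: the agent requests a tool by name,
# and the processor looks it up in a registry of wrappers. The lambda
# here is a stub standing in for a real MCP call.
TOOL_REGISTRY = {
    "mcp_calculate_discount": lambda args: {"discount_percentage": 10},
}

def dispatch_tool_call(name: str, arguments_json: str) -> dict:
    """Invoke the named tool with JSON-encoded arguments."""
    tool = TOOL_REGISTRY.get(name)
    if tool is None:
        return {"error": f"unknown tool: {name}"}
    return tool(json.loads(arguments_json))
```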
The handle_image_creation() function handles the special case of image generation, which is not covered in this training because the gpt-image-1 model is only available upon request.
Step 4: Response processing. The process_response() function parses the agent's structured JSON output, updates the cart state and discount persistence, and prepares the response for the user.
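Parsing an agent's structured reply typically has to tolerate markdown code fences around the JSON and fall back gracefully when the model returns plain text. The following is a minimal sketch of that idea; the real process_response() also handles cart and discount state.

```python
import json

# Illustrative sketch of parsing an agent's structured JSON reply;
# the real process_response() does more (cart state, discounts, etc.).
def parse_agent_reply(raw: str) -> dict:
    """Strip optional markdown fences and parse the JSON body."""
    text = raw.strip()
    if text.startswith("```"):
        # Drop the opening fence line (possibly "```json") and closing fence
        text = text.split("\n", 1)[1].rsplit("```", 1)[0]
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        # Fall back to treating the whole reply as the answer text
        return {"answer": raw, "products": []}
```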
05: Demonstrate application behavior
Now that you have created the necessary agents and updated the application code, ensure that all files are saved and then restart the application. From there, you can interact with the chat application and see how the agents work together to provide a comprehensive shopping assistant experience. Use the following prompts to test the application's behavior.
Expand this section to view solution
In order to restart the application, stop the currently running instance by pressing CTRL+C in the terminal where the application is running. Then, ensure that you are in the correct directory (/src) and that your virtual environment is active. Finally, restart the application using the same command you used to start it initially: uvicorn chat_app:app --host 0.0.0.0 --port 8000.
Connect to the chat application (or refresh an existing chat application window) and enter the following prompts to see how the agents interact. Note that the context of the conversation will differ each time you run through this exercise, so the agents' responses may vary. For that reason, you may wish to modify the questions slightly to fit the context of your conversation.
- What colors of green paint do you have?
- I think I'm interested in Deep Forest. How many gallons would I need to paint a medium-sized bedroom?
- How much of PROD0018 do you have in stock?
- Letās add two gallons to the cart, please.
- Please also add one paint tray and two of your All-Purpose Wall Paint Brushes.
- What items are in my cart right now?
- I'd like to check out now.
Over the course of this chat conversation, you will interact with the interior designer agent, the Cora agent, the customer loyalty agent, and the inventory agent at different points.
The application also includes functionality to generate images based on user prompts. However, this functionality is not covered in this exercise because the necessary gpt-image-1 model is only available upon request.