
Setup: Self-Guided Learners

These instructions are for self-guided learners who are not part of the AI Tour and do not have access to a pre-configured lab environment. Follow these steps to set up your environment and begin the workshop.

Introduction

This workshop is designed to teach you about the Azure AI Agents Service and the associated Python SDK. It consists of multiple labs, each highlighting a specific feature of the Azure AI Agents Service. The labs are meant to be completed in order, as each one builds on the knowledge and work from the previous lab.

Prerequisites

  1. Access to an Azure subscription. If you don't have an Azure subscription, create a free account before you begin.
  2. You need a GitHub account. If you don’t have one, create one at GitHub.

Open the Workshop

The preferred way to run this workshop is using GitHub Codespaces. This option provides a pre-configured environment with all the tools and resources needed to complete the workshop. Alternatively, you can open the workshop locally using a Visual Studio Code Dev Container.

Select Open in GitHub Codespaces to open the project in GitHub Codespaces.


Building the Codespace will take several minutes. You can continue reading the instructions while it builds.

Apple Silicon Users

The automated deployment script you’ll be running soon isn’t supported on Apple Silicon. Please run the deployment script from Codespaces or from macOS instead of the Dev Container.

Alternatively, you can open the project locally using a Visual Studio Code Dev Container, which will open the project in your local VS Code development environment using the Dev Containers extension.

  1. Start Docker Desktop (install it if not already installed)
  2. Select Open in Dev Containers to open the project in a VS Code Dev Container.


Building the Dev Container, which downloads and sets it up on your local system, will take several minutes. You can continue reading the instructions while it builds.

Lab Structure

Each lab in this workshop includes:

  • An Introduction: Explains the relevant concepts.
  • An Exercise: Guides you through the process of implementing the feature.

Project Structure

The workshop’s source code is located in the src/workshop folder. Be sure to familiarize yourself with the key subfolders and files you’ll be working with throughout the workshop.

  1. The files folder: Contains the files created by the agent app. The files folder is created during agent execution and is not checked into source control. As a result, you will NOT see this folder in your forked repository, but you will see it at runtime.
  2. The instructions folder: Contains the instructions passed to the LLM.
  3. The main.py file: The entry point for the app, containing its main logic.
  4. The sales_data.py file: Contains the function logic to execute dynamic SQL queries against the SQLite database.
  5. The stream_event_handler.py file: Contains the event handler logic for token streaming.

Lab folder structure
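To give a feel for the kind of logic sales_data.py encapsulates, here is a minimal, hypothetical sketch of running a dynamic SQL query against a SQLite database. The function name, schema, and return shape are illustrative only; the actual workshop code differs.

```python
import sqlite3


def fetch_sales_data(db_path: str, query: str) -> list[dict]:
    """Run a SQL query and return the rows as a list of dictionaries.

    Hypothetical helper in the spirit of sales_data.py; the real
    workshop file defines its own schema and API.
    """
    with sqlite3.connect(db_path) as conn:
        conn.row_factory = sqlite3.Row  # access columns by name
        cursor = conn.execute(query)
        # Convert each sqlite3.Row into a plain dict for easy serialization
        return [{key: row[key] for key in row.keys()} for row in cursor.fetchall()]
```

In the workshop, a function like this is registered as a tool so the agent can translate natural-language questions into SQL and call it with the generated query.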

Authenticate with Azure

You need to authenticate with Azure so the agent app can access the Azure AI Agents Service and models. Follow these steps:

  1. Ensure the Codespace has been created.
  2. In the Codespace, open a new terminal window by selecting Terminal > New Terminal from the VS Code menu.
  3. Run the following command to authenticate with Azure:

    az login --use-device-code
    

    Note

    You'll be prompted to open a browser link and log in to your Azure account. Be sure to copy the authentication code first.

    1. A browser window will open automatically. Select your account type and select Next.
    2. Sign in with your Azure subscription Username and Password.
    3. Paste the authentication code.
    4. Select OK, then Done.

    Warning

    If you have multiple Azure tenants, you will need to specify the appropriate tenant when authenticating:

    az login --use-device-code --tenant <tenant_id>
    
  4. Next, select the appropriate subscription from the command line.

  5. Leave the terminal window open for the next steps.
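If the subscription prompt does not appear, or you need to switch subscriptions later, you can set the active subscription explicitly with the Azure CLI (the subscription ID below is a placeholder):

```shell
# List the subscriptions available to your signed-in account
az account list --output table

# Set the active subscription (replace the placeholder with your own ID)
az account set --subscription "<subscription_id>"

# Confirm which subscription is now active
az account show --output table
```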

Deploy the Azure Resources

The following resources will be created in the rg-contoso-agent-workshop resource group in your Azure subscription.

  • An Azure AI Foundry hub named agent-wksp
  • An Azure AI Foundry project named Agent Service Workshop
  • A Serverless (pay-as-you-go) GPT-4o model deployment named gpt-4o (Global 2024-08-06). See pricing details here.
  • A Grounding with Bing Search resource. See the documentation and pricing for details.

You will need 140K TPM quota availability for the gpt-4o Global Standard SKU, not because the agent uses lots of tokens, but due to the frequency of calls made by the agent to the model. Review your quota availability in the AI Foundry Management Center.

We have provided a bash script to automate the deployment of the resources required for the workshop. Alternatively, you may deploy the resources manually using the Azure AI Foundry portal. Select the desired tab.

The script deploy.sh deploys to the eastus2 region by default; edit the file to change the region or resource names. To run the script, open the VS Code terminal and run the following command:

cd infra && ./deploy.sh

Workshop Configuration File

The deploy script generates the src/workshop/.env file, which contains the project connection string, model deployment name, and Bing connection name.

Your .env file should look similar to this but with your project connection string.

MODEL_DEPLOYMENT_NAME="gpt-4o"
BING_CONNECTION_NAME="groundingwithbingsearch"
PROJECT_CONNECTION_STRING="<your_project_connection_string>"
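A quick way to sanity-check the generated file is to parse it and confirm the three expected keys are present. The sketch below is a minimal, dependency-free parser for the simple KEY="value" format shown above (it is an illustration, not part of the workshop code):

```python
def parse_env(path: str) -> dict[str, str]:
    """Parse a simple KEY="value" .env file into a dict.

    Minimal sketch for the format shown above: blank lines and
    comments are skipped, and surrounding quotes are stripped.
    """
    config: dict[str, str] = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            config[key.strip()] = value.strip().strip('"')
    return config


# Keys the workshop expects the deploy script to have written
REQUIRED_KEYS = {
    "MODEL_DEPLOYMENT_NAME",
    "BING_CONNECTION_NAME",
    "PROJECT_CONNECTION_STRING",
}
```

For example, `REQUIRED_KEYS - parse_env("src/workshop/.env").keys()` returns the empty set when the file is complete.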

Alternatively, if you prefer not to use the deploy.sh script, you can deploy the resources manually in the Azure AI Foundry portal as follows:

  1. Navigate to the Azure AI Foundry web portal using your browser and sign in with your account.
  2. Select + Create project.

    • Name the project

      agent-workshop
      
    • Create a new hub named

      agent-workshop-hub
      
    • Select Create and wait for the project to be created.

  3. From My assets, select Models + endpoints.

    • Select Deploy Model, then Deploy Base Model.
    • Select gpt-4o, then Confirm.
    • Name the deployment

      gpt-4o
      

    • Deployment type: Select Global Standard.
    • Select Customize.
    • Model version: Select 2024-08-06.
    • Tokens Per Minute Rate Limit: Select 140K.
    • Select Deploy.

Note

A specific version of GPT-4o may be required depending on the region where you deployed your project. See Models: Assistants (Preview) for details.

Workshop Configuration File

Create the workshop configuration file with the following command:

cp src/workshop/.env.sample src/workshop/.env

Then edit the file src/workshop/.env to provide the Project Connection String. You can find this string in the Azure AI Foundry portal on the Overview page for your project agent-workshop (look in the Project details section).