# Teams AI Library - C# Documentation (Complete) > Microsoft Teams AI Library (v2) - A comprehensive framework for building AI-powered Teams applications using C#. Using this Library, you can easily build and integrate a variety of features in Microsoft Teams by building Agents or Tools. The documentation here helps by giving background information and code samples on how best to do this. IMPORTANT: This Library is NOT based off of BotFramework (which the _previous_ version of the Teams AI Library was based on). This Library is a completely new framework. ## Main Documentation ### Teams CLI # Teams CLI The Teams CLI was created with the intent of supporting developers by making common actions simple to implement with just a command line. The CLI overarching features are: | Feature | Description | |---------|-------------| | `new` | Create a new Teams AI v2 agent by choosing a template that will be ready to run with one command line. | | `config` | Add Microsoft 365 Agents Toolkit configuration files to your existing Teams AI v2 agent project. | | `environment` | Manage multiple environments (e.g. dev, prod) and their keys for your agent. | :::tip With the CLI installed, you can enter `teams --help` at any command level to access information about the command, tokens, or required arguments. ::: ## Installation Install the Teams CLI globally using npm: ```sh npm install -g @microsoft/teams.cli@preview ``` :::tip If you prefer not to install globally, all commands below can replace `teams` with npx: `npx @microsoft/teams.cli@preview ` ::: ## Create an agent with one command line ```sh teams new ``` The `new` token will create a brand new agent with `app-name` applied as the directory name and project name. :::note The name you choose may have case changes when applied; for example, "My App" would become "my-app' due to the requirements for `package.json` files. ::: ### Optional parameters :::tip Use command line `teams new --help` to see the latest options for all optional params. ::: | Parameter | Description | |-----------|-------------| | `--template` | Ready-to-run templates that serve as a starting point depending on your scenario. Template examples include `ai`, `echo`, `graph`, and more. | | `--start` | Run the agent immediately upon finishing the creation of the project. | | `--toolkit` or `--atk` | Include the configuration files required to run the agent in the debugger via the [Microsoft 365 Agents Toolkit](https://github.com/OfficeDev/teams-toolkit) extension. Options include `basic`, `embed`, and `oauth`, and more may be added in the future. | | `--client-id` | The app client id, if you already have deployed a resource. This will be added to the root `.env` file of the project. | | `--client-secret` | The app client secret, if you already have deployed a resource. This will be added to the root `.env` file of the project. | ## Add Microsoft 365 Agents Toolkit configuration to existing agent An existing project may also have the appropriate Microsoft 365 Agents Toolkit configuration files added by configuration name. ```bash teams config add ``` | Configuration | Description | |--------------|-------------| | `atk.basic` | Basic Microsoft 365 Agents Toolkit configuration | | `atk.embed` | Configuration for embedded Teams applications | | `atk.oauth` | Configuration for OAuth-enabled applications | Using this command will include - `env`: folders for managing multiple environments - `infra`: files for deployment and provisioning - `.yml` files for tasks, launch, deployment, etc. 
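For example, to add the basic Agents Toolkit configuration listed above (the same command appears later in the Getting Started guide):

```sh
teams config add atk.basic
```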
## Remove Agents Toolkit configuration files ```bash teams config remove ``` --- ### Teams Core Concepts # Teams Core Concepts When you run your agent on Teams using Microsoft 365 Agents Toolkit, several Teams-specific processes happen behind the scenes. Understanding these components will help you better debug and deploy your agents. All of these processes can be done manually, but Agents Toolkit automates them for you. ## Basic Flow ```mermaid flowchart LR %% Main actors User([User]) %% Teams section subgraph Teams ["Teams"] TeamsClient["Teams Client"] TeamsBackend["Teams Backend"] end %% Azure section subgraph Azure ["Azure"] AppReg["App Registration"] AzureBot["Azure Bot"] end %% Local Server section subgraph LocalServer ["Local Server"] DevTunnel["DevTunnel"] LocalApp["Your local application"] end %% Deployed Server section subgraph DeployedServer ["Deployed Server"] DeployedApp["Your deployed application"] end %% Define connections User <--> TeamsClient TeamsClient <--> TeamsBackend TeamsBackend <--> AppReg AppReg <--> AzureBot AzureBot --> LocalServer AzureBot --> DeployedServer ``` **Teams** - Teams Client: User-facing client that the user interacts with. - Teams Backend: Part of your app package; includes a manifest with your app’s client ID. **Azure** - App Registration: Contains a unique client ID and secret for your app. - Azure Bot: Connects your app to Teams; contains a pointer to your HTTPS URL. **Local Server** - Dev Tunnel: Public-facing HTTPS tunnel to expose your local machine. - Local App: Your application running locally; handles events from Teams and sends responses. **Deployed Server** - Deployed App: Your app deployed to the cloud with a public HTTPS endpoint; also interacts with Teams. ## Core Concepts When working with Teams, these are the key concepts. Keep in mind, this is a simplified view of the architecture. - Teams Client: This is the Teams application where users interact with your agent. This can be the desktop app, web app, or mobile app. - Teams Backend: This service handles all the Teams-related operations, including keeping a record of your manifest, and routing messages from your agent to the Azure bot service. - App Registration: This is the registration of your agent in Azure. This Application Registration issues a unique client ID for your application and a client secret. This is used to authenticate your agent application with the Teams backend and other Azure services (including Graph if you are using it). - Azure Bot Service: This is the service that handles all the bot-related operations, including routing messages from Teams to your agent and vice versa. This holds the URL to your agent application. - DevTunnel: This is a service that creates a public-facing URL to your locally running application. Azure Bot Service requires that you have a public-facing HTTPS URL to your agent application. - Local Agent Application: This is your agent application running on your local machine. - Deployed Agent Application: This is your deployed agent which probably has a public-facing URL. ## DevTunnel [DevTunnel](https://learn.microsoft.com/en-us/azure/developer/dev-tunnels/overview) is a critical component that makes your locally running agent accessible to Teams. When you start a local debugging session, DevTunnel: :::info DevTunnel is only one way of exposing your locally running service to the internet. Other tools like ngrok can also accomplish the same thing. 
::: - Creates a secure public HTTPS endpoint that forwards to your local server - Manages SSL certificates automatically - Routes Teams messages and events to your local agent ## Teams App Provisioning Before your agent can interact with Teams, it needs to be properly registered and configured. This step handles creating or updating the App Registration and creating or registering the Azure Bot instance in Azure. ### App Registration - Creates an App ID (i.e. Client ID) in the Teams platform - Sets up a bot registration with the Bot Framework - Creates a client secret that your agent can use to authenticate so it can send and receive messages. Agents Toolkit will automatically get this value and store it in the `.env` file for you. ### Azure Bot - Creates an Azure Bot resource - Associates the bot with your App Registration - Configures the messaging endpoint to point to your DevTunnel (or public URL if deployed) ## Sideloading Process Sideloading is the process of installing your agent in Teams. You are able to pass in the manifest and icons (zipped up) to the Teams client. Sideloading an application automatically makes that application available to you. You are also able to sideload the application in a Team or a Group chat. In this case, the application will be available to all members of that Team or Group chat. :::warning Sideloading needs to be enabled in your tenant. If this is not the case, then you will need to contact your Teams administrator to enable it. ::: ## Provisioning and Deployment To test your app in Teams, you will at minimum need to have a provisioned Azure bot. You are likely to have other provisioned resources such as storage. Please see the Microsoft Learn [Provision cloud resources](https://learn.microsoft.com/en-us/microsoftteams/platform/toolkit/provision) documentation for provisioning and deployment using Visual Studio Code and to a container service. --- ### Teams Manifest # Teams Manifest Every app or agent installed on Teams requires an app manifest JSON file, which provides important information and permissions to that app. When sideloading the app, you are required to provide the app manifest via a zip file which also includes the icons for the app. ## Manifest There are many permissions and details that an app manifest may have added to the `manifest.json`, including the app ID, URL, and much more. Please review the comprehensive documentation on the [manifest schema](https://learn.microsoft.com/en-us/microsoftteams/platform/resources/schema/manifest-schema). ## Sideloading Sideloading is the ability to install and test your app before it is published to your organization's Teams App management page. To sideload, please see the official [Sideloading Microsoft Learn documentation](https://learn.microsoft.com/en-us/microsoftteams/platform/concepts/deploy-and-publish/apps-upload). To sideload, the manifest mentioned above must have all information (such as app id, tenant information, permissions, etc.) filled out, and be placed in a zip with the icons, but the zip should **NOT** include a containing folder of those files. For convenient assistance with managing your manifest and automating important functionality like sideloading, deployment, and provisioning, we recommend the [Microsoft 365 Agents Toolkit extension](https://learn.microsoft.com/en-us/microsoftteams/platform/toolkit/install-teams-toolkit) and [CLI](https://learn.microsoft.com/en-us/microsoftteams/platform/toolkit/microsoft-365-agents-toolkit-cli). 
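For reference, the app package zip described above is flat, with the manifest and icons at the top level. A typical layout (the icon file names are whatever your manifest declares; `color.png` and `outline.png` are common defaults):

```
appPackage.zip
├── manifest.json
├── color.png
└── outline.png
```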
Please continue to the [Toolkit documentation](./agents-toolkit) to learn more. --- ### Microsoft 365 Agents Toolkit # Microsoft 365 Agents Toolkit Agents Toolkit is a powerful extension and CLI app that helps automate important tasks like manifest management, sideloading, deployment, and provisioning - if you encounter any issues while using it (such as problems with the extension, running apps, deployment and provisioning, or debugging via F5), please file them in the [Agents Toolkit GitHub repository](https://github.com/OfficeDev/microsoft-365-agents-toolkit). ## Installing Agents Toolkit Agents Toolkit can be installed as an extension and CLI. Please see the documentation linked below. - [Installing Agents Toolkit extension](https://learn.microsoft.com/en-us/microsoftteams/platform/toolkit/install-teams-toolkit) - [Installing Agents Toolkit CLI](https://learn.microsoft.com/en-us/microsoftteams/platform/toolkit/microsoft-365-agents-toolkit-cli) :::note * [Teams AI v2 CLI](../developer-tools/cli) - helper for the new v2 library. It scaffolds agents, wires in deep Teams features (Adaptive Cards, Conversation History, Memory...etc), and adds all the config files you need while you're coding. * Agents Toolkit CLI - app deployment helper. It sideloads, provisions Azure resources, handles manfiest/tenant plumbing, and keeps your dev, test, and prod environments in sync. ::: ## Official documentation - Official [Agents Toolkit documentation](https://learn.microsoft.com/en-us/microsoft-365/developer/overview-m365-agents-toolkit?toc=%2Fmicrosoftteams%2Fplatform%2Ftoc.json&bc=%2Fmicrosoftteams%2Fplatform%2Fbreadcrumb%2Ftoc.json) ## Deployment and provisioning Generally, you can use the toolkit to add required resources to Azure based on your app manifest setup. Agents Toolkit documents that in their documentation. - [Add cloud resources and API connection](https://learn.microsoft.com/en-us/microsoftteams/platform/toolkit/add-resource) ## Resources - [Agents Toolkit Overview](https://learn.microsoft.com/en-us/microsoftteams/platform/toolkit/teams-toolkit-fundamentals) - these extensive docs cover many topics related to Agents Toolkit, so please explore their documentation at your convenience. - [Teams AI v2 CLI documentation](../developer-tools/cli) - includes instructions on adding toolkit configurations to your Teams AI v2 agent. --- ### Teams Integration # Teams Integration This section describes Teams-specific features and components of the SDK, helping you understand how your agent integrates with the Microsoft Teams platform. ## Core Concepts When working with Teams, several key components come into play: - **DevTunnel**: Enables local development by creating secure public endpoints - **App Provisioning**: Handles registration and configuration in Teams - **Environment Setup**: Manages Teams-specific configuration files - **App Packaging**: Bundles your agent for Teams deployment ## In This Section 1. [Running Your Agent](#) - Understanding the Teams deployment process 2. [Teams Manifest](teams-manifest.txt) - Configuring your agent's Teams presence 3. [Microsoft 365 Agents Toolkit](microsoft-365-agents-toolkit.txt) - Using the Agents Toolkit extension for sideloading, deployment, and provisioning. Each guide provides detailed information about specific aspects of Teams integration, from local development to production deployment. 
--- ### Developer Tools # Developer Tools One of the main motivations for Teams AI (v2) Library is to provide excellent tools that simplify and speed up building and testing agents. Because of this, we created the CLI for speedy agent initiation and project management, and DevTools as an accessible way to test your agent's behavior without jumping through deployment hoops. DevTools also provides crucial insight on activity payloads on the Activities page. Learn more about the developer tools that come with Teams AI (v2) Library. 1. [Teams CLI](./cli) 2. [DevTools](./devtools) --- ## Getting Started ### 🚀 Getting Started # 🚀 Getting Started This guide will help you set up your first Teams AI Library application. You'll learn the basics of creating an application, understanding its structure, and running it locally. By the end of this guide, you'll have a solid foundation to build upon as you explore more advanced features and capabilities of the SDK. --- ### Quickstart # Quickstart Get started with Teams AI Library (v2) quickly using the Teams CLI. ## Set up a new project ### Prerequisites - **.NET** v.8 or higher. Install or upgrade from [dotnet.microsoft.com](https://dotnet.microsoft.com/en-us/download). :::note If you are using LLMs to aid you in using this library, consider using the [llms.txt files](./LLMs.md) to provide context about the library to your coding assistant. ::: ## Instructions ### Install the Teams CLI Use your terminal to install the Teams CLI globally using npm: ```sh npm install -g @microsoft/teams.cli@preview ``` :::info _The [Teams CLI](/developer-tools/cli) is a command-line tool that helps you create and manage Teams applications. It provides a set of commands to simplify the development process._

After installation, you can run `teams --version` to verify the installation. ::: ## Creating Your First Agent Let's create a simple echo agent that responds to messages. Run: ```sh teams new csharp quote-agent --template echo ``` This command: 1. Creates a new directory called `Quote.Agent`. 2. Bootstraps the echo agent template files into your project directory. 3. Creates your agent's manifest files, including a `manifest.json` file and placeholder icons in the `Quote.Agent/appPackage` directory. The Teams [app manifest](https://learn.microsoft.com/en-us/microsoftteams/platform/resources/schema/manifest-schema) is required for [sideloading](https://learn.microsoft.com/en-us/microsoftteams/platform/concepts/deploy-and-publish/apps-upload) the app into Teams. > The `echo` template creates a basic agent that repeats back any message it receives - perfect for learning the fundamentals. ## Running your agent Navigate to your new agent's directory: ```sh cd Quote.Agent/Quote.Agent ``` Install the dependencies: ```sh dotnet restore ``` Start the development server: ```sh dotnet run ``` In the console, you should see a similar output: ```sh [INFO] Microsoft.Hosting.Lifetime Now listening on: http://localhost:3978 [WARN] Echo.Microsoft.Teams.Plugins.AspNetCore.DevTools ⚠️ Devtools are not secure and should not be used production environments ⚠️ [INFO] Echo.Microsoft.Teams.Plugins.AspNetCore.DevTools Available at http://localhost:3978/devtools [INFO] Microsoft.Hosting.Lifetime Application started. Press Ctrl+C to shut down. [INFO] Microsoft.Hosting.Lifetime Hosting environment: Development ``` When the application starts, you'll see: 1. An http server starting up (on port 3978). This is the main server which handles incoming requests and serves the agent application. 2. A devtools server starting up. This is a developer server that provides a web interface for debugging and testing your agent quickly, without having to deploy it to Teams. Let's navigate to the devtools server. Open your browser and head to [http://localhost:3978/devtools](http://localhost:3978/devtools). You should see a simple interface where you can interact with your agent. Send it a message! ![devtools](/screenshots/devtools-echo-chat.png) ## Next steps Now that you have your first agent running, learn about [the code basics](code-basics.txt) to understand its components and structure. Otherwise, if you want to run your agent in Teams, check out the [Running in Teams](running-in-teams.txt) guide. ## Resources - [Teams CLI documentation](/developer-tools/cli) - [Teams DevTools documentation](/developer-tools/devtools) - [Teams manifest schema](https://learn.microsoft.com/en-us/microsoftteams/platform/resources/schema/manifest-schema) - [Teams sideloading](https://learn.microsoft.com/en-us/microsoftteams/platform/concepts/deploy-and-publish/apps-upload) --- ### Code Basics # Code Basics After creating your first Teams application, let's understand its structure and key components. This will help you build more complex applications as you progress. ## Project Structure When you create a new Teams application, it generates a directory with this basic structure: ``` Quote.Agent/ |── appPackage/ # Teams app package files ├── Program.cs # Main application startup code ├── MainController.cs # Main activity handling code ``` - **appPackage/**: Contains the Teams app package files, including the `manifest.json` file and icons. 
This is required for [sideloading](https://learn.microsoft.com/en-us/microsoftteams/platform/concepts/deploy-and-publish/apps-upload) the app into Teams for testing. The app manifest defines the app's metadata, capabilities, and permissions. ## Core Components Let's break down the simple application we created in the [quickstart](quickstart.txt) into its core components.: ### The App Class The heart of your application is the `App` class. This class handles all incoming activities and manages your application's lifecycle. It also acts as a way to host your application service. ```csharp title="Program.cs" using Microsoft.Teams.Plugins.AspNetCore.DevTools.Extensions; using Microsoft.Teams.Plugins.AspNetCore.Extensions; using Quote.Agent; var builder = WebApplication.CreateBuilder(args); builder.AddTeams(); builder.AddTeamsDevTools(); builder.Services.AddTransient(); var app = builder.Build(); app.UseTeams(); app.Run(); ``` The app configuration includes a variety of options that allow you to customize its behavior, including controlling the underlying server, authentication, and other settings. For simplicity's sake, let's focus on plugins. ### Plugins Plugins are a core part of the Teams AI v2 SDK. They allow you to hook into various lifecycles of the application. The lifecycles include server events (start, stop, initialize etc.), and also Teams Activity events (onActivity, onActivitySent, etc.). In fact, the [DevTools](/developer-tools/devtools) application you already have running is a plugin too. It allows you to inspect and debug your application in real-time. :::warning DevTools is a plugin that should only be used in development mode. It should not be used in production applications since it offers no authentication and allows your application to be accessed by anyone.\ **Be sure to remove the DevTools plugin from your production code.** ::: ### Message Handling Teams applications respond to various types of activities. The most basic is handling messages: ```csharp title="MainController.cs" [TeamsController("main")] public class MainController { [Message] public async Task OnMessage([Context] MessageActivity activity, [Context] IContext.Client client) { await client.Typing(); await client.Send($"you said \"\""); } } ``` ```csharp title="Program.cs" app.OnMessage(async context => { await context.Typing(); await context.Send($"you said \"\""); }); ``` This code: 1. Listens for all incoming messages using `[Message]` attribute. 2. Sends a typing indicator, which renders as an animated ellipsis (…) in the chat. 3. Responds by echoing back the received message. :::info Each activity type has both an attribute and a functional method for type safety/simplicity of routing logic! ::: ### Application Lifecycle Your application starts when you run: ```csharp var app = builder.Build(); app.UseTeams(); app.Run(); ``` This part initializes your application server and, when configured for Teams, also authenticates it to be ready for sending and receiving messages. ## Next Steps Now that you understand the basic structure of your Teams application, you're ready to [run it in Teams](running-in-teams.txt). You will learn about Microsoft 365 Agents Toolkit and other important tools that help you with deployment and testing your application. After that, you can: - Add more activity handlers for different types of interactions. See [Listening to Activities](../essentials/on-activity) for more details. - Integrate with external services using the [API Client](../essentials/api). 
- Add interactive [cards](../in-depth-guides/adaptive-cards) and [dialogs](../in-depth-guides/dialogs) to your responses. - Implement [AI](../in-depth-guides/ai). Continue on to the next page to learn about these advanced features. ## Other Resources - [Essentials](../essentials) - [Teams concepts](/teams) - [Teams developer tools](/developer-tools) --- ### Running In Teams # Running In Teams Now that your agent is running locally, let's deploy it to Microsoft Teams for testing. This guide will walk you through the process. ## Microsoft 365 Agents Toolkit Agents Toolkit is a powerful tool that simplifies deploying and debugging Teams applications. It automates tasks like managing the Teams app manifest, configuring authentication, provisioning, and deployment. If you'd like to learn about these concepts, check out [Teams core concepts](/teams/core-concepts). ### Install Agents Toolkit First, you'll need to install the Agents Toolkit IDE extension: - Visit the [Agents Toolkit installation guide](https://learn.microsoft.com/en-us/microsoftteams/platform/toolkit/install-teams-toolkit) to install on your preferred IDE. ## Adding Teams configuration files via `teams` CLI To configure your agent for Teams, run the following command in the terminal inside your quote-agent folder: :::tip If you have the `teams` CLI installed globally, you can use `teams` instead of `npx`. ::: ```bash npx @microsoft/teams.cli config add atk.basic ``` :::tip The `atk.basic` configuration is a basic setup for Agents Toolkit. It includes the necessary files and configuration to get started with Teams development.
Explore more advanced configurations as needed with `teams config --help`.
::: This [CLI](/developer-tools/cli) command adds configuration files required by Agents Toolkit, including: - Environment setup in the `env` folder and root `.env` file - Teams app manifest in the `appPackage` folder (if not already present) - Debug instructions in `.vscode/launch.json` and `.vscode/tasks.json` - ATK automation files to your project (e.g. `teamsapp.local.yml`) | Cmd name | CLI name | Description | | ---------- | ------------- | -------------------------------------------------------------------------------------------------------------------------------------------------- | | `teams` | Teams AI v2 | A tool for setting up and utilizing the Teams AI v2 library including integration with ATK, if desired. | | `atk` | Agents Toolkit | A tool for managing provisioning, deployment, and in-client debugging for Teams. | ## Debugging in Teams After installing Agents Toolkit and adding the configuration: 1. **Open** your agent's project in your IDE. 2. **Open the Agents Toolkit extension panel** (usually on the left sidebar). The extension icon is the Teams logo. 3. **Log in** to your Microsoft 365 and Azure accounts in the Agents Toolkit extension. 4. **Select "Local"** under Environment Settings of the Agents Toolkit extension. 5. **Click on Debug (Chrome) or Debug (Edge)** to start debugging via the 'play' button. ![Agents Toolkit local environment UI](/screenshots/agents-toolkit.png) When debugging starts, the Agents Toolkit will: - **Build** your application - **Start a [devtunnel](/teams/core-concepts#devtunnel)** which will assign a temporary public URL to your local server - **Provision the Teams app** for your tenant so that it can be installed and be authenticated on Teams - **Set up the local variables** necessary for your agent to run in Teams in `env/.env.local` and `env/env.local.user`. This includes propagating the app manifest with your newly provisioned resources. - **Start** the local server. - **Package your app manifest** into a Teams application zip package and the manifest json with variables inserted in `appPackage/build`. - **Launch Teams** in an incognito window your browser. - **Upload the package** to Teams and signal it to sideload the app (fancy word for installing this app just for your use) If you set up Agents Toolkit via the Teams AI CLI, you should see something like the following in your terminal: ```sh [INFO] Microsoft.Hosting.Lifetime Now listening on: http://localhost:3978 [WARN] Echo.Microsoft.Teams.Plugins.AspNetCore.DevTools ⚠️ Devtools are not secure and should not be used production environments ⚠️ [INFO] Echo.Microsoft.Teams.Plugins.AspNetCore.DevTools Available at http://localhost:3978/devtools [INFO] Microsoft.Hosting.Lifetime Application started. Press Ctrl+C to shut down. [INFO] Microsoft.Hosting.Lifetime Hosting environment: Development ``` ## Testing your agent After the debugging session starts: 1. Teams will open in your browser 2. You'll be prompted to sign in (if not already) 3. Teams will ask permission to install the app 4. Once installed, you can start chatting with your agent! ![Agent running on Teams](/screenshots/example-on-teams.png) Congratulations! Now you have a fully functional agent running in Microsoft Teams. Interact with it just like any other Teams app and explore the rest of the documentation to build more complex agents. :::tip If you want to monitor the activities and events in your app, you can still use the [DevTools plugin](/developer-tools/devtools)! Note that the DevTools server is running on port 3978. 
You can open it in your browser to interact with your agent and monitor activities in real time. ::: ## Troubleshooting For deployment and resource management we recommend the Microsoft 365 Agents Toolkit. If you prefer to set everything up by hand, follow the standard [Teams app documentation](https://learn.microsoft.com/en-us/microsoftteams/platform/concepts/deploy-and-publish/apps-publish-overview). The Teams AI library itself doesn't handle deployment or Azure resources, so you'll need to rely on the general [Microsoft Teams deployment documentation](https://learn.microsoft.com/en-us/microsoftteams/deploy-overview). ## Next steps Now that your agent is running in Teams, you can learn more [essential concepts](../essentials) to understand how to build more complex agents. Explore the [in-depth guides](../in-depth-guides) for advanced topics like authentication, message extensions, and more. ## Resources - [Teams CLI documentation](/developer-tools/cli) - [Agents Toolkit documentation](https://learn.microsoft.com/en-us/microsoft-365/developer/overview-m365-agents-toolkit?toc=%2Fmicrosoftteams%2Fplatform%2Ftoc.json&bc=%2Fmicrosoftteams%2Fplatform%2Fbreadcrumb%2Ftoc.json) - [Agents Toolkit CLI documentation](https://learn.microsoft.com/en-us/microsoftteams/platform/toolkit/microsoft-365-agents-toolkit-cli) - [Teams CLI GitHub repository](https://github.com/OfficeDev/Teams-Toolkit) - [Microsoft Teams deployment documentation](https://learn.microsoft.com/en-us/microsoftteams/deploy-overview) --- ### LLMs.txt # LLMs.txt Using Coding Assistants is a common practice now to help speed up development. To aid with this, you can provide your coding assistant sufficient context about this library by linking it to its llms.txt files: Small: [llms_csharp.txt](https://microsoft.github.io/teams-ai/llms_docs/llms_csharp.txt) - This file contains an index of the various pages in the C# documentation. The agent needs to selectively read the relevant pages to answer questions and help with development. Large: [llms_csharp_full.txt](https://microsoft.github.io/teams-ai/llms_docs/llms_csharp_full.txt) - This file contains the full content of the C# documentation, including all pages and code snippets. The agent can keep the entire documentation in memory to answer questions and help with development. --- ## Essentials ### App Basics # App Basics The `App` class is the main entry point for your agent. It is responsible for: 1. Hosting and running the server (via plugins) 2. Serving incoming requests and routing them to your handlers 3. Handling authentication for your agent to the Teams backend 4. Providing helpful utilities which simplify the ability for your application to interact with the Teams platform 5. 
Managing plugins which can extend the functionality of your agent ```mermaid flowchart LR %% Layout Definitions direction LR Teams subgraph AppClass CorePlugins["Plugins"] Events["Events"] subgraph AppResponsbilities direction TB ActivityRouting["Activity Routing"] Utilities["Utilities"] Auth["Auth"] end Plugins2["Plugins"] end ApplicationLogic["Application Logic"] %% Connections Teams --> CorePlugins CorePlugins --> Events Events --> ActivityRouting ActivityRouting --> Plugins2 Plugins2 --> ApplicationLogic Auth --> ApplicationLogic Utilities --> ApplicationLogic %% Styling style Teams fill:#2E86AB,stroke:#1B4F72,stroke-width:2px,color:#ffffff style ApplicationLogic fill:#28B463,stroke:#1D8348,stroke-width:2px,color:#ffffff ``` ## Core Components **Plugins** - Can be used to set up the server - Can listen to messages or send messages out **Events** - Listens to events from core plugins - Emit interesting events to the application **Activity Routing** - Routes activities to appropriate handlers **Utilities** - Provides utility functions for convenience (like sending replies or proactive messages) **Auth** - Handles authenticating your agent with Teams, Graph, etc. - Simplifies the process of authenticating your app or user for your app **Plugins (Secondary)** - Can hook into activity handlers or proactive scenarios - Can modify or update agent activity events ## Plugins You'll notice that plugins are present in the front, which exposes your application as a server, and also in the back after the app does some processing to the incoming message. The plugin architecture allows the application to be built in an extremely modular way. Each plugin can be swapped out to change or augment the functionality of the application. The plugins can listen to various events that happen (e.g. the server starting or ending, an error occuring, etc), activities being sent to or from the application and more. This allows the application to be extremely flexible and extensible. --- ### Proactive Messaging # Proactive Messaging In [Sending Messages](./), we show how we can respond to an event when it happens. However, there are times when you want to send a message to the user without them sending a message first. This is called proactive messaging. You can do this by using the `send` method in the `app` instance. This is useful for sending notifications or reminders to the user. The main thing to note is that you need to have the `conversationId` of the chat or channel you want to send the message to. It's a good idea to store this value somewhere from an activity handler so you can use it for proactive messaging later. ```csharp // Installation is just one place to get the conversation id. All activities // have the conversation id, so you can use any activity to get it. [Install] public async Task OnInstall([Context] InstallUpdateActivity activity, [Context] IContext.Client client, [Context] IStorage storage) ``` ```csharp app.OnInstall(async context => ); ``` Then, when you want to send a proactive message, you can retrieve the `conversationId` from storage and use it to send the message. ```csharp public static class Notifications { public static async Task SendProactive(string userId) } ``` :::tip In this example, we show that we get the conversation id using one of the activity handlers. This is a good place to store the conversation id, but you can also do this in other places like when the user installs the app or when they sign in. 
The important thing is that you have the conversation id stored somewhere so you can use it later. ::: --- ### Essentials # Essentials At its core, an application that hosts an agent on Microsoft Teams exists to do three things well: listen to events, handle the ones that matter, and respond efficiently. Whether a user sends a message, opens a task module, or clicks a button — your app is there to interpret the event and act on it. With Teams AI Library v2, we’ve made it easier than ever to build this kind of reactive, conversational logic. The library introduces a few simple but powerful paradigms to help you connect to Teams, register handlers, and build intelligent agent behaviors quickly. Before diving in, let’s define a few key terms: • Event: Anything interesting that happens on Teams — or within your application as a result of handling an earlier event. • Activity: A special type of Teams-specific event. Activities include things like messages, reactions, and adaptive card actions. • InvokeActivity: A specific kind of activity triggered by user interaction (like submitting a form), which may or may not require a response. • Handler: The logic in your application that reacts to events or activities. Handlers decide what to do, when, and how to respond. ```mermaid flowchart LR Teams["Teams"] Server["App Server"] AppEventHandlers["Event Handler (app.OnEvent())"] AppRouter["Activity Event Router"] AppActivityHandlers["Activity Handlers (app.OnActivity())"] Teams --> |Activity| Server Teams --> |Signed In| Server Teams --> |...other
incoming events| Server Server --> |ActivityEvent
or InvokeEvent| AppRouter Server ---> |incoming
events| AppEventHandlers Server ---> |outgoing
events
| AppEventHandlers AppRouter --> |message activity| AppActivityHandlers AppRouter --> |card activity| AppActivityHandlers AppRouter --> |installation activity| AppActivityHandlers AppRouter --> |...other activities| AppActivityHandlers linkStyle 0,3 stroke:#66fdf3,stroke-width:1px,color:Tomato linkStyle 1,2,4,5 stroke:#66fdf3,stroke-width:1px linkStyle 6,7,8,9 color:Tomato ``` This section will walk you through the foundational pieces needed to build responsive, intelligent agents using the SDK. --- ### Listening To Events # Listening To Events An **event** is a foundational concept in building agents — it represents something noteworthy happening either on Microsoft Teams or within your application. These events can originate from the user (e.g. installing or uninstalling your app, sending a message, submitting a form), or from your application server (e.g. startup, error in a handler). ```mermaid flowchart LR Teams["Teams"]:::less-interesting Server["App Server"]:::interesting AppEventHandlers["Event Handler (app.OnEvent())"]:::interesting Teams --> |Activity| Server Teams --> |Signed In| Server Teams --> |...other
incoming events| Server Server ---> |incoming
events| AppEventHandlers Server ---> |outgoing
events
| AppEventHandlers linkStyle 0,1,2,3,4 stroke:#b1650f,stroke-width:1px classDef interesting fill:#b1650f,stroke:#333,stroke-width:4px; ``` The Teams AI Library v2 makes it easy to subscribe to these events and respond appropriately. You can register event handlers to take custom actions when specific events occur — such as logging errors, triggering workflows, or sending follow-up messages. Here are the events that you can start building handlers for: | **Event Name** | **Description** | | ------------------- | ------------------------------------------------------------------------------ | | `start` | Triggered when your application starts. Useful for setup or boot-time logging. | | `signin` | Triggered during a sign-in flow via Teams. | | `error` | Triggered when an unhandled error occurs in your app. Great for diagnostics. | | `activity` | A catch-all for incoming Teams activities (messages, commands, etc.). | | `activity.response` | Triggered when your app sends a response to an activity. Useful for logging. | | `activity.sent` | Triggered when an activity is sent (not necessarily in response). | ### Example 1 We can subscribe to errors that occur in the app. ```csharp app.OnError((sender, @event) => ); ``` ### Example 2 When an activity is received, log its `JSON` payload. ```csharp app.OnActivity((sender, @event) => ); ``` --- ### Listening To Activities # Listening To Activities An **Activity** is the Teams‑specific payload that flows between the user and your bot. Where _events_ describe high‑level happenings inside your app, _activities_ are the raw Teams messages such as chat text, card actions, installs, or invoke calls. The Teams AI Library v2 exposes a fluent router so you can subscribe to these activities with `app.OnActivity(...)`, or you can use controllers/attributes. ```mermaid flowchart LR Teams["Teams"]:::less-interesting Server["App Server"]:::interesting ActivityRouter["Activity Router (app.OnActivity())"]:::interesting Handlers["Your Activity Handlers"]:::interesting Teams --> |Events| Server Server --> |Activity Event| ActivityRouter ActivityRouter --> |handler invoked| Handlers classDef interesting fill:#b1650f,stroke:#333,stroke-width:4px; classDef less-interesting fill:#666,stroke:#333,stroke-width:4px; ``` Here is an example of a basic message handler: ```csharp [TeamsController] public class MainController { [Message] public async Task OnMessage([Context] MessageActivity activity, [Context] IContext.Client client) { await client.Send($"you said: {activity.Text}"); } } ``` ```csharp app.OnMessage(async context => { await context.Send($"you said: {context.Activity.Text}"); }); ``` In the above example, the `activity` parameter is of type `MessageActivity`, which has a `Text` property. You'll notice that the handler here does not return anything, but instead handles it by `send`ing a message back. For message activities, Teams does not expect your application to return anything (though it's usually a good idea to send some sort of friendly acknowledgment!). ## Middleware pattern The `OnActivity` activity handlers (and attributes) follow a [middleware](https://www.patterns.dev/vanilla/mediator-pattern/) pattern similar to how `dotnet` middlewares work. This means that for each activity handler, a `Next` function is passed in which can be called to pass control to the next handler. This allows you to build a chain of handlers that can process the same activity in different ways, as in the sketch below. 
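Putting the fragments that follow together, a minimal sketch of a two-handler chain might look like this (delegate signatures are simplified; the fluent `app.OnMessage`, `context.Next()`, and `context.Send()` calls are taken from the examples in this section):

```csharp
// First handler: only passes "/help" along the chain, handles everything else itself.
app.OnMessage(async context =>
{
    if (context.Activity.Text == "/help")
    {
        // Conditionally pass control to the next registered handler
        context.Next();
        return;
    }

    await context.Send("Try sending \"/help\" to reach the next handler.");
});

// Final handler: only runs when an earlier handler called Next().
app.OnMessage(async context =>
{
    await context.Send($"Hello! you said \"{context.Activity.Text}\"");
});
```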
```csharp [Message] public void OnMessage([Context] MessageActivity activity, [Context] ILogger logger, [Context] IContext.Next next) ``` ```csharp app.OnMessage(async context => ); ``` ```csharp [Message] public async Task OnMessage(IContext context) { if (context.Activity.Text == "/help") // Conditionally pass control to the next handler context.Next(); } ``` ```csharp app.OnMessage(async context => { if (context.Activity.Text == "/help") // Conditionally pass control to the next handler context.Next(); }); ``` ```csharp [Message] public async Task OnMessage(IContext context) { // Fallthrough to the final handler await context.Send($"Hello! you said "); } ``` ```csharp app.OnMessage(async context => { // Fallthrough to the final handler await context.Send($"Hello! you said "); }); ``` :::info Just like other middlewares, if you stop the chain by not calling `next()`, the activity will not be passed to the next handler. ::: --- ### Sending Messages # Sending Messages Sending messages is a core part of an agent's functionality. With all activity handlers, a `Send` method is provided which allows your handlers to send a message back to the user to the relevant conversation. ```csharp [Message] public async Task OnMessage([Context] MessageActivity activity, [Context] IContext.Client client) { await client.Send($"you said: "); } ``` ```csharp app.OnMessage(async context => { await context.Send($"you said: "); }); ``` In the above example, the handler gets a `message` activity, and uses the `send` method to send a reply to the user. ```csharp [SignIn.VerifyState] public async Task OnVerifyState([Context] SignIn.VerifyStateActivity activity, [Context] IContext.Client client) ``` ```csharp app.OnVerifyState(async context => ); ``` You are not restricted to only replying to `message` activities. In the above example, the handler is listening to `SignIn.VerifyState` events, which are sent when a user successfully signs in. :::tip This shows an example of sending a text message. Additionally, you are able to send back things like [adaptive cards](../../in-depth-guides/adaptive-cards) by using the same `Send` method. Look at the [adaptive card](../../in-depth-guides/adaptive-cards) section for more details. ::: ## Streaming You may also stream messages to the user which can be useful for long messages, or AI generated messages. The library makes this simple for you by providing a `Stream` function which you can use to send messages in chunks. ```csharp [Message] public void OnMessage([Context] MessageActivity activity, [Context] IStreamer stream) ``` ```csharp app.OnMessage(async context => ); ``` :::note Streaming is currently only supported in 1:1 conversations, not group chats or channels ::: ![Streaming Example](/screenshots/streaming-chat.gif) ## @Mention Sending a message at `@mentions` a user is as simple including the details of the user using the `AddMention` method ```csharp [Message] public async Task OnMessage([Context] MessageActivity activity, [Context] IContext.Client client) ``` ```csharp app.OnMessage(async context => ); ``` --- ### Teams API Client # Teams API Client Teams has a number of areas that your application has access to via its API. These are all available via the `app.Api` object. 
Here is a short summary of the different areas: | Area | Description | |------|-------------| | `Conversations` | Gives your application the ability to perform activities on conversations (send, update, delete messages, etc.), or create conversations (like 1:1 chat with a user) | | `Meetings` | Gives your application access to meeting details | | `Teams` | Gives your application access to team or channel details | An instance of the API client is passed to handlers and can be used to fetch details: ## Example In this example, we use the API client to fetch the members in a conversation. The `Api` object is passed to the activity handler in this case. ```csharp [Message] public async Task OnMessage([Context] MessageActivity activity, [Context] ApiClient api) ``` ```csharp app.OnMessage(async context => ); ``` ## Proactive API It's also possible to access the API client from outside a handler via the app instance. Here we have the same example as above, but we're accessing the API client via the app instance. ```csharp var members = await app.Api.Conversations.Members.Get("..."); ``` --- ### Graph API Client # Graph API Client [Microsoft Graph](https://docs.microsoft.com/en-us/graph/overview) gives you access to the wider Microsoft 365 ecosystem. You can enrich your application with data from across Microsoft 365. The library gives your application easy access to the Microsoft Graph API via the `Microsoft.Graph` package. Microsoft Graph can be accessed by your application using its own application token, or by using the user's token. If you need access to resources that your application may not have, but your user does, you will need to use the user's scoped graph client. To grant explicit consent for your application to access resources on behalf of a user, follow the [auth guide](../in-depth-guides/user-authentication). To access the Graph using the app's own identity, you may use the `app.Graph` object. ```csharp // Equivalent of https://learn.microsoft.com/en-us/graph/api/user-get // Gets the details of the bot-user var user = app.Graph.Me.GetAsync().GetAwaiter().GetResult(); Console.WriteLine($"User ID: {user?.Id}"); Console.WriteLine($"User Display Name: {user?.DisplayName}"); Console.WriteLine($"User Email: {user?.Mail}"); Console.WriteLine($"User Job Title: {user?.JobTitle}"); ``` To access the Graph using the user's token, you need to do this as part of a message handler: ```csharp [Message] public async Task OnMessage([Context] MessageActivity activity, [Context] GraphClient userGraph) { var user = await userGraph.Me.GetAsync(); Console.WriteLine($"User ID: {user?.Id}"); Console.WriteLine($"User Display Name: {user?.DisplayName}"); Console.WriteLine($"User Email: {user?.Mail}"); Console.WriteLine($"User Job Title: {user?.JobTitle}"); } ``` ```csharp app.OnMessage(async context => { var user = await context.UserGraph.Me.GetAsync(); Console.WriteLine($"User ID: {user?.Id}"); Console.WriteLine($"User Display Name: {user?.DisplayName}"); Console.WriteLine($"User Email: {user?.Mail}"); Console.WriteLine($"User Job Title: {user?.JobTitle}"); }); ``` Here, the `userGraph` object is a scoped graph client for the user that sent the message. :::tip You also have access to the `appGraph` object in the activity handler. This is equivalent to `app.Graph`. ::: --- ## In-Depth Guides ### Action commands # Action commands Action commands allow you to present your users with a modal pop-up called a dialog in Teams. The dialog collects or displays information, processes the interaction, and sends the information back to the Teams compose box. ## Action command invocation locations There are three different areas action commands can be invoked from: 1. Compose Area 2. 
Compose Box 3. Message ### Compose Area and Box ![compose area and box](/screenshots/compose-area.png) ### Message action command ![message action command](/screenshots/message.png) :::tip See the [Invoke Locations](https://learn.microsoft.com/en-us/microsoftteams/platform/messaging-extensions/how-to/action-commands/define-action-command?tabs=Teams-toolkit%2Cdotnet#select-action-command-invoke-locations) guide to learn more about the different entry points for action commands. ::: ## Setting up your Teams app manifest To use action commands you have define them in the Teams app manifest. Here is an example: ```json "composeExtensions": [ { "botId": "${BOT_ID}", "commands": [ { "id": "createCard", "type": "action", "context": [ "compose", "commandBox" ], "description": "Command to run action to create a card from the compose box.", "title": "Create Card", "parameters": [ "name": "title", "title": "Card title", "description": "Title for the card", "inputType": "text" , "name": "subTitle", "title": "Subtitle", "description": "Subtitle for the card", "inputType": "text" , "name": "text", "title": "Text", "description": "Text for the card", "inputType": "textarea" ] }, , , ] } ] ``` Here we are defining three different commands: 1. `createCard` - that can be invoked from either the `compose` or `commandBox` areas. Upon invocation a dialog will popup asking the user to fill the `title`, `subTitle`, and `text`. ![Parameters](/screenshots/parameters.png) 2. `getMessageDetails` - It is invoked from the `message` overflow menu. Upon invocation the message payload will be sent to the app which will then return the details like `createdDate`...etc. ![Get Message Details Command](/screenshots/message-command.png) 3. `fetchConversationMembers` - It is invoked from the `compose` area. Upon invocation the app will return an adaptive card in the form of a dialog with the conversation roster. ![Fetch conversation members](/screenshots/fetch-conversation-members.png) ## Handle submission Handle submission when the `createCard` or `getMessageDetails` actions commands are invoked. 
```typescript app.on('message.ext.submit', async ( activity ) => { const commandId = activity.value; let card: IAdaptiveCard; if (commandId === 'createCard') else if ( commandId === 'getMessageDetails' && activity.value.messagePayload ) else { throw new Error(`Unknown commandId: $commandId`); } return { composeExtension: , }; }); ``` `createCard()` function ```typescript interface IFormData title: string; subtitle: string; text: string; export function createCard(data: IFormData) { return new AdaptiveCard( new Image(IMAGE_URL), new TextBlock(data.title, size: 'Large', weight: 'Bolder', color: 'Accent', style: 'heading', ), new TextBlock(data.subtitle, size: 'Small', weight: 'Lighter', color: 'Good', ), new TextBlock(data.text, wrap: true, spacing: 'Medium', ) ); } ``` `createMessageDetailsCard()` function ```typescript export function createMessageDetailsCard(messagePayload: Message) { const cardElements: CardElement[] = [ new TextBlock('Message Details', size: 'Large', weight: 'Bolder', color: 'Accent', style: 'heading', ), ]; if (messagePayload?.body?.content) { cardElements.push( new TextBlock('Content', size: 'Medium', weight: 'Bolder', spacing: 'Medium', ), new TextBlock(messagePayload.body.content) ); } if (messagePayload?.attachments?.length) { cardElements.push( new TextBlock('Attachments', size: 'Medium', weight: 'Bolder', spacing: 'Medium', ), new TextBlock( `Number of attachments: $`, wrap: true, spacing: 'Small', ) ); } if (messagePayload?.createdDateTime) { cardElements.push( new TextBlock('Created Date', size: 'Medium', weight: 'Bolder', spacing: 'Medium', ), new TextBlock(messagePayload.createdDateTime, wrap: true, spacing: 'Small', ) ); } if (messagePayload?.linkToMessage) { cardElements.push( new TextBlock('Message Link', size: 'Medium', weight: 'Bolder', spacing: 'Medium', ), new ActionSet( new OpenUrlAction(messagePayload.linkToMessage, title: 'Go to message', ) ) ); } return new AdaptiveCard(...cardElements); } ``` ## Handle opening adaptive card dialog Handle opening adaptive card dialog when the `fetchConversationMembers` command is invoked. ```typescript app.on('message.ext.open', async ( activity, api ) => { const conversationId = activity.conversation.id; const members = await api.conversations.members(conversationId).get(); const card = createConversationMembersCard(members); return { task: { type: 'continue', value: , }, }; }); ``` `createConversationMembersCard()` function ```typescript export function createConversationMembersCard(members: Account[]) { const membersList = members.map((member) => member.name).join(', '); return new AdaptiveCard( new TextBlock('Conversation members', size: 'Medium', weight: 'Bolder', color: 'Accent', style: 'heading', ), new TextBlock(membersList, wrap: true, spacing: 'Small', ) ); } ``` ## Resources - [Action commands](https://learn.microsoft.com/en-us/microsoftteams/platform/messaging-extensions/how-to/action-commands/define-action-command?tabs=Teams-toolkit%2Cdotnet) - [Returning Adaptive Card Previews in Task Modules](https://learn.microsoft.com/en-us/microsoftteams/platform/messaging-extensions/how-to/action-commands/respond-to-task-module-submit?tabs=dotnet%2Cdotnet-1#bot-response-with-adaptive-card) --- ### Adaptive Cards # Adaptive Cards Adaptive Cards provide a flexible, cross-platform content format for creating rich, interactive experiences. They consist of a customizable body of card elements combined with optional action sets, all fully serializable for delivery to clients. 
Through a powerful combination of text, graphics, and interactive buttons, Adaptive Cards enable compelling user experiences across various platforms. The Adaptive Card framework is widely implemented throughout Microsoft's ecosystem, with significant integration in Microsoft Teams. Within Teams, Adaptive Cards power numerous key scenarios including: - Rich interactive messages - Dialogs - Message Extensions - Link Unfurling - Configuration forms - And many more application contexts Mastering Adaptive Cards is essential for creating sophisticated, engaging experiences that leverage the full capabilities of the Teams platform. This guide will help you learn how to use them in this SDK. For a more comprehensive guide on Adaptive Cards, see the [official documentation](https://adaptivecards.microsoft.com/). --- ### Building Adaptive Cards # Building Adaptive Cards Adaptive Cards are JSON payloads that describe rich, interactive UI fragments. With `@microsoft/teams.cards` you can build these cards entirely in TypeScript / JavaScript while enjoying full IntelliSense and compiler safety. --- ## The Builder Pattern `@microsoft/teams.cards` exposes small **builder helpers** (`Card`, `TextBlock`, `ToggleInput`, `ExecuteAction`, _etc._). Each helper wraps raw JSON and provides fluent, chainable methods that keep your code concise and readable. ```typescript /** import AdaptiveCard, TextBlock, ToggleInput, ExecuteAction, ActionSet, from "@microsoft/teams.cards"; */ const card = new AdaptiveCard( new TextBlock('Hello world', wrap: true, weight: 'Bolder' ), new ToggleInput('Notify me').withId('notify'), new ActionSet( new ExecuteAction( title: 'Submit' ) .withData( action: 'submit_basic' ) .withAssociatedInputs('auto') ) ); ``` Benefits: | Benefit | Description | | ----------- | ----------------------------------------------------------------------------- | | Readability | No deep JSON trees—just chain simple methods. | | Re‑use | Extract snippets to functions or classes and share across cards. | | Safety | Builders validate every property against the Adaptive Card schema (see next). | > Source code lives in `teams.ts/packages/cards/src/`. Feel free to inspect or extend the helpers for your own needs. --- ## Type‑safe Authoring & IntelliSense The package bundles the **Adaptive Card v1.5 schema** as strict TypeScript types. While coding you get: - **Autocomplete** for every element and attribute. - **In‑editor validation**—invalid enum values or missing required properties produce build errors. - Automatic upgrades when the schema evolves; simply update the package. ```typescript // @ts-expect-error: "huge" is not a valid size for TextBlock const textBlock = new TextBlock('Valid', size: 'huge' ); ``` --- ## The Visual Designer Prefer a drag‑and‑drop approach? Use [Microsoft's Adaptive Card Designer](https://adaptivecards.microsoft.com/designer.html): 1. Add elements visually until the card looks right. 2. Copy the JSON payload from the editor pane. 3. 
Paste the JSON into your project **or** convert it to builder calls: ```typescript const cardJson = /* copied JSON */; const card = new AdaptiveCard().withBody(cardJson); ``` ```typescript const rawCard: IAdaptiveCard = { type: 'AdaptiveCard', body: [ , { columns: [ { width: 'stretch', items: [ { choices: [ title: 'Call of Duty', value: 'call_of_duty' , title: 'Death\'s Door', value: 'deaths_door' , title: 'Grand Theft Auto V', value: 'grand_theft' , title: 'Minecraft', value: 'minecraft' , ], style: 'filtered', placeholder: 'Search for a game', id: 'choiceGameSingle', type: 'Input.ChoiceSet', label: 'Game:', }, ], type: 'Column', }, ], type: 'ColumnSet', }, ], actions: [ { title: 'Request purchase', type: 'Action.Execute', data: action: 'purchase_item' , }, ], version: '1.5', }; ``` This method leverages the full Adaptive Card schema and ensures that the payload adheres strictly to `IAdaptiveCard`. :::tip You can use a combination of raw JSON and builder helpers depending on whatever you find easier. ::: --- ## End‑to‑end Example – Task Form Card Below is a complete example showing a task management form. Notice how the builder pattern keeps the file readable and maintainable: ```typescript app.on('message', async ( send, activity ) => { await send( type: 'typing' ); const card = new AdaptiveCard( new TextBlock('Create New Task', size: 'Large', weight: 'Bolder', ), new TextInput( id: 'title' ) .withLabel('Task Title') .withPlaceholder('Enter task title'), new TextInput( id: 'description' ) .withLabel('Description') .withPlaceholder('Enter task details') .withIsMultiline(true), new ChoiceSetInput( title: 'High', value: 'high' , title: 'Medium', value: 'medium' , title: 'Low', value: 'low' ) .withId('priority') .withLabel('Priority') .withValue('medium'), new DateInput( id: 'due_date' ) .withLabel('Due Date') .withValue(new Date().toISOString().split('T')[0]), new ActionSet( new ExecuteAction( title: 'Create Task' ) .withData( action: 'create_task' ) .withAssociatedInputs('auto') .withStyle('positive') ) ); await send(card); // Or build a complex activity out that includes the card: // const message = new MessageActivity('Enter this form').addCard('adaptive', card); // await send(message); }); ``` --- ## Additional Resources - [**Official Adaptive Card Documentation**](https://adaptivecards.microsoft.com/) - [**Adaptive Cards Designer**](https://adaptivecards.microsoft.com/designer.html) --- ### Summary - Use **builder helpers** for readable, maintainable card code. - Enjoy **full type safety** and IDE assistance. - Prototype quickly in the **visual designer** and refine with builders. Happy card building! 🎉 --- ### Creating Dialogs # Creating Dialogs :::tip If you're not familiar with how to build Adaptive Cards, check out [the cards guide](../adaptive-cards). Understanding their basics is a prerequisite for this guide. ::: ## Entry Point To open a dialog, you need to supply a special type of action as to the Adaptive Card. Once this button is clicked, the dialog will open and ask the application what to show. 
```typescript app.on('message', async ( send ) => { await send( type: 'typing' ); // Create the launcher adaptive card const card: IAdaptiveCard = new AdaptiveCard( type: 'TextBlock', text: 'Select the examples you want to see!', size: 'Large', weight: 'Bolder', ).withActions( // raw action { type: 'Action.Submit', title: 'Simple form test', data: { msteams: type: 'task/fetch', , opendialogtype: 'simple_form', }, }, // Special type of action to open a dialog new TaskFetchAction({}) .withTitle('Webpage Dialog') // This data will be passed back in an event so we can // handle what to show in the dialog .withValue(new TaskFetchData( opendialogtype: 'webpage_dialog' )), new TaskFetchAction({}) .withTitle('Multi-step Form') .withValue(new TaskFetchData( opendialogtype: 'multi_step_form' )), new TaskFetchAction({}) .withTitle('Mixed Example') .withValue(new TaskFetchData( opendialogtype: 'mixed_example' )) ); // Send the card as an attachment await send(new MessageActivity('Enter this form').addCard('adaptive', card)); }); ``` ## Handling Dialog Open Events Once an action is executed to open a dialog, the Teams client will send an event to the agent to request what the content of the dialog should be. Here is how to handle this event: ```typescript app.on('dialog.open', async ( activity ) => { const card: IAdaptiveCard = new AdaptiveCard()... // Return an object with the task value that renders a card return { task: { type: 'continue', value: , }, }; } ``` ### Rendering A Card You can render an Adaptive Card in a dialog by returning a card response. ```typescript if (dialogType === 'simple_form') { const dialogCard = new AdaptiveCard( type: 'TextBlock', text: 'This is a simple form', size: 'Large', weight: 'Bolder', , new TextInput() .withLabel('Name') .withIsRequired() .withId('name') .withPlaceholder('Enter your name') ) // Inside the dialog, the card actions for submitting the card must be // of type Action.Submit .withActions( new SubmitAction() .withTitle('Submit') .withData( submissiondialogtype: 'simple_form' ) ); // Return an object with the task value that renders a card return { task: { type: 'continue', value: , }, }; } ``` :::info The action type for submitting a dialog must be `Action.Submit`. This is a requirement of the Teams client. If you use a different action type, the dialog will not be submitted and the agent will not receive the submission event. ::: ### Rendering A Webpage You can render a webpage in a dialog as well. There are some security requirements to be aware of: 1. The webpage must be hosted on a domain that is allow-listed as `validDomains` in the Teams app [manifest](/teams/manifest) for the agent 2. The webpage must also host the [teams-js client library](https://www.npmjs.com/package/@microsoft/teams-js). The reason for this is that for security purposes, the Teams client will not render arbitrary webpages. As such, the webpage must explicitly opt-in to being rendered in the Teams client. Setting up the teams-js client library handles this for you. ```typescript return { task: { type: 'continue', value: { title: 'Webpage Dialog', // Here we are using a webpage that is hosted in the same // server as the agent. This server needs to be publicly accessible, // needs to set up teams.js client library (https://www.npmjs.com/package/@microsoft/teams-js) // and needs to be registered in the manifest. 
url: `$/tabs/dialog-form`, width: 1000, height: 800, }, }, }; ``` --- ### Getting started # Getting started To use this package, you can either set up a new project using the Teams CLI, or add it to an existing tab app project. ## Setting up a new project The Teams CLI contains a Microsoft 365 Agents Toolkit configuration and a template to easily scaffold a new tab app with a callable remote function. To set this up, first install the Teams CLI as outlined in the [Quickstart](../../getting-started/quickstart.md) guide. Then, create the app by running: ```sh teams new my-first-tab-app --tk embed --template tab ``` When the app is created, you can use the Agents Toolkit to run and debug it inside of Teams from your local machine, same as for any other Agents Toolkit tab app. ## Adding to an existing project This package is set up to integrate well with existing Tab apps. The main consideration is that the AAD app must be configured to support Nested App Authentication (NAA). Otherwise it will not be possible to acquire the bearer token needed to call Microsoft Graph APIs or remote agent functions. After verifying that the app is configured for NAA, simply use your package manager to add a dependency on `@microsoft/teams.client` and then proceed with [Starting the app](./using-the-app.md). If you're already using a current version of TeamsJS, that's fine. This package works well with TeamsJS. If you're already using Microsoft Authentication Library (MSAL) in an NAA enabled app, that's great! The [App options](./app-options.md) page shows how you can use a single common MSAL instance. ## Resources - [Running and debugging local apps in Agents Toolkit](https://learn.microsoft.com/en-us/microsoftteams/platform/toolkit/debug-local?tabs=Windows) - [Configuring an app for Nested App Authentication](https://learn.microsoft.com/en-us/microsoftteams/platform/concepts/authentication/nested-authentication#configure-naa) --- ### MCP Server # MCP Server You are able to convert any `App` into an MCP server by using the `McpPlugin` from the `@microsoft/teams.mcp` package. This plugin adds the necessary endpoints to your application to serve as an MCP server. The plugin allows you to define tools, resources, and prompts that can be exposed to other MCP applications. Your plugin can be configured as follows: ```typescript const mcpServerPlugin = new McpPlugin().tool( // Describe the tools with helpful names and descriptions 'echo', 'echos back whatever you said', , readOnlyHint: true, idempotentHint: true , async ( input ) => { return { content: [ { type: 'text', text: `you said "$input"`, }, ], }; } ); ``` :::note > By default, the MCP server will be available at `/mcp` on your application. You can change this by setting the `transport.path` property in the plugin configuration. ::: And included in the app like any other plugin: ```typescript const app = new App(); ``` :::tip Enabling mcp request inspection and the `DevtoolsPlugin` allows you to see all the requests and responses to and from your MCP server (similar to how the **Activities** tab works). ::: ![MCP Server in Devtools](/screenshots/mcp-devtools.gif) ## Piping messages to the user Since your agent is provisioned to work on Teams, one very helpful feature is to use this server as a way to send messages to the user. This can be helpful in various scenarios: 1. Human in the loop - if the server or an MCP client needs to confirm something with the user, it is able to do so. 2. 
Notifications - the server can be used as a way to send notifications to the user. Here is an example of how to do this. Configure your plugin so that: 1. It can validate if the incoming request is allowed to send messages to the user 2. It fetches the correct conversation ID for the given user. 3. It sends a proactive message to the user. See [Proactive Messaging](../../../essentials/sending-messages/proactive-messaging) for more details. ```typescript // Keep a store of the user to the conversation id // In a production app, you probably would want to use a // persistent store like a database const userToConversationId = new Map(); // Add a an MCP server tool mcpServerPlugin.tool( 'alertUser', 'alerts the user about something important', , readOnlyHint: true, idempotentHint: true , async ( input, userAadObjectId , authInfo ) => { if (!isAuthValid(authInfo)) const conversationId = userToConversationId.get(userAadObjectId); if (!conversationId) { console.log('Current conversation map', userToConversationId); return { content: [ { type: 'text', text: `user $userAadObjectId is not in a conversation`, }, ], }; } // Leverage the app's proactive messaging capabilities to send a mesage to // correct conversation id. await app.send(conversationId, `Notification: $input`); return { content: [ type: 'text', text: 'User was notified', , ], }; } ); ``` ```typescript app.on('message', async ( send, activity ) => { await send( type: 'typing' ); await send(`you said "$"`); if (activity.from.aadObjectId && !userToConversationId.has(activity.from.aadObjectId)) { userToConversationId.set(activity.from.aadObjectId, activity.conversation.id); app.log.info( `Just added user $ to conversation $` ); } }); ``` --- ### Middleware # Middleware Middleware is a useful tool for logging, validation, and more. You can easily register your own middleware using the `app.use` method. Below is an example of a middleware that will log the elapse time of all handers that come after it. ```typescript app.use(async ( log, next ) => ); ``` --- ### Quickstart # Quickstart In this section we will walk through creating an app that can access the [Microsoft Graph APIs](https://learn.microsoft.com/en-us/graph/overview) on behalf of the user by authenticating them with the [Microsoft Entra ID](https://www.microsoft.com/en-us/security/business/identity-access/microsoft-entra-id) oauth provider. :::info It is possible to authenticate the user into [other auth providers](https://learn.microsoft.com/en-us/azure/bot-service/bot-builder-concept-identity-providers?view=azure-bot-service-4.0&tabs=adv2%2Cga2#other-identity-providers) like Facebook, Github, Google, Dropbox, and so on. ::: :::info This is an advanced guide. It is highly recommended that you are familiar with [creating an app](https://microsoft.github.io/teams-ai/2.getting-started/1.quickstart.html) and [running it in Teams](https://microsoft.github.io/teams-ai/2.getting-started/3.running-in-teams.html) before attempting to follow this guide. ::: :::warning User authentication does not work with the developer tools setup. You have to run the app in Teams. Follow these [instructions](../../getting-started/running-in-teams#debugging-in-teams) to run your app in Teams. ::: ## Setup Instructions ### Create an app with the `graph` template :::tip Skip this step if you want to add the auth configurations to an existing app. ::: :::note In this template, `graph` is the default name of the OAuth connection, but you can change that by supplying `defaultOauthConnectionName` in the `app`. 
::: Use your terminal to run the following command: ```sh teams new oauth-app --template graph ``` This command: 1. Creates a new directory called `oauth-app`. 2. Bootstraps the graph agent template files into it under `oauth-app/src`. 3. Creates your agent's manifest files, including a `manifest.json` file and placeholder icons in the `oauth-app/appPackage` directory. The Teams [app manifest](https://learn.microsoft.com/en-us/microsoftteams/platform/resources/schema/manifest-schema) is required for [sideloading](https://learn.microsoft.com/en-us/microsoftteams/platform/concepts/deploy-and-publish/apps-upload) the app into Teams. ### Add Agents Toolkit auth configuration Open your terminal with the `oauth-app/` folder set as the current working directory and run the following command: ```sh teams config add atk.oauth ``` This will add relevant Agents Toolkit files to your project. :::tip See [App Setup](./setup#using-m365-agents-toolkit-with-the-teams-cli) to learn more about what this command does. ::: ## Interacting with the app in Teams Once you have successfully sideloaded the app into Teams you can now interact with it and sign the user in. ### Signing the user in :::note This is the Single Sign-On (SSO) authentication flow. To learn more about all the available flows and their differences see the [How Auth Works](how-auth-works.txt) guide. ::: When the user sends a message to the user a consent form will popup: ![Consent popup](/screenshots/auth-consent-popup.png) This will ask the user to consent to the `User.ReadBasic.All` Microsoft Graph scope: :::note The `atk.oauth` configuration explicitly requests the `User.ReadBasic.All` permission. It is possible to request other permissions by modifying the App Registration for the bot on Azure. ::: ![Entra ID signin](/screenshots/auth-entra-id-signin.png) Once the user signs in and grants the app access, they will be redirected back to the Teams client and the app will send back the user's information as retrieved from the graph client: ![Graph message](/screenshots/auth-graph-message.png) The user can then signout by sending the `signout` command to the app: ![Signout message](/screenshots/auth-signout-message.png) --- ### Setup & Prerequisites # Setup & Prerequisites There are a few prerequisites to getting started with integrating LLMs into your application: - LLM API Key - To generate messages using an LLM, you will need to have an API Key for the LLM you are using. - [Azure OpenAI](https://azure.microsoft.com/en-us/products/ai-services/openai-service) - [OpenAI](https://platform.openai.com/) - In your application, you should include your keys in a secure way. We recommend putting it in an .env file at the root level of your project ``` my-app/ |── appPackage/ # Teams app package files ├── src/ │ └── index.ts # Main application code |── .env # Environment variables ``` ### Azure OpenAI You will need to deploy a model in Azure OpenAI. [Here](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/create-resource?pivots=web-portal#deploy-a-model 'Azure OpenAI Model Deployment Guide') is a guide on how to do this. Once you have deployed a model, include the following key/values in your `.env` file: ```env AZURE_OPENAI_API_KEY=your-azure-openai-api-key AZURE_OPENAI_MODEL_DEPLOYMENT_NAME=your-azure-openai-model AZURE_OPENAI_ENDPOINT=you-azure-openai-endpoint AZURE_OPENAI_API_VERSION=your-azure-openai-api-version ``` :::info The `AZURE_OPENAI_API_VERSION` is different from the model version. This is a common point of confusion. 
Look for the API Version [here](https://learn.microsoft.com/en-us/azure/ai-services/openai/reference?WT.mc_id=AZ-MVP-5004796 'Azure OpenAI API Reference') ::: ### OpenAI You will need to create an OpenAI account and get an API key. [Here](https://platform.openai.com/docs/quickstart/build-your-application 'OpenAI Quickstart Guide') is a guide on how to do this. Once you have your API key, include the following key/values in your `.env` file: ```env OPENAI_API_KEY=sk-your-openai-api-key ``` --- ### 💬 Chat Generation # 💬 Chat Generation Before going through this guide, please make sure you have completed the [setup and prerequisites](./setup-and-prereqs.md) guide. # Setup The basic setup involves creating a `ChatPrompt` and giving it the `Model` you want to use. ```mermaid flowchart LR Prompt subgraph Application Send --> Prompt UserMessage["User Message
Hi how are you?"] --> Send Send --> Content["Content
I am doing great! How can I help you?"] subgraph Setup Messages --> Prompt Instructions --> Prompt Options["Other options..."] --> Prompt Prompt --> Model end end subgraph LLMProvider Model --> AOAI["Azure Open AI"] Model --> OAI["Open AI"] Model --> Anthropic["Claude"] Model --> OtherModels["..."] end ``` ## Simple chat generation Chat generation is the the most basic way of interacting with an LLM model. It involves setting up your ChatPrompt, the Model, and sending it the message. Import the relevant objects: ```typescript import OpenAIChatModel from '@microsoft/teams.openai'; ``` ```typescript app.on('message', async ( send, activity, next ) => { const model = new OpenAIChatModel(); const prompt = new ChatPrompt( instructions: 'You are a friendly assistant who talks like a pirate', model, ); const response = await prompt.send(activity.text); if (response.content) }); ``` :::note The current `OpenAIChatModel` implementation uses chat-completions API. The responses API is coming soon. ::: ## Streaming chat responses LLMs can take a while to generate a response, so often streaming the response leads to a better, more responsive user experience. :::warning Streaming is only currently supported for single 1:1 chats, and not for groups or channels. ::: ```typescript app.on('message', async ( stream, send, activity, next ) => { // const query = activity.text; const prompt = new ChatPrompt( instructions: 'You are a friendly assistant who responds in terse language', model, ); // Notice that we don't `send` the final response back, but // `stream` the chunks as they come in const response = await prompt.send(query, { onChunk: (chunk) => , }); if (activity.conversation.isGroup) else }); ``` ![Streaming the response](/screenshots/streaming-chat.gif) --- ### 🔍 Search commands # 🔍 Search commands Message extension search commands allow users to search external systems and insert the results of that search into a message in the form of a card. ## Search command invocation locations There are three different areas search commands can be invoked from: 1. Compose Area 2. Compose Box ### Compose Area and Box ![compose area and box](/screenshots/compose-area.png) ## Setting up your Teams app manifest To use search commands you have define them in the Teams app manifest. Here is an example: ```json "composeExtensions": [ { "botId": "${BOT_ID}", "commands": [ { "id": "searchQuery", "context": [ "compose", "commandBox" ], "description": "Test command to run query", "title": "Search query", "type": "query", "parameters": [ "name": "searchQuery", "title": "Search Query", "description": "Your search query", "inputType": "text" ] } ] } ] ``` Here we are defining the `searchQuery` search (or query) command. ## Handle submission Handle opening adaptive card dialog when the `searchQuery` query is submitted. 
```typescript app.on('message.ext.query', async ( activity ) => { const commandId = activity.value; const searchQuery = activity.value.parameters![0].value; if (commandId == 'searchQuery') { const cards = await createDummyCards(searchQuery); const attachments = cards.map(( card, thumbnail ) => { return ; }); return { composeExtension: type: 'result', attachmentLayout: 'list', attachments: attachments, , }; } return status: 400 ; }); ``` `createDummyCards()` function ```typescript export async function createDummyCards(searchQuery: string) { const dummyItems = [ { title: 'Item 1', description: `This is the first item and this is your search query: $searchQuery`, }, title: 'Item 2', description: 'This is the second item' , title: 'Item 3', description: 'This is the third item' , title: 'Item 4', description: 'This is the fourth item' , title: 'Item 5', description: 'This is the fifth item' , ]; const cards = dummyItems.map((item) => { return { card: new AdaptiveCard( new TextBlock(item.title, size: 'Large', weight: 'Bolder', color: 'Accent', style: 'heading', ), new TextBlock(item.description, wrap: true, spacing: 'Medium', ) ), thumbnail: { title: item.title, text: item.description, // When a user clicks on a list item in Teams: // - If the thumbnail has a `tap` property: Teams will trigger the `message.ext.select-item` activity // - If no `tap` property: Teams will insert the full adaptive card into the compose box // tap: { // type: "invoke", // title: item.title, // value: // "option": index, // , // }, } satisfies ThumbnailCard, }; }); return cards; } ``` The search results include both a full adaptive card and a preview card. The preview card appears as a list item in the search command area: ![Search command preview card](/screenshots/preview-card.png) When a user clicks on a list item the dummy adaptive card is added to the compose box: ![Card in compose box](/screenshots/card-in-compose.png) To implement custom actions when a user clicks on a search result item, you can add the `tap` property to the preview card. This allows you to handle the click event with custom logic: ```typescript app.on('message.ext.select-item', async ( activity, send ) => { const option = activity.value; await send(`Selected item: $option`); return status: 200, ; }); ``` ## Resources - [Search command](https://learn.microsoft.com/en-us/microsoftteams/platform/messaging-extensions/how-to/search-commands/define-search-command?tabs=Teams-toolkit%2Cdotnet) - [Just-In-Time Install](https://learn.microsoft.com/en-us/microsoftteams/platform/messaging-extensions/how-to/search-commands/universal-actions-for-search-based-message-extensions#just-in-time-install) --- ### 🗃️ Custom Logger # 🗃️ Custom Logger The `App` will provide a default logger, but you can also provide your own. The default `Logger` instance will be set to `ConsoleLogger` from the `@microsoft/teams.common` package. ```typescript // initialize app with custom console logger // set to debug log level const app = new App({ logger: new ConsoleLogger('echo', level: 'debug' ), }); app.on('message', async ( send, activity, log ) => { log.debug(activity); await send( type: 'typing' ); await send(`you said "$"`); }); (async () => )(); ``` --- ### Dialogs (Task Modules) # Dialogs (Task Modules) Dialogs are a helpful paradigm in Teams which improve interactions between your agent and users. When dialogs are **invoked**, they pop open a window for a user in the Teams client. The content of the dialog can be supplied by the agent application. ## Key benefits 1. 
Dialogs pop open for a user in the Teams client. This means in group-settings, dialog actions are not visible to other users in the channel, reducing clutter. 2. Interactions like filling out complex forms, or multi-step forms where each step depends on the previous step are excellent use cases for dialogs. 3. The content for the dialog can be hard-coded in, or fetched at runtime. This makes them extremely flexible and powerful. ## Resources - [Task Modules](https://learn.microsoft.com/en-us/microsoftteams/platform/task-modules-and-cards/what-are-task-modules) - [Invoking Task Modules](https://learn.microsoft.com/en-us/microsoftteams/platform/task-modules-and-cards/task-modules/invoking-task-modules) --- ### Executing Actions # Executing Actions Adaptive Cards support interactive elements through **actions**—buttons, links, and input submission triggers that respond to user interaction. You can use these to collect form input, trigger workflows, show task modules, open URLs, and more. --- ## Action Types The Teams AI Library supports several action types for different interaction patterns: | Action Type | Purpose | Description | | ------------------------- | ---------------------- | ---------------------------------------------------------------------------- | | `Action.Execute` | Server‑side processing | Send data to your bot for processing. Best for forms & multi‑step workflows. | | `Action.Submit` | Simple data submission | Legacy action type. Prefer `Execute` for new projects. | | `Action.OpenUrl` | External navigation | Open a URL in the user's browser. | | `Action.ShowCard` | Progressive disclosure | Display a nested card when clicked. | | `Action.ToggleVisibility` | UI state management | Show/hide card elements dynamically. | > For complete reference, see the [official documentation](https://adaptivecards.microsoft.com/?topic=Action.Execute). --- ## Creating Actions with the SDK ### Single Actions The SDK provides builder helpers that abstract the underlying JSON. For example: ```typescript /** import ExecuteAction from "@microsoft/teams.cards"; */ new ExecuteAction( title: 'Submit Feedback' ) .withData( action: 'submit_feedback' ) .withAssociatedInputs('auto'), ``` ### Action Sets Group actions together using `ActionSet`: ```typescript /** * import * AdaptiveCard, * ExecuteAction, * OpenUrlAction, * ActionSet, * from "@microsoft/teams.cards"; */ new ActionSet( new ExecuteAction( title: 'Submit Feedback' ) .withData( action: 'submit_feedback' ) .withAssociatedInputs('auto'), new OpenUrlAction('https://adaptivecards.microsoft.com').withTitle( 'Learn More' ) ) ``` ### Raw JSON Alternative Just like when building cards, if you prefer to work with raw JSON, you can do just that. You get typesafety for free in typescript. ```typescript as const ``` --- ## Working with Input Values ### Associating data with the cards Sometimes you want to send a card and have it be associated with some data. Set the `data` value to be sent back to the client so you can associate it with a particular entity. 
```typescript function editProfileCard() { const card = new AdaptiveCard( new TextInput( id: 'name' ).withLabel('Name').withValue('John Doe'), new TextInput(), new ToggleInput('Subscribe to newsletter') .withId('subscribe') .withValue('false'), new ActionSet( new ExecuteAction( title: 'Save' ) .withData( action: 'save_profile', entityId: '12345', // This will come back once the user submits ) .withAssociatedInputs('auto') ) ); // Data received in handler /** */ return card; } ``` ### Input Validation Input Controls provide ways for you to validate. More details can be found on the Adaptive Cards [documentation](https://adaptivecards.microsoft.com/?topic=input-validation). ```typescript function createProfileCardInputValidation() { const ageInput = new NumberInput( id: 'age' ) .withLabel('Age') .withIsRequired(true) .withMin(0) .withMax(120); const nameInput = new TextInput( id: 'name' ) .withLabel('Name') .withIsRequired() .withErrorMessage('Name is required!'); // Custom error messages const card = new AdaptiveCard( nameInput, ageInput, new TextInput( id: 'location' ).withLabel('Location'), new ActionSet( new ExecuteAction( title: 'Save' ) .withData( action: 'save_profile', ) .withAssociatedInputs('auto') // All inputs should be validated ) ); return card; } ``` --- ## Server Handlers ### Basic Structure Card actions arrive as `card.action` activities in your app. These give you access to the validated input values plus any `data` values you had configured to be sent back to you. ```typescript app.on('card.action', async ( activity, send ) => { const data = activity.value?.action?.data; if (!data?.action) { return { statusCode: 400, type: 'application/vnd.microsoft.error', value: { code: 'BadRequest', message: 'No action specified', innerHttpError: { statusCode: 400, body: error: 'No action specified' , }, }, } satisfies AdaptiveCardActionErrorResponse; } console.debug('Received action data:', data); switch (data.action) { case 'submit_feedback': await send(`Feedback received: $`); break; case 'purchase_item': await send( `Purchase request received for game: $` ); break; case 'save_profile': await send( `Profile saved!\nName: $\nEmail: $\nSubscribed: $` ); break; default: return { statusCode: 400, type: 'application/vnd.microsoft.error', value: { code: 'BadRequest', message: 'Unknown action', innerHttpError: { statusCode: 400, body: error: 'Unknown action' , }, }, } satisfies AdaptiveCardActionErrorResponse; } return satisfies AdaptiveCardActionMessageResponse; }); ``` :::note The `data` values are not typed and come as `any`, so you will need to cast them to the correct type in this case. ::: --- ### Handling Dialog Submissions # Handling Dialog Submissions Dialogs have a specific `dialog.submit` event to handle submissions. When a user submits a form inside a dialog, the app is notified via this event, which is then handled to process the submission values, and can either send a response or proceed to more steps in the dialogs (see [Multi-step Dialogs](./handling-multi-step-forms)). 
In this example, we show how to handle dialog submissions from an Adaptive Card form: ```typescript app.on('dialog.submit', async ( activity, send, next ) => { const dialogType = activity.value.data?.submissiondialogtype; if (dialogType === 'simple_form') { // This is data from the form that was submitted const name = activity.value.data.name; await send(`Hi $name, thanks for submitting the form!`); return { task: type: 'message', // This appears as a final message in the dialog value: 'Form was submitted', , }; } }); ``` Similarly, handling dialog submissions from rendered webpages is also possible: ```typescript // The submission from a webpage happens via the microsoftTeams.tasks.submitTask(formData) // call. app.on('dialog.submit', async ( activity, send, next ) => { const dialogType = activity.value.data.submissiondialogtype; if (dialogType === 'webpage_dialog') { // This is data from the form that was submitted const name = activity.value.data.name; const email = activity.value.data.email; await send( `Hi $name, thanks for submitting the form! We got that your email is $email` ); // You can also return a blank response return status: 200, ; } }); ``` --- ### How Auth Works # How Auth Works When building Teams applications, choosing the right authentication method is crucial for both security and user experience. Teams supports two primary authentication approaches: OAuth and Single Sign-On (SSO). While both methods serve the same fundamental purpose of validating user identity, they differ significantly in their implementation, supported identity providers, and user experience. Understanding these differences is essential for making the right choice for your application. The following table provides a clear comparison between OAuth and SSO authentication methods, highlighting their key differences in terms of identity providers, authentication flows, and user experience. ## Single Sign-On (SSO) Single Sign-On (SSO) in Teams provides a seamless authentication experience by leveraging a user's existing Teams identity. Once a user is logged into Teams, they can access your app without needing to sign in again. The only requirement is a one-time consent from the user, after which your app can securely retrieve their access details from Microsoft Entra ID. This consent is device-agnostic - once granted, users can access your app from any device without additional authentication steps. When an access token expires, the app automatically initiates a token exchange flow. In this process: 1. The Teams client sends an OAuth ID token containing the user's information 2. Your app exchanges this ID token for a new access token with the previously consented scopes 3. This exchange happens silently without requiring user interaction :::tip Always use SSO if you authenticating the user with Microsoft Entra ID. ::: ### The SSO Signin Flow The SSO signin flow involves several components working together. Here's how it works: 1. User interacts with your bot or message extension app 2. App initiates the signin process 3. If it's the first time: - User is shown a consent form for the requested scopes - Upon consent, Microsoft Entra ID issues an access token (in simple terms) 4. 
For subsequent interactions: - If token is valid, app uses it directly - If token expires, app silently signs the user in using the token exchange flow See the [SSO in Teams at runtime](https://learn.microsoft.com/en-us/microsoftteams/platform/bots/how-to/authentication/bot-sso-overview#sso-in-teams-at-runtime) guide to learn more about the SSO signin flow ### The SSO consent form This is what the SSO consent form looks like in Teams: ![SSO Consent Form](/screenshots/auth-consent-popup.png) ## OAuth You can use a third-party OAuth Identity Provider (IdP) to authenticate your app users. The app user is registered with the identity provider, which has a trust relationship with your app. When the user attempts to log in, the identity provider validates the app user and provides them with access to your app. Microsoft Entra ID is one such third party OAuth provider. You can use other providers, such as Google, Facebook, GitHub, or any other provider. ### The OAuth Signin Flow The OAuth signin flow involves several components working together. Here's how it works: 1. User interacts with your bot or message extension app 2. App sends a sign-in card with a link to the OAuth provider 3. User clicks the link and is redirected to the provider's authentication page 4. User authenticates and grants consent for requested scopes 5. Provider issues an access token to your app (in simple terms) 6. App uses the token to access services on user's behalf When an access token expires, the user will need to go through the sign-in process again. Unlike SSO, there is no automatic token exchange - the user must explicitly authenticate each time their token expires. ### The OAuth Card This is what the OAuth card looks like in Teams: ![OAuthCard](/screenshots/auth-explicit-signin.png) ## OAuth vs SSO - Head-to-Head Comparison | Feature | OAuth | SSO | |---------|-------|-----| | Identity Provider | Works with any OAuth provider (Microsoft Entra ID, Google, Facebook, GitHub, etc.) | Only works with Microsoft Entra ID | | Authentication Flow | User is sent a card with a sign-in link | If user has already consent to the requested scopes in the past they will "silently" login through the token exchange flow. Otherwise user is shown a consent form | | User Experience | Requires explicit sign-in, and consent to scopes | Re-use existing Teams credential, Only requires consent to scopes | | Conversation scopes (`personal`, `groupChat`, `teams`) | `personal` scope only | `personal` scope only | | Azure Configuration differences | Same configuration except `Token Exchange URL` is blank | Same configuration except `Token Exchange URL` is set ## Resources - [User Authentication Basics](https://learn.microsoft.com/en-us/azure/bot-service/bot-builder-concept-authentication?view=azure-bot-service-4.0) - [User Authentication in Teams](https://learn.microsoft.com/en-us/microsoftteams/platform/concepts/authentication/authentication) - [Enable SSO for bot and message extension app using Entra ID](https://learn.microsoft.com/en-us/microsoftteams/platform/bots/how-to/authentication/bot-sso-overview) - [Add authentication to your Teams bot](https://learn.microsoft.com/en-us/microsoftteams/platform/bots/how-to/authentication/add-authentication) --- ### MCP Client # MCP Client You are able to leverage other MCP servers that expose tools via the SSE protocol as part of your application. This allows your AI agent to use remote tools to accomplish tasks. 
:::info Take a look at [Function calling](../function-calling) to understand how the `ChatPrompt` leverages tools to enhance the LLM's capabilities. MCP extends this functionality by allowing remote tools, that may or may not be developed or maintained by you, to be used by your application. ::: ## Remote MCP Server The first thing that's needed is access to a **remote** MCP server. MCP Servers (at present) come using two main types protocols: 1. StandardIO - This is a _local_ MCP server, which runs on your machine. An MCP client may connect to this server, and use standard input and outputs to communicate with it. Since our application is running remotely, this is not something that we want to use 2. SSE - This is a _remote_ MCP server. An MCP client may send it requests and the server responds in the expected MCP protocol. For hooking up to your a valid SSE server, you will need to know the URL of the server, and if applicable, and keys that must be included as part of the header. ## MCP Client Plugin The `MCPClientPlugin` (from `@microsoft/teams.mcpclient` package) integrates directly with the `ChatPrompt` object as a plugin. When the `ChatPrompt`'s `send` function is called, it calls the external MCP server and loads up all the tools that are available to it. Once loaded, it treats these tools like any functions that are available to the `ChatPrompt` object. If the LLM then decides to call one of these remote MCP tools, the MCP Client plugin will call the remote MCP server and return the result back to the LLM. The LLM can then use this result in its response. ```typescript const logger = new ConsoleLogger('mcp-client', level: 'debug' ); const prompt = new ChatPrompt( { instructions: 'You are a helpful assistant. You MUST use tool calls to do all your work.', model: new OpenAIChatModel(), logger }, // Tell the prompt that the plugin needs to be used // Here you may also pass in additional configurations such as // a tool-cache, which can be used to limit the tools that are used // or improve performance [new McpClientPlugin( logger )], ) // Here we are saying you can use any tool from localhost:3978/mcp // (that is the URL for the server we built using the mcp plugin) .usePlugin('mcpClient', url: 'http://localhost:3978/mcp' ) // Alternatively, you can use a different server hosted somewhere else // Here we are using the mcp server hosted on an Azure Function .usePlugin('mcpClient', { url: 'https://aiacceleratormcp.azurewebsites.net/runtime/webhooks/mcp/sse', params: { headers: , }, }).usePlugin('mcpClient', { url: 'https://aiacceleratormcp.azurewebsites.net/runtime/webhooks/mcp/sse', params: { headers: , }, }).usePlugin('mcpClient', ); app.on('message', async ( send, activity ) => { await send( type: 'typing' ); const result = await prompt.send(activity.text); if (result.content) }); ``` In this example, we augment the `ChatPrompt` with a few remote MCP Servers. :::note Feel free to build an MCP Server in a different agent using the [MCP Server Guide](./mcp-server). Or you can quickly set up an MCP server using [Azure Functions](https://techcommunity.microsoft.com/blog/appsonazureblog/build-ai-agent-tools-using-remote-mcp-with-azure-functions/4401059). ::: ![MCP Client in Devtools](/screenshots/mcp-client-pokemon.gif) In this example, our MCP server is a Pokemon API and our client knows how to call it. The LLM is able to call the `getPokemon` function exposed by the server and return the result back to the user. 
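If the remote MCP server requires an API key or other credentials, you can pass them as request headers when registering the server with `usePlugin`. Below is a minimal sketch; the URL, header name, and environment variable are placeholders for whatever your server actually expects:

```typescript
// Placeholder values: swap in the real server URL and whatever auth header
// your MCP server requires.
const securedPrompt = new ChatPrompt(
  {
    instructions: 'You are a helpful assistant.',
    model: new OpenAIChatModel(),
  },
  [new McpClientPlugin({ logger })]
).usePlugin('mcpClient', {
  url: 'https://example.com/mcp',
  params: {
    headers: {
      Authorization: `Bearer ${process.env.MCP_SERVER_API_KEY}`,
    },
  },
});
```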
--- ### Using The App # Using The App The `@microsoft/teams.client` App class helps solve common challenges when building Single Page Applications hosted in Microsoft Teams, Outlook, and Microsoft 365. It is the client-side counterpart to the `@microsoft/teams.app` App that you can use to build AI agents. These two App classes are designed to work well together. For instance, when you use the `@microsoft/teams.app` App to expose a server-side function, you can then use the `@microsoft/teams.client` App `exec` method to easily invoke that function, as the client-side app knows how to construct an HTTP request that the server-side app can process. It can issue a request to the right URL, with the expected payload and contextual headers. The client-side app even includes a bearer token that the server side app uses to authenticate the caller. # Starting the app To use the `@microsoft/teams.client` package, you first create an App instance and then call `app.start()`. ```typescript const app = new App(clientId); await app.start(); ``` The app constructor strives to make it easy to get started on a new app, while still being flexible enough that it can integrate easily with existing apps. The constructor takes two arguments: a required app client ID, and an optional `AppOptions` argument. The app client ID is the AAD app registration **Application (client) ID**. The options can be used to customize observability, Microsoft Authentication Library (MSAL) configuration, and remote agent function calling. For more details on the app options, see the [App options](./app-options.md) page. ## What happens during start The app constructor does the following: - it creates an app logger, if none is provided in the app options. - it creates an http client used to call the remote agent. - it creates a graph client that can be used as soon as the app is started. The `app.start()` call does the following: - it initializes TeamsJS. - it creates an MSAL instance, if none is provided in the app options. - it connects the MSAL instance to the graph client. - it prompts the user for MSAL token consent, if needed and if pre-warming is not disabled through the app options. ## Using the app When the `app.start()` call has completed, you can use the app instance to call Graph APIs and to call remote agent functions using the `exec()` function, or directly by using the `app.http` HTTP client. TeamsJS is now initialized, so you can interact with the hosting app. The `app.msalInstance` is now populated, in case you need to use the same MSAL for other purposes. ```typescript const app = new App(clientId); await app.start(); // you can now get the TeamsJS context... const context = await teamsJs.app.getContext(); // ...call Graph end points... const presenceResult = await app.graph.me.presence.get(); // ...end call remote agent functions... const agentResult = await app.exec('hello-world'); ``` --- ### ⚙️ Settings # ⚙️ Settings You can add a settings page that allows users to configure settings for your app. The user can access the settings by right-clicking the app item in the compose box
and selecting **Settings**. This guide will show how to enable user access to settings, as well as how to set up a page that looks like this:

![Settings Page](/screenshots/settings-page.png)

## 1. Update the Teams Manifest

Set the `canUpdateConfiguration` field to `true` in the desired message extension under `composeExtensions`.

```json
"composeExtensions": [
  {
    "botId": "${BOT_ID}",
    "canUpdateConfiguration": true,
    ...
  }
]
```

## 2. Serve the settings `html` page

Here is a minimal example of a settings `html` page. The exact markup is up to you; the important parts are loading the teams-js client library and returning the selected value back to Teams:

```html
<!DOCTYPE html>
<html>
  <head>
    <title>Settings</title>
    <!-- The teams-js client library must be loaded so the page can talk to the
         Teams client (the version pinned here is only an example). -->
    <script src="https://res.cdn.office.net/teams-js/2.24.0/js/MicrosoftTeams.min.js"></script>
  </head>
  <body>
    <p>What programming language do you prefer?</p>
    <label><input type="radio" name="language" value="Typescript" /> Typescript</label>
    <label><input type="radio" name="language" value="C#" /> C#</label>
    <button id="save">Save</button>

    <script>
      // Sketch of the settings flow: pre-select the previously saved option
      // (passed via the `selectedOption` query parameter in step 3 below) and
      // return the user's choice to Teams, which delivers it to the agent as a
      // `message.ext.setting` activity (step 4 below).
      microsoftTeams.app.initialize().then(() => {
        const current = new URLSearchParams(window.location.search).get('selectedOption');
        const match = document.querySelector(`input[name="language"][value="${current}"]`);
        if (match) {
          match.checked = true;
        }

        document.getElementById('save').addEventListener('click', () => {
          const choice = document.querySelector('input[name="language"]:checked');
          if (choice) {
            // The value passed here reaches the agent as `activity.value.state`.
            microsoftTeams.authentication.notifySuccess(choice.value);
          }
        });
      });
    </script>
  </body>
</html>
``` Save it in the `index.html` file in the same folder as where your app is initialized. You can serve it by adding the following code to your app: ```typescript app.tab('settings', path.resolve(__dirname)); ``` :::note This will serve the HTML page to the `$BOT_ENDPOINT/tabs/settings` endpoint as a tab. See [Tabs Guide](../tabs/README.md) to learn more. ::: ## 3. Specify the URL to the settings page To enable the settings page, your app needs to handle the `message.ext.query-settings-url` activity that Teams sends when a user right-clicks the app in the compose box. Your app must respond with the URL to your settings page. Here's how to implement this: ```typescript app.on('message.ext.query-settings-url', async ( activity ) => { // Get user settings from storage if available const userSettings = await app.storage.get(activity.from.id) || selectedOption: '' ; const escapedSelectedOption = encodeURIComponent(userSettings.selectedOption); return { composeExtension: { type: 'config', suggestedActions: { actions: [ { type: 'openUrl', title: 'Settings', // ensure the bot endpoint is set in the environment variables // process.env.BOT_ENDPOINT is not populated by default in the Teams Toolkit setup. value: `$/tabs/settings?selectedOption=$escapedSelectedOption` } ] } } }; }); ``` ## 4. Handle Form Submission When a user submits the settings form, Teams sends a `message.ext.setting` activity with the selected option in the `activity.value.state` property. Handle it to save the user's selection: ```typescript app.on('message.ext.setting', async ( activity, send ) => { const state = activity.value; if (state == 'CancelledByUser') { return status: 400 ; } const selectedOption = state; // Save the selected option to storage await app.storage.set(activity.from.id, selectedOption ); await send(`Selected option: $selectedOption`); return status: 200 ; }); ``` --- ### 📖 Message Extensions # 📖 Message Extensions Message extensions (or Compose Extensions) allow your application to hook into messages that users can send or perform actions on messages that users have already sent. They enhance user productivity by providing quick access to information and actions directly within the Teams interface. Users can search or initiate actions from the compose message area, the command box, or directly from a message, with the results returned as richly formatted cards that make information more accessible and actionable. There are two types of message extensions: [API-based](https://learn.microsoft.com/en-us/microsoftteams/platform/messaging-extensions/api-based-overview) and [Bot-based](https://learn.microsoft.com/en-us/microsoftteams/platform/messaging-extensions/build-bot-based-message-extension?tabs=search-commands). API-based message extensions use an OpenAPI specification that Teams directly queries, requiring no additional application to build or maintain, but offering less customization. Bot-based message extensions require building an application to handle queries, providing more flexibility and customization options. This library supports bot-based message extensions only. ## Resources - [What are message extensions?](https://learn.microsoft.com/en-us/microsoftteams/platform/messaging-extensions/what-are-messaging-extensions?tabs=desktop) --- ### App Options # App Options The app options offer various settings that you can use to customize observability, Microsoft Authentication Library (MSAL) configuration, and remote agent function calling. 
Each setting is optional, with the app using a reasonable default as needed. ## Logger If no logger is specified in the app options, the app will create a [ConsoleLogger](../../in-depth-guides/observability/logging.md). You can however provide your own logger implementation to control log level and destination. ```typescript const app = new App(clientId, { logger: new ConsoleLogger('myTabApp', level: 'debug' ) }); await app.start(); ``` ## Remote API options The remote API options let you control which endpoint that `app.exec()` make a request to, as well as the default resource name to use when requesting an MSAL token to attach to the request. ### Base URL The `baseUrl` value is used to provide the URL where the remote API is hosted. This can be omitted if the tab app is hosted on the same domain as the remote agent. ```typescript const app = new App(clientId, { remoteApiOptions: , }); await app.start(); // this requests a token for 'api:///access_as_user' and attaches // that to a request to https://agent1.contoso.com/api/functions/my-function await app.exec('my-function'); ``` ### Remote app resource The `remoteAppResource` value is used to control the default resource name used when building a token request for the Entra token to include when invoking the function. This can be omitted if the tab app and the remote agent app are in the same AAD app, but should be provided if they're in different apps or the agent requires scopes for a different resource than the default `api:///access_as_user`. ```typescript const app = new App(clientId, { remoteApiOptions: , }); await app.start(); // this requests a token for 'api://agent1ClientId/access_as_user' and attaches that // to a request to https://agent1.contoso.com/api/functions/my-function await app.exec('my-function'); ``` ## MSAL options The MSAL options let you control how the Microsoft Authentication Library (MSAL) is initialized and used, and how the user is prompted for scope consent as the app starts. ### MSAL instance and configuration You have three options to control the MSAL instance used by the app. - Provide a pre-configured and pre-initialized MSAL IPublicClientApplication. - Provide a custom MSAL configuration for the app to use when creating an MSAL IPublicClientApplication instance. - Provide neither, and let the app create IPublicClientApplication from a default MSAL configuration. #### Default behavior If the app options contain neither an MSAL instance nor an MSAL configuration, the app constructs a simple MSAL configuration that is suitable for multi-tenant apps and that connects the MSAL logger callbacks to the app logger. ```typescript const app = new App(clientId); await app.start(); // app.msalInstance is now available, and any logging is forwarded from // MSAL to the app.log instance. ``` #### Providing a custom MSAL configuration MSAL offers a rich set of configuration options, and you can provide your own configuration as an app option. ```typescript const configuration: msal.Configuration = /* custom MSAL configuration options */ ; const app = new App(clientId, { msalOptions: configuration }); await app.start(); ``` #### Providing a pre-configured MSAL IPublicClientApplication MSAL cautions against an app using multiple IPublicClientApp instances at the same time. If you're already using MSAL, you can provide a pre-created MSAL instance to use as an app option. 
```typescript const msalInstance = await msal .createNestablePublicClientApplication(/* custom MSAL configuration */); await msalInstance.initialize(); const app = new App(clientId, { msalOptions: msalInstance }); await app.start(); ``` If you need multiple app instances in order to call functions in several agents, you can re-use the MSAL instance from one as you construct another. ```typescript // let app1 create & initialize an MSAL IPublicClientApplication const app1 = new App(clientId, { remoteApiOptions: , }); await app1.start(); // let app2 re-use the MSAL IPublicClientApplication from app1 const app2 = new App(clientId, { remoteApiOptions: , msalOptions: }); ``` ### Scope consent pre-warming The MSAL options let you control whether and how the user is prompted to give the app permission for any necessary scope as the app starts. This option can be used to reduce the number of consent prompts the user sees while using the app, and to help make sure the app gets consent for the resource it needs to function. With this option, you can either pre-warm a specific set of scopes or disable pre-warming altogether. If no setting is provided, the default behavior is to prompt the user for the Graph scopes listed in the app manifest, unless they've already consented to at least on Graph scope. For more details on how and when to prompt for scope consent, see the [Graph](./graph.md) documentation. #### Default behavior If the app is started without specifying any option to control scope pre-warming, the `.default` scope is pre-warmed. This means that in a first-run experience, the user would be prompted to consent for all Graph permissions listed in the app manifest. However, if the user has consented to at least one Graph permission, any one at all, no prompt appears. ```typescript const app = new App(clientId); // if the user hasn't already given consent for any scope at // all, this will prompt them await app.start(); ``` :::info The user can decline the prompt and the app will still continue to run. However, the user will again be prompted next time they launch the app. ::: #### Pre-warm a specific set of scopes If your app requires a specific set of scopes in order to run well, you can list those in the set of scopes to pre-warm. ```typescript const app = new App(clientId, { msalOptions: , }); // if the user hasn't already given consent for each listed scope, // this will prompt them await app.start(); ``` :::info The user can decline the prompt and the app will still continue to run. However, the user will again be prompted next time they launch the app. ::: #### Disabling pre-warming Scope pre-warming can be disabled if needed. This is useful if your app doesn't use graph APIs, or if you want more control over the consent prompt. ```typescript const app = new App(clientId, { msalOptions: prewarmScopes: false , }); // this will not raise any consent prompt await app.start(); // this will prompt for the '.default' scope if the user hasn't already // consented to any scope const top10Chats = await app.graph.chats.list( $top: 10 ); ``` :::info Even if pre-warming is disabled and the user is not prompted to consent, a prompt for the `.default` scope will appear when invoking any graph API. ::: ## References [MSAL Configuration](https://learn.microsoft.com/en-us/entra/identity-platform/msal-client-application-configuration) --- ### App Setup # App Setup There are a few ways you can enable your application to access secured external services on the user's behalf. 
:::note This is an advanced guide. It is highly recommended that you are familiar with [Teams Core Concepts](/teams/core-concepts) before attempting this guide. ::: ## Authenticate the user to Entra ID to access Microsoft Graph APIs A very common use case is to access enterprise related information about the user, which can be done through Microsoft Graph's APIs. To do that the user will have to be authenticated to Entra ID. :::note See [How Auth Works](how-auth-works.txt) to learn more about how authentication works. ::: ### Manual Setup In this step you will have to tweak your Azure Bot service and App registration to add authentication configurations and enable Single Sign-On (SSO). :::info [Single Sign-On (SSO)](./auth-sso#single-sign-on-sso) in Teams allows users to access your app seamlessly by using their existing Teams account credentials for authentication. A user who has logged into Teams doesn't need to log in again to your app within the Teams environment. ::: You can follow the [Enable SSO for bot and message extension app using Entra ID](https://learn.microsoft.com/en-us/microsoftteams/platform/bots/how-to/authentication/bot-sso-register-aad?tabs=botid) guide in the Microsoft Learn docs. ### Using Microsoft 365 Agents Toolkit with the `teams` CLI Open your terminal and navigate to the root folder of your app and run the following command: ```sh teams config add atk.oauth ``` The `atk.oauth` configuration is a basic setup for Agents Toolkit along with configurations to authenticate the user with Microsoft Entra ID to access Microsoft Graph APIs. This [CLI](/developer-tools/cli) command adds configuration files required by Agents Toolkit, including: - Azure Application Entra ID manifest file `aad.manifest.json`. - Azure bicep files to provision Azure bot in `infra/` folder. :::info Agents Toolkit, in the debugging flow, will deploy the `aad.manifest.json` and `infra/azure.local.bicep` file to provision the Application Entra ID and Azure bot with oauth configurations. ::: ## Authenticate the user to third-party identity provider You can follow the [Add authentication to bot app](https://learn.microsoft.com/en-us/microsoftteams/platform/bots/how-to/authentication/add-authentication?tabs=dotnet%2Cdotnet-sample) Microsoft Learn guide. ## Configure the OAuth Connection Name in the `App` instance In the [Using Agents Toolkit with `teams` CLI](#using-m365-agents-toolkit-with-the-teams-cli) guide, you will notice that the OAuth Connection Name that was created in the Azure Bot configuration is `graph`. This is arbitrary and you can even create more than one configuration. You can specify which configuration to use by defining it in the app options on intialization: ```typescript const app = new App({ oauth: , logger: new ConsoleLogger('@tests/auth', level: 'debug' ) }); ``` ## Resources - [User Authentication Basics](https://learn.microsoft.com/en-us/azure/bot-service/bot-builder-concept-authentication?view=azure-bot-service-4.0) - [User Authentication in Teams](https://learn.microsoft.com/en-us/microsoftteams/platform/concepts/authentication/authentication) --- ### Functions # Functions It's possible to hook up functions that the LLM can decide to call if it thinks it can help with the task at hand. This is done by adding a `function` to the `ChatPrompt`. 
```typescript
const prompt = new ChatPrompt()
  // Include `function` as part of the prompt
  .function(
    'pokemonSearch',
    'search for pokemon',
    // Include the schema of the parameters
    // the LLM needs to return to call the function
    {
      type: 'object',
      properties: {
        pokemonName: {
          type: 'string',
          description: 'the name of the pokemon',
        },
      },
      required: ['pokemonName'],
    },
    // The corresponding function will be called
    // automatically if the LLM decides to call this function
    async ({ pokemonName }: IPokemonSearch) => {
      log.info('Searching for pokemon', pokemonName);
      const response = await fetch(`https://pokeapi.co/api/v2/pokemon/${pokemonName}`);
      if (!response.ok) {
        return 'Sorry, I could not find that pokemon';
      }
      const data = await response.json();
      // The result of the function call is sent back to the LLM
      return {
        name: data.name,
        height: data.height,
        weight: data.weight,
        types: data.types.map((type: { type: { name: string } }) => type.type.name),
      };
    }
  );

// The LLM will then produce a final response to be sent back to the user
// activity.text could have text like 'pikachu'
const result = await prompt.send(activity.text);
await send(result.content ?? 'Sorry I could not find that pokemon');
```

## Multiple functions

Additionally, for complex scenarios, you can add multiple functions to the `ChatPrompt`. The LLM will then decide which function to call based on the context of the conversation. The LLM can pick one or more functions to call before returning the final response.

```typescript
// activity.text could be something like "what's my weather?"
// The LLM will need to first figure out the user's location
// Then pass that in to the weatherSearch
const prompt = new ChatPrompt({
  instructions: 'You are a helpful assistant that can help the user get the weather',
  model,
})
  // Include multiple `function`s as part of the prompt
  .function(
    'getUserLocation',
    'gets the location of the user',
    // This function doesn't need any parameters,
    // so we do not need to provide a schema.
    // For this example we simply return a hard-coded location.
    async () => 'Seattle'
  )
  .function(
    'weatherSearch',
    'search for weather',
    {
      type: 'object',
      properties: {
        location: {
          type: 'string',
          description: 'the name of the location',
        },
      },
      required: ['location'],
    },
    async ({ location }: { location: string }) => {
      const weatherByLocation: Record<string, { temperature: number; condition: string }> = {
        Seattle: { temperature: 65, condition: 'sunny' },
        'San Francisco': { temperature: 60, condition: 'foggy' },
        'New York': { temperature: 75, condition: 'rainy' },
      };

      const weather = weatherByLocation[location];
      if (!weather) {
        return 'Sorry, I could not find the weather for that location';
      }

      log.info('Found weather', weather);
      return weather;
    }
  );

// The LLM will then produce a final response to be sent back to the user
const result = await prompt.send(activity.text);
await send(result.content ?? 'Sorry I could not figure it out');
```

---

### Handling Multi-Step Forms

# Handling Multi-Step Forms

Dialogs can become complex yet powerful with multi-step forms. These forms can alter their flow depending on the user's input, or customize subsequent steps based on previous answers. Start off by sending an initial card in the `dialog.open` event.
```typescript
const dialogCard = new AdaptiveCard(
  {
    type: 'TextBlock',
    text: 'This is a multi-step form',
    size: 'Large',
    weight: 'Bolder',
  },
  new TextInput()
    .withLabel('Name')
    .withIsRequired()
    .withId('name')
    .withPlaceholder('Enter your name')
)
  // Inside the dialog, the card actions for submitting the card must be
  // of type Action.Submit
  .withActions(
    new SubmitAction()
      .withTitle('Submit')
      .withData({ submissiondialogtype: 'webpage_dialog_step_1' })
  );

// Return an object with the task value that renders a card
return {
  task: {
    type: 'continue',
    value: {
      title: 'Multi-Step Form', // dialog title (illustrative)
      card: cardAttachment('adaptive', dialogCard),
    },
  },
};
```

Then in the submission handler, you can choose to `continue` the dialog with a different card.

```typescript
app.on('dialog.submit', async ({ activity, send, next }) => {
  const dialogType = activity.value.data.submissiondialogtype;

  if (dialogType === 'webpage_dialog_step_1') {
    // This is data from the form that was submitted
    const name = activity.value.data.name;
    const nextStepCard = new AdaptiveCard(
      {
        type: 'TextBlock',
        text: 'Email',
        size: 'Large',
        weight: 'Bolder',
      },
      new TextInput()
        .withLabel('Email')
        .withIsRequired()
        .withId('email')
        .withPlaceholder('Enter your email')
    ).withActions(
      new SubmitAction().withTitle('Submit').withData({
        // This same handler will get called, so we need to identify the step
        // in the returned data
        submissiondialogtype: 'webpage_dialog_step_2',
        // Carry forward data from previous step
        name,
      })
    );
    return {
      task: {
        // This indicates that the dialog flow should continue
        type: 'continue',
        value: {
          // Here we customize the title based on the previous response
          title: `Thanks ${name} - Get Email`,
          card: cardAttachment('adaptive', nextStepCard),
        },
      },
    };
  } else if (dialogType === 'webpage_dialog_step_2') {
    const name = activity.value.data.name;
    const email = activity.value.data.email;
    await send(
      `Hi ${name}, thanks for submitting the form! We got that your email is ${email}`
    );
    // You can also return a blank response
    return { status: 200 };
  }
});
```

---

### In-Depth Guides

# In-Depth Guides

---

### 🔒 User Authentication

# 🔒 User Authentication

At times agents must access secured online resources on behalf of the user, such as checking email, checking on flight status, or placing an order. To enable this, the user must authenticate their identity and grant consent for the application to access these resources. This process results in the application receiving a token, which the application can then use to access the permitted resources on the user's behalf.

## Resources

[User Authentication Basics](https://learn.microsoft.com/en-us/azure/bot-service/bot-builder-concept-authentication?view=azure-bot-service-4.0)

---

### 🔗 Link unfurling

# 🔗 Link unfurling

Link unfurling lets your app respond when users paste URLs into Teams. When a URL from your registered domain is pasted, your app receives the URL and can return a card with additional information or actions. This works like a search command where the URL acts as the search term.

> [!note]
> Users can use link unfurling even before they discover or install your app in Teams. This is called [Zero install link unfurling](https://learn.microsoft.com/en-us/microsoftteams/platform/messaging-extensions/how-to/link-unfurling?tabs=desktop%2Cjson%2Cadvantages#zero-install-for-link-unfurling). In this scenario, your app will receive a `message.ext.anon-query-link` activity instead of the usual `message.ext.query-link`, which you can handle as shown in the sketch below.
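To support zero install link unfurling, you can register a handler for the anonymous activity as well. The sketch below assumes the anonymous event carries the same `url` value and accepts the same compose extension response shape as the `message.ext.query-link` handler shown in the next section, and it reuses the `createLinkUnfurlCard` helper defined there.

```typescript
app.on('message.ext.anon-query-link', async ({ activity }) => {
  // Assumed: the pasted URL arrives the same way as it does for signed-in users
  const url = activity.value.url;
  if (!url) {
    return { status: 400 };
  }

  // Build the same preview used for the signed-in case
  const { card, thumbnail } = createLinkUnfurlCard(url);
  return {
    composeExtension: {
      type: 'result',
      attachmentLayout: 'list',
      attachments: [
        {
          ...cardAttachment('adaptive', card),
          preview: cardAttachment('thumbnail', thumbnail),
        },
      ],
    },
  };
});
```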
## Setting up your Teams app manifest

```json
"composeExtensions": [
  {
    "botId": "${BOT_ID}",
    "messageHandlers": [
      {
        "type": "link",
        "value": {
          "domains": ["www.test.com"]
        }
      }
    ]
  }
]
```

When a user pastes a URL from your registered domain (like `www.test.com`) into the Teams compose box, your app will receive a notification. Your app can then respond by returning an adaptive card that displays a preview of the linked content. This preview card appears before the user sends their message in the compose box, allowing them to see how the link will be displayed to others.

```mermaid
flowchart TD
    A1["User pastes a URL (e.g., www\.test\.com) in Teams compose box"]
    B1([Microsoft Teams])
    C1["Your App"]
    D1["Adaptive Card Preview"]

    A1 --> B1
    B1 -->|Sends URL paste notification| C1
    C1 -->|Returns card and preview| B1
    B1 --> D1

    %% Styling for readability and compatibility
    style B1 fill:#2E86AB,stroke:#1B4F72,stroke-width:2px,color:#ffffff
    style C1 fill:#28B463,stroke:#1D8348,stroke-width:2px,color:#ffffff
    style D1 fill:#F39C12,stroke:#D68910,stroke-width:2px,color:#ffffff
```

## Handle link unfurling

Handle link unfurling when a URL from your registered domain is submitted into the Teams compose box.

```typescript
app.on('message.ext.query-link', async ({ activity }) => {
  const url = activity.value.url;

  if (!url) {
    return { status: 400 };
  }

  const { card, thumbnail } = createLinkUnfurlCard(url);
  // Attach the full adaptive card and use the thumbnail card as its preview
  const attachment = {
    ...cardAttachment('adaptive', card),
    preview: cardAttachment('thumbnail', thumbnail),
  };

  return {
    composeExtension: {
      type: 'result',
      attachmentLayout: 'list',
      attachments: [attachment],
    },
  };
});
```

`createLinkUnfurlCard()` function

```typescript
export function createLinkUnfurlCard(url: string) {
  const thumbnail = {
    title: 'Unfurled Link',
    text: url,
    images: [
      {
        url: IMAGE_URL,
      },
    ],
  } as ThumbnailCard;

  const card = new AdaptiveCard(
    new TextBlock('Unfurled Link', {
      size: 'Large',
      weight: 'Bolder',
      color: 'Accent',
      style: 'heading',
    }),
    new TextBlock(url, {
      size: 'Small',
      weight: 'Lighter',
      color: 'Good',
    })
  );

  return { card, thumbnail };
}
```

The link unfurling response includes both a full adaptive card and a preview card. The preview card appears in the compose box when a user pastes a URL:

![Link unfurl preview card](/screenshots/link-unfurl-preview.png)

The user can expand the preview card by clicking on the _expand_ button on the top right.

![Link unfurl card in conversation](/screenshots/link-unfurl-card.png)

The user can then choose to send either the preview card or the full adaptive card as a message.

## Resources

- [Link unfurling](https://learn.microsoft.com/en-us/microsoftteams/platform/messaging-extensions/how-to/link-unfurling?tabs=desktop%2Cjson%2Cadvantages)

---

### Functions

# Functions

The client App exposes an `exec()` method that can be used to call functions implemented in an agent created with this library. The function call uses the `app.http` client to make a request, attaching a bearer token created from the `app.msalInstance` MSAL public client application, so that the remote function can authenticate and authorize the caller.

The `exec()` method supports passing arguments and provides options to attach custom request headers and/or control the MSAL token scope.

## Invoking a remote function

When the tab app and the remote agent are deployed to the same location and in the same AAD app, it's simple to construct the client app and call the function.
```typescript
const app = new App(clientId);
await app.start();

// this requests a token for 'api://<clientId>/access_as_user' and attaches
// that to an HTTP POST request to /api/functions/my-function
const result = await app.exec('my-function');
```

If the deployment is more complex, the [AppOptions](./app-options.md) can be used to influence the URL as well as the scope in the token.

## Function arguments

Any argument for the remote function can be provided as an object.

```typescript
const args = { arg1: 'value1', arg2: 'value2' };
const result = await app.exec('my-function', args);
```

## Request headers

By default, the HTTP request will include a header with a bearer token as well as headers that give contextual information about the state of the app, such as which channel, team, chat, or meeting the tab is active in. If needed, you can add additional headers to the `requestHeaders` option field. This may be handy to provide additional context to the remote function, such as a logging correlation ID.

```typescript
const requestHeaders = {
  'x-custom-correlation-id': 'bf12aa3c-7460-4644-a22e-fb890af2ff41',
};

// custom headers when the function does not take arguments
const result = await app.exec('my-function', undefined, { requestHeaders });

// custom headers when the function takes arguments
const args = { arg1: 'value1', arg2: 'value2' };
const result = await app.exec('my-other-function', args, { requestHeaders });
```

## Request bearer token

By default, the HTTP request will include a header with a bearer token acquired by requesting an `access_as_user` permission. The resource used for the request depends on the `remoteApiOptions.remoteAppResource` [AppOption](./app-options.md). If this app option is not provided, the token is requested for the scope `api://<clientId>/access_as_user`. If this option is provided, the token is requested for the scope `<remoteAppResource>/access_as_user`.

When calling a function that requires a different permission or scope, the `exec` options let you override the behavior.

To specify a custom permission, set the permission field in the `exec` options.

```typescript
// with this option, the exec() call will request a token for either
// api://<clientId>/my_custom_permission or
// <remoteAppResource>/my_custom_permission,
// depending on the app options used.
const options = { permission: 'my_custom_permission' };

// custom permission when the function does not take arguments
const result = await app.exec('my-function', undefined, options);

// custom permission when the function takes arguments
const args = { arg1: 'value1', arg2: 'value2' };
const result = await app.exec('my-other-function', args, options);
```

Sometimes you may need even more control. You might need a scope for a different resource than your default when calling a particular remote agent function. In these cases you can provide the exact token request object you need as part of the `exec` options.

```typescript
// with this option, the exec() call will request a token for exactly
// api://my-custom-resources/my_custom_scope, regardless of which app
// options were used to construct the app.
const options = {
  // an MSAL token request; the `scopes` field shown here is assumed
  msalTokenRequest: {
    scopes: ['api://my-custom-resources/my_custom_scope'],
  },
};

// custom token request when the function does not take arguments
const result = await app.exec('my-function', undefined, options);

// custom token request when the function takes arguments
const args = { arg1: 'value1', arg2: 'value2' };
const result = await app.exec('my-other-function', args, options);
```

## Ensuring user consent

The `exec()` function supports incremental, just-in-time consent such that the user is prompted to consent during the `exec()` call, if they haven't already consented earlier. If you find that you'd rather test for consent or request consent before making the `exec()` call, the `hasConsentForScopes` and `ensureConsentForScopes` methods can be used. More details about those are given in the [Graph](./graph.md) section.

## References

- [Graph API overview](https://learn.microsoft.com/en-us/graph/api/overview)
- [Graph API permissions overview](https://learn.microsoft.com/en-us/graph/permissions-reference)

---

### Keeping State

# Keeping State

By default, LLMs are not stateful. This means that they do not remember previous messages or context when generating a response. It's common practice to keep state of the conversation history in your application and pass it to the LLM each time you make a request.

By default, the `ChatPrompt` instance will create a temporary in-memory store to keep track of the conversation history. This is beneficial when you want to use it to generate an LLM response, but not persist the conversation history. In other cases, though, you may want to keep the conversation history yourself so it can be reused across requests.

:::warning
Reusing the same `ChatPrompt` class instance across multiple conversations will lead to the conversation history being shared across all conversations, which is usually not the desired behavior.
:::

To avoid this, you need to get messages from your persistent (or in-memory) store and pass them in to the `ChatPrompt`.

:::note
The `ChatPrompt` class will modify the messages object that's passed into it. So if you want to manually manage it, you need to make a copy of the messages object before passing it in.
:::

```typescript
// Simple in-memory store for conversation histories
// In your application, it may be a good idea to use a more
// persistent store backed by a database or other storage solution
const conversationStore = new Map<string, Message[]>();

const getOrCreateConversationHistory = (conversationId: string) => {
  // Check if conversation history exists
  const existingMessages = conversationStore.get(conversationId);
  if (existingMessages) return existingMessages;
  // If not, create a new conversation history
  const newMessages: Message[] = [];
  conversationStore.set(conversationId, newMessages);
  return newMessages;
};
```

```typescript
/**
 * Example of a stateful conversation handler that maintains conversation history
 * using an in-memory store keyed by conversation ID.
 * @param model The chat model to use
 * @param activity The incoming activity
 * @param send Function to send an activity
 * @param log Logger instance
 */
export const handleStatefulConversation = async (
  model: IChatModel,
  activity: IMessageActivity,
  send: (activity: ActivityLike) => Promise<void>,
  log: ILogger
) => {
  log.info('Received message', activity.text);

  // Retrieve existing conversation history or initialize new one
  const existingMessages = getOrCreateConversationHistory(activity.conversation.id);

  log.info('Existing messages before sending to prompt', existingMessages);

  // Create prompt with existing messages
  const prompt = new ChatPrompt({
    messages: existingMessages,
    model,
  });

  const result = await prompt.send(activity.text);
  if (result) {
    log.info('Messages after sending to prompt:', existingMessages);
  }
};
```

![Stateful Chat Example](/screenshots/stateful-chat-example.png)

---

### Signing In

# Signing In

Prompting the user to sign in using an `OAuth` connection has never been easier! Just use the `signin` method to send the request and then listen to the `signin` event to handle the token result.

```typescript
app.on('message', async ({ log, signin, userGraph, isSignedIn }) => {
  if (!isSignedIn) {
    await signin(); // call signin for your auth connection...
    return;
  }

  const me = await userGraph.me.get();
  log.info(`user "${me.displayName}" already signed in!`);
});

app.event('signin', async ({ send, userGraph, token }) => {
  const me = await userGraph.me.get();
  await send(`user "${me.displayName}" signed in. Here's the token: ${token}`);
});
```

---

### 🤖 AI

# 🤖 AI

The AI packages in this library are designed to make it easier to build applications with LLMs. The `@microsoft/teams.ai` package has two main components:

## 📦 Prompts

A `Prompt` is the component that orchestrates everything: it handles state management, function definitions, and invokes the model/template when needed. This layer abstracts many of the complexities of the Models to provide a common interface.

## 🧠 Models

A `Model` is the component that interfaces with the LLM, being given some `input` and returning the `output`. This layer deals with any of the nuances of the particular Models being used. It is in the model implementation that the individual LLM features (i.e. streaming/tools etc.) are made compatible with the more general features of the `@microsoft/teams.ai` package.

:::note
You are not restricted to using the `@microsoft/teams.ai` package to build your Teams Agent applications. You can use models directly if you choose. These packages are there to simplify the interactions with the models and Teams.
:::

---

### Best Practices

# Best Practices

When sending messages using AI, Teams recommends a number of best practices to help with both user and developer experience.

## AI-Generated Indicator

When sending messages using AI, Teams recommends including an indicator that the message was generated by AI. This can be done by calling the `addAiGenerated` method on the outgoing message. This will help users understand that the message was generated by AI and not by a human, which can help with trust and transparency.

```typescript
const messageToBeSent = new Message().addAiGenerated().text('Hello!');
```

![AI Generated Indicator](/screenshots/ai-generated.gif)

## Gather feedback to improve prompts

AI-generated messages are not always perfect. Prompts can have gaps and can sometimes lead to unexpected results. To help improve the prompts, Teams recommends gathering feedback from users on the AI-generated messages. See [Feedback](../feedback) for more information on how to gather feedback.
This does involve thinking through a pipeline for gathering feedback and then automatically, or manually, updating prompts based on the feedback. The feedback system is one point of entry into your evaluation pipeline.

## Citations

AI-generated messages can hallucinate even when they are grounded in real data. To help with this, Teams recommends including citations in AI-generated messages. This is easy to do by using the `addCitation` method on the message. This will add a citation to the message, and the LLM will be able to use it to generate a citation for the user.

:::warning
Citations are added with a `position` property. This property value needs to also be included in the message text as `[<position>]` (for example `[1]`). If there is a citation that's added without the associated value in the message text, Teams will not render the citation.
:::

```typescript
const messageActivity = new MessageActivity(result.content).addAiGenerated();
for (let i = 0; i < citedDocs.length; i++) {
  const doc = citedDocs[i];
  // The corresponding citation needs to be added in the message content
  messageActivity.text += `[${i + 1}]`;
  // The second argument carries the citation details; the fields shown
  // here (name/abstract) are illustrative of your document shape
  messageActivity.addCitation(i + 1, {
    name: doc.title,
    abstract: doc.content,
  });
}
```

![AI Generated Indicator](/screenshots/citation.gif)

---

### Microsoft Graph Client

# Microsoft Graph Client

The client App exposes a `graph` property that gives type-safe access to Microsoft Graph functions. When graph functions are invoked, the app attaches an MSAL bearer token to the request so that the call can be authenticated and authorized.

## Invoking Graph functions

After constructing and starting an App instance, you can invoke any graph function by using the `app.graph` client.

```typescript
const app = new App(clientId);
await app.start();

const top10Chats = await app.graph.chats.list({ $top: 10 });
```

For best results, it's wise to ensure that the user has consented to a permission required by the graph API before attempting to invoke it. Otherwise, the call is likely to be rejected by the graph server.

## Graph APIs and permissions

Different graph APIs have different permission requirements. The app developer should make sure that consent is granted before invoking a graph API. To help request and test for consent, the client App offers three methods:

- Pre-warming while starting the app.
- Requesting consent if not already granted.
- Testing for consent without prompting.

### Pre-warming while starting the app

The App constructor takes an option that lets you control how scope consent is requested while starting the app. For more details on this option, see the [App options](./app-options.md) documentation.

### Requesting consent if not already granted

The app provides an `ensureConsentForScopes` method that tests if the user has consented to a certain set of scopes and prompts them if consent isn't yet granted. The method returns a promise that resolves to true if the user has already provided consent to all listed scopes, and to false if the user declines the prompt.

This method is useful for building an incremental, just-in-time consent model, or to fully control how consent is pre-warmed.

```typescript
const app = new App(clientId, {
  // configure how consent is pre-warmed during app.start() here;
  // see the App options documentation for the exact shape
  msalOptions: { /* ... */ },
});

// this will prompt for the User.Read scope if not already granted
await app.start();

// this will prompt for Chat.ReadBasic if not already granted
const canReadChat = await app.ensureConsentForScopes(['Chat.ReadBasic']);

if (canReadChat) {
  const top10Chats = await app.graph.chats.list({ $top: 10 });
  // ... do something useful ...
}
```

### Testing for consent without prompting

The app also provides a `hasConsentForScopes` method to test for consent without raising a prompt. This is handy to enable or disable features based on user choice, or to provide friendly messaging before raising a prompt with `ensureConsentForScopes`.

```typescript
const app = new App(clientId);

// this will prompt for the '.default' scope if the user hasn't already
// consented to any scope
await app.start();

// this will not raise a prompt under any circumstance
const canReadChat = await app.hasConsentForScopes(['Chat.ReadBasic']);

if (canReadChat) {
  const top10Chats = await app.graph.chats.list({ $top: 10 });
  // ... do something useful ...
}
```

## References

- [Graph API overview](https://learn.microsoft.com/en-us/graph/api/overview)
- [Graph API permissions overview](https://learn.microsoft.com/en-us/graph/permissions-reference)

---

### Signing Out

# Signing Out

Sign a user out by calling the `signout` method to discard the cached access token in the Bot Framework token service.

```typescript
app.message('/signout', async ({ send, signout, isSignedIn }) => {
  if (!isSignedIn) return;
  // discard the cached token for the user's auth connection
  await signout();
  await send('You have been signed out.');
});
```

---

### MCP

# MCP

Teams AI Library has optional packages which support the [Model Context Protocol (MCP)](https://modelcontextprotocol.io/introduction) as a service or client. This allows you to use MCP to call functions and tools in your application.

MCP servers and MCP clients dynamically load function definitions and tools. When building servers, this could mean that you can introduce new tools as part of your application, and the MCP clients that are connected to it will automatically start consuming those tools. When building clients, this could mean that you can connect to other MCP servers, and your application has the flexibility to improve as the MCP servers it's connected to evolve over time.

:::tip
The guides here can be used to build a server and a client that can leverage each other. That means you can build a server that has the ability to do complex things for the client agent.
:::

---

### Tabs

# Tabs

Tabs are host-aware webpages embedded in Microsoft Teams, Outlook, and Microsoft 365. Tabs are commonly implemented as Single Page Applications that use the Teams [JavaScript client library](https://learn.microsoft.com/en-us/microsoftteams/platform/tabs/how-to/using-teams-client-library) (TeamsJS) to interact with the app host.

Tab apps will often need to interact with remote services. They may need to fetch data from [Microsoft Graph](https://learn.microsoft.com/en-us/graph/overview) or invoke remote agent functions, using [Nested App Authentication](https://learn.microsoft.com/en-us/microsoftteams/platform/concepts/authentication/nested-authentication) (NAA) and the [Microsoft Authentication Library](https://learn.microsoft.com/en-us/entra/identity-platform/msal-overview) (MSAL) to ensure user consent and to allow the remote service to authenticate the user.

The `@microsoft/teams.client` package in this library builds on TeamsJS and MSAL to streamline these common scenarios, as sketched after the list below. It aims to simplify:

- **Remote Service Authentication** through MSAL-based authentication and token acquisition.
- **Graph API Integration** by offering a pre-configured and type-safe Microsoft Graph client.
- **Agent Function Calling** by handling authentication and including app context when calling server-side functions implemented in Teams AI agents.
- **Scope Consent Management** by providing simple APIs to test for and request user consent.
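To make those points concrete, here is a minimal sketch of using the client package inside a tab. It assumes the package exports the `App` class used throughout this section, that `clientId` is your Entra ID app registration's client ID, that the Graph client exposes a `me.get()` call, and that your agent implements a function named `echo`; these names are illustrative, so adapt them to your own app.

```typescript
import { App } from '@microsoft/teams.client';

// clientId comes from your Entra ID app registration (placeholder value)
const clientId = '<your-client-id>';

async function initTab() {
  const app = new App(clientId);

  // Starts the TeamsJS/MSAL plumbing and (optionally) pre-warms scope consent
  await app.start();

  // Type-safe Graph call using the bearer token acquired by MSAL
  const me = await app.graph.me.get();
  console.log(`Signed in as ${me.displayName}`);

  // Call a server-side function implemented in your Teams AI agent
  // ('echo' is a hypothetical function name)
  const reply = await app.exec('echo', { text: 'hello from the tab' });
  console.log(reply);
}

initTab();
```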
## Resources

- [Tabs overview](https://learn.microsoft.com/en-us/microsoftteams/platform/tabs/what-are-tabs?tabs=personal)
- [Teams JavaScript client library](https://learn.microsoft.com/en-us/microsoftteams/platform/tabs/how-to/using-teams-client-library)
- [Microsoft Graph overview](https://learn.microsoft.com/en-us/graph/overview)
- [Microsoft Authentication Library (MSAL)](https://learn.microsoft.com/en-us/entra/identity-platform/msal-overview)
- [Nested App Authentication (NAA)](https://learn.microsoft.com/en-us/microsoftteams/platform/concepts/authentication/nested-authentication)

---

### Feedback

# Feedback

User feedback is essential for the improvement of any application. Teams provides specialized UI components to help facilitate the gathering of feedback from users.

![Feedback Message](/screenshots/feedback.gif)

## Storage

Once you receive a feedback event, you can choose to store it in some persistent storage. In the example below, we are storing it in an in-memory store.

```typescript
// This store would ideally be persisted in a database.
// The stored shape below (likes/dislikes/feedbacks) is illustrative.
export const storedFeedbackByMessageId = new Map<
  string,
  { likes: number; dislikes: number; feedbacks: string[] }
>();
```

## Including Feedback Buttons

When sending a message that you want feedback on, simply call `addFeedback()` on the message you are sending.

```typescript
const { id: sentMessageId } = await send(
  result.content != null
    ? new MessageActivity(result.content)
        .addAiGenerated()
        /** Add feedback buttons via this method */
        .addFeedback()
    : 'I did not generate a response.'
);

// initialize an empty feedback record for the sent message
storedFeedbackByMessageId.set(sentMessageId, { likes: 0, dislikes: 0, feedbacks: [] });
```

## Handling the feedback

Once the user decides to like/dislike the message, you can handle the feedback in a received event. Once received, you can choose to include it in your persistent store.

```typescript
app.on('message.submit.feedback', async ({ activity, log }) => {
  const { reaction, feedback: feedbackJson } = activity.value.actionValue;

  if (activity.replyToId == null) {
    log.warn(`No replyToId found for messageId ${activity.id}`);
    return;
  }

  const existingFeedback = storedFeedbackByMessageId.get(activity.replyToId);
  /**
   * feedbackJson looks like:
   * {"feedbackText":"Nice!"}
   */
  if (!existingFeedback) {
    log.warn(`No feedback found for messageId ${activity.replyToId}`);
  } else {
    // update the stored record (the exact shape is illustrative)
    storedFeedbackByMessageId.set(activity.replyToId, {
      likes: existingFeedback.likes + (reaction === 'like' ? 1 : 0),
      dislikes: existingFeedback.dislikes + (reaction === 'dislike' ? 1 : 0),
      feedbacks: feedbackJson
        ? [...existingFeedback.feedbacks, JSON.parse(feedbackJson).feedbackText]
        : existingFeedback.feedbacks,
    });
  }
});
```

---

### Observability

# Observability

---