
💬 Chat Generation

Before going through this guide, please make sure you have completed the setup and prerequisites guide.

Setup

The basic setup involves creating a ChatPrompt and giving it the Model you want to use.

Simple chat generation

Chat generation is the most basic way of interacting with an LLM. It involves setting up your ChatPrompt and Model, then sending the model a message.

Import the relevant namespaces:

// AI
using Microsoft.Teams.AI.Models.OpenAI;
using Microsoft.Teams.AI.Prompts;
// Teams
using Microsoft.Teams.Api.Activities;
using Microsoft.Teams.Apps;
using Microsoft.Teams.Apps.Activities;
using Microsoft.Teams.Apps.Annotations;

Create a ChatModel and a ChatPrompt, and handle the user-LLM interaction:

[TeamsController("main")]
public class MainController
{
    [Message]
    public async Task OnMessage([Context] MessageActivity activity, [Context] IContext.Client client)
    {
        // Create the OpenAI chat model
        var model = new OpenAIChatModel(
            model: "gpt-4o",
            apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY")!
        );

        // Create a chat prompt
        var prompt = new OpenAIChatPrompt(
            model,
            new ChatPromptOptions().WithInstructions("You are a friendly assistant who talks like a pirate.")
        );

        // Send the user's message to the prompt and get a response
        var response = await prompt.Send(activity.Text);
        if (!string.IsNullOrEmpty(response.Content))
        {
            var responseActivity = new MessageActivity { Text = response.Content }.AddAIGenerated();
            await client.Send(responseActivity);
            // Ahoy, matey! 🏴‍☠️ How be ye doin' this fine day on th' high seas? What can this ol' salty sea dog help ye with? 🚢☠️
        }
    }
}
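The example above reads the API key with the null-forgiving operator (!), which suppresses the compiler warning but passes a null key through if the variable is missing, so the failure only surfaces later inside the client. A minimal guard, using only base-class-library calls, fails fast with a clearer message instead:

// Fail fast with a descriptive error if OPENAI_API_KEY is not set.
var apiKey = Environment.GetEnvironmentVariable("OPENAI_API_KEY")
    ?? throw new InvalidOperationException(
        "OPENAI_API_KEY is not set. Export it before starting the app.");

var model = new OpenAIChatModel(model: "gpt-4o", apiKey: apiKey);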
note

The current OpenAIChatModel implementation uses the Chat Completions API. Support for the Responses API is coming soon.
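Because the model name is just a string passed to the constructor, switching to any other Chat Completions-capable model is a one-line change (the model name below is only an example):

var miniModel = new OpenAIChatModel(
    model: "gpt-4o-mini", // example; any Chat Completions-capable model name works
    apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY")!
);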

Streaming chat responses

LLMs can take a while to generate a full response, so streaming it back as it is produced often leads to a better, more responsive user experience.

warning

Streaming is currently supported only in 1:1 chats, not in group chats or channels.

// Send the user's message to the prompt and stream the response back.
// `context` is the activity context available in the handler, which
// exposes the streamer and the request's cancellation token.
var response = await prompt.Send(activity.Text, null,
    (chunk) => Task.Run(() => context.Stream.Emit(chunk)),
    context.CancellationToken);
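For context, here is the same call inside a complete handler. This is a sketch: it assumes the activity context can be injected alongside the activity (the IContext<MessageActivity> parameter shape shown here is an assumption; adjust it to match your handler signature):

[Message]
public async Task OnMessage([Context] MessageActivity activity, [Context] IContext<MessageActivity> context)
{
    var model = new OpenAIChatModel(
        model: "gpt-4o",
        apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY")!
    );

    var prompt = new OpenAIChatPrompt(
        model,
        new ChatPromptOptions().WithInstructions("You are a friendly assistant who talks like a pirate.")
    );

    // Emit each chunk as it arrives so the user sees the reply build up
    // incrementally in the 1:1 chat.
    await prompt.Send(activity.Text, null,
        (chunk) => Task.Run(() => context.Stream.Emit(chunk)),
        context.CancellationToken);
}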

User experience

Streaming the response