ai-agents-for-beginners

Exploring AI Agent Frameworks

(Click the image above to watch the video for this lesson)

Explore AI Agent Frameworks

AI agent frameworks are software platforms designed to simplify the creation, deployment, and management of AI agents. These frameworks provide developers with pre-built components, tools, and abstractions that help them build complex AI systems faster.

These frameworks help developers focus on the unique aspects of their applications by providing standardized ways to handle common challenges in AI agent development. They make building AI systems more scalable, accessible, and efficient.

Introduction

In this lesson, we will cover what AI Agent Frameworks are, how to use them to quickly prototype and iterate on agents, and how AutoGen, Semantic Kernel, and Azure AI Agent Service differ.

Learning goals

The goals of this lesson are to help you understand:

What are AI Agent Frameworks and what do they enable developers to do?

Traditional AI frameworks can help you integrate AI into your apps and improve those apps in several ways.

Sounds great, right? So why do we need an AI Agent Framework at all?

AI Agent frameworks represent something more than standard AI frameworks. They are designed to enable the creation of intelligent agents that can interact with users, other agents, and their environment to achieve specific goals. These agents can exhibit autonomous behavior, make decisions, and adapt to changing conditions.

In summary, agents let you do more, take automation to the next level, and create smarter systems that can learn from and adapt to their environment.

How can you quickly prototype, iterate, and improve an agent's capabilities?

This is a fast-moving landscape, but there are some things most AI Agent Frameworks have in common that can help you quickly prototype and iterate: modular components, collaborative tools, and real-time learning. Let's break these down:

Use Modular Components

SDKs like Microsoft Semantic Kernel and LangChain offer pre-built components such as AI connectors, prompt templates, and memory management.

How teams can use these: Teams can quickly assemble these components to create a working prototype without starting from scratch, allowing for rapid experimentation and iteration.

How it works in practice: You can use a pre-built parser to extract information from user input, a memory module to store and retrieve data, and a prompt generator to interact with users, all without having to build these components from scratch.

Example code. Let's look at examples of how you can use a pre-built AI Connector with Semantic Kernel Python and .NET that uses auto-function calling to have the model respond to user input:

# Semantic Kernel Python Example

import asyncio
from typing import Annotated

from semantic_kernel.connectors.ai import FunctionChoiceBehavior
from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion, AzureChatPromptExecutionSettings
from semantic_kernel.contents import ChatHistory
from semantic_kernel.functions import kernel_function
from semantic_kernel.kernel import Kernel

# Define a ChatHistory object to hold the conversation's context
chat_history = ChatHistory()
chat_history.add_user_message("I'd like to go to New York on January 1, 2025")


# Define a sample plugin that contains the function to book travel
class BookTravelPlugin:
    """A Sample Book Travel Plugin"""

    @kernel_function(name="book_flight", description="Book travel given location and date")
    async def book_flight(
        self, date: Annotated[str, "The date of travel"], location: Annotated[str, "The location to travel to"]
    ) -> str:
        return f"Travel was booked to {location} on {date}"

# Create the Kernel
kernel = Kernel()

# Add the sample plugin to the Kernel object
kernel.add_plugin(BookTravelPlugin(), plugin_name="book_travel")

# Define the Azure OpenAI AI Connector
chat_service = AzureChatCompletion(
    deployment_name="YOUR_DEPLOYMENT_NAME", 
    api_key="YOUR_API_KEY", 
    endpoint="https://<your-resource>.azure.openai.com/",
)

# Define the request settings to configure the model with auto-function calling
request_settings = AzureChatPromptExecutionSettings(function_choice_behavior=FunctionChoiceBehavior.Auto())


async def main():
    # Make the request to the model for the given chat history and request settings
    # The Kernel contains the sample plugin that the model may request to invoke
    response = await chat_service.get_chat_message_content(
        chat_history=chat_history, settings=request_settings, kernel=kernel
    )
    assert response is not None

    """
    Note: In the auto function calling process, the model determines it can invoke the 
    `BookTravelPlugin` using the `book_flight` function, supplying the necessary arguments. 
    
    For example:

    "tool_calls": [
        {
            "id": "call_abc123",
            "type": "function",
            "function": {
                "name": "BookTravelPlugin-book_flight",
                "arguments": "{'location': 'New York', 'date': '2025-01-01'}"
            }
        }
    ]

    Since the location and date arguments are required (as defined by the kernel function), if the 
    model lacks either, it will prompt the user to provide them. For instance:

    User: Book me a flight to New York.
    Model: Sure, I'd love to help you book a flight. Could you please specify the date?
    User: I want to travel on January 1, 2025.
    Model: Your flight to New York on January 1, 2025, has been successfully booked. Safe travels!
    """

    print(f"`{response}`")
    # Example AI Model Response: `Your flight to New York on January 1, 2025, has been successfully booked. Safe travels! ✈️🗽`

    # Add the model's response to our chat history context
    chat_history.add_assistant_message(response.content)


if __name__ == "__main__":
    asyncio.run(main())

// Semantic Kernel C# example

using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;
using System.ComponentModel;
using Microsoft.SemanticKernel.Connectors.AzureOpenAI;

ChatHistory chatHistory = [];
chatHistory.AddUserMessage("I'd like to go to New York on January 1, 2025");

var kernelBuilder = Kernel.CreateBuilder();
kernelBuilder.AddAzureOpenAIChatCompletion(
    deploymentName: "NAME_OF_YOUR_DEPLOYMENT",
    apiKey: "YOUR_API_KEY",
    endpoint: "YOUR_AZURE_ENDPOINT"
);
kernelBuilder.Plugins.AddFromType<BookTravelPlugin>("BookTravel"); 
var kernel = kernelBuilder.Build();

var settings = new AzureOpenAIPromptExecutionSettings()
{
    FunctionChoiceBehavior = FunctionChoiceBehavior.Auto()
};

var chatCompletion = kernel.GetRequiredService<IChatCompletionService>();

var response = await chatCompletion.GetChatMessageContentAsync(chatHistory, settings, kernel);

/*
Behind the scenes, the model recognizes the tool to call, what arguments it already has (location) and (date)
{

"tool_calls": [
    {
        "id": "call_abc123",
        "type": "function",
        "function": {
            "name": "BookTravelPlugin-book_flight",
            "arguments": "{'location': 'New York', 'date': '2025-01-01'}"
        }
    }
]
*/

Console.WriteLine(response.Content);
chatHistory.AddMessage(response!.Role, response!.Content!);

// Example AI Model Response: Your flight to New York on January 1, 2025, has been successfully booked. Safe travels! ✈️🗽

// Define a plugin that contains the function to book travel
public class BookTravelPlugin
{
    [KernelFunction("book_flight")]
    [Description("Book travel given location and date")]
    public async Task<string> BookFlight(DateTime date, string location)
    {
        return await Task.FromResult( $"Travel was booked to {location} on {date}");
    }
}

From this example, you can see how to leverage a pre-built parser to extract key information from user input, such as the origin, destination, and date of a flight booking request. This modular approach allows you to focus on the high-level logic.

Leverage Collaborative Tools

Frameworks like CrewAI, Microsoft AutoGen, and Semantic Kernel facilitate the creation of multiple agents that can work together.

How teams can use these: Teams can design agents with specific roles and tasks, enabling them to test and refine collaborative workflows and improve overall system efficiency.

How it works in practice: You can create a team of agents where each agent has a specialized function, such as data retrieval, analysis, or decision-making. These agents can communicate and share information to achieve a common goal, such as answering a user query or completing a task.

Example code (AutoGen):

# Create agents, then set up a round-robin schedule where they work together, in this case in order

# Imports from the AutoGen AgentChat API (autogen-agentchat package)
from autogen_agentchat.agents import AssistantAgent, UserProxyAgent
from autogen_agentchat.conditions import TextMentionTermination
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.ui import Console

# NOTE: model_client (a chat completion client) and the retrieve_tool / analyze_tool
# functions are assumed to be defined elsewhere.

# Data Retrieval Agent
# Data Analysis Agent
# Decision Making Agent

agent_retrieve = AssistantAgent(
    name="dataretrieval",
    model_client=model_client,
    tools=[retrieve_tool],
    system_message="Use tools to solve tasks."
)

agent_analyze = AssistantAgent(
    name="dataanalysis",
    model_client=model_client,
    tools=[analyze_tool],
    system_message="Use tools to solve tasks."
)

# conversation ends when user says "APPROVE"
termination = TextMentionTermination("APPROVE")

user_proxy = UserProxyAgent("user_proxy", input_func=input)

team = RoundRobinGroupChat([agent_retrieve, agent_analyze, user_proxy], termination_condition=termination)

stream = team.run_stream(task="Analyze data", max_turns=10)
# Use asyncio.run(...) when running in a script.
await Console(stream)

In the preceding code, you can see how to create a task that involves multiple agents working together to analyze data. Each agent performs a specific function, and the task is executed by coordinating the agents to achieve the desired outcome. By creating dedicated agents with specialized roles, you can improve task efficiency and performance.

Learn in Real-Time

Advanced frameworks provide capabilities for real-time context understanding and adaptation.

How teams can use these: Teams can implement feedback loops where agents learn from interactions and adjust their behavior dynamically, leading to continuous improvement and refinement of capabilities.

How it works in practice: Agents can analyze user feedback, environmental data, and task outcomes to update their knowledge base, adjust decision-making algorithms, and improve performance over time. This iterative learning process enables agents to adapt to changing conditions and user preferences, enhancing overall system effectiveness.
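
The frameworks expose this in different ways, but as a minimal, framework-agnostic sketch (all class and method names below are hypothetical, not any specific SDK's API), a feedback loop might look like this: the agent records a reward signal after each interaction and uses it to bias how it responds next time.

# Minimal, framework-agnostic sketch of a feedback loop (all names are hypothetical).
# The agent records user feedback after each interaction and uses it to adjust
# how it responds next time.

from collections import defaultdict


class FeedbackLoopAgent:
    def __init__(self):
        # Running score per response style, updated from user feedback
        self.style_scores = defaultdict(float)

    def respond(self, user_input: str) -> tuple[str, str]:
        # Pick the style with the best score so far (default to "concise")
        style = max(self.style_scores, key=self.style_scores.get, default="concise")
        answer = f"[{style}] Here's my answer to: {user_input}"
        return answer, style

    def record_feedback(self, style: str, reward: float) -> None:
        # Positive reward reinforces the style, negative reward discourages it
        self.style_scores[style] += reward


agent = FeedbackLoopAgent()
answer, style = agent.respond("Summarize this report")
print(answer)

# The user found the answer too short, so penalize "concise" and reward "detailed"
agent.record_feedback("concise", -1.0)
agent.record_feedback("detailed", +1.0)

answer, style = agent.respond("Summarize this report")
print(answer)  # Now prefers the "detailed" style

In a real agent the "styles" would be prompts, retrieval strategies, or tool choices, and the reward signal could come from explicit user ratings or task success metrics.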

Wetin be di difference between AutoGen, Semantic Kernel, and Azure AI Agent Service?

There are many ways to compare these frameworks, but let's look at some key differences in terms of their design, capabilities, and target use cases:

AutoGen

AutoGen is an open-source framework developed by Microsoft Research's AI Frontiers Lab. It focuses on event-driven, distributed agentic applications, enabling multiple LLMs and SLMs, tools, and advanced multi-agent design patterns.

AutoGen is built around the core concept of agents: autonomous entities that can perceive their environment, make decisions, and take actions to achieve specific goals. Agents communicate through asynchronous messages, allowing them to work independently and in parallel, which makes the system scalable and responsive.

Agents are based on the actor model. According to Wikipedia, an actor is the basic building block of concurrent computation. In response to a message it receives, an actor can make local decisions, create more actors, send more messages, and determine how to respond to the next message it receives.
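
As a rough illustration of that actor idea (plain asyncio, not AutoGen's actual API), each actor below owns a mailbox, processes one message at a time, and can send messages to other actors in response:

# Minimal actor-model sketch in plain asyncio (illustrative only, not AutoGen's API).
# Each actor owns a mailbox, processes one message at a time, and may send
# messages to other actors in response.

import asyncio


class Actor:
    def __init__(self, name: str):
        self.name = name
        self.mailbox: asyncio.Queue = asyncio.Queue()

    async def send(self, message: str) -> None:
        await self.mailbox.put(message)

    async def run(self, peer: "Actor | None" = None) -> None:
        while True:
            message = await self.mailbox.get()
            if message == "STOP":
                break
            # Local decision: handle the message, optionally notify a peer actor
            print(f"{self.name} received: {message}")
            if peer is not None:
                await peer.send(f"{self.name} finished handling '{message}'")


async def main():
    retriever = Actor("retriever")
    analyzer = Actor("analyzer")

    # The retriever forwards its results to the analyzer; the analyzer just logs them
    tasks = [
        asyncio.create_task(retriever.run(peer=analyzer)),
        asyncio.create_task(analyzer.run()),
    ]

    await retriever.send("fetch sales data")
    await asyncio.sleep(0.1)  # let messages propagate
    await retriever.send("STOP")
    await analyzer.send("STOP")
    await asyncio.gather(*tasks)


asyncio.run(main())

AutoGen applies the same idea at a higher level: agents run concurrently, react to messages, and can spawn work for other agents.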

Use Cases: Automating code generation, data analysis tasks, and building custom agents for planning and research functions.

The most important core concepts behind AutoGen are the ones outlined above: autonomous agents, asynchronous message passing, and the actor model that ties them together.

Semantic Kernel + Agent Framework

Semantic Kernel is an enterprise-ready AI orchestration SDK. It consists of AI and memory connectors, along with an Agent Framework.

Let's first cover some of the core components.

With memory connectors, facts can be stored in a memory collection (in this case one called SummarizedAzureDocs) so the LLM can draw on them later. This is a very simple example, but it shows how you can store information in memory for the LLM to use.
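
The original snippet for this step isn't shown here, but a minimal sketch of what storing and searching such facts might look like with Semantic Kernel's Python memory abstractions (class names, parameters, and the Azure settings below are placeholders and may differ across SDK versions) is:

# Hedged sketch: storing and searching facts with Semantic Kernel's memory
# abstractions. API names may vary between semantic-kernel versions.
import asyncio

from semantic_kernel.connectors.ai.open_ai import AzureTextEmbedding
from semantic_kernel.memory import SemanticTextMemory, VolatileMemoryStore


async def main():
    # An embedding service is needed to vectorize the stored text (placeholder settings)
    embeddings = AzureTextEmbedding(
        deployment_name="YOUR_EMBEDDING_DEPLOYMENT",
        api_key="YOUR_API_KEY",
        endpoint="https://<your-resource>.openai.azure.com/",
    )
    memory = SemanticTextMemory(storage=VolatileMemoryStore(), embeddings_generator=embeddings)

    # Store a few facts in the SummarizedAzureDocs collection
    await memory.save_information("SummarizedAzureDocs", id="doc1", text="Azure Blob Storage is Microsoft's object storage solution for the cloud.")
    await memory.save_information("SummarizedAzureDocs", id="doc2", text="Azure Functions is a serverless compute service.")

    # Later, retrieve the most relevant fact for a question
    results = await memory.search("SummarizedAzureDocs", "Where can I store unstructured files?")
    print(results[0].text)


asyncio.run(main())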

Those are the basics of the Semantic Kernel framework, but what about the Agent Framework?
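
The Agent Framework builds on those components to let you define agents with a name, instructions, and plugins. As a hedged sketch (the agent name and question below are made up, and the exact API can vary between semantic-kernel versions), creating a simple Semantic Kernel agent in Python might look like this:

# Hedged sketch: a simple Semantic Kernel ChatCompletionAgent.
# Exact class and parameter names may differ between semantic-kernel versions.
import asyncio

from semantic_kernel.agents import ChatCompletionAgent
from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion


async def main():
    agent = ChatCompletionAgent(
        service=AzureChatCompletion(
            deployment_name="YOUR_DEPLOYMENT_NAME",
            api_key="YOUR_API_KEY",
            endpoint="https://<your-resource>.openai.azure.com/",
        ),
        name="TravelAgent",
        instructions="You are a helpful assistant that answers travel questions.",
    )

    # Ask the agent a question; the agent wraps the chat service, instructions,
    # and (optionally) plugins behind a single interface.
    response = await agent.get_response(messages="What documents do I need to fly to New York?")
    print(response.content)


asyncio.run(main())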

Azure AI Agent Service

Azure AI Agent Service is a more recent addition, introduced at Microsoft Ignite 2024. It allows the development and deployment of AI agents with more flexible models, such as directly calling open-source LLMs like Llama 3, Mistral, and Cohere.

Azure AI Agent Service provides stronger enterprise security mechanisms and data storage methods, making it well suited for enterprise applications.

It works out-of-the-box with multi-agent orchestration frameworks like AutoGen and Semantic Kernel.

This service is currently in Public Preview and supports Python and C# for building agents.

Using Semantic Kernel Python, we can create an Azure AI Agent with a user-defined plugin:

import asyncio
from typing import Annotated

from azure.identity.aio import DefaultAzureCredential

from semantic_kernel.agents import AzureAIAgent, AzureAIAgentSettings, AzureAIAgentThread
from semantic_kernel.contents import ChatMessageContent
from semantic_kernel.contents import AuthorRole
from semantic_kernel.functions import kernel_function


# Define a sample plugin for the sample
class MenuPlugin:
    """A sample Menu Plugin used for the concept sample."""

    @kernel_function(description="Provides a list of specials from the menu.")
    def get_specials(self) -> Annotated[str, "Returns the specials from the menu."]:
        return """
        Special Soup: Clam Chowder
        Special Salad: Cobb Salad
        Special Drink: Chai Tea
        """

    @kernel_function(description="Provides the price of the requested menu item.")
    def get_item_price(
        self, menu_item: Annotated[str, "The name of the menu item."]
    ) -> Annotated[str, "Returns the price of the menu item."]:
        return "$9.99"


async def main() -> None:
    ai_agent_settings = AzureAIAgentSettings.create()

    async with (
        DefaultAzureCredential() as creds,
        AzureAIAgent.create_client(
            credential=creds,
            conn_str=ai_agent_settings.project_connection_string.get_secret_value(),
        ) as client,
    ):
        # Create agent definition
        agent_definition = await client.agents.create_agent(
            model=ai_agent_settings.model_deployment_name,
            name="Host",
            instructions="Answer questions about the menu.",
        )

        # Create the AzureAI Agent using the defined client and agent definition
        agent = AzureAIAgent(
            client=client,
            definition=agent_definition,
            plugins=[MenuPlugin()],
        )

        # Create a thread to hold the conversation
        # If no thread is provided, a new thread will be
        # created and returned with the initial response
        thread: AzureAIAgentThread | None = None

        user_inputs = [
            "Hello",
            "What is the special soup?",
            "How much does that cost?",
            "Thank you",
        ]

        try:
            for user_input in user_inputs:
                print(f"# User: '{user_input}'")
                # Invoke the agent for the specified thread
                response = await agent.get_response(
                    messages=user_input,
                    thread=thread,
                )
                print(f"# {response.name}: {response.content}")
                thread = response.thread
        finally:
            await thread.delete() if thread else None
            await client.agents.delete_agent(agent.id)


if __name__ == "__main__":
    asyncio.run(main())

Core concepts

Azure AI Agent Service is built around a set of core concepts such as agents, conversation threads, and messages, which you can see reflected in the code above.

Use Cases: Azure AI Agent Service is designed for enterprise applications that require secure, scalable, and flexible AI agent deployment.

Wetin be di difference between dis frameworks?

At first glance these frameworks seem to overlap a lot, but there are some key differences in their design, capabilities, and target use cases.

Still not sure which one to choose?

Use Cases

Let's see if we can help by walking through some common use cases:

Q: I'm experimenting, learning, and building proof-of-concept agent applications, and I want to build and experiment quickly.

A: AutoGen would be a good choice for this scenario, as it focuses on event-driven, distributed agentic applications and supports advanced multi-agent design patterns.

Q: What makes AutoGen a better choice than Semantic Kernel and Azure AI Agent Service for this use case?

A: AutoGen is specifically designed for event-driven, distributed agentic applications, making it well suited for automating code generation and data analysis tasks. It provides the tools and capabilities you need to build complex multi-agent systems efficiently.

Q: It sounds like Azure AI Agent Service could work here too, since it has tools for code generation and more?

A: Yes, Azure AI Agent Service is a platform service for agents, with built-in capabilities for multiple models, Azure AI Search, Bing Search, and Azure Functions. It makes it easy to build your agents in the Foundry Portal and deploy them at scale.

Q: I'm still confused, just give me one option.

A: A great choice is to build your application in Semantic Kernel first and then use Azure AI Agent Service to deploy your agent. This approach lets you easily persist your agents while leveraging the power of building multi-agent systems in Semantic Kernel. Additionally, Semantic Kernel has a connector for AutoGen, which makes it easy to use the two frameworks together.

Let's summarize the key differences in a table:

| Framework | Focus | Core Concepts | Use Cases |
| --- | --- | --- | --- |
| AutoGen | Event-driven, distributed agentic applications | Agents, Personas, Functions, Data | Code generation, data analysis tasks |
| Semantic Kernel | Understanding and generating human-like text content | Agents, Modular Components, Collaboration | Natural language understanding, content generation |
| Azure AI Agent Service | Flexible models, enterprise security, code generation, tool calling | Modularity, Collaboration, Process Orchestration | Secure, scalable, and flexible AI agent deployment |

What is the ideal use case for each of these frameworks?

Can I integrate my existing Azure ecosystem tools directly, or do I need standalone solutions?

The answer is yes, you can integrate your existing Azure ecosystem tools directly with Azure AI Agent Service, especially because it was built to work seamlessly with other Azure services. For example, you can integrate Bing, Azure AI Search, and Azure Functions. There is also deep integration with Azure AI Foundry.

For AutoGen and Semantic Kernel, you can also integrate with Azure services, but it may require you to call the Azure services from your code. Another way to integrate is to use the Azure SDKs to interact with Azure services from your agents; a sketch of this approach is shown below. Additionally, as mentioned, you can use Azure AI Agent Service as an orchestrator for agents built in AutoGen or Semantic Kernel, which gives you easy access to the Azure ecosystem.
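
For example, a minimal sketch of that Azure SDK approach, wrapping an Azure AI Search query in a plugin that an agent can call (the endpoint, key, index name, and field name below are placeholders), might look like this:

# Hedged sketch: calling an Azure service (Azure AI Search) from an agent plugin
# using the Azure SDK. Endpoint, key, index name, and field name are placeholders.
from typing import Annotated

from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from semantic_kernel.functions import kernel_function


class AzureSearchPlugin:
    """Exposes an Azure AI Search index to the agent as a callable tool."""

    def __init__(self):
        self.client = SearchClient(
            endpoint="https://<your-search-service>.search.windows.net",
            index_name="docs-index",
            credential=AzureKeyCredential("YOUR_SEARCH_API_KEY"),
        )

    @kernel_function(name="search_docs", description="Search the documentation index.")
    def search_docs(
        self, query: Annotated[str, "The search query"]
    ) -> Annotated[str, "Top matching document snippets"]:
        results = self.client.search(search_text=query, top=3)
        return "\n".join(doc["content"] for doc in results)

Such a plugin could then be registered just like the BookTravelPlugin earlier, for example with kernel.add_plugin(AzureSearchPlugin(), plugin_name="azure_search").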

Sample Codes

Have More Questions about AI Agent Frameworks?

Join the Azure AI Foundry Discord to meet other learners, attend office hours, and get your questions about AI Agents answered.

References

Previous Lesson

Introduction to AI Agents and Agent Use Cases

Next Lesson

Understanding Agentic Design Patterns

