🗃️ Code Execution
5 items
🗃️ OpenAI Assistant
1 item
🗃️ GroupChat
4 items
🗃️ Using Non-OpenAI Models
16 items
🗃️ Handling Long Contexts
2 items
📄️ LLM Caching
AutoGen supports caching API requests so that identical requests can reuse a stored response instead of calling the API again. This is useful for reproducibility and cost savings when repeating or continuing experiments.
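A minimal sketch of how disk-based caching can be used, assuming AutoGen's `Cache.disk` helper; the model name, API key source, and prompt are placeholders:

```python
import os
from autogen import AssistantAgent, UserProxyAgent, Cache

# Placeholder endpoint configuration; substitute your own model and key.
config_list = [{"model": "gpt-4", "api_key": os.environ["OPENAI_API_KEY"]}]

assistant = AssistantAgent("assistant", llm_config={"config_list": config_list})
user_proxy = UserProxyAgent(
    "user_proxy",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=0,
    code_execution_config=False,
)

# Responses are cached on disk under the given seed; re-running the same request
# with the same seed returns the cached result instead of hitting the API.
with Cache.disk(cache_seed=42) as cache:
    user_proxy.initiate_chat(
        assistant, message="Summarize what LLM caching does.", cache=cache
    )
```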
📄️ Agent Observability
AutoGen supports advanced LLM agent observability and monitoring through built-in logging and partner providers.
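A rough sketch of the built-in logging path, assuming the `autogen.runtime_logging` module with its default SQLite backend; the database filename and config list are placeholders:

```python
import os
import autogen
from autogen import AssistantAgent, UserProxyAgent

config_list = [{"model": "gpt-4", "api_key": os.environ["OPENAI_API_KEY"]}]  # placeholder

# Start built-in runtime logging; events are written to a local SQLite file.
session_id = autogen.runtime_logging.start(config={"dbname": "logs.db"})

assistant = AssistantAgent("assistant", llm_config={"config_list": config_list})
user_proxy = UserProxyAgent(
    "user_proxy",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=0,
    code_execution_config=False,
)
user_proxy.initiate_chat(assistant, message="What is agent observability?")

# Stop logging; the session id identifies this run's rows in the database.
autogen.runtime_logging.stop()
print("Logged session:", session_id)
```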
📄️ LLM Configuration
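A short illustrative sketch of what an LLM configuration typically looks like; the model name and the environment variable holding the key are assumptions, not requirements:

```python
import os
import autogen

# Each entry in config_list describes one model endpoint the agent may use.
config_list = [
    {
        "model": "gpt-4",                         # placeholder model name
        "api_key": os.environ["OPENAI_API_KEY"],  # key supplied via the environment
    }
]

# llm_config bundles the endpoint list with generation settings.
llm_config = {"config_list": config_list, "temperature": 0}

assistant = autogen.AssistantAgent("assistant", llm_config=llm_config)
```

The same list can also be loaded from a JSON file or environment variable with `autogen.config_list_from_json`.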
🗃️ Prompting and Reasoning
2 items
📄️ Retrieval Augmentation
Retrieval Augmented Generation (RAG) is a powerful technique that combines language models with external knowledge retrieval to improve the quality and relevance of generated responses.
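A rough sketch of retrieval-augmented chat using `RetrieveUserProxyAgent` from `autogen.agentchat.contrib` (requires the `retrievechat` extra); the `docs_path`, question, and config list are placeholders:

```python
import os
from autogen import AssistantAgent
from autogen.agentchat.contrib.retrieve_user_proxy_agent import RetrieveUserProxyAgent

config_list = [{"model": "gpt-4", "api_key": os.environ["OPENAI_API_KEY"]}]  # placeholder

assistant = AssistantAgent("assistant", llm_config={"config_list": config_list})

# The RAG proxy retrieves relevant chunks from docs_path and adds them to the prompt.
ragproxyagent = RetrieveUserProxyAgent(
    name="ragproxyagent",
    human_input_mode="NEVER",
    retrieve_config={
        "task": "qa",
        "docs_path": "./docs",  # hypothetical folder of reference documents
    },
    code_execution_config=False,
)

ragproxyagent.initiate_chat(
    assistant,
    message=ragproxyagent.message_generator,
    problem="How do I enable LLM caching in AutoGen?",
)
```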
📄️ Task Decomposition