🗃️ Code Execution
4 items
🗃️ OpenAI Assistant
1 item
🗃️ GroupChat
1 item
🗃️ Using Non-OpenAI Models
9 items
🗃️ Handling Long Contexts
2 items
📄️ LLM Caching
AutoGen supports caching API requests so that identical requests are served from the cache instead of being re-issued. This is useful when repeating or continuing experiments, both for reproducibility and for cost savings.
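The idea behind request caching can be sketched in a few lines: key each request by a deterministic hash of its full payload, and return the stored response when the same key is seen again. This is a minimal illustrative sketch (an in-memory dict with a hypothetical `RequestCache` class), not AutoGen's actual implementation, which persists the cache across runs.

```python
import hashlib
import json

class RequestCache:
    """Toy in-memory request cache (hypothetical sketch; AutoGen's cache
    additionally persists entries so experiments can be resumed)."""

    def __init__(self):
        self._store = {}
        self.hits = 0

    def _key(self, request: dict) -> str:
        # Serialize deterministically so identical requests map to one key.
        payload = json.dumps(request, sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

    def get_or_call(self, request: dict, call):
        key = self._key(request)
        if key in self._store:
            self.hits += 1          # identical request: reuse stored response
            return self._store[key]
        response = call(request)    # cache miss: issue the API call
        self._store[key] = response
        return response

# Stand-in for a real LLM API call.
fake_llm = lambda req: f"response to {req['messages'][-1]}"

cache = RequestCache()
req = {"model": "gpt-4", "messages": ["What is AutoGen?"]}
first = cache.get_or_call(req, fake_llm)
second = cache.get_or_call(req, fake_llm)  # served from the cache
```

On the second call the expensive API function is never invoked, which is what makes repeated experiment runs both cheaper and reproducible.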
📄️ LLM Configuration
🗃️ Prompting and Reasoning
2 items
📄️ Retrieval Augmentation
Retrieval Augmented Generation (RAG) is a powerful technique that combines language models with external knowledge retrieval to improve the quality and relevance of generated responses.
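The RAG pattern in miniature: rank a document collection by relevance to the query, then prepend the top results to the prompt before calling the model. The sketch below uses naive word-overlap scoring as a stand-in for the embedding-based retrieval a real system would use; the helper names (`retrieve`, `augment_prompt`) are hypothetical.

```python
def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    # Score each document by word overlap with the query
    # (a toy substitute for embedding similarity search).
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def augment_prompt(query: str, docs: list[str], k: int = 1) -> str:
    # Inject the retrieved context ahead of the user's question.
    context = "\n".join(retrieve(query, docs, k))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "AutoGen is a framework for building multi-agent LLM applications.",
    "Bananas are rich in potassium.",
]
prompt = augment_prompt("What is AutoGen?", docs)
```

The augmented prompt now carries the relevant document alongside the question, so the model can ground its answer in retrieved knowledge rather than relying on its parameters alone.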
📄️ Task Decomposition