Mitigating Hallucinations in Large Language Models (LLMs) with Azure AI Services

This document provides actionable best practices for reducing hallucinations (instances where a model generates inaccurate or fabricated information) when using LLMs. It highlights strategies for effective prompt engineering, data grounding, evaluation, and security using Azure AI services, including Azure OpenAI Service, Azure AI Foundry, Prompt Flow, and Content Safety.
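As a concrete illustration of the grounding and prompt-engineering strategies mentioned above, here is a minimal sketch of a grounded chat call using the `openai` Python SDK (v1+) against Azure OpenAI. The endpoint, API version, deployment name, and hard-coded context passage are placeholder assumptions; in a real system the context would come from a retrieval step (for example, Azure AI Search).

```python
# Minimal sketch: constrain the model to answer only from supplied context,
# a common way to reduce hallucinations. Endpoint/deployment values are assumptions.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # e.g. https://<resource>.openai.azure.com
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",  # assumed API version
)

# Hard-coded here to keep the sketch self-contained; normally retrieved
# from your own data source at query time.
retrieved_context = (
    "Azure OpenAI Service provides REST API access to OpenAI models "
    "hosted in Azure, with Azure security and compliance features."
)

system_prompt = (
    "Answer ONLY using the context below. If the context does not contain "
    "the answer, reply exactly: 'I don't know.' Do not speculate.\n\n"
    f"Context:\n{retrieved_context}"
)

response = client.chat.completions.create(
    model="gpt-4o",  # your Azure deployment name (assumption)
    temperature=0,   # low temperature reduces fabricated detail
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "What is Azure OpenAI Service?"},
    ],
)

print(response.choices[0].message.content)
```

The two levers shown here, grounding the prompt in retrieved context and instructing the model to refuse when the context is insufficient, are the starting point; the evaluation and Content Safety practices discussed in the full document help verify that the model actually stays grounded.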
