Challenge 05 - Responsible AI


Pre-requisites

Introduction

As LLMs grow in popularity and use around the world, the need to manage and monitor their outputs becomes increasingly important. In this challenge, you will learn how to evaluate the outputs of LLMs and how to identify and mitigate potential biases in the model.

Description

By the end of this challenge, you should be able to answer questions covering each of the sections below.

Sections in this Challenge (an illustrative sketch of each technique follows the list):

  1. Identifying harms and detecting Personally Identifiable Information (PII)
  2. Evaluating truthfulness using Ground-Truth Datasets
  3. Evaluating truthfulness using GPT without Ground-Truth Datasets

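For the first section, the core task is flagging and masking PII in model inputs and outputs. As a minimal sketch of the idea, the example below uses the open-source `presidio-analyzer` and `presidio-anonymizer` packages (an assumption for illustration; the notebook may use a different service, such as Azure AI Language, for PII detection):

```python
# pip install presidio-analyzer presidio-anonymizer
# Presidio also needs a spaCy model, e.g.: python -m spacy download en_core_web_lg
from presidio_analyzer import AnalyzerEngine
from presidio_anonymizer import AnonymizerEngine

text = "Contact Jane Doe at 212-555-0101 before the demo."

# Detect PII entities, with character spans and confidence scores
analyzer = AnalyzerEngine()
findings = analyzer.analyze(text=text, entities=["PERSON", "PHONE_NUMBER"], language="en")
for f in findings:
    print(f.entity_type, text[f.start:f.end], round(f.score, 2))

# Replace the detected spans with placeholder tokens such as <PERSON>
anonymizer = AnonymizerEngine()
print(anonymizer.anonymize(text=text, analyzer_results=findings).text)
```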
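For the second section, truthfulness against a ground-truth dataset is typically scored by comparing each model answer to a reference answer. One common, self-contained metric is SQuAD-style token-overlap F1; the sketch below illustrates that idea and is not necessarily the exact metric the notebook uses:

```python
import re
import string
from collections import Counter

def normalize(text: str) -> list[str]:
    """Lowercase, drop articles and punctuation, and split into tokens."""
    text = re.sub(r"\b(a|an|the)\b", " ", text.lower())
    text = "".join(ch for ch in text if ch not in string.punctuation)
    return text.split()

def token_f1(prediction: str, reference: str) -> float:
    """Token-overlap F1 between a model answer and a ground-truth answer."""
    pred, ref = normalize(prediction), normalize(reference)
    overlap = sum((Counter(pred) & Counter(ref)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

print(token_f1("It is in Paris", "Paris"))  # 0.4
```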
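For the third section, when no ground truth is available, a common approach is to ask a strong model to grade an answer's consistency with a source passage (often called LLM-as-a-judge). The sketch below uses the `openai` Python package; the client setup, model name, and prompt wording are assumptions, and the notebook may use Azure OpenAI credentials instead:

```python
# pip install openai  (assumes OPENAI_API_KEY is set in the environment)
from openai import OpenAI

client = OpenAI()

JUDGE_PROMPT = """You are grading an answer for factual consistency with a source text.
Source: {source}
Answer: {answer}
Reply with a single integer from 1 (contradicts the source) to 5 (fully supported), and nothing else."""

def judge_consistency(source: str, answer: str, model: str = "gpt-4o-mini") -> int:
    """Ask a GPT model to score how well an answer is supported by the source."""
    response = client.chat.completions.create(
        model=model,  # assumed model name; substitute your own deployment
        temperature=0,  # keep grading as repeatable as possible
        messages=[{"role": "user", "content": JUDGE_PROMPT.format(source=source, answer=answer)}],
    )
    # A robust grader should validate the reply; this sketch just parses it
    return int(response.choices[0].message.content.strip())

print(judge_consistency("The Eiffel Tower is in Paris.", "It is located in Berlin."))
```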
You will work through a Jupyter notebook to complete this challenge.

The notebook can be found in your Codespace under the /notebooks folder. If you are working locally or in the cloud, you can find it in the /notebooks folder of the Resources.zip file.

To run the Jupyter notebook, navigate to it in your Codespace or open it in VS Code on your local workstation. The notebook contains further instructions for the challenge, along with in-line code blocks that you will interact with to complete its tasks. After completing all tasks in the notebook, return to this student guide to validate that you have met the success criteria below.

Success Criteria

To complete this challenge successfully, you should be able to:

Additional Resources