Challenge 01 - Prompt Engineering

Prerequisites

Introduction

As LLMs grow in popularity and use around the world, the need to manage and monitor their outputs becomes increasingly important. In this challenge, you will learn how to use prompt engineering techniques to generate desired results for LLMs.

Description

Model deployment for the challenge:

NOTE: For model families currently available, please reference this link for more information: Azure OpenAI Service models.

Some models are not available for new deployments beginning July 6, 2023. Deployments created prior to July 6, 2023 remain available to customers until July 5, 2024. You may revise the environment file and the model you deploy accordingly. Please refer to the following link for more details: Azure OpenAI Service legacy models.
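If you do change the model you deploy, the environment file must point at your own resource and deployment. The variable names below are illustrative assumptions based on a typical Azure OpenAI setup; match them to the names actually used in your Resources:

```
# Hypothetical environment file entries -- variable names and values are
# placeholders; substitute the ones from your own Azure OpenAI resource.
AZURE_OPENAI_ENDPOINT="https://<your-resource>.openai.azure.com/"
AZURE_OPENAI_API_KEY="<your-key>"
AZURE_OPENAI_DEPLOYMENT_NAME="<your-deployment>"
```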

Questions you should be able to answer by the end of this challenge:

You will run the following Jupyter notebook to complete the tasks for this challenge:

The file can be found in your Codespace under the /notebooks folder. If you are working locally or in the Cloud, you can find it in the /notebooks folder of the Resources.zip file.

To run a Jupyter notebook, navigate to it in your Codespace or open it in VS Code on your local workstation. The notebook contains further instructions for the challenge, along with in-line code blocks that you will interact with to complete its tasks. Return to this student guide after completing all tasks in the notebook to validate that you have met the success criteria below.

Sections in this Challenge:

  1. Parameter Experimentation
  2. System Message Engineering
  3. Iterative Prompting Principles:

    3.1 Write clear and specific instructions

    3.2 Give the model time to “think”
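As a preview of the first two sections, the sketch below shows how parameter experimentation and a system message fit together in a chat-completion request. It is a minimal, offline illustration, not the notebook's own code: the system message text, the user prompt, and the parameter grid are all assumptions, and the actual Azure OpenAI call (which needs the credentials from your environment file) is shown only in a comment.

```python
# Hypothetical sketch: building chat-completion payloads for a grid of
# temperature / top_p settings, each sharing one engineered system message.
import itertools

# An example system message -- the notebook will have you craft your own.
SYSTEM_MESSAGE = (
    "You are a concise assistant. Answer in at most two sentences."
)

def build_request(user_prompt: str, temperature: float, top_p: float) -> dict:
    """Assemble the payload for one experiment run."""
    return {
        "messages": [
            {"role": "system", "content": SYSTEM_MESSAGE},
            {"role": "user", "content": user_prompt},
        ],
        "temperature": temperature,
        "top_p": top_p,
    }

# Parameter grid to compare side by side: 3 temperatures x 2 top_p values.
grid = list(itertools.product([0.0, 0.7, 1.0], [0.5, 1.0]))
requests = [build_request("Summarize what an LLM is.", t, p) for t, p in grid]

# With credentials loaded from your environment file, each payload would be
# sent roughly like this (openai>=1.x SDK; deployment name is a placeholder):
#   client = AzureOpenAI(azure_endpoint=..., api_key=..., api_version=...)
#   client.chat.completions.create(model="<your-deployment>", **payload)
for payload in requests:
    print(payload["temperature"], payload["top_p"])
```

Comparing the outputs of such a grid for the same prompt is the core of parameter experimentation: low temperature tends toward stable, repeatable answers, while higher temperature and top_p admit more varied completions.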

Success Criteria

To complete this challenge successfully:

Additional Resources