Prompt Basics¶
This notebook demonstrates how to use FewShotPromptBuilder
to create structured prompts for OpenAI models.
In [ ]:
# Import FewShotPromptBuilder from openaivec
from openaivec import FewShotPromptBuilder
Basic Usage¶
Create a simple prompt with a purpose, cautions, and examples.
In [2]:
# Build a basic prompt with purpose, cautions, and examples
prompt_str: str = (
    FewShotPromptBuilder()
    .purpose("some purpose")
    .caution("some caution")
    .caution("some other caution")
    .example(
        input_value="some input",
        output_value="some output"
    )
    .example(
        input_value="some other input",
        output_value="some other output"
    )
    .build()
)

# Print the generated prompt
print(prompt_str)
<Prompt>
  <Purpose>some purpose</Purpose>
  <Cautions>
    <Caution>some caution</Caution>
    <Caution>some other caution</Caution>
  </Cautions>
  <Examples>
    <Example>
      <Input>some input</Input>
      <Output>some output</Output>
    </Example>
    <Example>
      <Input>some other input</Input>
      <Output>some other output</Output>
    </Example>
  </Examples>
</Prompt>
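The builder only produces the prompt text; sending it to a model is a separate step. The snippet below is a minimal, illustrative sketch of one way to pass the built string as instructions to the OpenAI Responses API. It is not part of openaivec itself: the model name and input are placeholder assumptions, and it requires the openai package with OPENAI_API_KEY set.

# Illustrative sketch (not part of openaivec): use the built prompt as
# instructions for the OpenAI Responses API. The model name and input are
# placeholders; requires the openai package and OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()
response = client.responses.create(
    model="gpt-4.1-mini",     # example model, substitute your own
    instructions=prompt_str,  # the XML prompt built above
    input="some input",
)
print(response.output_text)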
Structured Output Example¶
Demonstrate how to use structured outputs with Pydantic models.
In [ ]:
# Import BaseModel from pydantic for structured outputs
from pydantic import BaseModel


# Define a structured result model
class Result(BaseModel):
    field1: str
    field2: str


# Build a prompt using structured examples
prompt_str: str = (
    FewShotPromptBuilder()
    .purpose("some purpose")
    .caution("some caution")
    .caution("some other caution")
    .example(
        input_value="some input",
        output_value=Result(field1="some field", field2="some other field")
    )
    .example(
        input_value="some other input",
        output_value=Result(field1="some field", field2="some other field")
    )
    .build()
)

# Print the structured prompt
print(prompt_str)
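Because the examples embed a Pydantic model, the same class can be reused to parse a model's reply. The sketch below shows one way to do that; it assumes a recent openai SDK that exposes client.responses.parse, and the model name is only an example rather than anything openaivec prescribes.

# Illustrative sketch: reuse the Result model to parse a structured reply.
# Assumes a recent openai SDK that provides client.responses.parse;
# the model name is an example only.
from openai import OpenAI

client = OpenAI()
parsed_response = client.responses.parse(
    model="gpt-4.1-mini",
    instructions=prompt_str,  # the structured prompt built above
    input="some input",
    text_format=Result,       # parse the reply back into the Pydantic model
)
print(parsed_response.output_parsed)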
Improving Prompts with LLM¶
Use an OpenAI model to automatically improve and explain the prompt.
The improve()
method can be used with or without explicit configuration.
In [4]:
# Define a structured model for fruits
class Fruit(BaseModel):
    name: str
    color: str


# Method 1: Use default configuration with environment variables
# (Requires OPENAI_API_KEY or Azure OpenAI environment variables)
prompt: str = (
    FewShotPromptBuilder()
    .purpose("Return the color of given fruit")
    .caution("The fruit name should be in English")
    .example("Apple", Fruit(name="Apple", color="Red"))
    .example("Peach", Fruit(name="Peach", color="Pink"))
    .example("Banana", Fruit(name="Banana", color="Yellow"))
    .example("Strawberry", Fruit(name="Strawberry", color="Red"))
    .example("Blueberry", Fruit(name="Blueberry", color="Blue"))
    .improve()  # Uses OPENAI_API_KEY environment variable and default model (gpt-4.1-mini)
    .explain()
    .build()
)

# Method 2: Explicitly provide client and model_name
# from openai import OpenAI
# client = OpenAI(api_key="your-api-key")
# prompt: str = (
#     FewShotPromptBuilder()
#     .purpose("Return the color of given fruit")
#     .caution("The fruit name should be in English")
#     .example("Apple", Fruit(name="Apple", color="Red"))
#     .example("Peach", Fruit(name="Peach", color="Pink"))
#     .example("Banana", Fruit(name="Banana", color="Yellow"))
#     .example("Strawberry", Fruit(name="Strawberry", color="Red"))
#     .example("Blueberry", Fruit(name="Blueberry", color="Blue"))
#     .improve(client=client, model_name="gpt-4o")  # Explicit client and model
#     .explain()
#     .build()
# )
=== Iteration 1 ===
Instruction: The original purpose "Return the color of given fruit" is somewhat terse and could lead to ambiguity regarding the output format and expected behavior. Specifically, it does not specify that the output should be a JSON string containing the fruit's name and color, which is crucial for consistent responses. Refining the purpose to explicitly describe both the input and the expected output format will improve clarity and reduce ambiguity.

--- before
+++ after
@@ -1,7 +1,7 @@
 <Prompt>
-  <Purpose>Return the color of given fruit</Purpose>
+  <Purpose>Given the name of a fruit in English, return a JSON string containing the fruit's name and its typical color.</Purpose>
   <Cautions>
-    <Caution>The fruit name should be in English</Caution>
+    <Caution>The fruit name should be in English.</Caution>
   </Cautions>
   <Examples>
     <Example>

=== Iteration 2 ===
Instruction: The 'cautions' field currently only specifies that the fruit name should be in English, which is important but insufficient. There are other potential edge cases and pitfalls that need highlighting, such as handling unknown fruits, case sensitivity, spelling variations, and ensuring the typical color (not rare variants) is used. Adding these cautions will help guide the model to generate more accurate and robust responses.

--- before
+++ after
@@ -2,6 +2,10 @@
   <Purpose>Given the name of a fruit in English, return a JSON string containing the fruit's name and its typical color.</Purpose>
   <Cautions>
     <Caution>The fruit name should be in English.</Caution>
+    <Caution>The fruit name should be spelled correctly to avoid errors.</Caution>
+    <Caution>If the fruit is unknown or not in the database, respond with a JSON indicating the fruit name and color as "Unknown".</Caution>
+    <Caution>Use the typical or most common color associated with the fruit, ignoring rare or variant colors.</Caution>
+    <Caution>The input is case-insensitive; handle fruit names regardless of letter case.</Caution>
   </Cautions>
   <Examples>
     <Example>

=== Iteration 3 ===
Instruction: The examples currently cover a limited set of fruits, all with known colors, and do not demonstrate handling of edge cases such as case variations or unknown fruits. Expanding the examples to include such cases will better demonstrate expected behavior and improve the prompt's comprehensiveness. We add examples with varied casing and an unknown fruit to cover these scenarios.

--- before
+++ after
@@ -28,5 +28,17 @@
       <Input>Blueberry</Input>
       <Output>{"name":"Blueberry","color":"Blue"}</Output>
     </Example>
+    <Example>
+      <Input>apple</Input>
+      <Output>{"name":"Apple","color":"Red"}</Output>
+    </Example>
+    <Example>
+      <Input>Mango</Input>
+      <Output>{"name":"Mango","color":"Orange"}</Output>
+    </Example>
+    <Example>
+      <Input>Dragonfruit</Input>
+      <Output>{"name":"Dragonfruit","color":"Unknown"}</Output>
+    </Example>
   </Examples>
 </Prompt>

=== Iteration 4 ===
Instruction: In this final iteration, we conducted a thorough review of the entire prompt to ensure it is clear, unambiguous, and free from redundancies or contradictions. We standardized phrasing across fields, ensured JSON outputs are consistently formatted, and confirmed that the purpose, cautions, and examples fully support the task requirements. No further issues were found; thus, the prompt is finalized.
Display Improved Prompt¶
Output the improved and explained prompt.
In [5]:
# Print the improved prompt
print(prompt)
<Prompt>
  <Purpose>Given the name of a fruit in English, return a JSON string containing the fruit's name and its typical color.</Purpose>
  <Cautions>
    <Caution>The fruit name should be in English.</Caution>
    <Caution>The fruit name should be spelled correctly to avoid errors.</Caution>
    <Caution>If the fruit is unknown or not in the database, respond with a JSON indicating the fruit name and color as "Unknown".</Caution>
    <Caution>Use the typical or most common color associated with the fruit, ignoring rare or variant colors.</Caution>
    <Caution>The input is case-insensitive; handle fruit names regardless of letter case.</Caution>
  </Cautions>
  <Examples>
    <Example>
      <Input>Apple</Input>
      <Output>{"name":"Apple","color":"Red"}</Output>
    </Example>
    <Example>
      <Input>Peach</Input>
      <Output>{"name":"Peach","color":"Pink"}</Output>
    </Example>
    <Example>
      <Input>Banana</Input>
      <Output>{"name":"Banana","color":"Yellow"}</Output>
    </Example>
    <Example>
      <Input>Strawberry</Input>
      <Output>{"name":"Strawberry","color":"Red"}</Output>
    </Example>
    <Example>
      <Input>Blueberry</Input>
      <Output>{"name":"Blueberry","color":"Blue"}</Output>
    </Example>
    <Example>
      <Input>apple</Input>
      <Output>{"name":"Apple","color":"Red"}</Output>
    </Example>
    <Example>
      <Input>Mango</Input>
      <Output>{"name":"Mango","color":"Orange"}</Output>
    </Example>
    <Example>
      <Input>Dragonfruit</Input>
      <Output>{"name":"Dragonfruit","color":"Unknown"}</Output>
    </Example>
  </Examples>
</Prompt>
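Since the improved prompt asks the model for a JSON string, a new fruit name can be sent and the reply decoded with the standard library. This is an illustrative sketch under the same assumptions as before (openai package installed, OPENAI_API_KEY set); the model name and the input fruit are placeholders.

# Illustrative sketch: query the improved prompt and decode the JSON reply.
# Model name and input are placeholders; requires OPENAI_API_KEY.
import json

from openai import OpenAI

client = OpenAI()
response = client.responses.create(
    model="gpt-4.1-mini",
    instructions=prompt,  # the improved prompt built above
    input="Grape",
)
print(json.loads(response.output_text))  # expected shape: {"name": ..., "color": ...}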
Conclusion¶
This notebook illustrated how to effectively use FewShotPromptBuilder
to create, structure, and enhance prompts for OpenAI models.