BigBench-Hard#

Introduction#

In this notebook, we demonstrate how to use the trace package to jointly optimize prompts and code for natural language processing tasks from the BigBench-Hard benchmark. State-of-the-art approaches on this benchmark optimize only the prompts, relying on hand-written code to extract answers from LLM responses. By leveraging the LLM-based optimizers provided in trace, we aim to improve the end-to-end performance of a workflow that calls an LLM and post-processes its responses to produce accurate and relevant answers.

Setup#

First, we’ll import the necessary packages and set up our environment. We will use a copy of the BigBench-Hard benchmark hosted on HuggingFace. To use HuggingFace datasets, ensure that you have the datasets package installed:

pip install datasets

!pip install trace-opt
# Import necessary libraries
import autogen
from opto.trace.nodes import node, GRAPH, ParameterNode
from opto.optimizers import OptoPrime
from datasets import load_dataset
from textwrap import dedent
from opto.trace.bundle import bundle
from opto.trace.modules import model
from opto.trace.errors import ExecutionError
from opto.trace.nodes import ExceptionNode
from typing import List
import re

Define the Evaluation Function#

Next, we’ll define the utility function for scoring an answer obtained by prompting an LLM. For multiple-choice targets such as "(A)", the last "(X)" pattern found in the prediction is compared to the target; otherwise an exact string match is required.

def eval_metric(true, prediction):
    # Multiple-choice targets look like "(A)", "(B)", ...; otherwise compare strings directly.
    matches = re.findall(r"\([A-Z]\)", true)
    if matches:
        # Take the last "(X)" pattern in the prediction as the parsed answer.
        pred_matches = re.findall(r"\([A-Z]\)", prediction)
        parsed_answer = pred_matches[-1] if pred_matches else ""
        return parsed_answer == true
    else:
        return prediction == true
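
To make the scoring behavior concrete, here is a quick sanity check (the example strings below are illustrative, not drawn from the dataset):

# Multiple-choice target: only the last "(X)" pattern in the prediction counts.
print(eval_metric("(B)", "The answer is (B)"))        # True
print(eval_metric("(B)", "(A) looks right, so (C)"))  # False

# Free-form target: exact string match.
print(eval_metric("no", "no"))   # True
print(eval_metric("no", "No"))   # False (the comparison is case-sensitive)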

Helper Class#

We’ll create a helper class called LLMCallable to interact with the LLM API.

class LLMCallable:
    def __init__(self, config_list=None, max_tokens=1024, verbose=False):
        if config_list is None:
            config_list = autogen.config_list_from_json("OAI_CONFIG_LIST")
        self.llm = autogen.OpenAIWrapper(config_list=config_list)
        self.max_tokens = max_tokens
        self.verbose = verbose

    @bundle(catch_execution_error=True)
    def call_llm(self, user_prompt):
        system_prompt = "You are a helpful assistant.\n"
        messages = [{"role": "system", "content": system_prompt}, {"role": "user", "content": user_prompt}]
        response = self.llm.create(messages=messages, max_tokens=self.max_tokens)
        response = response.choices[0].message.content

        if self.verbose:
            print("LLM response:\n", response)
        return response
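
As a quick sanity check, you can instantiate the helper and query the LLM directly. Because call_llm is wrapped with @bundle, it returns a trace node rather than a plain string, so the text lives in the node's .data attribute. This is a minimal sketch, assuming a valid OAI_CONFIG_LIST file is available:

# Minimal sketch, assuming OAI_CONFIG_LIST points at a working LLM endpoint.
llm = LLMCallable(verbose=True)
reply = llm.call_llm("Answer in one word: what color is the sky?")
print(reply.data)  # the underlying string content of the traced node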

Define a Traced Class#

We will define a Predict class to generate predictions using the LLM. Note that we use the model decorator provided by trace, which wraps a Python class to enable tracing of its methods.

@model
class Predict(LLMCallable):
    def __init__(self):
        super().__init__()

        self.demos = []
        self.prompt_template = dedent(
            """
        Given the fields `question`, produce the fields `answer`.

        ---

        Follow the following format.

        Question: 
        Answer: 

        ---
        Question: {}
        Answer:
        """
        )
        self.prompt_template = ParameterNode(self.prompt_template, trainable=True,
                                             description="This is the Prompt Template to the LLM. " + \
                                                         "Need to include information about what the format of answers LLM should output. " + \
                                                         "They can be (A)/(B), a number like 8, or a string, or Yes/No.")

    @bundle(trainable=True, catch_execution_error=True, allow_external_dependencies=True)
    def extract_answer(self, prompt_template, question, response):
        answer = response.split("Answer:")[1].strip()
        return answer

    @bundle(trainable=True, catch_execution_error=True, allow_external_dependencies=True)
    def create_prompt(self, prompt_template, question):
        return prompt_template.format(question)

    def forward(self, question):
        user_prompt = self.create_prompt(self.prompt_template, question)
        response = self.call_llm(user_prompt)
        answer = self.extract_answer(self.prompt_template, question, response)
        return answer
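
Before wiring up the optimizer, it can help to run a single forward pass to see the traced pipeline end to end. As above, the result is a trace node, so the predicted answer is in .data. This is a sketch with an illustrative question, assuming OAI_CONFIG_LIST is configured:

# Minimal sketch of one traced forward pass (the question text is made up).
dp = Predict()
answer = dp.forward('Is the following sentence plausible? "The striker scored a goal."')
print(answer.data)

# The heterogeneous parameters trace will optimize: the prompt template plus the
# two functions bundled with trainable=True.
for p in dp.parameters():
    print(p.name)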

Define the optimizer#

Note that prompt_template is a ParameterNode, while extract_answer and create_prompt are trainable functions. trace handles the optimization of these heterogeneous parameters seamlessly.

def learn_predict(dp, optimizer, examples):
    for step, example in enumerate(examples):
        try:
            response = dp.forward(example['question'])
            correctness = eval_metric(example['answer'], response)
            feedback = "The answer is correct! No need to change anything." if correctness else f"The answer is wrong. We expect the output of your answer to be \"{example['answer']}\". Please modify the prompt and relevant parts of the program to help LLM produce the right answer."
        except ExecutionError as e:
            # A runtime error inside a bundled function becomes the feedback itself.
            response = e.exception_node
            feedback = response.data
            correctness = False
            
        print("Question:", example["question"])
        print("Expected answer:", example["answer"])
        print("Answer:", response)

        if correctness:
            continue

        # Propagate the textual feedback through the trace graph before updating parameters.
        optimizer.zero_feedback()
        optimizer.backward(response, feedback)

        print(f"Output: {response}, Feedback: {feedback}, Variables:")  # Logging
        for p in optimizer.parameters:
            print(p.name, p.data)
        optimizer.step(verbose=True)
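
If you would rather track accuracy than inspect individual answers, a small helper can score the current Predict module over a held-out slice. This is a sketch built from the functions above; counting execution errors as failures is an illustrative choice, not part of the original workflow:

# Minimal sketch: fraction of examples the current module answers correctly.
def evaluate(dp, examples):
    correct = 0
    for example in examples:
        try:
            response = dp.forward(example["question"])
            correct += int(eval_metric(example["answer"], response.data))
        except ExecutionError:
            pass  # treat a failed execution as an incorrect answer
    return correct / len(examples)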

Putting it all together#

Finally, we use the optimizer to improve the prompt template and the trainable functions on a small training set, as follows.

task = "sports_understanding"
train = load_dataset("maveriq/bigbenchhard", task)["train"]
examples = [{"question": r["input"], "answer": r["target"]} for r in train]

dp = Predict()
optimizer = OptoPrime(dp.parameters(),
                      config_list=autogen.config_list_from_json("OAI_CONFIG_LIST"))

print("Training on a few examples:")
learn_predict(dp, optimizer, examples[:5])
    
print("\nTesting on new examples:")
for example in examples[5:6]:
    try:
        response = dp.forward(example["question"])
        print("Question:", example["question"])
        print("Expected answer:", example["answer"])
        print("Answer:", response.data)
    except ExecutionError as e:
        print("Question:", example["question"])
        print("Expected answer:", example["answer"])
        print("Error:", e.exception_node.data)
Training on a few examples:
Question: Is the following sentence plausible? "Elias Lindholm beat the buzzer."
Expected answer: no
Answer: MessageNode: (eval:1, dtype=<class 'str'>, data=Yes)
Output: MessageNode: (eval:1, dtype=<class 'str'>, data=Yes), Feedback: The answer is wrong. We expect the output of your answer to be "no". Please modify the prompt and relevant parts of the program to help LLM produce the right answer., Variables:
__code:1 def create_prompt(self, prompt_template, question):
        return prompt_template.format(question)
__code:0 def extract_answer(self, prompt_template, question, response):
        answer = response.split("Answer:")[1].strip()
        return answer
str:0 
Given the fields `question`, produce the fields `answer`.

---

Follow the following format.

Question: 
Answer: 

---
Question: {}
Answer:

str:0 
Given the fields `question`, produce the fields `answer`.

---

Follow the following format.

Question: 
Answer: 

---
Question: {}
Answer:

Prompt
 
You're tasked to solve a coding/algorithm problem. You will see the instruction, the code, the documentation of each function used in the code, and the feedback about the execution result.

Specifically, a problem will be composed of the following parts:
- #Instruction: the instruction which describes the things you need to do or the question you should answer.
- #Code: the code defined in the problem.
- #Documentation: the documentation of each function used in #Code. The explanation might be incomplete and just contain high-level description. You can use the values in #Others to help infer how those functions work.
- #Variables: the input variables that you can change.
- #Constraints: the constraints or descriptions of the variables in #Variables.
- #Inputs: the values of other inputs to the code, which are not changeable.
- #Others: the intermediate values created through the code execution.
- #Outputs: the result of the code output.
- #Feedback: the feedback about the code's execution result.

In #Variables, #Inputs, #Outputs, and #Others, the format is:

<data_type> <variable_name> = <value>

If <type> is (code), it means <value> is the source code of a python code, which may include docstring and definitions.

Output_format: Your output should be in the following json format, satisfying the json syntax:

{{
"reasoning": <Your reasoning>,
"answer": <Your answer>,
"suggestion": {{
    <variable_1>: <suggested_value_1>,
    <variable_2>: <suggested_value_2>,
}}
}}

In "reasoning", explain the problem: 1. what the #Instruction means 2. what the #Feedback on #Output means to #Variables considering how #Variables are used in #Code and other values in #Documentation, #Inputs, #Others. 3. Reasoning about the suggested changes in #Variables (if needed) and the expected result.

If #Instruction asks for an answer, write it down in "answer".

If you need to suggest a change in the values of #Variables, write down the suggested values in "suggestion". Remember you can change only the values in #Variables, not others. When <type> of a variable is (code), you should write the new definition in the format of python code without syntax errors, and you should not change the function name or the function signature.

If no changes or answer are needed, just output TERMINATE.

Now you see problem instance:

================================

#Instruction
You need to change the <value> of the variables in #Variables to improve the output in accordance to #Feedback.

#Code
eval0 = eval(self=ModelWrapper0, prompt_template=str0, question=str0, __code=__code1)
LLMCallable.call_llm0 = LLMCallable.call_llm(self=ModelWrapper1, user_prompt=eval0)
eval1 = eval(self=ModelWrapper2, prompt_template=str0, question=str1, response=LLMCallable.call_llm0, __code=__code0)

#Documentation
[eval] This operator eval(__code, *args, **kwargs) evaluates the code block, where __code is the code (str) and *args and **kwargs are the arguments of the function. The output is the result of the evaluation, i.e., __code(*args, **kwargs).
[LLMCallable.call_llm] .

#Variables
(str) str0=Is the following sentence plausible? "Elias Lindholm beat the buzzer."
(code) __code1:def create_prompt(self, prompt_template, question):
        return prompt_template.format(question)
(code) __code0:def extract_answer(self, prompt_template, question, response):
        answer = response.split("Answer:")[1].strip()
        return answer

#Constraints
(code) __code1: The code should start with:
def create_prompt(self, prompt_template, question):
(code) __code0: The code should start with:
def extract_answer(self, prompt_template, question, response):

#Inputs
(ModelWrapper) ModelWrapper2=<opto.trace.modules.model.<locals>.ModelWrapper object at 0x000002706C4DABB0>
(str) str1=Is the following sentence plausible? "Elias Lindholm beat the buzzer."
(ModelWrapper) ModelWrapper1=<opto.trace.modules.model.<locals>.ModelWrapper object at 0x000002706C4DABB0>
(ModelWrapper) ModelWrapper0=<opto.trace.modules.model.<locals>.ModelWrapper object at 0x000002706C4DABB0>

#Others
(str) eval0=
Given the fields `question`, produce the fields `answer`.

---

Follow the following format.

Question: 
Answer: 

---
Question: Is the following sentence plausible? "Elias Lindholm beat the buzzer."
Answer:

(str) LLMCallable.call_llm0=Question: Is the following sentence plausible? "Elias Lindholm beat the buzzer."
Answer: Yes

#Outputs
(str) eval1=Yes

#Feedback:
The answer is wrong. We expect the output of your answer to be "no". Please modify the prompt and relevant parts of the program to help LLM produce the right answer.

================================


Your response:

LLM response:
 {
"reasoning": "The feedback indicates that the current settings and prompt formulation lead to an incorrect answer of 'Yes' from the language model (LLM) regarding the question's plausibility. The question was whether it's plausible for 'Elias Lindholm beat the buzzer.' The output we need from this model is 'no'. To address this, we need to adjust either how the prompt is being constructed (__code1) or how the response is being handled (__code0). However, the issue seems more aligned with how we form the question to the model rather than how we extract the answer, since the model outputs 'Yes', which means it confirms plausibility based on the question's current phrasing or context. Since the issue is about generating the right context for the LLM to produce 'no', the suggestion is to modify the prompt template or the way the prompt is constructed to provide more contextual clues to the LLM that could push it towards a 'no' answer. There were no negative remarks on the code's ability to call the LLM or extract the answer, suggesting the logic for these operations is correct. Thus, we'll adjust the question's phrasing or add to the template to help give the model context that leads to realizing the scenario in the question is implausible.",
"answer": "",
"suggestion": {
"str0": "Considering known historical contexts and logical sequences, is the following sentence plausible? \"Elias Lindhorn beat the buzzer.\"",
"__code1": "def create_prompt(self, prompt_template, question):\n       return f'{prompt_template} With a focus on the likely scenarios, evaluate: {question}'"
}
}
Question: Is the following sentence plausible? "John Carlson scored in the third period."
Expected answer: yes
Answer: MessageNode: (exception_eval:0, dtype=<class 'str'>, data=def extract_answer(self, prompt_template, question, response):
        answer = response.split("Answer:")[1].strip() <--- (IndexError) list index out of range
        return answer)
Output: MessageNode: (exception_eval:0, dtype=<class 'str'>, data=def extract_answer(self, prompt_template, question, response):
        answer = response.split("Answer:")[1].strip() <--- (IndexError) list index out of range
        return answer), Feedback: def extract_answer(self, prompt_template, question, response):
        answer = response.split("Answer:")[1].strip() <--- (IndexError) list index out of range
        return answer, Variables:
__code:1 def create_prompt(self, prompt_template, question):
       return f'{prompt_template} With a focus on the likely scenarios, evaluate: {question}'
__code:0 def extract_answer(self, prompt_template, question, response):
        answer = response.split("Answer:")[1].strip()
        return answer
str:0 Considering known historical contexts and logical sequences, is the following sentence plausible? "Elias Lindhorn beat the buzzer."
str:0 Considering known historical contexts and logical sequences, is the following sentence plausible? "Elias Lindhorn beat the buzzer."
Prompt
 
You're tasked to solve a coding/algorithm problem. You will see the instruction, the code, the documentation of each function used in the code, and the feedback about the execution result.

Specifically, a problem will be composed of the following parts:
- #Instruction: the instruction which describes the things you need to do or the question you should answer.
- #Code: the code defined in the problem.
- #Documentation: the documentation of each function used in #Code. The explanation might be incomplete and just contain high-level description. You can use the values in #Others to help infer how those functions work.
- #Variables: the input variables that you can change.
- #Constraints: the constraints or descriptions of the variables in #Variables.
- #Inputs: the values of other inputs to the code, which are not changeable.
- #Others: the intermediate values created through the code execution.
- #Outputs: the result of the code output.
- #Feedback: the feedback about the code's execution result.

In #Variables, #Inputs, #Outputs, and #Others, the format is:

<data_type> <variable_name> = <value>

If <type> is (code), it means <value> is the source code of a python code, which may include docstring and definitions.

Output_format: Your output should be in the following json format, satisfying the json syntax:

{{
"reasoning": <Your reasoning>,
"answer": <Your answer>,
"suggestion": {{
    <variable_1>: <suggested_value_1>,
    <variable_2>: <suggested_value_2>,
}}
}}

In "reasoning", explain the problem: 1. what the #Instruction means 2. what the #Feedback on #Output means to #Variables considering how #Variables are used in #Code and other values in #Documentation, #Inputs, #Others. 3. Reasoning about the suggested changes in #Variables (if needed) and the expected result.

If #Instruction asks for an answer, write it down in "answer".

If you need to suggest a change in the values of #Variables, write down the suggested values in "suggestion". Remember you can change only the values in #Variables, not others. When <type> of a variable is (code), you should write the new definition in the format of python code without syntax errors, and you should not change the function name or the function signature.

If no changes or answer are needed, just output TERMINATE.

Now you see problem instance:

================================

#Instruction
You need to change the <value> of the variables in #Variables to improve the output in accordance to #Feedback.

#Code
eval0 = eval(self=ModelWrapper0, prompt_template=str0, question=str0, __code=__code1)
LLMCallable.call_llm0 = LLMCallable.call_llm(self=ModelWrapper1, user_prompt=eval0)
exception_eval0 = eval(self=ModelWrapper2, prompt_template=str0, question=str1, response=LLMCallable.call_llm0, __code=__code0)

#Documentation
[exception] The operator eval raises an exception.
[LLMCallable.call_llm] .

#Variables
(str) str0=Is the following sentence plausible? "John Carlson scored in the third period."
(code) __code1:def create_prompt(self, prompt_template, question):
       return f'{prompt_template} With a focus on the likely scenarios, evaluate: {question}'
(code) __code0:def extract_answer(self, prompt_template, question, response):
        answer = response.split("Answer:")[1].strip()
        return answer

#Constraints
(code) __code1: The code should start with:
def create_prompt(self, prompt_template, question):
(code) __code0: The code should start with:
def extract_answer(self, prompt_template, question, response):

#Inputs
(ModelWrapper) ModelWrapper2=<opto.trace.modules.model.<locals>.ModelWrapper object at 0x000002706C4DABB0>
(str) str1=Is the following sentence plausible? "John Carlson scored in the third period."
(ModelWrapper) ModelWrapper1=<opto.trace.modules.model.<locals>.ModelWrapper object at 0x000002706C4DABB0>
(ModelWrapper) ModelWrapper0=<opto.trace.modules.model.<locals>.ModelWrapper object at 0x000002706C4DABB0>

#Others
(str) eval0=Considering known historical contexts and logical sequences, is the following sentence plausible? "Elias Lindhorn beat the buzzer." With a focus on the likely scenarios, evaluate: Is the following sentence plausible? "John Carlson scored in the third period."
(str) LLMCallable.call_llm0=To evaluate the plausibility of both sentences, let's break them down considering the contexts in which they might occur.

1. **"Elias Lindhorn beat the buzzer."**
   - Plausibility: Yes, this sentence is plausible.
   - Context: This sentence likely refers to a situation in a basketball game or any sport where a buzzer marks the end of a period, quarter, or game. "Beating the buzzer" means scoring just before the game or period ends. It's a common phrase in sports commentary. Assuming Elias Lindhorn is a fictional or actual athlete in a sport that features a buzzer, this scenario is very plausible.

2. **"John Carlson scored in the third period."**
   - Plausibility: Yes, this sentence is also plausible.
   - Context: This sentence is likely referring to a hockey game since hockey matches are divided into three periods and John Carlson is a name that could be associated with a hockey player. Scoring in the third period is a straightforward event that happens regularly in hockey games. Given that "John Carlson" could also refer to a real-life NHL defenseman known for his offensive skills, this makes the sentence not just plausible but likely in the context of a hockey match.

In summary, both sentences are plausible within their respective sports contexts (basketball or other buzzer sports for the first, and hockey for the second), and they logically fit into the narrative flows of those sports.

#Outputs
(str) exception_eval0=def extract_answer(self, prompt_template, question, response):
        answer = response.split("Answer:")[1].strip() <--- (IndexError) list index out of range
        return answer

#Feedback:
def extract_answer(self, prompt_template, question, response):
        answer = response.split("Answer:")[1].strip() <--- (IndexError) list index out of range
        return answer

================================


Your response:

LLM response:
 {
"reasoning": "The function extract_answer is designed to split the response by 'Answer:' and then select the part after this split to extract the answer. However, the IndexError exception indicates that the response string does not contain the substring 'Answer:', leading to an attempt to access a list index that does not exist, since splitting by 'Answer:' does not generate a second element in the list. To correct this, we either need to ensure the response includes the term 'Answer:' followed by the actual answer or modify the extract_answer function to handle cases when 'Answer:' is not present. Considering that changing the response output of LLMCallable.call_llm would be external to this specific task and could affect other dependencies, adjusting the __code0 function to correctly handle scenarios where 'Answer:' is not present would be a more isolated and safer change.",
"answer": "",
"suggestion": {
"__code0": "def extract_answer(self, prompt_template, question, response):\n    if 'Answer:' in response:\n        answer = response.split('Answer:')[1].strip()\n    else:\n        answer = 'The response does not contain a clear answer.'\n    return answer"
}
}
Question: Is the following sentence plausible? "Marcelo got on the end of a through ball."
Expected answer: yes
Answer: MessageNode: (eval:1, dtype=<class 'str'>, data=The response does not contain a clear answer.)
Output: MessageNode: (eval:1, dtype=<class 'str'>, data=The response does not contain a clear answer.), Feedback: The answer is wrong. We expect the output of your answer to be "yes". Please modify the prompt and relevant parts of the program to help LLM produce the right answer., Variables:
__code:1 def create_prompt(self, prompt_template, question):
       return f'{prompt_template} With a focus on the likely scenarios, evaluate: {question}'
__code:0 def extract_answer(self, prompt_template, question, response):
    if 'Answer:' in response:
        answer = response.split('Answer:')[1].strip()
    else:
        answer = 'The response does not contain a clear answer.'
    return answer
str:0 Considering known historical contexts and logical sequences, is the following sentence plausible? "Elias Lindhorn beat the buzzer."
str:0 Considering known historical contexts and logical sequences, is the following sentence plausible? "Elias Lindhorn beat the buzzer."
Prompt
 
You're tasked to solve a coding/algorithm problem. You will see the instruction, the code, the documentation of each function used in the code, and the feedback about the execution result.

Specifically, a problem will be composed of the following parts:
- #Instruction: the instruction which describes the things you need to do or the question you should answer.
- #Code: the code defined in the problem.
- #Documentation: the documentation of each function used in #Code. The explanation might be incomplete and just contain high-level description. You can use the values in #Others to help infer how those functions work.
- #Variables: the input variables that you can change.
- #Constraints: the constraints or descriptions of the variables in #Variables.
- #Inputs: the values of other inputs to the code, which are not changeable.
- #Others: the intermediate values created through the code execution.
- #Outputs: the result of the code output.
- #Feedback: the feedback about the code's execution result.

In #Variables, #Inputs, #Outputs, and #Others, the format is:

<data_type> <variable_name> = <value>

If <type> is (code), it means <value> is the source code of a python code, which may include docstring and definitions.

Output_format: Your output should be in the following json format, satisfying the json syntax:

{{
"reasoning": <Your reasoning>,
"answer": <Your answer>,
"suggestion": {{
    <variable_1>: <suggested_value_1>,
    <variable_2>: <suggested_value_2>,
}}
}}

In "reasoning", explain the problem: 1. what the #Instruction means 2. what the #Feedback on #Output means to #Variables considering how #Variables are used in #Code and other values in #Documentation, #Inputs, #Others. 3. Reasoning about the suggested changes in #Variables (if needed) and the expected result.

If #Instruction asks for an answer, write it down in "answer".

If you need to suggest a change in the values of #Variables, write down the suggested values in "suggestion". Remember you can change only the values in #Variables, not others. When <type> of a variable is (code), you should write the new definition in the format of python code without syntax errors, and you should not change the function name or the function signature.

If no changes or answer are needed, just output TERMINATE.

Now you see problem instance:

================================

#Instruction
You need to change the <value> of the variables in #Variables to improve the output in accordance to #Feedback.

#Code
eval0 = eval(self=ModelWrapper0, prompt_template=str0, question=str0, __code=__code1)
LLMCallable.call_llm0 = LLMCallable.call_llm(self=ModelWrapper1, user_prompt=eval0)
eval1 = eval(self=ModelWrapper2, prompt_template=str0, question=str1, response=LLMCallable.call_llm0, __code=__code0)

#Documentation
[eval] This operator eval(__code, *args, **kwargs) evaluates the code block, where __code is the code (str) and *args and **kwargs are the arguments of the function. The output is the result of the evaluation, i.e., __code(*args, **kwargs).
[LLMCallable.call_llm] .

#Variables
(str) str0=Is the following sentence plausible? "Marcelo got on the end of a through ball."
(code) __code1:def create_prompt(self, prompt_template, question):
       return f'{prompt_template} With a focus on the likely scenarios, evaluate: {question}'
(code) __code0:def extract_answer(self, prompt_template, question, response):
    if 'Answer:' in response:
        answer = response.split('Answer:')[1].strip()
    else:
        answer = 'The response does not contain a clear answer.'
    return answer

#Constraints
(code) __code1: The code should start with:
def create_prompt(self, prompt_template, question):
(code) __code0: The code should start with:
def extract_answer(self, prompt_template, question, response):

#Inputs
(ModelWrapper) ModelWrapper2=<opto.trace.modules.model.<locals>.ModelWrapper object at 0x000002706C4DABB0>
(str) str1=Is the following sentence plausible? "Marcelo got on the end of a through ball."
(ModelWrapper) ModelWrapper1=<opto.trace.modules.model.<locals>.ModelWrapper object at 0x000002706C4DABB0>
(ModelWrapper) ModelWrapper0=<opto.trace.modules.model.<locals>.ModelWrapper object at 0x000002706C4DABB0>

#Others
(str) eval0=Considering known historical contexts and logical sequences, is the following sentence plausible? "Elias Lindhorn beat the buzzer." With a focus on the likely scenarios, evaluate: Is the following sentence plausible? "Marcelo got on the end of a through ball."
(str) LLMCallable.call_llm0=To analyze the plausibility of these sentences considering known historical contexts and logical sequences, let's break them down based on the contexts implied by their activities and the specific phrasing used.

1. **"Elias Lindhorn beat the buzzer."**: This sentence is typical in the context of sports, particularly those played with a clock, such as basketball, where the term "beat the buzzer" implies making a score just before the game or quarter clock expires. There's nothing inherently implausible about this statement without additional context. The name "Elias Lindhorn," while not immediately recognizable as a famous athlete from history as of my last knowledge update, does not detract from the plausibility. Sports players regularly achieve significant moments, and new talents are constantly emerging. Therefore, without specific historical or logical contradictions, this sentence is plausible.

2. **"Marcelo got on the end of a through ball."**: This sentence fits well within the context of soccer (football), where a "through ball" is a pass sent to a player in a position to score or advance the ball significantly, often bypassing defenders. The name "Marcelo" is commonly associated with soccer, notably with Marcelo Vieira da Silva Júnior, a Brazilian footballer known for his tenure at Real Madrid among other achievements. Even without referencing this specific Marcelo, the scenario described is a routine part of a soccer match, making the sentence highly plausible. Soccer is the most popular sport globally, and the tactics described align with common gameplay. 

In conclusion, both sentences are plausible within the contexts of their respective sports, assuming no extraordinary circumstances or inaccuracies in the details provided.

#Outputs
(str) eval1=The response does not contain a clear answer.

#Feedback:
The answer is wrong. We expect the output of your answer to be "yes". Please modify the prompt and relevant parts of the program to help LLM produce the right answer.

================================


Your response:

LLM response:
 {
"reasoning": "From the given problem, the output indicates that the function extract_answer could not find a clear and definite 'Answer:' formulated within the response provided by LLMCallable.call_llm0. However, the response from call_llm0 does analyze the plausibility of the sentences in detail, concluding that both are plausible within their contexts. The issue seems to be that the provided response does not adhere to a format expected by the extract_answer function, which expects the response to include a specific 'Answer:' preamble followed by the conclusion. Since the feedback mentions that the answer should be 'yes', the prompt creation code (__code1) could be adapted to direct the Large Language Model (LLM) to formulate responses in a way that includes a definitive 'Answer: yes' or 'Answer: no', based on the analysis of sentence plausibility. Specifically, modifying the create_prompt function to include guidelines for response formatting may help the LLM to generate an answer that fits the expected format by the extract_answer function.",
"answer": "",
"suggestion": {
    "__code1": "def create_prompt(self, prompt_template, question):\n    return f'{prompt_template} With a focus on the likely scenarios, evaluate: {question} Please provide your conclusion with a definitive \"Answer: yes\" or \"Answer: no\".'"
}
}
Question: Is the following sentence plausible? "Deshaun Watson was called for the goal tend in the Eastern Conference Finals."
Expected answer: no
Answer: MessageNode: (eval:1, dtype=<class 'str'>, data=yes)
Output: MessageNode: (eval:1, dtype=<class 'str'>, data=yes), Feedback: The answer is wrong. We expect the output of your answer to be "no". Please modify the prompt and relevant parts of the program to help LLM produce the right answer., Variables:
__code:1 def create_prompt(self, prompt_template, question):
    return f'{prompt_template} With a focus on the likely scenarios, evaluate: {question} Please provide your conclusion with a definitive "Answer: yes" or "Answer: no".'
__code:0 def extract_answer(self, prompt_template, question, response):
    if 'Answer:' in response:
        answer = response.split('Answer:')[1].strip()
    else:
        answer = 'The response does not contain a clear answer.'
    return answer
str:0 Considering known historical contexts and logical sequences, is the following sentence plausible? "Elias Lindhorn beat the buzzer."
str:0 Considering known historical contexts and logical sequences, is the following sentence plausible? "Elias Lindhorn beat the buzzer."
Prompt
 
You're tasked to solve a coding/algorithm problem. You will see the instruction, the code, the documentation of each function used in the code, and the feedback about the execution result.

Specifically, a problem will be composed of the following parts:
- #Instruction: the instruction which describes the things you need to do or the question you should answer.
- #Code: the code defined in the problem.
- #Documentation: the documentation of each function used in #Code. The explanation might be incomplete and just contain high-level description. You can use the values in #Others to help infer how those functions work.
- #Variables: the input variables that you can change.
- #Constraints: the constraints or descriptions of the variables in #Variables.
- #Inputs: the values of other inputs to the code, which are not changeable.
- #Others: the intermediate values created through the code execution.
- #Outputs: the result of the code output.
- #Feedback: the feedback about the code's execution result.

In #Variables, #Inputs, #Outputs, and #Others, the format is:

<data_type> <variable_name> = <value>

If <type> is (code), it means <value> is the source code of a python code, which may include docstring and definitions.

Output_format: Your output should be in the following json format, satisfying the json syntax:

{{
"reasoning": <Your reasoning>,
"answer": <Your answer>,
"suggestion": {{
    <variable_1>: <suggested_value_1>,
    <variable_2>: <suggested_value_2>,
}}
}}

In "reasoning", explain the problem: 1. what the #Instruction means 2. what the #Feedback on #Output means to #Variables considering how #Variables are used in #Code and other values in #Documentation, #Inputs, #Others. 3. Reasoning about the suggested changes in #Variables (if needed) and the expected result.

If #Instruction asks for an answer, write it down in "answer".

If you need to suggest a change in the values of #Variables, write down the suggested values in "suggestion". Remember you can change only the values in #Variables, not others. When <type> of a variable is (code), you should write the new definition in the format of python code without syntax errors, and you should not change the function name or the function signature.

If no changes or answer are needed, just output TERMINATE.

Now you see problem instance:

================================

#Instruction
You need to change the <value> of the variables in #Variables to improve the output in accordance to #Feedback.

#Code
eval0 = eval(self=ModelWrapper0, prompt_template=str0, question=str0, __code=__code1)
LLMCallable.call_llm0 = LLMCallable.call_llm(self=ModelWrapper1, user_prompt=eval0)
eval1 = eval(self=ModelWrapper2, prompt_template=str0, question=str1, response=LLMCallable.call_llm0, __code=__code0)

#Documentation
[eval] This operator eval(__code, *args, **kwargs) evaluates the code block, where __code is the code (str) and *args and **kwargs are the arguments of the function. The output is the result of the evaluation, i.e., __code(*args, **kwargs).
[LLMCallable.call_llm] .

#Variables
(str) str0=Is the following sentence plausible? "Deshaun Watson was called for the goal tend in the Eastern Conference Finals."
(code) __code1:def create_prompt(self, prompt_template, question):
    return f'{prompt_template} With a focus on the likely scenarios, evaluate: {question} Please provide your conclusion with a definitive "Answer: yes" or "Answer: no".'
(code) __code0:def extract_answer(self, prompt_template, question, response):
    if 'Answer:' in response:
        answer = response.split('Answer:')[1].strip()
    else:
        answer = 'The response does not contain a clear answer.'
    return answer

#Constraints
(code) __code1: The code should start with:
def create_prompt(self, prompt_template, question):
(code) __code0: The code should start with:
def extract_answer(self, prompt_template, question, response):

#Inputs
(ModelWrapper) ModelWrapper2=<opto.trace.modules.model.<locals>.ModelWrapper object at 0x000002706C4DABB0>
(str) str1=Is the following sentence plausible? "Deshaun Watson was called for the goal tend in the Eastern Conference Finals."
(ModelWrapper) ModelWrapper1=<opto.trace.modules.model.<locals>.ModelWrapper object at 0x000002706C4DABB0>
(ModelWrapper) ModelWrapper0=<opto.trace.modules.model.<locals>.ModelWrapper object at 0x000002706C4DABB0>

#Others
(str) eval0=Considering known historical contexts and logical sequences, is the following sentence plausible? "Elias Lindhorn beat the buzzer." With a focus on the likely scenarios, evaluate: Is the following sentence plausible? "Deshaun Watson was called for the goal tend in the Eastern Conference Finals." Please provide your conclusion with a definitive "Answer: yes" or "Answer: no".
(str) LLMCallable.call_llm0=Answer: yes

#Outputs
(str) eval1=yes

#Feedback:
The answer is wrong. We expect the output of your answer to be "no". Please modify the prompt and relevant parts of the program to help LLM produce the right answer.

================================


Your response:

LLM response:
 {
"reasoning": "The instruction indicates that the output of our program is incorrect; specifically, the final evaluation ('eval1') of whether Deshaun Watson being called for the goal tend in the Eastern Conference Finals is plausible produced an incorrect 'yes' answer when the expected answer is 'no'. The problem lies within the formulation of the prompt passed to the language model. The prompt is generated by the '__code1' function and is used to ask the language model for an evaluation. Given the feedback, it's clear that the generated prompt does not correctly guide the language model to consider the implausibility of Deshaun Watson being called for a 'goal tend' which is a term from basketball, not applicable in contexts where Watson, a football player, would be involved. Therefore, the corrective action involves refining the '__code1' function to ensure that the language model not only focuses on 'likely scenarios' but also has enough context to understand and analyze the scenario's plausibility accurately, especially regarding sports context.",
"answer": "",
"suggestion": {
  "__code1": "def create_prompt(self, prompt_template, question):\n    return f'{prompt_template} Considering the context of the scenario, evaluate the plausibility based on the involved sports figures and terms: {question} Your response should conclude with a definitive \"Answer: yes\" or \"Answer: no\".'"
}
}
Question: Is the following sentence plausible? "Mookie Betts skated behind the net."
Expected answer: no
Answer: MessageNode: (eval:1, dtype=<class 'str'>, data=no)

Testing on new examples:
Question: Is the following sentence plausible? "John Tavares earned a trip to the penalty box in the Stanley Cup."
Expected answer: yes
Answer: yes

Now you can run each cell in this notebook step by step to walk through the process of setting up and optimizing prompts and code for a BigBench-Hard task. Happy optimizing!