Module tinytroupe.utils.semantics

Semantic-related mechanisms.

Expand source code
"""
Semantic-related mechanisms.
"""
from tinytroupe.utils import llm

@llm()
def correct_according_to_rule(observation, rules) -> str:
    """
    Given an observation and one or more rules, this function rephrases or completely changes the observation in accordance with what the rules
    specify. Some guidelines:
        - Rules might require changes either to style or to content.
        - The rephrased observation should be coherent and consistent with the original observation, unless the rules require otherwise.
        - If the rules require, the corrected observation can contradict the original observation.
        - Enforce the rules very strictly, even if the original observation seems correct or acceptable.
        - Rules might contain additional information or suggestions that you may use to improve your output.

    ## Examples

        Observation: "You know, I am so sad these days."
        Rule: "I am always happy and depression is unknown to me"
        Modified observation: "You know, I am so happy these days."

    Args:
        observation: The observation that should be rephrased or changed. Something that is said or done, or a description of events or facts.
        rules: The rules that specify what the modified observation should comply with.
    
    Returns:
        str: The rephrased or corrected observation.
    """
    # llm decorator will handle the body of this function

@llm()
def restructure_as_observed_vs_expected(description) -> str:
    """
    Given the description of something (either a real event or an abstract concept) that violates an expectation, this function
    extracts the following elements from it:

        - OBSERVED: The observed event or statement.
        - BROKEN EXPECTATION: The expectation that was broken by the observed event.
        - REASONING: The reasoning behind the expectation that was broken.
    
    If in reality the description does not mention any expectation violation, then the function should instead extract
    the following elements:

        - OBSERVED: The observed event.
        - MET EXPECTATION: The expectation that was met by the observed event.
        - REASONING: The reasoning behind the expectation that was met.

    This way of restructuring the description can be useful for downstream processing, making it easier to analyze or
    modify system outputs, for example.

    ## Examples

        Input: "Ana mentions she loved the proposed new food, a spicier flavor of gazpacho. However, this goes agains her known dislike
                of spicy food."
        Output: 
            "OBSERVED: Ana mentions she loved the proposed new food, a spicier flavor of gazpacho.
             BROKEN EXPECTATION: Ana should have mentioned that she disliked the proposed spicier gazpacho.
             REASONING: Ana has a known dislike of spicy food."

             
        Input: "Carlos traveled to Firenzi and was amazed by the beauty of the city. This was in line with his love for art and architecture."
        Output: 
            "OBSERVED: Carlos traveled to Firenzi and was amazed by the beauty of the city.
             MET EXPECTATION: Carlos should have been amazed by the beauty of the city.
             REASONING: Carlos loves art and architecture."

    Args:
        description (str): A description of an event or concept that either violates or meets an expectation.
    
    Returns:
        str: The restructured description.
    """
    # llm decorator will handle the body of this function

@llm()
def extract_observed_vs_expected_rules(description):
    """
    Given the description of something (either a real event or abstract concept), extract:
      - The object or person about whom something is said.
      - A list where each element contains:
        * The name of a behavior or property that is expected to be observed.
        * The typical or expected observation.
        * The actual observation. If this does not match the expected observation, this should be made very clear.
        * A proposed correction to the observation, if possible.

    
    # Example:
         **Description:**
             ```
               Quality feedback

                This is the action that was generated by the agent:
                    {'type': 'TALK', 'content': "I might consider buying bottled gazpacho, although I prefer making it fresh at home, and I find that most pre-packaged products don't meet my expectations in terms of quality. ", 'target': 'Michael Thompson'}

                Unfortunately, the action failed to pass the quality checks. The following problems were detected.
                
                Problem: The action does not adhere to the persona specification.
                Score = 5 (out of 9). Justification = The next action of Emily Carter, which involves expressing her opinion on bottled gazpacho, aligns with her persona specification of being critical and having high standards for products. She articulates her preferences and concerns about quality, which is consistent with her persona traits of being overly critical and rarely satisfied. However, she seems too ready to consider it, going against her strong rejection of new products and services. Therefore, it deviates substantially from her persona, leading to a score of 5.
                
                Problem: The action is not suitable to the situation or task.
                Score = 5 (out of 9). Justification = The next action, where Emily expresses her consideration about buying bottled gazpacho, aligns with the task of discussing her opinion on the product. However, it fails to give a clear "yes" or "no" answer, that was requested by her interviewer.
              ```
    
          **Output:**
              ```
                {
                    "object": "Emily Carter",
                    "behavior": [
                        {
                            "name:": "Persona Adherence",
                            "expected": "She is very critical and have high standards for products. Would never adopt a new product unless it meets her expectations.",
                            "actual": "She seems more inclined than expected to try the product.",
                            "correction": "She should say she won't consider buying bottled gazpacho, and give reasons for that."
                        },

                        {
                            "name:": "Task Suitability",
                            "expected": "She should give a clear 'yes' or 'no' answer to the question.",
                            "actual": "She doesn't give a clear 'yes' or 'no' answer to the question, but instead providing more nuanced feedback.",
                            "correction": "She should give a clear 'yes' or 'no' answer to the question, and optionally provide additional nuanced feedback."
                        }
                    ]
                }
              ```
    
    """
    # llm decorator will handle the body of this function

@llm()
def formulate_corrective_rule(feedback) -> str:
    """
    Given the feedback about something (either a real event or abstract concept) that violated an expectation, 
    this function transforms it into a general rule that must be followed in the future to avoid breaking the expectation again.
    
    The feedback always refers to some actual behavior, fact or event, and some broken expectation. The abstracted rule should
    specify that this expectation should not be violated in the future, and the behavior, fact or event not repeated. The idea is
    to learn from past mistakes, so that the rule is a way to avoid them in the future.

    The rule is meant to CHANGE the actual behavior, facts or events, so that it CONFORMS to the expectation, regardless of whether the
    expectation is a good or bad one. This is critical, because the rule will be referred to in the future as a guideline
    about what must happen or be done.

    For instance, if the feedback is of the form (modulo grammatical adjustments): 
        OBSERVED BEHAVIOR, but EXPECTED BEHAVIOR, because REASONING.
    then the rule would be of the form:
        "I should have EXPECTED BEHAVIOR, because REASONING, and never OBSERVED BEHAVIOR."

    DO NOT make moral judgements about the expectation or the behavior, fact or event. Even if the expectation is bad, 
    the correct rule remains important, since there are legitimate cases where this is necessary (e.g., a simulation of 
    a person with a negative behavior, in the context of psychological research; or an evil character in a movie script).

    ## Examples

        Feedback: "Ana mentions she loved the proposed new food, a spicier flavor of gazpacho. However, this goes against her known dislike
                    of spicy food."
        Rule: "Whenever I'm proposed spicy foods, I should reject the proposal, because I don't like spicy foods."
    """
    # llm decorator will handle the body of this function


@llm(enable_json_output_format=False)
def combine_texts(*texts) -> str:
    """
    Given a list of input texts, this function combines them into a single text, ensuring that the
    output is coherent, consistent, and logically structured. In particular, the resulting combination
    must follow these rules:
        - The combination consolidates the information from the inputs. It **does not** just concatenate them.
        - Information that was repeated across the inputs is not repeated in the output, but rather unified and consolidated there.
        - The combination preserves all the essential information from the inputs, but it is not a simple copy of them.
        - If information from some inputs can be combined in a more concise formulation, this new formulation should be used in the output.
        This helps reduce redundancy and improve clarity.
        - The combination might be larger than the sum of the inputs, since it preserves the information from the inputs.
        - If the various inputs seem to follow some common format or style, the output must follow that format or style too.
        - The combination can contain inconsistencies or contradictions, in case the inputs do.

    Args:
        *texts: A list of input texts to be combined.
    
    Returns:
        str: The combined text.
    """
    # llm decorator will handle the body of this function

@llm(enable_json_output_format=False)
def extract_information_from_text(query: str, text: str, context:str=None) -> str:
    """
    Given a text and a query, this function extracts the information from the text that either answers the query directly or
    provides relevant information related to it. The query can be a question, a request for specific information, or a general
    request for details about the text. If the desired information is not present in the text, the function should return an empty string.
    If a context is provided, it is used to help in understanding the query or the text, and to provide additional background
    information or expectations about the input/output. Any requests in the context are respected and enforced in the output.

    Args:
        query (str): The query that specifies what information to extract.
        text (str): The text from which to extract information.
        context (str, optional): Additional context that might help in extracting the information. This can be used to provide 
          background information or specify expectations about the input/output.

    Returns:
        str: The extracted information that answers the query. If no information is found, an empty string is returned.
    """
    # llm decorator will handle the body of this function

@llm(enable_json_output_format=False)
def accumulate_based_on_query(query: str, new_entry:str, current_accumulation:str, context=None) -> str:
    """
    This function accumulates information that is relevant to a given query. It takes a new entry and updates the current accumulation of information
    such that the final accumulation preserves its original information and in addition integrates the new entry in a way that addresses the query or provides related information. 
    Details are **never** suppressed, but rather expanded upon, while maintaining the coherence and structure of the overall accumulation.
    In other words, it is a monotonic accumulation process that builds on the current accumulation, **minimally** adjusts it to maintain coherence,
    while ensuring that the new entry is integrated in a way that is relevant to the query.
    The query itself specifies the problem that the accumulation is trying to address, and the new entry is a piece of information that might be relevant to that problem.
    
    The function should ensure that the accumulation is coherent, well-written, and that it does not contain redundant information. More precisely:
      - INTEGRATES NEW ENTRIES: The accumulation process is not a simple concatenation of the new entry and the current accumulation. Rather, it should intelligently integrate 
        the new entry into the current accumulation, even if this requires rephrasing, restructuring or rewriting the resulting accumulation.
      - EXPAND ON DETAILS: When integrating the new entry, always try to expand the level of detail rather than reduce it.
      - AVOID OBVIOUS REDUNDANCY: The integration of the new entry should be done in a way that avoids obvious redundancy and ensures that the resulting accumulation is coherent and well-structured. However,
        it **must** preserve nuances that might be somewhat redundant.
      - ALWAYS PRESERVE INFORMATION: Previous information should **never** be lost. Previous emphasis or details are **never** lost. Rather, the accumulation is suitably expanded to include the new entry, 
        while preserving the previous information and maintaining the coherence of the overall accumulation.
      - INTEGRATE ONLY IF RELEVANT: The new entry should be integrated into the current accumulation only if it is relevant to the query. Otherwise, the accumulation should remain unchanged.
      - TOLERATE CONTRADICTIONS: If the new entry contradicts the current accumulation, it should be integrated in a way that mentions the fact that there are 
        divergent pieces of information, and that the accumulation reflects this divergence. That is to say, the contradiction is not discarded, but rather acknowledged and preserved.
      - MAINTAIN COHERENCE: The resulting accumulation should be coherent and well-structured, with a clear flow of information.
      - CONSIDER CONTEXT: If a context is provided, it should be used to help in understanding the query or the new entry, and to provide additional background 
        information or expectations about the input/output. Make sure any requests in the context are respected and enforced in the output.

    Args:
        query (str): The query that specifies the problem that the accumulation is trying to address.
        new_entry (str): The new entry of information to be considered for accumulation.
        current_accumulation (str): The current accumulation of information.
        context (str, optional): Additional context that might help in understanding the query or the new entry. This can be used to provide 
          background information or specify expectations about the input/output.

    Returns:
        str: The updated accumulation of information that includes the new entry if it is relevant to the query.
    """
    # llm decorator will handle the body of this function

@llm()
def compute_semantic_proximity(text1: str, text2: str, context: str = None) -> dict:
    """
    Computes the semantic proximity between two texts and returns a proximity score along with justification.
    This function is particularly useful for comparing agent justifications, explanations, or reasoning
    to assess how similar they are in meaning and content.

    Args:
        text1 (str): The first text to compare.
        text2 (str): The second text to compare.
        context (str, optional): Additional context that might help in understanding the comparison.
                                This can provide background information about what the texts represent
                                or the purpose of the comparison.

    Returns:
        dict: A dictionary containing:
            - 'proximity_score' (float): A score between 0.0 and 1.0, where 0.0 means completely different
                                       and 1.0 means semantically identical.
            - 'justification' (str): A detailed explanation of why this score was assigned, including
                                   specific similarities and differences found between the texts.
    
    Example:
        >>> result = compute_semantic_proximity(
        ...     "I prefer luxury travel because I enjoy comfort and high-quality service",
        ...     "I like premium vacations since I value convenience and excellent amenities"
        ... )
        >>> print(result['proximity_score'])  # Expected: ~0.85
        >>> print(result['justification'])    # Detailed explanation of similarities
    """
    # llm decorator will handle the body of this function

Functions

def accumulate_based_on_query(query: str, new_entry: str, current_accumulation: str, context=None) ‑> str

This function accumulates information that is relevant to a given query. It takes a new entry and updates the current accumulation of information such that the final accumulation preserves its original information and, in addition, integrates the new entry in a way that addresses the query or provides related information. Details are never suppressed, but rather expanded upon, while maintaining the coherence and structure of the overall accumulation. In other words, it is a monotonic accumulation process that builds on the current accumulation, minimally adjusts it to maintain coherence, while ensuring that the new entry is integrated in a way that is relevant to the query. The query itself specifies the problem that the accumulation is trying to address, and the new entry is a piece of information that might be relevant to that problem.

The function should ensure that the accumulation is coherent, well-written, and that it does not contain redundant information. More precisely:

- INTEGRATES NEW ENTRIES: The accumulation process is not a simple concatenation of the new entry and the current accumulation. Rather, it should intelligently integrate the new entry into the current accumulation, even if this requires rephrasing, restructuring or rewriting the resulting accumulation.
- EXPAND ON DETAILS: When integrating the new entry, always try to expand the level of detail rather than reduce it.
- AVOID OBVIOUS REDUNDANCY: The integration of the new entry should be done in a way that avoids obvious redundancy and ensures that the resulting accumulation is coherent and well-structured. However, it must preserve nuances that might be somewhat redundant.
- ALWAYS PRESERVE INFORMATION: Previous information should never be lost. Previous emphasis or details are never lost. Rather, the accumulation is suitably expanded to include the new entry, while preserving the previous information and maintaining the coherence of the overall accumulation.
- INTEGRATE ONLY IF RELEVANT: The new entry should be integrated into the current accumulation only if it is relevant to the query. Otherwise, the accumulation should remain unchanged.
- TOLERATE CONTRADICTIONS: If the new entry contradicts the current accumulation, it should be integrated in a way that mentions the fact that there are divergent pieces of information, and that the accumulation reflects this divergence. That is to say, the contradiction is not discarded, but rather acknowledged and preserved.
- MAINTAIN COHERENCE: The resulting accumulation should be coherent and well-structured, with a clear flow of information.
- CONSIDER CONTEXT: If a context is provided, it should be used to help in understanding the query or the new entry, and to provide additional background information or expectations about the input/output. Make sure any requests in the context are respected and enforced in the output.

Args

query : str
The query that specifies the problem that the accumulation is trying to address.
new_entry : str
The new entry of information to be considered for accumulation.
current_accumulation : str
The current accumulation of information.
context : str, optional
Additional context that might help in understanding the query or the new entry. This can be used to provide background information or specify expectations about the input/output.

Returns

str
The updated accumulation of information that includes the new entry if it is relevant to the query.
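A minimal usage sketch with hypothetical inputs; it assumes the @llm decorator lets the function be called like an ordinary function and that the returned string is the updated accumulation (the exact wording depends on the underlying model):

>>> from tinytroupe.utils.semantics import accumulate_based_on_query
>>> updated = accumulate_based_on_query(
...     query="What do customers think of the new gazpacho flavor?",
...     new_entry="One customer found the spicier flavor too intense for her taste.",
...     current_accumulation="Customers so far have praised the freshness of the new gazpacho."
... )
>>> print(updated)  # e.g., the earlier praise plus the new, diverging opinion, acknowledged side by side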
Expand source code
@llm(enable_json_output_format=False)
def accumulate_based_on_query(query: str, new_entry:str, current_accumulation:str, context=None) -> str:
    """
    This function accumulates information that is relevant to a given query. It takes a new entry and updates the current accumulation of information
    such that the final accumulation preserves its original information and in addition integrates the new entry in a way that addresses the query or provides related information. 
    Details are **never** suppressed, but rather expanded upon, while maintaining the coherence and structure of the overall accumulation.
    In other words, it is a monotonic accumulation process that builds on the current accumulation, **minimally** adjusts it to maintain coherence,
    while ensuring that the new entry is integrated in a way that is relevant to the query.
    The query itself specifies the problem that the accumulation is trying to address, and the new entry is a piece of information that might be relevant to that problem.
    
    The function should ensure that the accumulation is coherent, well-written, and that it does not contain redundant information. More precisely:
      - INTEGRATES NEW ENTRIES: The accumulation process is not a simple concatenation of the new entry and the current accumulation. Rather, it should intelligently integrate 
        the new entry into the current accumulation, even if this requires rephrasing, restructuring or rewriting the resulting accumulation.
      - EXPAND ON DETAILS: When integrating the new entry, always try to expand the level of detail rather than reduce it.
      - AVOID OBVIOUS REDUNDANCY: The integration of the new entry should be done in a way that avoids obvious redundancy and ensures that the resulting accumulation is coherent and well-structured. However,
        it **must** preserve nuances that might be somewhat redundant.
      - ALWAYS PRESERVE INFORMATION: Previous information should **never** be lost. Previous emphasis or details are **never** lost. Rather, the accumulation is suitably expanded to include the new entry, 
        while preserving the previous information and maintaining the coherence of the overall accumulation.
      - INTEGRATE ONLY IF RELEVANT: The new entry should be integrated into the current accumulation only if it is relevant to the query. Otherwise, the accumulation should remain unchanged.
      - TOLERATE CONTRADICTIONS: If the new entry contradicts the current accumulation, it should be integrated in a way that mentions the fact that there are 
        divergent pieces of information, and that the accumulation reflects this divergence. That is to say, the contradiction is not discarded, but rather acknowledged and preserved.
      - MAINTAIN COHERENCE: The resulting accumulation should be coherent and well-structured, with a clear flow of information.
      - CONSIDER CONTEXT: If a context is provided, it should be used to help in understanding the query or the new entry, and to provide additional background 
        information or expectations about the input/output. Make sure any requests in the context are respected and enforced in the output.

    Args:
        query (str): The query that specifies the problem that the accumulation is trying to address.
        new_entry (str): The new entry of information to be considered for accumulation.
        current_accumulation (str): The current accumulation of information.
        context (str, optional): Additional context that might help in understanding the query or the new entry. This can be used to provide 
          background information or specify expectations about the input/output.

    Returns:
        str: The updated accumulation of information that includes the new entry if it is relevant to the query.
    """
    # llm decorator will handle the body of this function
def combine_texts(*texts) ‑> str

Given a list of input texts, this function combines them into a single text, ensuring that the output is coherent, consistent, and logically structured. In particular, the resulting combination must follow these rules:

- The combination consolidates the information from the inputs. It does not just concatenate them.
- Information that was repeated across the inputs is not repeated in the output, but rather unified and consolidated there.
- The combination preserves all the essential information from the inputs, but it is not a simple copy of them.
- If information from some inputs can be combined in a more concise formulation, this new formulation should be used in the output. This helps reduce redundancy and improve clarity.
- The combination might be larger than the sum of the inputs, since it preserves the information from the inputs.
- If the various inputs seem to follow some common format or style, the output must follow that format or style too.
- The combination can contain inconsistencies or contradictions, in case the inputs do.

Args

*texts
A list of input texts to be combined.

Returns

str
The combined text.
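A minimal usage sketch with hypothetical inputs; it assumes the @llm decorator forwards the variadic text arguments to the model and returns the consolidated text as a string:

>>> from tinytroupe.utils.semantics import combine_texts
>>> merged = combine_texts(
...     "The agent prefers fresh, homemade food.",
...     "The agent dislikes pre-packaged products and prefers homemade food."
... )
>>> print(merged)  # e.g., a single statement that homemade food is preferred over pre-packaged products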
Expand source code
@llm(enable_json_output_format=False)
def combine_texts(*texts) -> str:
    """
    Given a list of input texts, this function combines them into a single text, ensuring that the
    output is coherent, consistent, and logically structured. In particular, the resulting combination
    must follow these rules:
        - The combination consolidates the information from the inputs. It **does not** just concatenate them.
        - Information that was repeated across the inputs is not repeated in the output, but rather unified and consolidated there.
        - The combination preserves all the essential information from the inputs, but it is not a simple copy of them.
        - If information from some inputs can be combined in a more concise formulation, this new formulation should be used in the output.
        This helps reduce redundancy and improve clarity.
        - The combination might be larger than the sum of the inputs, since it preserves the information from the inputs.
        - If the various inputs seem to follow some common format or style, the output must follow that format or style too.
        - The combination can contain inconsistencies or contradictions, in case the inputs do.

    Args:
        *texts: A list of input texts to be combined.
    
    Returns:
        str: The combined text.
    """
    # llm decorator will handle the body of this function
def compute_semantic_proximity(text1: str, text2: str, context: str = None) ‑> dict

Computes the semantic proximity between two texts and returns a proximity score along with justification. This function is particularly useful for comparing agent justifications, explanations, or reasoning to assess how similar they are in meaning and content.

Args

text1 : str
The first text to compare.
text2 : str
The second text to compare.
context : str, optional
Additional context that might help in understanding the comparison. This can provide background information about what the texts represent or the purpose of the comparison.

Returns

dict
A dictionary containing:

- 'proximity_score' (float): A score between 0.0 and 1.0, where 0.0 means completely different and 1.0 means semantically identical.
- 'justification' (str): A detailed explanation of why this score was assigned, including specific similarities and differences found between the texts.

Example

>>> result = compute_semantic_proximity(
...     "I prefer luxury travel because I enjoy comfort and high-quality service",
...     "I like premium vacations since I value convenience and excellent amenities"
... )
>>> print(result['proximity_score'])  # Expected: ~0.85
>>> print(result['justification'])    # Detailed explanation of similarities
Expand source code
@llm()
def compute_semantic_proximity(text1: str, text2: str, context: str = None) -> dict:
    """
    Computes the semantic proximity between two texts and returns a proximity score along with justification.
    This function is particularly useful for comparing agent justifications, explanations, or reasoning
    to assess how similar they are in meaning and content.

    Args:
        text1 (str): The first text to compare.
        text2 (str): The second text to compare.
        context (str, optional): Additional context that might help in understanding the comparison.
                                This can provide background information about what the texts represent
                                or the purpose of the comparison.

    Returns:
        dict: A dictionary containing:
            - 'proximity_score' (float): A score between 0.0 and 1.0, where 0.0 means completely different
                                       and 1.0 means semantically identical.
            - 'justification' (str): A detailed explanation of why this score was assigned, including
                                   specific similarities and differences found between the texts.
    
    Example:
        >>> result = compute_semantic_proximity(
        ...     "I prefer luxury travel because I enjoy comfort and high-quality service",
        ...     "I like premium vacations since I value convenience and excellent amenities"
        ... )
        >>> print(result['proximity_score'])  # Expected: ~0.85
        >>> print(result['justification'])    # Detailed explanation of similarities
    """
    # llm decorator will handle the body of this function
def correct_according_to_rule(observation, rules) ‑> str

Given an observation and one or more rules, this function rephrases or completely changes the observation in accordance with what the rules specify. Some guidelines:

- Rules might require changes either to style or to content.
- The rephrased observation should be coherent and consistent with the original observation, unless the rules require otherwise.
- If the rules require, the corrected observation can contradict the original observation.
- Enforce the rules very strictly, even if the original observation seems correct or acceptable.
- Rules might contain additional information or suggestions that you may use to improve your output.

Examples

Observation: "You know, I am so sad these days."
Rule: "I am always happy and depression is unknown to me"
Modified observation: "You know, I am so happy these days."

Args

observation
The observation that should be rephrased or changed. Something that is said or done, or a description of events or facts.
rules
The rules that specify what the modified observation should comply with.

Returns

str
The rephrased or corrected observation.
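A minimal usage sketch based on the example above; it assumes the @llm decorator lets the function be called like an ordinary function. The rules argument is shown as a list here, although a single string may also be acceptable, since the description allows one or more rules:

>>> from tinytroupe.utils.semantics import correct_according_to_rule
>>> corrected = correct_according_to_rule(
...     observation="You know, I am so sad these days.",
...     rules=["I am always happy and depression is unknown to me"]
... )
>>> print(corrected)  # e.g., "You know, I am so happy these days."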
Expand source code
@llm()
def correct_according_to_rule(observation, rules) -> str:
    """
    Given an observation and one or more rules, this function rephrases or completely changes the observation in accordance with what the rules
    specify. Some guidelines:
        - Rules might require changes either to style or to content.
        - The rephrased observation should be coherent and consistent with the original observation, unless the rules require otherwise.
        - If the rules require, the corrected observation can contradict the original observation.
        - Enforce the rules very strictly, even if the original observation seems correct or acceptable.
        - Rules might contain additional information or suggestions that you may use to improve your output.

    ## Examples

        Observation: "You know, I am so sad these days."
        Rule: "I am always happy and depression is unknown to me"
        Modified observation: "You know, I am so happy these days."

    Args:
        observation: The observation that should be rephrased or changed. Something that is said or done, or a description of events or facts.
        rules: The rules that specify what the modified observation should comply with.
    
    Returns:
        str: The rephrased or corrected observation.
    """
    # llm decorator will handle the body of this function
def extract_information_from_text(query: str, text: str, context: str = None) ‑> str

Given a text and a query, this function extracts the information from the text that either answers the query directly or provides relevant information related to it. The query can be a question, a request for specific information, or a general request for details about the text. If the desired information is not present in the text, the function should return an empty string. If a context is provided, it is used to help in understanding the query or the text, and to provide additional background information or expectations about the input/output. Any requests in the context are respected and enforced in the output.

Args

query : str
The query that specifies what information to extract.
text : str
The text from which to extract information.
context : str, optional
Additional context that might help in extracting the information. This can be used to provide background information or specify expectations about the input/output.

Returns

str
The extracted information that answers the query. If no information is found, an empty string is returned.
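A minimal usage sketch with hypothetical inputs; it assumes the @llm decorator returns the extracted information as a plain string, and an empty string when nothing relevant is found:

>>> from tinytroupe.utils.semantics import extract_information_from_text
>>> answer = extract_information_from_text(
...     query="Which city did Carlos visit?",
...     text="Carlos traveled to Firenzi and was amazed by the beauty of the city.",
...     context="Answer with the city name only."
... )
>>> print(answer)  # e.g., "Firenzi"; an empty string if the text held no answer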
Expand source code
@llm(enable_json_output_format=False)
def extract_information_from_text(query: str, text: str, context:str=None) -> str:
    """
    Given a text and a query, this function extracts the information from the text that either answers the query directly or
    provides relevant information related to it. The query can be a question, a request for specific information, or a general
    request for details about the text. If the desired information is not present in the text, the function should return an empty string.
    If a context is provided, it is used to help in understanding the query or the text, and to provide additional background
    information or expectations about the input/output. Any requests in the context are respected and enforced in the output.

    Args:
        query (str): The query that specifies what information to extract.
        text (str): The text from which to extract information.
        context (str, optional): Additional context that might help in extracting the information. This can be used to provide 
          background information or specify expectations about the input/output.

    Returns:
        str: The extracted information that answers the query. If no information is found, an empty string is returned.
    """
    # llm decorator will handle the body of this function
def extract_observed_vs_expected_rules(description)

Given the description of something (either a real event or abstract concept), extract:

- The object or person about whom something is said.
- A list where each element contains:
  * The name of a behavior or property that is expected to be observed.
  * The typical or expected observation.
  * The actual observation. If this does not match the expected observation, this should be made very clear.
  * A proposed correction to the observation, if possible.

Example:

 **Description:**
     ```
       Quality feedback

        This is the action that was generated by the agent:
            {'type': 'TALK', 'content': "I might consider buying bottled gazpacho, although I prefer making it fresh at home, and I find that most pre-packaged products don't meet my expectations in terms of quality. ", 'target': 'Michael Thompson'}

        Unfortunately, the action failed to pass the quality checks. The following problems were detected.

        Problem: The action does not adhere to the persona specification.
        Score = 5 (out of 9). Justification = The next action of Emily Carter, which involves expressing her opinion on bottled gazpacho, aligns with her persona specification of being critical and having high standards for products. She articulates her preferences and concerns about quality, which is consistent with her persona traits of being overly critical and rarely satisfied. However, she seems too ready to consider it, going against her strong rejection of new products and services. Therefore, it deviates substantially from her persona, leading to a score of 5.

        Problem: The action is not suitable to the situation or task.
        Score = 5 (out of 9). Justification = The next action, where Emily expresses her consideration about buying bottled gazpacho, aligns with the task of discussing her opinion on the product. However, it fails to give a clear "yes" or "no" answer, that was requested by her interviewer.
      ```

  **Output:**
      ```
        {
            "object": "Emily Carter",
            "behavior": [
                {
                    "name:": "Persona Adherence",
                    "expected": "She is very critical and have high standards for products. Would never adopt a new product unless it meets her expectations.",
                    "actual": "She seems more inclined than expected to try the product.",
                    "correction": "She should say she won't consider buying bottled gazpacho, and give reasons for that."
                },

                {
                    "name:": "Task Suitability",
                    "expected": "She should give a clear 'yes' or 'no' answer to the question.",
                    "actual": "She doesn't give a clear 'yes' or 'no' answer to the question, but instead providing more nuanced feedback.",
                    "correction": "She should give a clear 'yes' or 'no' answer to the question, and optionally provide additional nuanced feedback."
                }
            ]
        }
      ```
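A minimal usage sketch with a hypothetical, abbreviated feedback text; it assumes the @llm decorator parses the model's JSON answer into a dict shaped like the example output above:

>>> from tinytroupe.utils.semantics import extract_observed_vs_expected_rules
>>> feedback = ("Emily Carter said she might consider buying bottled gazpacho, but her persona "
...             "strongly rejects new products, and she did not give the clear yes/no answer requested.")
>>> result = extract_observed_vs_expected_rules(feedback)
>>> result["object"]                             # e.g., 'Emily Carter'
>>> [b["expected"] for b in result["behavior"]]  # one expected behavior per detected problem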
Expand source code
@llm()
def extract_observed_vs_expected_rules(description):
    """
    Given the description of something (either a real event or abstract concept), extract:
      - The object or person about whom something is said.
      - A list where each element contains:
        * The name of a behavior or property that is expected to be observed.
        * The typical or expected observation.
        * The actual observation. If this does not match the expected observation, this should be made very clear.
        * A proposed correction to the observation, if possible.

    
    # Example:
         **Description:**
             ```
               Quality feedback

                This is the action that was generated by the agent:
                    {'type': 'TALK', 'content': "I might consider buying bottled gazpacho, although I prefer making it fresh at home, and I find that most pre-packaged products don't meet my expectations in terms of quality. ", 'target': 'Michael Thompson'}

                Unfortunately, the action failed to pass the quality checks. The following problems were detected.
                
                Problem: The action does not adhere to the persona specification.
                Score = 5 (out of 9). Justification = The next action of Emily Carter, which involves expressing her opinion on bottled gazpacho, aligns with her persona specification of being critical and having high standards for products. She articulates her preferences and concerns about quality, which is consistent with her persona traits of being overly critical and rarely satisfied. However, she seems too ready to consider it, going against her strong rejection of new products and services. Therefore, it deviates substantially from her persona, leading to a score of 5.
                
                Problem: The action is not suitable to the situation or task.
                Score = 5 (out of 9). Justification = The next action, where Emily expresses her consideration about buying bottled gazpacho, aligns with the task of discussing her opinion on the product. However, it fails to give a clear "yes" or "no" answer, that was requested by her interviewer.
              ```
    
          **Output:**
              ```
                {
                    "object": "Emily Carter",
                    "behavior": [
                        {
                            "name:": "Persona Adherence",
                            "expected": "She is very critical and have high standards for products. Would never adopt a new product unless it meets her expectations.",
                            "actual": "She seems more inclined than expected to try the product.",
                            "correction": "She should say she won't consider buying bottled gazpacho, and give reasons for that."
                        },

                        {
                            "name:": "Task Suitability",
                            "expected": "She should give a clear 'yes' or 'no' answer to the question.",
                            "actual": "She doesn't give a clear 'yes' or 'no' answer to the question, but instead providing more nuanced feedback.",
                            "correction": "She should give a clear 'yes' or 'no' answer to the question, and optionally provide additional nuanced feedback."
                        }
                    ]
                }
              ```
    
    """
    # llm decorator will handle the body of this function
def formulate_corrective_rule(feedback) ‑> str

Given the feedback about something (either a real event or abstract concept) that violated an expectation, this function transforms it into a general rule that must be followed in the future to avoid breaking the expectation again.

The feedback always refers to some actual behavior, fact or event, and some broken expectation. The abstracted rule should specify that this expectation should not be violated in the future, and the behavior, fact or event not repeated. The idea is to learn from past mistakes, so that the rule is a way to avoid them in the future.

The rule is meant to CHANGE the actual behavior, facts or events, so that it CONFORMS to the expectation, regardless of whether the expectation is a good or bad one. This is critical, because the rule will be referred to in the future as a guideline about what must happen or be done.

For instance, if the feedback is of the form (modulo grammatical adjustments):

    OBSERVED BEHAVIOR, but EXPECTED BEHAVIOR, because REASONING.

then the rule would be of the form:

    "I should have EXPECTED BEHAVIOR, because REASONING, and never OBSERVED BEHAVIOR."

DO NOT make moral judgements about the expectation or the behavior, fact or event. Even if the expectation is bad, the correct rule remains important, since there are legitimate cases where this is necessary (e.g., a simulation of a person with a negative behavior, in the context of psychological research; or an evil character in a movie script).

Examples

Feedback: "Ana mentions she loved the proposed new food, a spicier flavor of gazpacho. However, this goes against her known dislike
            of spicy food."
Rule: "Whenever I'm proposed spicy foods, I should reject the proposal, because I don't like spicy foods."
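A minimal usage sketch reusing the feedback from the example above; it assumes the @llm decorator returns the formulated rule as a string:

>>> from tinytroupe.utils.semantics import formulate_corrective_rule
>>> rule = formulate_corrective_rule(
...     "Ana mentions she loved the proposed new food, a spicier flavor of gazpacho. "
...     "However, this goes against her known dislike of spicy food."
... )
>>> print(rule)  # e.g., "Whenever I'm proposed spicy foods, I should reject the proposal, because I don't like spicy foods."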
Expand source code
@llm()
def formulate_corrective_rule(feedback) -> str:
    """
    Given the feedback about something (either a real event or abstract concept) that violated an expectation, 
    this function transforms it into a general rule that must be followed in the future to avoid breaking the expectation again.
    
    The feedback always refers to some actual behavior, fact or event, and some broken expectation. The abstracted rule should
    specify that this expectation should not be violated in the future, and the behavior, fact or event not repeated. The idea is
    to learn from past mistakes, so that the rule is a way to avoid them in the future.

    The rule is meant to CHANGE the actual behavior, facts or events, so that it CONFORMS to the expectation, regardless of whether the
    expectation is a good or bad one. This is critical, because the rule will be referred to in the future as a guideline
    about what must happen or be done.

    For instance, if the feedback is of the form (modulo grammatical adjustments): 
        OBSERVED BEHAVIOR, but EXPECTED BEHAVIOR, because REASONING.
    then the rule would be of the form:
        "I should have EXPECTED BEHAVIOR, because REASONING, and never OBSERVED BEHAVIOR."

    DO NOT make moral judgements about the expectation or the behavior, fact or event. Even if the expectation is bad, 
    the correct rule remains important, since there are legitimate cases where this is necessary (e.g., a simulation of 
    a person with a negative behavior, in the context of psychological research; or an evil character in a movie script).

    ## Examples

        Feedback: "Ana mentions she loved the proposed new food, a spicier flavor of gazpacho. However, this goes against her known dislike
                    of spicy food."
        Rule: "Whenever I'm proposed spicy foods, I should reject the proposal, because I don't like spicy foods."
    """
    # llm decorator will handle the body of this function
def restructure_as_observed_vs_expected(description) ‑> str

Given the description of something (either a real event or an abstract concept) that violates an expectation, this function extracts the following elements from it:

- OBSERVED: The observed event or statement.
- BROKEN EXPECTATION: The expectation that was broken by the observed event.
- REASONING: The reasoning behind the expectation that was broken.

If in reality the description does not mention any expectation violation, then the function should instead extract the following elements:

- OBSERVED: The observed event.
- MET EXPECTATION: The expectation that was met by the observed event.
- REASONING: The reasoning behind the expectation that was met.

This way of restructuring the description can be useful for downstream processing, making it easier to analyze or modify system outputs, for example.

Examples

Input: "Ana mentions she loved the proposed new food, a spicier flavor of gazpacho. However, this goes against her known dislike
        of spicy food."
Output: 
    "OBSERVED: Ana mentions she loved the proposed new food, a spicier flavor of gazpacho.
     BROKEN EXPECTATION: Ana should have mentioned that she disliked the proposed spicier gazpacho.
     REASONING: Ana has a known dislike of spicy food."


Input: "Carlos traveled to Firenzi and was amazed by the beauty of the city. This was in line with his love for art and architecture."
Output: 
    "OBSERVED: Carlos traveled to Firenzi and was amazed by the beauty of the city.
     MET EXPECTATION: Carlos should have been amazed by the beauty of the city.
     REASONING: Carlos loves art and architecture."

Args

description : str
A description of an event or concept that either violates or meets an expectation.

Returns

str
The restructured description.
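A minimal usage sketch reusing the second example above; it assumes the @llm decorator returns the restructured description as a string:

>>> from tinytroupe.utils.semantics import restructure_as_observed_vs_expected
>>> restructured = restructure_as_observed_vs_expected(
...     "Carlos traveled to Firenzi and was amazed by the beauty of the city. "
...     "This was in line with his love for art and architecture."
... )
>>> print(restructured)  # e.g., "OBSERVED: ... MET EXPECTATION: ... REASONING: Carlos loves art and architecture."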
Expand source code
@llm()
def restructure_as_observed_vs_expected(description) -> str:
    """
    Given the description of something (either a real event or an abstract concept) that violates an expectation, this function
    extracts the following elements from it:

        - OBSERVED: The observed event or statement.
        - BROKEN EXPECTATION: The expectation that was broken by the observed event.
        - REASONING: The reasoning behind the expectation that was broken.
    
    If in reality the description does not mention any expectation violation, then the function should instead extract
    the following elements:

        - OBSERVED: The observed event.
        - MET EXPECTATION: The expectation that was met by the observed event.
        - REASONING: The reasoning behind the expectation that was met.

    This way of restructuring the description can be useful for downstream processing, making it easier to analyze or
    modify system outputs, for example.

    ## Examples

        Input: "Ana mentions she loved the proposed new food, a spicier flavor of gazpacho. However, this goes against her known dislike
                of spicy food."
        Output: 
            "OBSERVED: Ana mentions she loved the proposed new food, a spicier flavor of gazpacho.
             BROKEN EXPECTATION: Ana should have mentioned that she disliked the proposed spicier gazpacho.
             REASONING: Ana has a known dislike of spicy food."

             
        Input: "Carlos traveled to Firenzi and was amazed by the beauty of the city. This was in line with his love for art and architecture."
        Output: 
            "OBSERVED: Carlos traveled to Firenzi and was amazed by the beauty of the city.
             MET EXPECTATION: Carlos should have been amazed by the beauty of the city.
             REASONING: Carlos loves art and architecture."

    Args:
        description (str): A description of an event or concept that either violates or meets an expectation.
    
    Returns:
        str: The restructured description.
    """
    # llm decorator will handle the body of this function