opto.optimizers.optoprime.OptoPrime

class OptoPrime

Attributes

default_objective

default_prompt_symbols

example_problem_template

example_prompt

final_prompt

output_format_prompt

propagator

Return a Propagator object that can be used to propagate feedback in the backward pass.

representation_prompt

trace_graph

Aggregate the graphs of all the parameters.

user_prompt_template

Methods

backward(node, *args, **kwargs)

Propagate the feedback backward.

call_llm(system_prompt, user_prompt[, ...])

Call the LLM with a prompt and return the response.

construct_prompt(summary[, mask])

Construct the system and user prompt.

construct_update_dict(suggestion)

Convert the suggested values from text into the correct data types.
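The conversion from an LLM's text suggestion back into typed Python values can be sketched roughly as follows. This is a hypothetical illustration (the function name and fallback behavior are assumptions, not the library's actual implementation), using `ast.literal_eval` to parse literals and keeping non-literal text as-is:

```python
import ast

def construct_update_dict_sketch(suggestion: dict) -> dict:
    """Hypothetical sketch: convert string values suggested by the LLM
    back into Python objects, falling back to the raw string."""
    update = {}
    for name, text in suggestion.items():
        try:
            update[name] = ast.literal_eval(text)  # e.g. "2.5" -> 2.5
        except (ValueError, SyntaxError):
            update[name] = text  # keep non-literal text (e.g. a prompt) as-is
    return update

print(construct_update_dict_sketch({"x": "2.5", "prompt": "Be concise."}))
```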

default_propagator()

Return the default Propagator object of the optimizer.

extract_llm_suggestion(response)

Extract the suggestion from the response.
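Extracting a suggestion from a free-form LLM reply typically means locating a JSON object inside surrounding text. A minimal sketch of that idea (the function name, regex strategy, and `"suggestion"` key handling here are illustrative assumptions, not the library's exact logic):

```python
import json
import re

def extract_suggestion_sketch(response: str) -> dict:
    """Hypothetical sketch: pull the first JSON object out of an LLM
    response and return its "suggestion" field (empty dict on failure)."""
    match = re.search(r"\{.*\}", response, re.DOTALL)
    if match is None:
        return {}
    try:
        parsed = json.loads(match.group(0))
    except json.JSONDecodeError:
        return {}
    return parsed.get("suggestion", {})

reply = 'Here is my answer.\n{"reasoning": "increase x", "suggestion": {"x": "2.0"}}'
print(extract_suggestion_sketch(reply))  # {'x': '2.0'}
```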

problem_instance(summary[, mask])

propose(*args, **kwargs)

Propose new data for the parameters based on the feedback.

replace_symbols(text, symbols)

repr_node_constraint(node_dict)

repr_node_value(node_dict)

step(*args, **kwargs)

Update the parameters based on the feedback.

summarize()

update(update_dict)

Update the trainable parameters given a dictionary of new data.

zero_feedback()

Reset the feedback.

__init__(parameters: List[ParameterNode], llm: AutoGenLLM | None = None, *args, propagator: Propagator | None = None, objective: None | str = None, ignore_extraction_error: bool = True, include_example=False, memory_size=0, max_tokens=4096, log=True, prompt_symbols=None, filter_dict: Dict | None = None, **kwargs)

__new__(**kwargs)
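The methods above imply a training-loop protocol: zero the accumulated feedback, deliver feedback to the parameters (via `backward`), then call `step()`, which proposes new data and applies it with `update()`. A minimal toy sketch of that calling pattern, using stand-in classes with hypothetical internals (no LLM call; real OptoPrime summarizes the trace graph and prompts an LLM in `propose`):

```python
class ToyParameter:
    """Stand-in for a trainable ParameterNode: holds data and feedback."""
    def __init__(self, data):
        self.data = data
        self.feedback = []

class ToyOptimizer:
    """Stand-in mirroring the zero_feedback -> propose -> step protocol.
    propose() fakes a deterministic suggestion instead of querying an LLM."""
    def __init__(self, parameters):
        self.parameters = parameters

    def zero_feedback(self):
        for p in self.parameters:
            p.feedback.clear()

    def propose(self):
        # Real OptoPrime builds a prompt from the trace graph and extracts
        # a suggestion from the LLM response; here we fake one numerically.
        return {p: p.data + sum(p.feedback) for p in self.parameters}

    def step(self):
        update = self.propose()       # propose new data ...
        for p, new_data in update.items():
            p.data = new_data         # ... then apply it, as update() would

x = ToyParameter(1.0)
opt = ToyOptimizer([x])
opt.zero_feedback()
x.feedback.append(0.5)  # stands in for backward() delivering feedback
opt.step()
print(x.data)  # 1.5
```

The point of the sketch is the ordering: feedback must be zeroed before a new round, accumulated via the backward pass, and only consumed when `step()` runs.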