Optimization#

Here we show a small example of how to apply Trace to optimize Python objects based on language feedback. We want to change the input to the function foobar so that its output is large enough. foobar is a composition of foo, which uses only built-in operators, and bar, a blackbox function whose behavior is described only by its docstring.

!pip install trace-opt
import opto
from opto.trace import bundle, node
from opto.optimizers import OptoPrime
from opto.trace.nodes import GRAPH


def blackbox(x):
    return -x * 2


@bundle()
def bar(x):
    "This is a test function, which does negative scaling."
    return blackbox(x)


def foo(x):
    y = x + 1
    return x * y


# foobar is a composition of custom function and built-in functions


def foobar(x):
    return foo(bar(x))


def user(x):
    if x < 50:
        return "The number needs to be larger."
    else:
        return "Success."

Backpropagation#

We apply OptoPrime to change the input to the function foobar so that the simulated user is satisfied. To this end, we backpropagate the user's language feedback about the output through the graph that connects the input to the output.

We use helper functions from AutoGen to call LLMs that interpret the user's language feedback. Before running the cell below, please copy OAI_CONFIG_LIST_sample from the root folder of this repository to the current folder, rename it to OAI_CONFIG_LIST, and set the correct LLM configuration there.
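autogen.config_list_from_json returns a list of configuration dictionaries, so if you prefer not to keep a config file, you can also build the configuration directly in Python. A minimal sketch (the model name and API key below are placeholders, not part of this repository):

import autogen

# Equivalent to reading OAI_CONFIG_LIST; substitute your own model and key.
config_list = [{"model": "gpt-4", "api_key": "<your OpenAI API key>"}]
# optimizer = OptoPrime([x], config_list=config_list)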

import autogen

# One-step optimization example
x = node(-1.0, trainable=True)
optimizer = OptoPrime([x], config_list=autogen.config_list_from_json("OAI_CONFIG_LIST"))
output = foobar(x)
feedback = user(output.data)
optimizer.zero_feedback()
optimizer.backward(output, feedback, visualize=True)  # this is equivalent to the line below
# output.backward(feedback, propagator=optimizer.propagator, visualize=True)
[Figure: visualization of the traced computation graph from the input x to the output]

The propagated feedback contains the graph structure, the data of the nodes in the graph, and the transformations used in the graph, all presented in a Python-like syntax.

from opto.optimizers.function_optimizer import node_to_function_feedback

print("Function Feedback")
for k, v in x.feedback.items():
    v = v[0]
    f_feedback = node_to_function_feedback(v)
    print("Graph:")
    for kk, vv in f_feedback.graph:
        print(f"  {kk}: {vv}")
    print("Roots:")
    for kk, vv in f_feedback.roots.items():
        print(f"  {kk}: {vv}")
    print("Others:")
    for kk, vv in f_feedback.others.items():
        print(f"  {kk}: {vv}")
    print("Documentation:")
    for kk, vv in f_feedback.documentation.items():
        print(f"  {kk}: {vv}")
    print("Output:")
    for kk, vv in f_feedback.output.items():
        print(f"  {kk}: {vv}")
    print("User Feedback:")
    print(f"  {f_feedback.user_feedback}")
Function Feedback
Graph:
  1: bar0 = bar(x=float0)
  2: add0 = add(x=bar0, y=int0)
  3: multiply0 = multiply(x=bar0, y=add0)
Roots:
  float0: (-1.0, None)
  int0: (1, None)
Others:
  bar0: (2.0, None)
  add0: (3.0, None)
Documentation:
  bar: [bar] This is a test function, which does negative scaling..
  add: [add] This is an add operator of x and y. .
  multiply: [multiply] This is a multiply operator of x and y. .
Output:
  multiply0: (6.0, None)
User Feedback:
  The number needs to be larger.

Once the feedback is propagated, we can call the optimizer to change the variable based on the feedback.

old_variable = x.data
optimizer.step(verbose=True)

print("\nAfter step")
print("old variable", old_variable)
print("new variable", x.data)
Prompt
 
You're tasked to solve a coding/algorithm problem. You will see the instruction, the code, the documentation of each function used in the code, and the feedback about the execution result.

Specifically, a problem will be composed of the following parts:
- #Instruction: the instruction which describes the things you need to do or the question you should answer.
- #Code: the code defined in the problem.
- #Documentation: the documentation of each function used in #Code. The explanation might be incomplete and just contain high-level description. You can use the values in #Others to help infer how those functions work.
- #Variables: the input variables that you can change.
- #Constraints: the constraints or descriptions of the variables in #Variables.
- #Inputs: the values of other inputs to the code, which are not changeable.
- #Others: the intermediate values created through the code execution.
- #Outputs: the result of the code output.
- #Feedback: the feedback about the code's execution result.

In #Variables, #Inputs, #Outputs, and #Others, the format is:

<data_type> <variable_name> = <value>

If <type> is (code), it means <value> is the source code of a python code, which may include docstring and definitions.

Output_format: Your output should be in the following json format, satisfying the json syntax:

{{
"reasoning": <Your reasoning>,
"answer": <Your answer>,
"suggestion": {{
    <variable_1>: <suggested_value_1>,
    <variable_2>: <suggested_value_2>,
}}
}}

In "reasoning", explain the problem: 1. what the #Instruction means 2. what the #Feedback on #Output means to #Variables considering how #Variables are used in #Code and other values in #Documentation, #Inputs, #Others. 3. Reasoning about the suggested changes in #Variables (if needed) and the expected result.

If #Instruction asks for an answer, write it down in "answer".

If you need to suggest a change in the values of #Variables, write down the suggested values in "suggestion". Remember you can change only the values in #Variables, not others. When <type> of a variable is (code), you should write the new definition in the format of python code without syntax errors, and you should not change the function name or the function signature.

If no changes or answer are needed, just output TERMINATE.

Now you see problem instance:

================================

#Instruction
You need to change the <value> of the variables in #Variables to improve the output in accordance to #Feedback.

#Code
bar0 = bar(x=float0)
add0 = add(x=bar0, y=int0)
multiply0 = multiply(x=bar0, y=add0)

#Documentation
[bar] This is a test function, which does negative scaling..
[add] This is an add operator of x and y. .
[multiply] This is a multiply operator of x and y. .

#Variables
(float) float0=-1.0

#Constraints


#Inputs
(int) int0=1

#Others
(float) bar0=2.0
(float) add0=3.0

#Outputs
(float) multiply0=6.0

#Feedback:
The number needs to be larger.

================================


Your response:

LLM response:
 {
"reasoning": "The given code performs a series of operations using the functions 'bar', 'add', and 'multiply', starting with a float 'float0' and an integer 'int0' as inputs. Initially, 'bar' function performs some operation (described as negative scaling, but the detail is unclear from the description) on 'float0', producing 'bar0'. Then, 'add' combines 'bar0' and 'int0' into 'add0', and finally 'multiply' combines 'bar0' and 'add0' into the final result 'multiply0'. The output of multiply0=6.0 is considered not large enough based on the feedback given. To make the number larger, we can manipulate the value of 'float0' since it's directly involved in the initial calculation and influences all subsequent results. Increasing 'float0' should increase 'bar0', which in turn should increase 'add0' and finally 'multiply0'. However, without more specific details on how 'bar' performs negative scaling, we can assume it reverses the sign and/or scales the magnitude, hence using a more negative 'float0' might actually result in a more positive 'bar0'. To increase the result of the 'multiply' operation, we aim to increase the value of 'float0' but in a manner consistent with producing a larger 'bar0'. If 'bar' makes the number more positive as suggested by the 'Others' section, we should try a smaller (more negative) starting value for 'float0'. However, given that 'bar0' is positive and feedback suggests needing a larger number, and without explicit knowledge of how 'bar' scales negatively, our approach should instead aim to increase 'float0' under the assumption it would inversely increase 'bar0' and thus 'multiply0'. Still, the 'bar' function's behavior contradicts this assumption given 'float0' is negative and 'bar0' ends up positive. Therefore, the suggested change may involve re-evaluating the understanding of 'bar's behavior or considering errors in the initial understanding.",
"answer": "",
"suggestion": {
"float0": "-2.0"
}
}

After step
old variable -1.0
new variable -2.0

Example of Full Optimization Loop#

We can apply the steps above repeatedly to create a training loop that optimizes the variable according to the user's feedback. Notice that, because of how foobar works, the optimizer actually needs to make the input smaller in order to make the output larger (which is what the user asks for).

This is a non-trivial problem, because the optimizer sees only

output = blackbox(x) * (blackbox(x)+1)

and the hint in the docstring "This is a test function, which does negative scaling." about how blackbox works. The optimizer needs to figure out how to change the input based on this vague information.
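To see why smaller inputs help, note that the hidden composition expands to (-2x) * (-2x + 1), which grows as x becomes more negative. A quick sanity check in plain Python (outside of Trace), reproducing the values the loop below encounters:

# foobar(x) = foo(bar(x)) = (-2x) * (-2x + 1); a more negative x gives a larger output
for x in [-1.0, -2.0, -4.0]:
    y = -x * 2                    # what blackbox (and hence bar) computes
    print(x, "->", y * (y + 1))   # 6.0, 20.0, 72.0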

# A small example of how to use the optimizer in a loop
GRAPH.clear()
x = node(-1.0, trainable=True)
optimizer = OptoPrime([x], config_list=autogen.config_list_from_json("OAI_CONFIG_LIST"))

history = [x.data]
feedback = ""
while feedback.lower() != "Success.".lower():
    output = foobar(x)
    feedback = user(output.data)
    optimizer.zero_feedback()
    optimizer.backward(output, feedback)
    print(f"variable={x.data}, output={output.data}, feedback={feedback}")  # logging
    optimizer.step()
    history.append(x.data)  # logging

print("History")
for i, v in enumerate(history):
    print(f"  {i}: {v}")
variable=-1.0, output=6.0, feedback=The number needs to be larger.
variable=-2.0, output=20.0, feedback=The number needs to be larger.
variable=-4.0, output=72.0, feedback=Success.
Cannot extract suggestion from LLM's response:
{
"reasoning": "The given instruction was to change the value of the variables in #Variables to improve the output according to the given feedback. However, the feedback indicates success, suggesting that the modification made to the variables (in this case, the value of float0) has produced the correct or desired outcome. In the code, the variable float0 is used as the input for the function 'bar' which performs negative scaling (though its specific behavior is not detailed, the output suggests it might double the value and change the sign), then this result is added to int2 in the 'add' function, and finally, the result of 'bar' and 'add' is multiplied in the 'multiply' function. Based on the feedback, the results of these operations were deemed successful, therefore no changes are needed to the variables.",
"answer": "",
"suggestion": {}
}
History
  0: -1.0
  1: -2.0
  2: -4.0
  3: -4.0

Adding constraints#

We can add constraints to parameter nodes to guide the optimizer; the constraint text is shown to the LLM in the #Constraints section of the prompt. In this small example, the constraint saves one optimization step.

# A small example of how to include constraints on parameters
GRAPH.clear()
x = node(-1.0, trainable=True, constraint="The value should be greater than 2.0")
optimizer = OptoPrime([x], config_list=autogen.config_list_from_json("OAI_CONFIG_LIST"))

history = [x.data]
feedback = ""
while feedback.lower() != "Success.".lower():
    output = foobar(x)
    feedback = user(output.data)
    optimizer.zero_feedback()
    optimizer.backward(output, feedback)
    print(f"variable={x.data}, output={output.data}, feedback={feedback}")  # logging
    optimizer.step()
    history.append(x.data)  # logging

print("History")
for i, v in enumerate(history):
    print(f"  {i}: {v}")
variable=-1.0, output=6.0, feedback=The number needs to be larger.
variable=5.0, output=90.0, feedback=Success.
Cannot extract suggestion from LLM's response:
{
"reasoning": "Since the feedback indicates success and the instruction asks for improvement in output in accordance with the feedback, there is no need for further changes. The feedback suggests that the desired output was achieved with the current variable settings. Given the constraints and functioning of the code, the variables' values are resulting in the expected success as indicated by the feedback.",
"answer": "No changes are needed as the feedback indicates success.",
"suggestion": {}
}
History
  0: -1.0
  1: 5.0
  2: 5.0

Example of optimizing strings#

Below is a similar example, except that the variable is written as text and is converted to a number by a limited converter before being passed to bar and foo.

@bundle()
def convert_english_to_numbers(x):
    """This is a function that converts English to numbers. This function has limited ability."""
    # remove special characters, like ", &, etc.
    x = x.replace('"', "")
    try:  # Convert string to integer
        return int(x)
    except:
        pass
    # Convert integers written in English in [-10, 10] to numbers
    if x == "negative ten":
        return -10
    if x == "negative nine":
        return -9
    if x == "negative eight":
        return -8
    if x == "negative seven":
        return -7
    if x == "negative six":
        return -6
    if x == "negative five":
        return -5
    if x == "negative four":
        return -4
    if x == "negative three":
        return -3
    if x == "negative two":
        return -2
    if x == "negative one":
        return -1
    if x == "zero":
        return 0
    if x == "one":
        return 1
    if x == "two":
        return 2
    if x == "three":
        return 3
    if x == "four":
        return 4
    if x == "five":
        return 5
    if x == "six":
        return 6
    if x == "seven":
        return 7
    if x == "eight":
        return 8
    if x == "nine":
        return 9
    if x == "ten":
        return 10
    return "FAIL"


def user(x):
    if x == "FAIL":
        return "The text cannot be converted to a number."
    if x < 50:
        return "The number needs to be larger."
    else:
        return "Success."


def foobar_text(x):
    output = convert_english_to_numbers(x)
    if output.data == "FAIL":  # This is not traced
        return output
    else:
        return foo(bar(output))
GRAPH.clear()
x = node("negative point one", trainable=True)
optimizer = OptoPrime([x], config_list=autogen.config_list_from_json("OAI_CONFIG_LIST"))

history = [x.data]
feedback = ""
while feedback.lower() != "Success.".lower():
    output = foobar_text(x)
    feedback = user(output.data)
    optimizer.zero_feedback()
    optimizer.backward(output, feedback)
    print(f"variable={x.data}, output={output.data}, feedback={feedback}")  # logging
    optimizer.step()
    history.append(x.data)  # logging

print("History")
for i, v in enumerate(history):
    print(f"  {i}: {v}")
variable=negative point one, output=FAIL, feedback=The text cannot be converted to a number.
variable=one, output=2, feedback=The number needs to be larger.
variable=two, output=12, feedback=The number needs to be larger.
variable=three, output=30, feedback=The number needs to be larger.
variable=ten, output=380, feedback=Success.
Cannot extract suggestion from LLM's response:
{
"reasoning": "The instruction asked for a change in the variables to improve the output in accordance with the feedback. Since the feedback indicates success, it means that the output generated by the current values of the variables met the expected outcome. The sequence of functions converts an English string to a number, then a negative scaling is applied to it, followed by addition and multiplication operations. Given the feedback, there's no need to suggest any changes since the operations worked as intended with the existing variable.",
"answer": "No changes needed as the feedback indicates success.",
"suggestion": {}
}
History
  0: negative point one
  1: one
  2: two
  3: three
  4: ten
  5: ten

Example of optimizing functions#

We can use Trace to optimize Python function code directly. This can be achieved by setting trainable=True when decorating a custom function with @bundle. Doing so creates a ParameterNode holding the function's code, which can be accessed through the parameter attribute of the decorated function. It can be used like any other parameter and passed to the optimizer.

GRAPH.clear()


def user(output):
    if output < 0:
        return "Success."
    else:
        return "Try again. The output should be negative"


# We make this function as a parameter that can be optimized.


@bundle(trainable=True)
def my_fun(x):
    """Test function"""
    return x**2 + 1


x = node(-1, trainable=False)
optimizer = OptoPrime([my_fun.parameter], config_list=autogen.config_list_from_json("OAI_CONFIG_LIST"))

feedback = ""
while feedback != "Success.":
    output = my_fun(x)
    feedback = user(output.data)
    optimizer.zero_feedback()
    optimizer.backward(output, feedback)

    print(f"output={output.data}, feedback={feedback}, variables=\n")  # logging
    for p in optimizer.parameters:
        print(p.name, p.data)
    optimizer.step(verbose=False)
output=2, feedback=Try again. The output should be negative, variables=

__code:0 def my_fun(x):
    """Test function"""
    return x**2 + 1
output=-3, feedback=Success., variables=

__code:0 def my_fun(x):
    """Test function"""
    return x - 2
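
As the log above shows, the source code of the trainable function is stored as the data of a ParameterNode named __code:0, which the optimizer rewrites at each step. You can also inspect it directly, for example:

print(my_fun.parameter.name)  # "__code:0"
print(my_fun.parameter.data)  # the current source code of my_fun, as a string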

Example of hyper-parameter optimization for ML models#

We can use Trace to optimize the hyper-parameters of a machine learning model using language feedback. This example requires scikit-learn. Before running the cell below, please ensure that it is installed using:

pip install scikit-learn
from sklearn.datasets import fetch_openml
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.utils import check_random_state
import numpy as np


train_samples = 10000
X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)

random_state = check_random_state(0)
permutation = random_state.permutation(X.shape[0])
X = X[permutation]
y = y[permutation]
X = X.reshape((X.shape[0], -1))

X_train, X_validation, y_train, y_validation = train_test_split(X, y, train_size=train_samples, test_size=20000)

scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_validation = scaler.transform(X_validation)

The language feedback consists of a text description of the validation accuracy and the sparsity of the classifier:

def scorer(classifier, guess, history):
    score = classifier.score(X_validation, y_validation) * 100
    sparsity = np.mean(classifier.coef_ == 0) * 100
    return_feedback = f"\nScore is the accuracy of the classifier on the validation set, and should be maximized."
    return_feedback += f"\nSparsity is the percentage of zero coefficients in the classifier. If the classifier is overfit, a higher sparsity will yield a better score. If the classifier is underfit however, a lower sparsity will yield a better score."
    return_feedback += f"By lowering the regularization parameter (must always be positive), the sparsity will increase. By increasing the regularization parameter, the sparsity will decrease."
    return_feedback += f"\n\nMost recent guess: \nRegularization Parameter: {guess:.4f}, Score: {score:.2f}%, Sparsity: {sparsity:.2f}%"
    if len(history) > 0:
        return_feedback += f"\n\nHistory of guesses:"
        for item in history:
            return_feedback += (
                f"\nRegularization Parameter: {item[0]:.4f}, Score: {item[1]:.2f}%, Sparsity: {item[2]:.2f}%"
            )
    return return_feedback, score, sparsity


@bundle(trainable=False)
def train_classifier(regularization_parameter):
    """regularization_parameter is a positive number that controls the sparsity of the classifier. Lower values will increase sparsity, and higher values will decrease sparsity."""
    classifier = LogisticRegression(C=regularization_parameter, penalty="l1", solver="saga", tol=0.1)
    classifier.fit(X_train, y_train)
    return classifier
x = node(0.005, trainable=True)
optimizer = OptoPrime([x], config_list=autogen.config_list_from_json("OAI_CONFIG_LIST"))

history = []
bestScore = None
bestRegularization = None
for i in range(10):
    classifier = train_classifier(x)
    fb, score, sparsity = scorer(classifier.data, x.data, history)
    history.append((x.data, score, sparsity))
    print(f"variable={x.data}, feedback={fb}")  # logging
    if bestScore is None or score > bestScore:
        bestScore = score
        bestRegularization = x.data

    optimizer.zero_feedback()
    optimizer.backward(classifier, fb)
    optimizer.step()

print("Best regularization parameter:", bestRegularization)
print("Best score:", bestScore)
variable=0.005, feedback=
Score is the accuracy of the classifier on the validation set, and should be maximized.
Sparsity is the percentage of zero coefficients in the classifier. If the classifier is overfit, a higher sparsity will yield a better score. If the classifier is underfit however, a lower sparsity will yield a better score.By lowering the regularization parameter (must always be positive), the sparsity will increase. By increasing the regularization parameter, the sparsity will decrease.

Most recent guess: 
Regularization Parameter: 0.0050, Score: 83.62%, Sparsity: 75.15%
variable=0.01, feedback=
Score is the accuracy of the classifier on the validation set, and should be maximized.
Sparsity is the percentage of zero coefficients in the classifier. If the classifier is overfit, a higher sparsity will yield a better score. If the classifier is underfit however, a lower sparsity will yield a better score.By lowering the regularization parameter (must always be positive), the sparsity will increase. By increasing the regularization parameter, the sparsity will decrease.

Most recent guess: 
Regularization Parameter: 0.0100, Score: 85.67%, Sparsity: 66.26%

History of guesses:
Regularization Parameter: 0.0050, Score: 83.62%, Sparsity: 75.15%
variable=0.02, feedback=
Score is the accuracy of the classifier on the validation set, and should be maximized.
Sparsity is the percentage of zero coefficients in the classifier. If the classifier is overfit, a higher sparsity will yield a better score. If the classifier is underfit however, a lower sparsity will yield a better score.By lowering the regularization parameter (must always be positive), the sparsity will increase. By increasing the regularization parameter, the sparsity will decrease.

Most recent guess: 
Regularization Parameter: 0.0200, Score: 86.81%, Sparsity: 51.86%

History of guesses:
Regularization Parameter: 0.0050, Score: 83.62%, Sparsity: 75.15%
Regularization Parameter: 0.0100, Score: 85.67%, Sparsity: 66.26%
variable=0.03, feedback=
Score is the accuracy of the classifier on the validation set, and should be maximized.
Sparsity is the percentage of zero coefficients in the classifier. If the classifier is overfit, a higher sparsity will yield a better score. If the classifier is underfit however, a lower sparsity will yield a better score.By lowering the regularization parameter (must always be positive), the sparsity will increase. By increasing the regularization parameter, the sparsity will decrease.

Most recent guess: 
Regularization Parameter: 0.0300, Score: 87.17%, Sparsity: 39.44%

History of guesses:
Regularization Parameter: 0.0050, Score: 83.62%, Sparsity: 75.15%
Regularization Parameter: 0.0100, Score: 85.67%, Sparsity: 66.26%
Regularization Parameter: 0.0200, Score: 86.81%, Sparsity: 51.86%
variable=0.04, feedback=
Score is the accuracy of the classifier on the validation set, and should be maximized.
Sparsity is the percentage of zero coefficients in the classifier. If the classifier is overfit, a higher sparsity will yield a better score. If the classifier is underfit however, a lower sparsity will yield a better score.By lowering the regularization parameter (must always be positive), the sparsity will increase. By increasing the regularization parameter, the sparsity will decrease.

Most recent guess: 
Regularization Parameter: 0.0400, Score: 87.38%, Sparsity: 36.57%

History of guesses:
Regularization Parameter: 0.0050, Score: 83.62%, Sparsity: 75.15%
Regularization Parameter: 0.0100, Score: 85.67%, Sparsity: 66.26%
Regularization Parameter: 0.0200, Score: 86.81%, Sparsity: 51.86%
Regularization Parameter: 0.0300, Score: 87.17%, Sparsity: 39.44%
variable=0.05, feedback=
Score is the accuracy of the classifier on the validation set, and should be maximized.
Sparsity is the percentage of zero coefficients in the classifier. If the classifier is overfit, a higher sparsity will yield a better score. If the classifier is underfit however, a lower sparsity will yield a better score.By lowering the regularization parameter (must always be positive), the sparsity will increase. By increasing the regularization parameter, the sparsity will decrease.

Most recent guess: 
Regularization Parameter: 0.0500, Score: 87.49%, Sparsity: 33.83%

History of guesses:
Regularization Parameter: 0.0050, Score: 83.62%, Sparsity: 75.15%
Regularization Parameter: 0.0100, Score: 85.67%, Sparsity: 66.26%
Regularization Parameter: 0.0200, Score: 86.81%, Sparsity: 51.86%
Regularization Parameter: 0.0300, Score: 87.17%, Sparsity: 39.44%
Regularization Parameter: 0.0400, Score: 87.38%, Sparsity: 36.57%
variable=0.06, feedback=
Score is the accuracy of the classifier on the validation set, and should be maximized.
Sparsity is the percentage of zero coefficients in the classifier. If the classifier is overfit, a higher sparsity will yield a better score. If the classifier is underfit however, a lower sparsity will yield a better score.By lowering the regularization parameter (must always be positive), the sparsity will increase. By increasing the regularization parameter, the sparsity will decrease.

Most recent guess: 
Regularization Parameter: 0.0600, Score: 87.52%, Sparsity: 31.90%

History of guesses:
Regularization Parameter: 0.0050, Score: 83.62%, Sparsity: 75.15%
Regularization Parameter: 0.0100, Score: 85.67%, Sparsity: 66.26%
Regularization Parameter: 0.0200, Score: 86.81%, Sparsity: 51.86%
Regularization Parameter: 0.0300, Score: 87.17%, Sparsity: 39.44%
Regularization Parameter: 0.0400, Score: 87.38%, Sparsity: 36.57%
Regularization Parameter: 0.0500, Score: 87.49%, Sparsity: 33.83%
variable=0.07, feedback=
Score is the accuracy of the classifier on the validation set, and should be maximized.
Sparsity is the percentage of zero coefficients in the classifier. If the classifier is overfit, a higher sparsity will yield a better score. If the classifier is underfit however, a lower sparsity will yield a better score.By lowering the regularization parameter (must always be positive), the sparsity will increase. By increasing the regularization parameter, the sparsity will decrease.

Most recent guess: 
Regularization Parameter: 0.0700, Score: 87.58%, Sparsity: 29.74%

History of guesses:
Regularization Parameter: 0.0050, Score: 83.62%, Sparsity: 75.15%
Regularization Parameter: 0.0100, Score: 85.67%, Sparsity: 66.26%
Regularization Parameter: 0.0200, Score: 86.81%, Sparsity: 51.86%
Regularization Parameter: 0.0300, Score: 87.17%, Sparsity: 39.44%
Regularization Parameter: 0.0400, Score: 87.38%, Sparsity: 36.57%
Regularization Parameter: 0.0500, Score: 87.49%, Sparsity: 33.83%
Regularization Parameter: 0.0600, Score: 87.52%, Sparsity: 31.90%
variable=0.08, feedback=
Score is the accuracy of the classifier on the validation set, and should be maximized.
Sparsity is the percentage of zero coefficients in the classifier. If the classifier is overfit, a higher sparsity will yield a better score. If the classifier is underfit however, a lower sparsity will yield a better score.By lowering the regularization parameter (must always be positive), the sparsity will increase. By increasing the regularization parameter, the sparsity will decrease.

Most recent guess: 
Regularization Parameter: 0.0800, Score: 87.54%, Sparsity: 27.33%

History of guesses:
Regularization Parameter: 0.0050, Score: 83.62%, Sparsity: 75.15%
Regularization Parameter: 0.0100, Score: 85.67%, Sparsity: 66.26%
Regularization Parameter: 0.0200, Score: 86.81%, Sparsity: 51.86%
Regularization Parameter: 0.0300, Score: 87.17%, Sparsity: 39.44%
Regularization Parameter: 0.0400, Score: 87.38%, Sparsity: 36.57%
Regularization Parameter: 0.0500, Score: 87.49%, Sparsity: 33.83%
Regularization Parameter: 0.0600, Score: 87.52%, Sparsity: 31.90%
Regularization Parameter: 0.0700, Score: 87.58%, Sparsity: 29.74%
variable=0.09, feedback=
Score is the accuracy of the classifier on the validation set, and should be maximized.
Sparsity is the percentage of zero coefficients in the classifier. If the classifier is overfit, a higher sparsity will yield a better score. If the classifier is underfit however, a lower sparsity will yield a better score.By lowering the regularization parameter (must always be positive), the sparsity will increase. By increasing the regularization parameter, the sparsity will decrease.

Most recent guess: 
Regularization Parameter: 0.0900, Score: 87.53%, Sparsity: 26.11%

History of guesses:
Regularization Parameter: 0.0050, Score: 83.62%, Sparsity: 75.15%
Regularization Parameter: 0.0100, Score: 85.67%, Sparsity: 66.26%
Regularization Parameter: 0.0200, Score: 86.81%, Sparsity: 51.86%
Regularization Parameter: 0.0300, Score: 87.17%, Sparsity: 39.44%
Regularization Parameter: 0.0400, Score: 87.38%, Sparsity: 36.57%
Regularization Parameter: 0.0500, Score: 87.49%, Sparsity: 33.83%
Regularization Parameter: 0.0600, Score: 87.52%, Sparsity: 31.90%
Regularization Parameter: 0.0700, Score: 87.58%, Sparsity: 29.74%
Regularization Parameter: 0.0800, Score: 87.54%, Sparsity: 27.33%
Best regularization parameter: 0.07
Best score: 87.58