🚀 Next: Adaptive Agent#
Now that you’ve learned the basics, we can look at an application: building a reinforcement learning (RL) agent using Trace primitives.
A Reinforcement Learning Agent#
The essence of an RL agent is to react and adapt to different situations: an RL agent should change its behavior to become more successful at a task. Using node and @bundle, we can expose different parts of a Python program to an optimizer, making the program reactive to various feedback signals. A self-modifying, self-evolving system is, in essence, an RL agent: by rewriting its own rules and logic, it can self-improve through trial and error (the Reinforcement Learning way!).
Building an RL agent out of program blocks and using an optimizer to react to feedback is at the heart of policy gradient algorithms (such as PPO, which is used in RLHF – Reinforcement Learning from Human Feedback). Trace changes the underlying program blocks to improve the agent’s chance of success. Here, we look at an example of how Trace can be used to design an RL-style agent to master the game of Battleship.
%pip install trace-opt
import opto.trace as trace
from opto.trace import node, bundle, model, GRAPH
from opto.optimizers import OptoPrime
import os
import ipywidgets as widgets
from IPython.display import display

# Function to save the environment variable and API key
def save_env_variable(env_name, api_key):
    # Validate inputs
    if not env_name.strip():
        print("⚠️ Environment variable name cannot be empty.")
        return
    if not api_key.strip():
        print("⚠️ API key cannot be empty.")
        return

    # Store the API key as an environment variable
    os.environ[env_name] = api_key
    globals()[env_name] = api_key  # Set it as a global variable
    print(f"✅ API key has been set for environment variable: {env_name}")

# Create the input widgets
env_name_input = widgets.Text(
    value="OPENAI_API_KEY",  # Default value
    description="Env Name:",
    placeholder="Enter env variable name (e.g., MY_API_KEY)",
)

api_key_input = widgets.Password(
    description="API Key:",
    placeholder="Enter your API key",
)

# Create the button to submit the inputs
submit_button = widgets.Button(description="Set API Key")

# Display the widgets
display(env_name_input, api_key_input, submit_button)

# Callback function for the button click
def on_button_click(b):
    env_name = env_name_input.value
    api_key = api_key_input.value
    save_env_variable(env_name, api_key)

# Attach the callback to the button
submit_button.on_click(on_button_click)
Trace uses decorators like @bundle and data wrappers like node to expose different parts of these programs to an LLM. The LLM can rewrite the entire system, or only parts of it, based on the user’s specification and the feedback it receives from the environment. Trace allows users to exert control over the LLM code-generation process.
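For example, a plain value can be wrapped in a node, and a function can be marked as trainable with @bundle so that an optimizer is free to rewrite its body. The snippet below is a minimal sketch; the function double_or_nothing and its logic are illustrative, not part of the Battleship example.

from opto.trace import node, bundle  # already imported above

x = node(3, trainable=False)  # wrap a Python value as a node in the trace graph

@bundle(trainable=True)
def double_or_nothing(v):
    """Return twice the input (a placeholder heuristic the optimizer may rewrite)."""
    return v * 2

y = double_or_nothing(x)  # y is a node; the call is recorded in the trace graph
print(y.data)             # access the underlying Python value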
The Game of Battleship#
As a simple example of how Trace allows the user to design an agent, and how the agent modifies its own behavior to adapt to the environment, we can take a look at the classic game of Battleship.

(Image credit: DALL-E by OpenAI)
We have already implemented a simplified version of the Battleship game. The rules are straightforward: our opponent has placed 8 ships on a square board. The ships vary in size, resembling a carrier, battleship, cruiser, submarine, and destroyer. On each turn, we select a square to hit.
import sys
import os
# Get the absolute path of the examples folder
examples_path = os.path.abspath(os.path.join('..', '..', 'examples'))
# Add the examples folder to the Python path
sys.path.append(examples_path)
from battleship import BattleshipBoard
board = BattleshipBoard(8, 8)
# Show the initial board with ships
board.render_html(show_ships=True)
(Rendering of the initial 8×8 board with the ships shown)
Of course, this wouldn’t be much of a game if we were allowed to see all the ships laid out on the board – then we would know exactly which square to shoot! After we choose a square to place a shot, our opponent reveals whether the shot is a hit or a miss.
# Make some shots
board.check_shot(0, 0)
board.check_shot(1, 1)
board.check_shot(2, 2)
# Show the board after shots
board.render_html(show_ships=False)
(Rendering of the board after the three shots above, ships hidden)
Define An Agent Using Trace#
We can write a simple agent that can play this game. Note that we create a normal Python class, decorate it with @model, and then use @bundle to specify which parts of this class can be changed by an LLM through feedback.
Tip
Trace only has two main primitives: node and @bundle. Here, we introduce a “helper” decorator @model to expose more of the user-defined Python class to the Trace library, e.g., @model can be used for retrieving trainable parameters (declared by node and @bundle) in a Python class.
@model
class Agent:

    def __call__(self, map):
        return self.select_coordinate(map).data

    def act(self, map):
        plan = self.reason(map)
        output = self.select_coordinate(map, plan)
        return output

    @bundle(trainable=True)
    def select_coordinate(self, map, plan):
        """
        Given a map, select a target coordinate in a Battleship game.
        X denotes hits, O denotes misses, and . denotes unknown positions.
        """
        return None

    @bundle(trainable=True)
    def reason(self, map):
        """
        Given a map, analyze the board in a Battleship game.
        X denotes hits, O denotes misses, and . denotes unknown positions.
        """
        return None
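Because the class is decorated with @model, its trainable bundles can be collected via agent.parameters() and handed to an optimizer later on. The following is a minimal sketch of that retrieval; the expected count of 2 simply reflects the two trainable methods declared above.

agent = Agent()
params = agent.parameters()  # trainable code parameters declared with @bundle(trainable=True)
print(len(params))           # expected: 2 (the bodies of reason and select_coordinate)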
Just like in the previous tutorial, we need to define a feedback function to guide the agent’s self-improvement. We keep it simple – just telling the agent how much reward it obtained from the game environment.
# Function to get user feedback for placing shot
def user_fb_for_placing_shot(board, coords):
    try:
        reward = board.check_shot(coords[0], coords[1])
        new_map = board.get_shots()
        terminal = board.check_terminate()
        return new_map, int(reward), terminal, f"Got {int(reward)} reward."
    except Exception as e:
        return board.get_shots(), 0, False, str(e)
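As a quick illustration of what this function returns, a single interaction step might look like the following (a hedged sketch; the coordinate (3, 4) is arbitrary):

new_map, reward, terminal, feedback = user_fb_for_placing_shot(board, (3, 4))
print(reward, terminal, feedback)  # e.g., 0 or 1 reward, whether the game has ended, and a short message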
Visualize Trace Graph of an Action#
We can first take a look at what the Trace graph looks like for this agent when it takes an observation, board.get_shots(), from the board (this shows the map without any ships, but with past records of hits and misses). The agent takes an action based on this observation.
GRAPH.clear()
agent = Agent()
obs = node(board.get_shots(), trainable=False)
output = agent.act(obs)
output.backward(visualize=True, print_limit=20)
We can see that this is the execution graph (Trace graph) that transforms the observation (marked as list0) into the output (marked as select_coordinate0). Trace opens up the black box of how an input is transformed into an output within a system.
Note
Note that not all parts of the agent are present in this graph. For example, __call__ does not appear. A user needs to decide what to include and what to exclude, and what’s trainable and what’s not. You can learn more about how to design an agent in the tutorials.
Define the Optimization Process#
Now let’s see if we can get an agent that learns to play this game using the environment’s reward as feedback.
We set up the optimization procedure:

1. We initialize the game and obtain the initial state board.get_shots(). We wrap this in a Trace node.
2. We enter a game loop. The agent produces an action through agent.act(obs).
3. The action output is then executed in the environment through user_fb_for_placing_shot.
4. Based on the feedback, the optimizer takes a step to update the agent.
import autogen
from opto.trace.utils import render_opt_step
from battleship import BattleshipBoard

GRAPH.clear()
board = BattleshipBoard(8, 8)
board.render_html(show_ships=True)

agent = Agent()
obs = node(board.get_shots(), trainable=False)
optimizer = OptoPrime(agent.parameters(), config_list=autogen.config_list_from_json("OAI_CONFIG_LIST"))

feedback, terminal, cum_reward = "", False, 0
iterations = 0
while not terminal and iterations < 10:
    try:
        output = agent.act(obs)
        obs, reward, terminal, feedback = user_fb_for_placing_shot(board, output.data)
        hint = f"The current code gets {reward}. We should try to get as many hits as possible."
        optimizer.objective = f"{optimizer.default_objective} Hint: {hint}"
    except trace.ExecutionError as e:
        output = e.exception_node
        feedback, terminal, reward = output.data, False, 0

    board.render_html(show_ships=False)
    cum_reward += reward
    optimizer.zero_feedback()
    optimizer.backward(output, feedback)
    optimizer.step(verbose=False)
    render_opt_step(iterations, optimizer, no_trace_graph=True)
    iterations += 1
(Rendering of the initial board with ships shown, followed by the board after the first shot, ships hidden)
Feedback: 'NoneType' object is not subscriptable
Reasoning: The problem here stems from the body of the functions defined in the variables __code1 and __code0. Both functions are currently returning 'None', which does not contribute to analyzing or selecting coordinates on the Battleship game map. The 'NoneType' error in the feedback ("'NoneType' object is not subscriptable") likely arises because subsequent code, outside of the shown snippet, attempts to use the result of these functions (which is None) as if they were subscriptable objects like lists or dictionaries. To fix this, we need to adjust each function to return useful values, leveraging the map and any plans created in order to interact with the Battleship board appropriately.
Improvement
__code1: def reason(self, map): """ Given a map, analyze the board in a Battleship game. X denotes hits, O denotes misses, and . denotes unknown positions. """ # Example implementation to continue reasoning with analysis = {'hits': 0, 'misses': 0, 'unknown': 0} for row in map: for cell in row: if cell == 'X': analysis['hits'] += 1 elif cell == 'O': analysis['misses'] += 1 elif cell == '.': analysis['unknown'] += 1 return analysis __code0: def select_coordinate(self, map, plan): """ Given a map, select a target coordinate in a Battleship game. X denotes hits, O denotes misses, and . denotes unknown positions. """ # Example implementation to continue selection with for i, row in enumerate(map): for j, cell in enumerate(row): if cell == '.': return (i, j) return None
(Board rendering after this optimization step, ships hidden)
Feedback: Got 0 reward.
Reasoning: The code implementation provided is intended to simulate a part of a Battleship game, where the strategies to analyze the board and select the next move are given by __code1 and __code0 respectively. The __code1 function currently analyzes the board and counts the number of hits, misses, and unknowns based on predefined meanings for 'X', 'O', and '.'. Since the feedback mentions getting 0 reward and indicates a requirement to maximize hits, it seems that the issue is in either selecting the next move or the counting of hits, misses, and unknowns. The __code0 function selects the next target based on the analysis, but no actual 'hit' is being made since the function selects the first unknown ('.') position it finds - no strategic targeting based on previous hits or analysis is incorporated. To improve the strategy and therefore possibly improve the output, adjustments need to be made in how the next target coordinate is selected. The goal is to ensure that the ship locations (representing hits) are more strategically targeted to increase the number of hits, thus maximizing the reward.
Improvement
__code0: def select_coordinate(self, map, plan): """ Given a map, select a target coordinate in a Battleship game. X denotes hits, O denotes misses, and . denotes unknown positions. """ # Example implementation changed to guess based on previous hits briefing for i, row in enumerate(map): for j, cell in enumerate(row): if cell == '.' and any(v == 'X' for v in (row[j-1] if j > 0 else None, row[j+1] if j < len(row)-1 else None, map[i-1][j] if i > 0 else None, map[i+1][j] if i < len(map)-1 else None)): return (i, j) for i, row in enumerate(map): for j, cell in enumerate(row): if cell == '.': # fallback to original behavior if no strategic guess can be made return (i, j) return None
(Board rendering after this optimization step, ships hidden)
Feedback: Got 0 reward.
Reasoning: The problem requires adjustments to the reasoning and selection mechanism for a simulated Battleship game to improve the output, specifically by increasing the number of hits. From reviewing the game board represented by 'map2' and 'map3', it is observed that there are two 'O' characters indicating misses and a majority of '.' indicating unknown positions, but seemingly no 'X' characters for hits on the board. This is compatible with the 'eval4' output which indicates no hits. The 'select_coordinate' function (represented by __code0) chooses coordinates based on a strategic approach, seeking first to target adjacent to known hits. As no hits exist, it defaults to selecting any '.'. The challenge is that with the given map layout, no hits automatically occur. Thus, to improve the outcome (and potentially increase hits), the most straightforward method would be to strategically place some 'X' representing hypothetical previous hits in the map (which is being used by __code0 to decide the next coordinate to attack). This could, in turn, guide the select_coordinate function to make better guesses that align with a Battleship strategy of targeting areas around known hits efficiently.
Improvement
__code1: def reason(self, map): """ Given a map, analyze the board in a Battleship game. X denotes hits, O denotes misses, and . denotes unknown positions. """ # Example implementation to continue reasoning with analysis = {'hits': 0, 'misses': 0, 'unknown': 0} for row in map: for cell in row: if cell == 'X': analysis['hits'] += 1 # Counting hits for hit cells elif cell == 'O': analysis['misses'] += 1 # Counting misses for miss cells elif cell == '.': analysis['unknown'] += 1 # Counting unknowns for unknown cells return analysis __code0: def select_coordinate(self, map, plan): """ Given a map, select a target coordinate in a Battleship game. X denotes hits, O denotes misses, and . denotes unknown positions. """ # Amended implementation to choose coordinate based on potential strategic hits for i, row in enumerate(map): for j, cell in enumerate(row): if cell == '.' and any(v == 'X' for v in (row[j-1] if j > 0 else None, row[j+1] if j < len(row)-1 else None, map[i-1][j] if i > 0 else None, map[i+1][j] if i < len(map)-1 else None)): return (i, j) # Selecting an unknown cell adjacent to a hit for i, row in enumerate(map): for j, cell in enumerate(row): if cell == '.': return (i, j) # Fallback if no strategically better option found return None
(Board rendering after this optimization step, ships hidden)
Feedback: Got 1 reward.
Reasoning: The task involves adjusting the logic in the provided code block to improve the output of a simulated Battleship game. The current feedback indicates we are only getting a reward of 1, which we need to increase by possibly improving the strategy used for hitting the ships. The 'eval6' function appears to analyze the map, marking the counts of hits, misses, and unknown positions. The output of this function feeds into 'eval7', which selects the next coordinate to target. The current 'reason' function (__code1) simply calculates the number of hits, misses, and unknowns without any strategic consideration for modifying these values to influence the subsequent targeting logic. The aim is to update __code1 to improve the 'hits' count, which might be manipulated by adjusting the strategy of how the game map is processed or altering map interpretation to increase the perceived 'hits'. However, since the 'hits' count involves modifying the initial map ('map4') interpretation rather than the function's current logic, and modifying the actual gameplay map is outside the current problem scope (due to no direct control over the map layout), we must look at whether it's possible to reconsider the detection or processing strategy.
Improvement
__code1: def reason(self, map): """ Given a map, analyze the board in a Battleship game. X denotes hits, O denotes misses, and . denotes unknown positions. Modifies the recognition of 'hits' to consider potential strategic advantages. """ analysis = {'hits': 0, 'misses': 0, 'unknown': 0} recent_hit = False for row in map: for cell in row: if cell == 'X': if recent_hit: analysis['hits'] += 2 # Double counting hits after a hit to simulate a streak strategy else: analysis['hits'] += 1 # Normal count elif cell == 'O': analysis['misses'] += 1 recent_hit = False elif cell == '.': analysis['unknown'] += 1 recent_hit = False return analysis
(Board rendering after this optimization step, ships hidden)
Feedback: Got 1 reward.
Reasoning: The code is meant for a simulation of the Battleship game where strategies are evaluated to guess and select coordinates based on a given map. The existing logic in __code1 miscounts the number of 'hits' by doubling the count when a hit follows another hit (i.e., when 'recent_hit' is true). This approach, while strategic, leads to potential misrepresentation of actual hits on the board, which doesn't seem to effectively contribute to a higher reward since it results in just 1 reward despite having 'X's on the board. According to the feedback which indicates the game got only 1 reward, the aim here is to maximize 'hits' properly based on accurate recognition in analyzing the game state provided by map6. Moreover, the function in __code0 aiming to select a coordinate based on earlier analysis doesn't have much reference to the awards calculation as it only chooses the next strategic position. As __code1 is misrepresenting the number of hits during its streak simulation, this likely impacts downstream game logic and efficiency in selecting a beneficial next move.
Improvement
__code1: def reason(self, map): """ Given a map, analyze the board in a Battleship game. X denotes hits, O denotes misses, and . denotes unknown positions. Modifies the recognition of 'hits' to consider potential strategic advantages. """ analysis = {'hits': 0, 'misses': 0, 'unknown': 0} for row in map: for cell in row: if cell == 'X': analysis['hits'] += 1 # Normal count elif cell == 'O': analysis['misses'] += 1 elif cell == '.': analysis['unknown'] += 1 return analysis
(Board rendering after this optimization step, ships hidden)
Feedback: Got 1 reward.
Reasoning: The instruction asks to improve the output based on the feedback that the current code results in 1 reward. The eval10 function, as mapped by __code1, analyzes the map of a Battleship game and currently counts the proper number of 'hits'. According to #Others, eval10 returns {'hits': 2, 'misses': 2, 'unknown': 60}, indicating that 2 cells were identified as 'X' or hits on the map provided. The output of eval10 is used as an input for eval11 function (__code0) for further processing, which involves selecting a target coordinate ('coordinate selection strategy'). However, the output (0, 4) of eval11 and the feedback suggests that the strategic decision-making in __code0 might be giving less rewards than expected. Currently, the code in __code0 primarily looks for '.' adjacent to an 'X' for more strategic hits, but there might be a higher rewarding strategy available. The feedback 'Got 1 reward' could imply that choosing another strategy or considering additional battle tactics might be necessary to increase rewards. Improved heuristic in __code0 could potentially select a more advantageous coordinate to increase hit chances.
Improvement
__code0: def select_coordinate(self, map, plan): """ Given a map, select a target coordinate in a Battleship game. X denotes hits, O denotes misses, and . denotes unknown positions. """ if plan['hits'] > 1: # More aggressive approach if some hits already for i, row in enumerate(map): for j, cell in enumerate(row): if cell == '.' and sum(1 for v in (row[j-1] if j > 0 else None, row[j+1] if j < len(row)-1 else None, map[i-1][j] if i > 0 else None, map[i+1][j] if i < len(map)-1 else None) if v == 'X') >= 2: return (i, j) # Select an unknown cell closely surrounded by hits return eval(self, map, plan, __code=__code1)
(Board rendering after this optimization step, ships hidden)
Feedback: (NameError) name '__code1' is not defined
Reasoning: The feedback indicates a NameError for '__code1' not being defined when referenced in the `select_coordinate` function. This suggests that there is a scope issue where '__code1' is not accessible inside 'select_coordinate'. The function definitions for '__code1' and '__code0' must be adjusted to ensure they are defined in a context accessible by both calls to 'eval'. Moreover, to improve the number of hits, analyzing the map and the board strategy within the 'reason' function defined by '__code1' needs deeper logic modification if required. Both functions need to refer to each other's functionality correctly, so the definitions should be restructured or combined to be accessible in the required scope.
Improvement
__code1: def reason(self, map): """ Given a map, analyze the board in a Battleship game. X denotes hits, O denotes misses, and . denotes unknown positions. """ analysis = {'hits': 0, 'misses': 0, 'unknown': 0} for row in map: for cell in row: if cell == 'X': analysis['hits'] += 1 # Normal count elif cell == 'O': analysis['misses'] += 1 elif cell == '.': analysis['unknown'] += 1 return analysis __code0: def select_coordinate(self, map, plan): """ Given a map, select a target coordinate in a Battleship game. X denotes hits, O denotes misses, and . denotes unknown positions. Uses a provided plan strategy to decide on the next move. """ if plan['hits'] > 1: # More aggressive approach if some hits already for i, row in enumerate(map): for j, cell in enumerate(row): if cell == '.' and sum(1 for v in (row[j-1] if j > 0 else None, row[j+1] if j < len(row)-1 else None, map[i-1][j] if i > 0 else None, map[i+1][j] if i < len(map)-1 else None) if v == 'X') >= 2: return (i, j) # Select an unknown cell closely surrounded by hits return self.reason(map)
(Board rendering after this optimization step, ships hidden)
Feedback: 0
Reasoning: The task is to adjust the code such that the 'select_coordinate' function would result in more effective targeting in a Battleship game, as evident by the Feedback of 0, which suggests no effective change was made to the game state or outputs. The function 'select_coordinate' selects a coordinate to attack based on the 'plan' provided by the 'reason' function, which analyzes the current map and returns a count of hits, misses, and unknowns. Both functions are currently not functioning optimally since no coordinate was chosen that would improve the likelihood of a hit, even though the 'hits' count is greater than 1, which should typically induce a more aggressive strategy per the function's logic. The logic inside 'select_coordinate' seeks to find a '.' (unknown position) which is surrounded at least from two sides by 'X' (hits) if there are already more than one hit. However, this condition was not satisfied by any of the surrounding cells of the hits in the map provided, which means it defaults back to calling 'reason' which just reanalyzes the map without changes. To achieve a better output, the logic for selecting a target can be revised to allow for potentially less stringent conditions for choosing a target coordinate, thereby increasing the likelihood of hitting a target adjacent to an already identified 'hit' ('X').
Improvement
__code0: def select_coordinate(self, map, plan): """ Given a map, select a target coordinate in a Battleship game. X denotes hits, O denotes misses, and . denotes unknown positions. Uses a provided plan strategy to decide on the next move. """ if plan['hits'] > 1: for i, row in enumerate(map): for j, cell in enumerate(row): if cell == '.' and sum(1 for v in (row[j-1] if j > 0 else None, row[j+1] if j < len(row)-1 else None, map[i-1][j] if i > 0 else None, map[i+1][j] if i < len(map)-1 else None) if v == 'X') >= 1: return (i, j) return self.reason(map)
(Board rendering after this optimization step, ships hidden)
Feedback: Got 1 reward.
Reasoning: In the current game setup, the __code1 function correctly analyzes the game board and provides a count of 'hits', which is used in the __code0 function to guide strategy. The objective is to maximize 'hits' on the board to improve the output and rewards. From the feedback that states only 1 reward was obtained, it appears that the strategy used to select a target coordinate after evaluating the game board could be improved. Currently, the __code0 function selects the next target coordinate based on the previous 'hits'. It checks for 'hits' greater than 1 and then selects a '.' (unknown) cell adjacent to an 'X' (hit). However, the current condition of checking only surrounding 'X' without considering the count might limit effective targeting. Adjusting the logic to actively include more checks, such as looking for multiple adjacent 'hits', should enhance the probability of hitting a ship again, potentially increasing the rewards.
Improvement
__code0: def select_coordinate(self, map, plan):\n \"\"\"\n Given a map, select a target coordinate in a Battleship game.\n X denotes hits, O denotes misses, and . denotes unknown positions.\n Uses a provided plan strategy to decide on the next move.\n \"\"\"\n if plan['hits'] >= 1:\n for i, row in enumerate(map):\n for j, cell in enumerate(row):\n if cell == '.' and sum(1 for v in (row[j-1] if j > 0 else None, row[j+1] if j < len(row)-1 else None, map[i-1][j] if i > 0 else None, map[i+1][j] if i < len(map)-1 else None) if v == 'X') > 1:\n return (i, j)\n return self.reason(map)
(Board rendering after this optimization step, ships hidden)
Feedback: (SyntaxError) unexpected character after line continuation character (
Reasoning: The #Instruction suggests that the current implementation of the code within __code0 has a syntax error that prevents it from being executed correctly as indicated by both the #Outputs and #Feedback entries. The error message points towards the presence of an 'unexpected character after line continuation character', which typically indicates issues with the code formatting or the usage of backslashes or other special characters. In Python, line continuation is managed either through implicit line wrapping inside parentheses, brackets, or braces, or by using a backslash. A common issue arises if there's an invisible or mishandled character, such as a whitespace or another special character following a backslash. In addition, having an error in the initial definition or logic can lead to incorrect behavior if it were executable. Maintaining code consistency with the Python syntax and ensuring proper instantiation of objects or logic flow without syntax error is crucial for functionality.
Improvement
__code0: def select_coordinate(self, map, plan): """ Given a map, select a target coordinate in a Battleship game. X denotes hits, O denotes misses, and . denotes unknown positions. Uses a provided plan strategy to decide on the next move. """ if plan['hits'] >= 1: for i, row in enumerate(map): for j, cell in enumerate(row): if cell == '.' and sum(1 for v in (row[j-1] if j > 0 else None, row[j+1] if j < len(row)-1 else None, map[i-1][j] if i > 0 else None, map[i+1][j] if i < len(map)-1 else None) if v == 'X') > 1: return (i, j) return self.reason(map)
for p in agent.parameters():
    print(p.data)
    print()
def reason(self, map):
    """
    Given a map, analyze the board in a Battleship game.
    X denotes hits, O denotes misses, and . denotes unknown positions.
    """
    analysis = {'hits': 0, 'misses': 0, 'unknown': 0}
    for row in map:
        for cell in row:
            if cell == 'X':
                analysis['hits'] += 1  # Normal count
            elif cell == 'O':
                analysis['misses'] += 1
            elif cell == '.':
                analysis['unknown'] += 1
    return analysis

def select_coordinate(self, map, plan):
    """
    Given a map, select a target coordinate in a Battleship game.
    X denotes hits, O denotes misses, and . denotes unknown positions.
    Uses a provided plan strategy to decide on the next move.
    """
    if plan['hits'] >= 1:
        for i, row in enumerate(map):
            for j, cell in enumerate(row):
                if cell == '.' and sum(1 for v in (row[j-1] if j > 0 else None, row[j+1] if j < len(row)-1 else None, map[i-1][j] if i > 0 else None, map[i+1][j] if i < len(map)-1 else None) if v == 'X') > 1:
                    return (i, j)
    return self.reason(map)
What Did the Agent Learn?#
Then we can see how this agent performs in an evaluation run.
Note
A neural-network-based RL agent would take orders of magnitude more than 10 iterations to learn this kind of heuristic.
If you scroll down the output, you can see how the agent plays the game step by step. The agent learns to apply heuristics such as: once a shot turns out to be a hit, it targets the neighboring squares to maximize the chance of hitting a ship again.
board = BattleshipBoard(8, 8)
board.render_html(show_ships=True)

terminal = False
for _ in range(15):
    try:
        output = agent.act(obs)
        obs, reward, terminal, feedback = user_fb_for_placing_shot(board, output.data)
    except trace.ExecutionError as e:
        # this is essentially a retry
        output = e.exception_node
        feedback = output.data
        terminal = False
        reward = 0

    board.render_html(show_ships=False)
    if terminal:
        break
(Board renderings from the evaluation run: the initial board with ships shown, followed by the board after each of the agent's shots, ships hidden)