LLM Agents
GenAIScript defines an agent as a tool that runs an inline prompt to accomplish a task. The agent LLM is typically augmented with additional tools.
In this blog post, we’ll walk through building a user interaction agent that lets the LLM ask questions to the user. Let’s dive into how to create this “Agent that can ask questions to the user.”
You can find the full script on GitHub right here.
Metadata
The script is written in JavaScript. It starts by declaring the metadata to make the script available as a system script, which can be reused in other scripts.
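A minimal sketch of that declaration looks like this (the title text is illustrative, based on the agent’s purpose):

```js
// Declare this file as a system script so it can be reused from other scripts.
system({
    title: "Agent that can ask questions to the user.",
})
```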
This line sets up the title of our system script, making it clear that it’s intended to interact with the user by asking questions.
title and description
The defAgent function defines the behavior of our agent. It takes an agent identifier and a description. These two are quite important, as they help the “host” LLM decide when to use this agent.
GenAIScript will automatically append a description of all the tools used by the agent prompt so you don’t have to worry about that part in the description.
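Putting these pieces together, the agent definition looks roughly like the sketch below. The identifier, description, prompt text, and tool id are illustrative assumptions; the full script on GitHub is the reference.

```js
defAgent(
    // agent identifier: the host LLM sees the agent as a tool derived from this id
    "user_input",
    // description: helps the host LLM decide when to delegate to this agent
    "Ask the user a question, a confirmation, or a selection, and report the answer.",
    // prompt instructions for the agent LLM call
    `Ask the user the question in QUERY using the user input tools.
    - Do not answer the question yourself; let the user answer.
    - Return the user's answer unmodified.`,
    {
        // model options: the tools the agent LLM is allowed to call
        tools: ["user_input"],
    }
)
```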
prompt
The third argument is a string or a function to craft prompt instructions for the agent LLM call. The agent implementation already contains generic prompting to make the prompt behave like an agent, but you can add more to specify a role, tone, and dos and don’ts.
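If a string isn’t enough, the prompt can also be written as a function. A sketch, assuming the callback receives the usual prompt-building context with the $ template helper:

```js
defAgent(
    "user_input",
    "Ask the user a question and report the answer.",
    (ctx) => {
        // craft the agent prompt programmatically: role, tone, dos and don'ts
        ctx.$`You are a concise, polite assistant.
        Ask the user the question in QUERY.
        Do not interpret the answer; return it unmodified.`
    },
    { tools: ["user_input"] }
)
```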
model configuration
The last argument is a set of model options, similar to runPrompt, to configure the LLM call made by the agent. In particular, this is where you list the tools that the agent can use.
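For instance, assuming the options mirror runPrompt’s, you could also pin the agent to a specific model alias (the values below are illustrative):

```js
// Model options for the agent LLM call (shape assumed to mirror runPrompt's options).
const agentOptions = {
    model: "small",        // illustrative: use a smaller, faster model alias for the agent
    tools: ["user_input"], // the interactive input tools the agent may call
}

defAgent("user_input", "Ask the user questions.", "Ask the user the question in QUERY.", agentOptions)
```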
How to use the agent
The agent is used like any other tool by referencing it in the script
options.
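Concretely, a script that wants to delegate questions to the user could declare it like this (assuming the agent is exposed as the agent_user_input tool; check the system script for the exact id):

```js
script({
    title: "Ask my user",
    // reference the user input agent like any other tool
    tools: ["agent_user_input"],
})
```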
Let’s try it!
Let’s try the agent with a prompt along these lines (a reconstruction for illustration, not the exact original):
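```js
// Illustrative prompt (the exact wording of the original run may differ);
// it assumes the script options shown above that enable the agent tool.
$`Imagine a funny question and ask it to the user.
Based on the user's answer, ask a follow-up multiple-choice question.
Finally, ask the user to confirm that their selection is correct.`
```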
and let’s look at the results…
✔ What would be the most unexpected thing to find inside a refrigerator? toaster
✔ Based on your answer, which of the following would also be unexpected to find inside a refrigerator? A television
✔ Is your selection of ‘A television’ the correct unexpected item to find inside a refrigerator? yes