Not all LLM models support tools. In those cases, GenAIScript supports a fallback mechanism that implements tool calls through system prompts (see Ad Hoc Tools).
defTool
defTool is used to define a tool that can be called by the LLM.
It takes a JSON schema to define the input and expects a string output.
The LLM decides to call this tool on its own!
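For instance, a weather tool can be declared as follows. The handler below is a stub that returns canned data; a real tool would call a weather service.

```js
defTool(
    "current_weather",
    "get the current weather",
    {
        type: "object",
        properties: {
            location: {
                type: "string",
                description: "The city and state, e.g. Brussels, Belgium",
            },
        },
        required: ["location"],
    },
    (args) => {
        const { location } = args
        // stub implementation: replace with a real weather service call
        if (location === "Brussels") return "sunny"
        else return "variable"
    }
)
```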
In the example above, we define a tool called current_weather
that takes a location as input and returns the weather.
Weather tool example
This example uses the current_weather tool to get the weather for Brussels.
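A full script along these lines might look like the sketch below; the title and prompt wording are illustrative, and the tool handler is the same canned stub.

```js
script({
    title: "weather",
})

defTool(
    "current_weather",
    "get the current weather",
    {
        type: "object",
        properties: {
            location: { type: "string", description: "The city, e.g. Brussels" },
        },
        required: ["location"],
    },
    ({ location }) => (location === "Brussels" ? "sunny" : "variable")
)

$`What is the current weather in Brussels?`
```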
Some LLM models do not have built-in tool support.
For those models, it is possible to enable tool support through system prompts. The performance may be lower than with built-in tools, but tools remain usable.
The tool support is implemented in system.tool_calls
and “teaches” the LLM how to call tools. When this mode is enabled, you will see
the tool call tokens emitted by the LLM in its responses.
GenAIScript maintains a list of well-known models that do not support
tools, so the fallback is enabled automatically for those models.
To enable this mode, you can either
add the fallbackTools option to the script
or add the --fallback-tools flag to the CLI.
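A script-level sketch (the title is illustrative):

```js
script({
    title: "answer questions",
    fallbackTools: true,
})
```

From the CLI, the equivalent is passing `--fallback-tools` to the `run` command.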
Packaging as System scripts
To pick and choose which tools to include in a script,
you can group them in system scripts. For example,
the current_weather tool can be included in the system.current_weather.genai.mjs script,
then referenced by its id in the tools field.
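A sketch of such a system script, followed by a script that consumes it (titles and handler logic are illustrative):

```js
// system.current_weather.genai.mjs
system({
    title: "Get the current weather",
})

defTool(
    "current_weather",
    "get the current weather",
    {
        type: "object",
        properties: {
            location: { type: "string" },
        },
        required: ["location"],
    },
    ({ location }) => (location === "Brussels" ? "sunny" : "variable")
)
```

```js
// any script can now pull in the packaged tool
script({
    title: "weather",
    tools: ["current_weather"],
})
```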
Example
Let’s illustrate how tools come together with a question answering script.
In the script below, we add the system.retrieval_web_search which registers the retrieval_web_search tool. This tool
will call into retrieval.webSearch as needed.
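A minimal version of such a script might look like this (the title and prompt wording are illustrative):

```js
script({
    title: "answer questions",
    system: ["system.retrieval_web_search"],
})

def("FILES", env.files)

$`Answer the questions in FILES using a web search to gather information.`
```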
We can then apply this script to the questions.md file below.
After the first request, the LLM requests a retrieval_web_search call for each question.
The web search answers are then added to the LLM message history and the request is made again.
The second request yields the final result, which includes the web search results.
Builtin tools
fs_ask_file: Runs an LLM query over the content of a file. Use this tool to extract information from a file.
fs_diff_files: Computes a diff between two different files. Use git diff instead to compare versions of a file.
fs_find_files: Finds files matching a glob pattern. Use pattern to specify a regular expression to search for in the file content. Be careful about asking for too many files.
fs_read_file: Reads a file as text from the file system. Returns undefined if the file does not exist.
md_find_files: Gets the file structure of the documentation markdown/MDX files. Returns filename, title, and description for each match. Use pattern to specify a regular expression to search for in the file content.
meta_prompt: Tool that applies OpenAI's meta prompt guidelines to a user prompt. Modified from https://platform.openai.com/docs/guides/prompt-generation?context=text-out.
meta_schema: Generates a valid JSON schema for the described JSON. Source: https://platform.openai.com/docs/guides/prompt-generation?context=structured-output-schema.
node_test: Builds and tests the current project using `npm test`.
python_code_interpreter_run: Executes Python 3.12 code for data analysis tasks in a Docker container. The process output is returned. Do not generate visualizations. The only packages available are numpy===2.1.3, pandas===2.2.3, scipy===1.14.1, matplotlib===3.9.2. There is NO network connectivity. Do not attempt to install other packages or make web requests. You must copy all the necessary files or pass all the data because the Python code runs in a separate container.
python_code_interpreter_copy_files_to_container: Copies files from the workspace file system to the container file system. NO absolute paths. Returns the path of each file copied in the Python container.