
Metadata

Prompts use the script({ ... }) function call to configure the title and other user interface elements.

The call to script is optional and can be omitted if you don't need to configure the prompt. However, the script argument must be a valid JSON5 literal, as the script is parsed, not executed, when mining metadata.
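
For example, a plain literal is mined correctly, while a computed value would not be (an illustrative sketch, not from the reference):

script({
    title: "Shorten", // OK: static JSON5 literal
})

// not mined: this expression would require executing the script
// script({ title: "Shorten" + " text" })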

The title, description and group are (optionally) used in the UI to display the prompt.

script({
    title: "Shorten", // displayed in UI
    // also displayed but grayed out:
    description:
        "A prompt that shrinks the size of text without losing meaning",
    group: "shorten", // see Inline prompts later
})

You can override the system prompts included with the script. The default set of system prompts is inferred dynamically from the script content.

script({
    ...,
    system: ["system.files"],
})

You can specify the LLM model identifier in the script. The IntelliSense provided by genaiscript.d.ts will assist in discovering the list of supported models. Use the large and small aliases to select default models regardless of the configuration.

script({
    ...,
    model: "openai:gpt-4o",
})
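
Since the concrete identifier depends on your configuration, a portable script can rely on an alias instead, for example:

script({
    ...,
    model: "large", // resolved to the configured default large model
})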

You can specify the maximum number of completion tokens in the script. The default is unspecified.

script({
    ...,
    maxTokens: 2000,
})

The maxToolCalls option limits the number of function/tool calls allowed during a generation. This is useful to prevent infinite loops.

script({
    ...,
    maxToolCalls: 100,
})

You can specify the LLM temperature in the script, between 0 and 2. The default is 0.8.

script({
    ...,
    temperature: 0.8,
})

You can specify the LLM top_p in the script. The default is unspecified.

script({
    ...,
    top_p: 0.5,
})

You can specify the LLM seed in the script, for models that support it. The default is unspecified.

script({
    ...,
    seed: 12345678,
})

You can specify a set of metadata key-value pairs in the script. This enables stored completions in OpenAI and Azure OpenAI, which are used for distillation and evaluation purposes.

script({
    ...,
    metadata: {
        name: "my_script",
    },
})

You can configure retry behavior for failed LLM requests to improve reliability:

script({
    ...,
    retries: 3, // number of retry attempts (default: 2)
    retryDelay: 1000, // initial delay in ms between retries (default: 1000)
    maxDelay: 5000, // maximum delay in ms with exponential backoff (default: 10000)
    maxRetryAfter: 10000, // maximum time in ms to respect retry-after headers (default: 10000)
    retryOn: [429, 500, 502, 503, 504], // HTTP status codes to retry on (default: [429, 500, 502, 503, 504])
})

These retry options help handle:

  • Rate limiting (HTTP 429): Automatically waits for rate limit windows
  • Server errors (HTTP 5xx): Retries on temporary server issues
  • Network failures: Uses exponential backoff to avoid overwhelming services

Retry options can also be passed to runPrompt() calls to override script-level settings:

const { text } = await runPrompt(
    (_) => _.$`Summarize this text.`,
    {
        model: "small",
        retries: 2, // override script retry settings
        retryDelay: 500, // faster initial retry
        maxDelay: 3000, // lower maximum delay
    }
)

The unlisted: true option hides the script from user-facing lists, as shown below. Templates named system.* are automatically unlisted.
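
For example (a minimal sketch based on the description above):

script({
    ...,
    unlisted: true, // hide this script from user-facing lists
})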

See genaiscript.d.ts in the sources for details.

You can consult the metadata of the top-level script in the env.meta object.

const { model } = env.meta
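
Other fields passed to script() can be read the same way (a sketch, assuming env.meta mirrors the script metadata):

const { title, model } = env.meta
console.log(`running ${title} with ${model}`)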

Use the host.resolveModel function to resolve a model name or alias to its provider and model name.

const info = await host.resolveModel("large")
console.log(info)
// {
//   "provider": "openai",
//   "model": "gpt-4o"
// }
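
The resolved information can then drive provider-specific logic (an illustrative sketch; branching on the provider is an assumption, not documented behavior):

const { provider, model } = await host.resolveModel("large")
if (provider === "openai")
    console.log(`using OpenAI model ${model}`)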