
Inline prompts

The prompt and runPrompt functions let you build an inner LLM invocation from within a script. Both return the output of the inner prompt.

prompt is syntactic sugar for runPrompt that takes a template string literal as the prompt text.

const { text } = await prompt`Write a short poem.`

You can pass runPrompt a function that takes a single argument, _, the inner prompt builder. It exposes the same helpers as the top level, such as $ and def, but applies them to the inner prompt.

const { text } = await runPrompt((_) => {
    // use def, $, and other helpers on the inner prompt
    _.def("FILE", file)
    _.$`Summarize the FILE. Be concise.`
})

You can also skip the builder function and pass the prompt text directly.

const { text } = await runPrompt(
    `Select all the image files in ${env.files.map((f) => f.filename)}`
)

Options

Both prompt and runPrompt accept options similar to those of the script function.

const { text } = await prompt`Write a short poem.`.options({ temperature: 1.5 })
const { text } = await runPrompt((_) => { ... }, { temperature: 1.5 })
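
For instance, an inner prompt can run on a different model and cache its result. A minimal sketch combining options that all appear elsewhere on this page (model, cache, temperature):

const { text } = await runPrompt(
    (_) => {
        _.$`Write a haiku about autumn.`
    },
    // route this inner prompt to a different model and cache its output
    { model: "ollama:phi3", cache: "haiku", temperature: 0.2 }
)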

Tools

You can use inner prompts in tools.

defTool(
    "poet",
    "Writes a 4-line poem about a given theme",
    {
        theme: {
            type: "string",
            description: "Theme of the poem",
        },
    },
    ({ theme }) => prompt`Write a ${4} line ${"poem"} about ${theme}`
)
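
Once registered, the tool is available to the main prompt, which can ask the model to invoke it. A hypothetical usage (the phrasing of the request is up to you):

$`Write a poem about the sea using the poet tool.`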

Concurrency

prompt and runPrompt are async functions, so you can launch several invocations and await them together to run multiple prompts concurrently.

await Promise.all(env.files.map((file) => prompt`Summarize the ${file}`))

Internally, GenAIScript applies a default concurrency limit of 8 requests per model. You can change this limit with the modelConcurrency option.

script({
    ...,
    modelConcurrency: {
        "openai:gpt-4o": 20,
    },
})

If you need more control over the concurrency queue, try libraries such as p-all or p-limit.
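
A minimal sketch using p-limit, assuming the package is installed in your project and your script can import ESM modules:

import pLimit from "p-limit"

// allow at most 4 inner prompts in flight at any time
const limit = pLimit(4)
const results = await Promise.all(
    env.files.map((file) => limit(() => prompt`Summarize the ${file}`))
)

Each entry in results exposes the generated output, for example results[0].text.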

Example: Summary of file summaries using gpt-3.5

The snippet below uses gpt-3.5 to summarize each file individually before adding the summaries to the main prompt.

script({
    title: "summary of summary - gpt35",
    model: "small",
    files: ["src/rag/*"],
    tests: {
        files: ["src/rag/*"],
        keywords: ["markdown", "lorem", "microsoft"],
    },
})
// map each file to its summary
for (const file of env.files) {
    const { text } = await runPrompt(
        (_) => {
            _.def("FILE", file)
            _.$`Summarize FILE. Be concise.`
        },
        { model: "gpt-3.5-turbo", cache: "summary_gpt35" }
    )
    // save the summary in the main prompt
    def("FILE", { filename: file.filename, content: text })
}
// reduce all summaries to a single summary
$`Summarize all the FILE.`

Example: Summary of file summaries using Phi-3

The snippet below uses Phi-3 running on Ollama to summarize each file individually before adding the summaries to the main prompt.

script({
    model: "small",
    title: "summary of summary - phi3",
    files: ["src/rag/*.md"],
    tests: {
        files: ["src/rag/*.md"],
        keywords: ["markdown", "lorem", "microsoft"],
    },
})
// summarize each file individually
for (const file of env.files) {
    const { text } = await runPrompt(
        (_) => {
            _.def("FILE", file)
            _.$`Extract keywords for the contents of FILE.`
        },
        { model: "ollama:phi3", cache: "summary_phi3" }
    )
    def("FILE", { ...file, content: text })
}
// use the per-file summaries in the final prompt
$`Extract keywords for the contents of FILE.`