
Inline prompts

The runPrompt function lets you build an inner LLM invocation from within a script. It returns the output of the inner prompt.

const { text } = await runPrompt(_ => {
    // use def, $ and other helpers
    _.def("FILE", file)
    _.$`Summarize the FILE. Be concise.`
})
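
runPrompt also accepts an options object as its second argument; the examples below use it to pick a model and cache results. A minimal sketch (the cacheName value here is illustrative):

const { text } = await runPrompt(
    _ => {
        _.def("FILE", file)
        _.$`Summarize the FILE. Be concise.`
    },
    // choose the model and cache results under a name
    { model: "gpt-3.5-turbo", cacheName: "summaries" }
)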

You can also skip the builder function and pass the prompt text directly:

const { text } = await runPrompt(`Select all the image files in ${env.files.map(f => f.filename)}`)

Limitations

  • Nested functions are not supported in the inner prompt.

Example: Summary of file summaries using gpt-3.5

The snippet below uses gpt-3.5-turbo to summarize each file individually before adding the summaries to the main prompt.

script({
    title: "summary of summary - gp35",
    tests: {
        files: ["src/rag/*"],
        keywords: ["markdown", "lorem", "microsoft"],
    },
})
// map each file to its summary
for (const file of env.files) {
    const { text } = await runPrompt(
        (_) => {
            _.def("FILE", file)
            _.$`Summarize FILE. Be concise.`
        },
        { model: "gpt-3.5-turbo", cacheName: "summary_gpt35" }
    )
    // save the summary in the main prompt
    def("FILE", { filename: file.filename, content: text })
}
// reduce all summaries to a single summary
$`Summarize all the FILE.`
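
The map-then-reduce pattern above can be sketched outside the GenAIScript runtime. In this hypothetical stand-alone version, fakeSummarize stands in for the inner runPrompt call and simply truncates the text; a real script would call the LLM instead:

```javascript
// Hypothetical stand-in for the inner runPrompt call: "summarizes" a file
// by keeping its first five words.
async function fakeSummarize(content) {
    return content.split(/\s+/).slice(0, 5).join(" ")
}

// map: summarize each file individually, collecting the short summaries
// that the final ("reduce") prompt would see instead of the full files
async function summarizeAll(files) {
    return Promise.all(
        files.map(async (f) => ({
            filename: f.filename,
            content: await fakeSummarize(f.content),
        }))
    )
}

const files = [
    { filename: "a.md", content: "lorem ipsum dolor sit amet consectetur adipiscing" },
]
summarizeAll(files).then((summaries) => console.log(summaries[0].content))
// prints "lorem ipsum dolor sit amet"
```

The final prompt then only pays tokens for the summaries, not the original files, which is the point of the two-stage design.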

Example: Summary of file summaries using Phi-3

The snippet below uses Phi-3 through Ollama to summarize each file individually before adding the summaries to the main prompt.

script({
    title: "summary of summary - phi3",
    tests: {
        files: ["src/rag/*"],
        keywords: ["markdown", "lorem", "microsoft"],
    },
})
// summarize each file individually
for (const file of env.files) {
    const { text } = await runPrompt(
        (_) => {
            _.def("FILE", file)
            _.$`Summarize the FILE and respond in plain text with one paragraph. Be concise. Ensure that the summary is consistent with the content of FILE.`
        },
        { model: "ollama:phi3", cacheName: "summary_phi3" }
    )
    def("FILE", { ...file, content: text })
}
// use summary
$`Summarize all the FILE.`