Inline prompts
The `prompt` or `runPrompt` function lets you build an inner LLM invocation and returns the output of that prompt.

`prompt` is syntactic sugar for `runPrompt` that takes a template string literal as the prompt text.

You can pass a function to `runPrompt` that takes a single argument `_`, the prompt builder. The builder exposes the same helpers, such as `$` and `def`, but applies them to the inner prompt.

You can also skip the builder function and pass the prompt text directly.
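The three call styles can be sketched as follows (these snippets assume the GenAIScript runtime, which provides `runPrompt`, `prompt`, and `env` as globals; the result shape is hedged to its `text` property):

```js
// builder function: `_` scopes $ and def to the inner prompt
const { text } = await runPrompt((_) => {
    _.def("FILE", env.files[0])
    _.$`Summarize FILE in one sentence.`
})

// template-literal sugar
const poem = await prompt`Write a haiku about the sea.`

// shortcut: pass the prompt text directly
const res = await runPrompt("Write a haiku about the sea.")
```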
Options
Both `prompt` and `runPrompt` support options similar to those of the `script` function.
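For example, an inner prompt can select its own model and sampling settings (a sketch assuming the GenAIScript runtime; the model id and option values below are placeholders):

```js
const { text } = await runPrompt(
    (_) => {
        _.$`Say hello in French.`
    },
    {
        model: "openai:gpt-4o-mini", // placeholder model id
        temperature: 0.2, // sampling option, as in script()
        label: "hello", // label shown in traces
    }
)
```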
Tools
You can use inner prompts in tools.
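For instance, a tool callback can launch its own inner prompt and return the generated text (a sketch assuming the GenAIScript `defTool` helper; the tool name and parameter schema shape are illustrative):

```js
defTool(
    "poet",
    "Writes a short poem about a topic",
    { topic: "the poem topic" }, // illustrative parameter schema
    async ({ topic }) => {
        // inner prompt invoked from inside the tool callback
        const res = await prompt`Write a short poem about ${topic}.`
        return res.text
    }
)
```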
Concurrency
`prompt` and `runPrompt` are async functions that can be called in a loop to run multiple prompts concurrently.

Internally, GenAIScript applies a concurrency limit of 8 requests per model by default. You can change this limit with the `modelConcurrency` option.
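A typical pattern is to fan out one inner prompt per file with `Promise.all`, raising the per-model limit if needed (a sketch assuming the GenAIScript runtime; the model id and limit value are placeholders):

```js
script({
    // raise the default per-model limit of 8 (placeholder values)
    modelConcurrency: { "openai:gpt-4o-mini": 20 },
})

const results = await Promise.all(
    env.files.map((file) =>
        runPrompt((_) => {
            _.def("FILE", file)
            _.$`Summarize FILE.`
        })
    )
)
```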
If you need more control over concurrent queues, you can try the `p-all`, `p-limit`, or similar libraries.
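The idea behind those libraries can be shown without any dependency. The following is a minimal, self-contained `p-limit`-style limiter; `pLimit` and `fakePrompt` are illustrative names, not part of the GenAIScript API, and `fakePrompt` merely stands in for a `runPrompt` call:

```javascript
// Minimal p-limit-style limiter: at most `max` tasks run at once.
function pLimit(max) {
    let active = 0
    const queue = []
    const next = () => {
        if (active >= max || queue.length === 0) return
        active++
        const { fn, resolve, reject } = queue.shift()
        fn()
            .then(resolve, reject)
            .finally(() => {
                active--
                next() // start the next queued task, if any
            })
    }
    return (fn) =>
        new Promise((resolve, reject) => {
            queue.push({ fn, resolve, reject })
            next()
        })
}

// Track the peak number of in-flight tasks to show the cap holds.
let inFlight = 0
let peak = 0
const fakePrompt = async (i) => {
    inFlight++
    peak = Math.max(peak, inFlight)
    await new Promise((r) => setTimeout(r, 10))
    inFlight--
    return `summary ${i}`
}

const limit = pLimit(3)
const tasks = Array.from({ length: 10 }, (_, i) => limit(() => fakePrompt(i)))
const done = Promise.all(tasks)
done.then((results) => console.log(results.length, peak)) // prints: 10 3
```

`Promise.all` preserves input order, so `results[0]` is the summary of the first task even though completion order may differ.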
Example: Summary of file summaries using Phi-3
The snippet below uses Phi-3 through Ollama to summarize files individually before adding them to the main prompt.
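A hedged reconstruction of that pattern (the title, file glob, and prompt wording are assumptions; `ollama:phi3` selects Phi-3 served by a local Ollama instance):

```js
script({
    title: "summary of file summaries",
    files: "src/**",
})

// summarize each file individually with the small local model
for (const file of env.files) {
    const { text } = await runPrompt(
        (_) => {
            _.def("FILE", file)
            _.$`Summarize FILE. Be concise.`
        },
        { model: "ollama:phi3" }
    )
    // add the summary, not the full file, to the main prompt
    def("FILE", { ...file, content: text })
}

$`Summarize FILE.`
```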