
Response Priming

It is possible to provide the start of the LLM response (assistant message) in the script. This lets you steer the LLM's answer toward a specific syntax or format.

Use writeText with the {assistant: true} option to provide the assistant text.

$`List 5 colors. Answer with a JSON array. Do not emit the enclosing markdown.`
// help the LLM by starting the JSON array syntax
// in the assistant response
writeText(`[`, { assistant: true })
👤 user
List 5 colors. Answer with a JSON array. Do not emit the enclosing markdown.
🤖 assistant
[
🤖 assistant
"red",
"blue",
"green",
"yellow",
"purple"
]
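
The same technique works for other output formats. As a minimal sketch (the prompt below is illustrative, not part of the example above), you can prime a YAML list so the model continues with list entries:

$`List 3 fruits. Answer with a YAML list. Do not emit the enclosing markdown.`
// start the YAML list syntax so the model continues with "- item" entries
writeText(`- `, { assistant: true })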

How does it work?

Internally, when the LLM is invoked, an additional assistant message is appended to the request, as if the LLM had already generated this content.

{
    "messages": [
        ...,
        {
            "role": "assistant",
            "content": "[\n"
        }
    ]
}
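
If you post-process the raw model continuation yourself, note that the generated text starts after the primed prefix, so the prefix must be prepended before parsing. A minimal sketch, assuming the continuation is available as a string (the variable names below are illustrative):

// hypothetical continuation returned by the model (it starts after the primed "[")
const continuation = `"red", "blue", "green", "yellow", "purple" ]`
// prepend the primed prefix to reconstruct the complete JSON document
const colors = JSON.parse("[" + continuation)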