Response Priming
It is possible to provide the start of the LLM response (assistant
message) in the script.
This lets you steer the LLM's answer toward a specific syntax or format.
Use the `assistant` function to provide the assistant text.
```js
$`List 5 colors. Answer with a JSON array. Do not emit the enclosing markdown.`

// help the LLM by starting the JSON array syntax
// in the assistant response
assistant(`[`)
```
👤 user
List 5 colors. Answer with a JSON array. Do not emit the enclosing markdown.
🤖 assistant
[
🤖 assistant
"red","blue","green","yellow","purple"]
How does it work?
Internally, when invoking the LLM, an additional assistant message is appended to the query, as if the LLM had already generated this content.
{ "messages": [ ..., { "role": "assistant", "content": "[\n" } ]}