# Serve
Launch a local web server that is used to run the playground or Visual Studio Code.
Run from the workspace root:
```sh
npx genaiscript serve
```

The default port is 8003. You can specify a different port with the `--port` flag.
```sh
npx genaiscript serve --port 8004
```

## API key
The API key is used to authenticate requests to the server.
You can specify an API key with the `--api-key` flag or the `GENAISCRIPT_API_KEY` environment variable.
```sh
npx genaiscript serve --api-key my-api-key
```

or

```sh
GENAISCRIPT_API_KEY=my-api-key
```

The API key can be set in the `Authorization` header of a request or in the URL hash parameter `api-key` (`http://localhost:8003/#api-key=my-api-key`).
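For example, a minimal sketch of an authenticated request, assuming the common `Bearer` scheme (the server may also accept the raw key):

```sh
# Sketch: authenticate via the Authorization header (Bearer scheme assumed)
curl -H "Authorization: Bearer my-api-key" http://localhost:8003/
```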
You can enable Cross-Origin Resource Sharing (CORS) by setting the `--cors` flag or the `GENAISCRIPT_CORS_ORIGIN` environment variable.
```sh
npx genaiscript serve --cors contoso.com
```
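Equivalently, a sketch using the environment variable instead of the flag (assuming it takes the allowed origin as its value):

```sh
# Sketch: allow cross-origin requests from contoso.com via the environment
GENAISCRIPT_CORS_ORIGIN=contoso.com npx genaiscript serve
```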
## Network

Setting the `--network` flag binds the server to `0.0.0.0`, making it accessible from the network. You need this flag to reach the server from inside a container.
```sh
npx genaiscript serve --network
```

We highly recommend setting an API key when running the server on the network.
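For example, a sketch combining both flags so the exposed server requires authentication:

```sh
# Sketch: bind to 0.0.0.0 and require an API key (value is a placeholder)
npx genaiscript serve --network --api-key my-api-key
```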
## Dockerized
To run the server in a minimal Docker image, first build an image that contains genaiscript and any required tools.
```sh
docker build -t genaiscript - <<EOF
FROM node:alpine
RUN apk add --no-cache git && npm install -g genaiscript
EOF
```

This creates a `genaiscript` image locally that you can use to launch the server.
```sh
docker run --env GITHUB_TOKEN --env-file .env --name genaiscript --rm -it \
  --expose 8003 -p 8003:8003 \
  -v ${PWD}:/workspace -w /workspace \
  genaiscript genaiscript serve --network
```

Then open http://localhost:8003 in your browser.
## OpenAI API endpoints
The server implements various OpenAI-compatible API endpoints. You can use the server as a proxy to the OpenAI API by setting the `--openai` flag.
These routes can provide stable access to the configured LLMs for other tools, such as promptfoo.
```sh
npx genaiscript serve --openai
```

This enables the following routes:
### /v1/chat/completions
Mostly compatible with OpenAI’s chat completions API. The server forwards requests to the OpenAI API and returns the response.
`stream` is not supported.
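As a sketch, a non-streaming request using the standard OpenAI chat completions payload (model name and key are placeholders):

```sh
# Sketch: query the local proxy with an OpenAI-style chat completion request
curl http://localhost:8003/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer my-api-key" \
  -d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello"}]}'
```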
### /v1/models
Returns the list of models and aliases available on the server.
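For example, a sketch listing the available models (the response follows OpenAI’s model list shape):

```sh
# Sketch: list models and aliases exposed by the server
curl -H "Authorization: Bearer my-api-key" http://localhost:8003/v1/models
```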