Environment Setup
When you clone the repo, there is a sample environment setup in `sample.env`. You can configure variables for the models used in your simulation, for your database, and for the level of concurrency.
Required Model Configuration
- API keys
- `LLM_PROVIDER` sets the API (options are `"openai"`, `"gemini"`, `"anthropic"`)
- `LLM_MODEL` sets the model (e.g. `"gpt-4.1"`)
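For quick reference, here is a minimal sketch of the required model settings as they might appear in your environment file. The API key variable name (`OPENAI_API_KEY`) is an assumption for illustration only; check `sample.env` for the exact key names your provider requires.

```env
# Which provider API to use: "openai", "gemini", or "anthropic"
LLM_PROVIDER="openai"

# Which model to request from that provider
LLM_MODEL="gpt-4.1"

# Provider API key -- variable name is illustrative; see sample.env for the real one
OPENAI_API_KEY="sk-..."
```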
Optional Extra Model Configuration
- `LLM_REASONING_EFFORT` controls the depth of reasoning for models that support it (options: `"minimal"`, `"standard"`, `"high"`)
- `LLM_TEMPERATURE` controls response randomness
- `LLM_MAX_TOKENS` sets the maximum number of tokens generated per response
- `LLM_MAX_CONCURRENCY` limits the number of concurrent requests to prevent rate limiting
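A sketch of how these optional settings might look in your environment file; the numeric values below are placeholders, not recommended defaults.

```env
# Reasoning depth for models that support it: "minimal", "standard", or "high"
LLM_REASONING_EFFORT="standard"

# Sampling temperature; lower values make responses more deterministic (placeholder value)
LLM_TEMPERATURE=0.7

# Cap on tokens generated per response (placeholder value)
LLM_MAX_TOKENS=4096

# Cap on in-flight LLM requests, to stay under provider rate limits (placeholder value)
LLM_MAX_CONCURRENCY=10
```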
Database Setup
There are several variables in `sample.env` that must be set to configure the database login; we encourage you to use the defaults. In addition, you can set:
- `POSTGRES_MAX_CONNECTIONS` limits the number of simultaneous connections in the pool
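A sketch of the database portion of the environment file. Only `POSTGRES_MAX_CONNECTIONS` is named above; the login variable names shown are assumptions based on common Postgres conventions, so defer to the names actually listed in `sample.env`.

```env
# Database login -- variable names are illustrative; use the ones in sample.env
POSTGRES_USER="postgres"
POSTGRES_PASSWORD="postgres"
POSTGRES_HOST="localhost"
POSTGRES_PORT=5432

# Upper bound on simultaneous connections in the pool (placeholder value)
POSTGRES_MAX_CONNECTIONS=20
```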
FAQ
How can I prevent rate limiting errors?
Try reducing `LLM_MAX_CONCURRENCY` to something like 10 (setting it to 1 means that each LLM call will happen sequentially).

How can I fix errors related to too many database connections?
Try reducing your `POSTGRES_MAX_CONNECTIONS`.

How can I run more simulations in parallel with the same database?
Try reducing your `POSTGRES_MAX_CONNECTIONS` (each simulation will then hold fewer connections, leaving headroom on the database server for the others).
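If you are hitting both rate limiting and connection errors, a conservatively throttled configuration might look like the sketch below; the values are illustrative, not tuned recommendations.

```env
# Throttle LLM traffic: setting this to 1 makes calls fully sequential
LLM_MAX_CONCURRENCY=10

# Keep the pool small so multiple simulations can share one database server
POSTGRES_MAX_CONNECTIONS=5
```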