Make Jest Fast
`jest` is arguably the fastest test runner in the JavaScript ecosystem. Other monorepo task runners tiptoe around `jest` because it has its own worker pool and multi-threaded capability. In fact, even monorepo task runners like `nx` and `turbo` have to work around the fact that we only have a finite number of CPU cores!

The usual advice is to switch to `--runInBand` when running `jest` via those tools, so that the task runner, rather than `jest` itself, schedules work across the CPU cores. However, this causes its own problems:
- Packages with a substantially large number of tests are punished: all of their tests run serially against a single core.
- During local development, if you are modifying a package with lots of tests, you hit that serialized test run on every single change!
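Concretely, that workaround usually amounts to opting `jest` out of its own parallelism in each package's test script and letting the task runner do all of the scheduling. The sketch below assumes a plain npm-script pipeline and is not the configuration this article builds toward:

```js
// Root lage.config.js for the conventional "--runInBand" setup (a sketch only).
// Each package's own package.json would carry something like:
//
//   "scripts": {
//     "test": "jest --runInBand"
//   }
//
// so jest runs its test files serially, and the task runner spreads the
// per-package `test` tasks across the available CPU cores instead.
module.exports = {
  pipeline: {
    // `test` is an ordinary npm-script task that runs after the package's build.
    test: ["build"],
  },
};
```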
Thankfully, `lage` has learned to play well with `jest` via a concept called weighted targets!
Weighty targets and workers
First off, we will configure our `lage.config.js` like this:
/lage.config.js

```js
const glob = require("glob");
const path = require("path");

module.exports = {
  pipeline: {
    test: {
      type: "worker",
      weight: (target) => {
        // glob for the target.cwd and return the number of test files
        return glob.sync("**/*.test.js", { cwd: target.cwd }).length;
      },
      maxWorkers: 8,
      options: {
        worker: path.join(__dirname, "scripts/workers/jest-worker.js"),
      },
    },
  },
};
```
Notice above that we have added a `weight` key to the target configuration. It can be a constant number (`weight: 4`) or an expression like `weight: os.cpus().length - 1`, but the function form is especially useful: it tells the scheduler how many workers a package really needs given its count of test files, so that packages with few tests do not take up more cores than necessary.
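For comparison, here is a minimal sketch of the two constant forms mentioned above, reusing the same `maxWorkers` and worker script as the config earlier; the file-counting function is usually the better default:

```js
const os = require("os");
const path = require("path");

module.exports = {
  pipeline: {
    test: {
      type: "worker",
      // Constant form: this target always asks for 4 of the pool's worker slots.
      // weight: 4,

      // Or reserve all but one core, regardless of how many tests the package has:
      weight: os.cpus().length - 1,
      maxWorkers: 8,
      options: {
        worker: path.join(__dirname, "scripts/workers/jest-worker.js"),
      },
    },
  },
};
```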
The `jest-worker.js` implementation uses the calculated `weight` in its call into the `jest` APIs:
/scripts/workers/jest-worker.js

```js
const { runCLI } = require("jest");

module.exports = async function jest(data) {
  const { target, weight } = data;

  console.log(`Running ${target.id} with a maxWorker setting of ${weight}`);

  const { results } = await runCLI(
    {
      maxWorkers: weight,
      rootDir: target.cwd,
      passWithNoTests: true,
      verbose: true,
    },
    [target.cwd]
  );

  if (results.success) {
    console.log("Tests passed");
  } else {
    throw new Error("FAILED");
  }
};
```
And just like that, `lage` and `jest` work in harmony to provide the best developer experience: a remote cache, scoped test skipping based on package dependencies, and a cooperating worker pool via weighted targets.