Large test suites in Node.js and React projects can significantly slow down your continuous integration (CI) pipeline. Jest already runs tests in parallel on a single machine using worker processes, but for very large suites you may need to parallelize across multiple machines. Jest 28 introduced a new --shard option that makes it easy to split your test suite into chunks and run them concurrently on separate CI runners. This post will explain how --shard works, how to use it for parallelization, and how to configure it in GitHub Actions and Jenkins. We’ll also cover best practices to maximize performance and minimize total runtime.
Overview of Jest’s --shard Option
Jest’s --shard CLI flag allows you to run part of your test suite, specified as a fraction of the whole suite. This feature was added in Jest v28 and was one of the most requested enhancements (Jest 28: Shedding weight and improving compatibility · Jest). In essence, you provide a shard index and a shard count, and Jest will execute only the tests belonging to that shard. For example:
# Split tests into 3 shards and run each shard:
jest --shard=1/3 # Run the first third of tests
jest --shard=2/3 # Run the second third of tests
jest --shard=3/3 # Run the final third of tests
The format is jest --shard=<shardIndex>/<shardCount>. Shard indices are 1-based, and the index must be ≤ the total count (Jest CLI Options · Jest). Under the hood, Jest collects all your test files and splits them into the specified number of groups (shards); each shard is a subset of test files. By default, the division is even by number of test files, not by test duration (Maximizing CI/CD Performance with Nx Workspace, Jest or Vitest and Jenkins | Satellytes). This means that if one test file is extremely slow, it could make its shard slower than the others – we’ll discuss strategies to handle that later.
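To build intuition for that file-count split, here is a simplified sketch of the idea (not Jest’s exact algorithm; Jest’s sequencer sorts tests deterministically before slicing so every machine computes the same shards):

```javascript
// Simplified illustration of splitting a test-file list into shards by count.
// Not Jest's actual implementation; it only shows the file-count split idea.
function getShard(testFiles, shardIndex, shardCount) {
  // Sort for a deterministic split, so each CI machine derives the same shards.
  const sorted = [...testFiles].sort();
  const shardSize = Math.ceil(sorted.length / shardCount);
  const start = shardSize * (shardIndex - 1); // shardIndex is 1-based
  return sorted.slice(start, start + shardSize);
}

const files = ['a.test.js', 'b.test.js', 'c.test.js', 'd.test.js', 'e.test.js'];
console.log(getShard(files, 1, 3)); // → [ 'a.test.js', 'b.test.js' ]
console.log(getShard(files, 3, 3)); // → [ 'e.test.js' ]
```

Run with different indices, the shards are disjoint and together cover every file, which is exactly the property the parallel CI jobs rely on.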
Because --shard is implemented within Jest’s test sequencer, it requires the default test sequencer (which supports sharding) or a custom sequencer that implements the shard logic (Jest CLI Options · Jest). The vast majority of projects use the default, so you likely don’t need to worry about this unless you have a custom test ordering mechanism.
Why use sharding? In short, to speed up total test execution time by leveraging multiple machines. Jest’s own test suite saw a massive performance improvement on CI by using shards (from ~10 minutes down to ~3 on Linux) (Jest 28: Shedding weight and improving compatibility · Jest). While Jest already parallelizes tests on one machine, there are limits to scaling vertically. Eventually, adding more CPU cores yields diminishing returns (due to I/O, memory contention, etc.) (How to run Jest tests faster in GitHub Actions). Sharding lets you scale horizontally – split the workload across machines or containers – to cut down the wall-clock time of your test suite.
Parallelizing Tests with the --shard Option
To leverage sharding for parallelization, you will run multiple Jest processes in parallel, each on a different shard. In a CI/CD context, this usually means multiple jobs or executors running concurrently. For example, if you split into 3 shards, you’ll have 3 CI jobs that run at the same time, each executing roughly one third of the tests.
How it works: Suppose you have N shards. When you invoke jest --shard=i/N for each i from 1 to N (in separate processes), each process runs a disjoint subset of test files. Together, the N shards cover the entire test suite. Because the shards run simultaneously on separate machines, the overall test time can shrink to about 1/N of the original (plus some overhead). Essentially, you trade additional compute resources for faster results.
This horizontal scaling is very effective for large test suites. One report found that running 8 shards in parallel reduced total test time from 19 minutes down to under 3 minutes (How to run Jest tests faster in GitHub Actions). In general, more shards mean faster wall-clock time, up to a point. Keep in mind that each shard still uses Jest’s normal in-process parallelization (worker processes), so you can combine sharding with per-machine concurrency for maximum speed. For example, if each CI agent has 2 CPU cores, you could run Jest with --maxWorkers=2 on each shard to use both cores on that machine.
Important: All shards must run with the same codebase and configuration. Before splitting, ensure each parallel job checks out the same commit and has identical environment setup. The tests should be written to not depend on one another globally (as a best practice anyway), since any given test might run in a different shard than another. Sharding simply partitions tests; it doesn’t provide any synchronization between them.
One caveat: while sharding reduces the overall time to get results, it increases the total computing time (since you’re doing more work in parallel). In cloud CI services, this can mean higher usage of minutes or CPUs, i.e. potentially higher cost (How to shard Jest tests in GitHub Actions | remarkablemark). Most teams find the faster feedback worth it, but it’s something to be aware of (we’ll touch on optimizing this trade-off in best practices).
Next, let’s look at how to set up sharded test runs in two common CI environments: GitHub Actions and Jenkins.
Configuring Jest Shards in GitHub Actions
GitHub Actions makes it straightforward to run jobs in parallel using a matrix strategy. We can use the matrix to define multiple shard indexes and spawn a job for each. Below is an example workflow configuration for a Node.js or React project that splits tests into 3 shards:
name: CI Tests
on: [push, pull_request]
jobs:
  tests:
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        shard_index: [1, 2, 3]  # three shards
    steps:
      - uses: actions/checkout@v3
      - name: Setup Node.js
        uses: actions/setup-node@v3
        with:
          node-version: 18
      - name: Install dependencies
        run: npm install
      - name: Run Jest (Shard ${{ matrix.shard_index }} of 3)
        run: npm test -- --shard=${{ matrix.shard_index }}/3
In this YAML, matrix.shard_index takes the values 1, 2, and 3, resulting in three parallel jobs. Each job executes the Jest test command with a different shard (1/3, 2/3, and 3/3 respectively). We hard-coded 3 as the total shard count in the command for simplicity. Setting fail-fast: false ensures that all shards run to completion even if one fails, so you get a full picture of failures (you could enable fail-fast to stop the other jobs when one shard fails, but it’s usually more useful to run all tests).
What happens in each job? All steps before running Jest (checkout, setup Node, install deps) will run in each shard job. This means each shard job installs the project’s dependencies and then runs a portion of the tests. Make sure to use caching for dependencies if possible (e.g., actions/cache) to speed up the install step, since it’s repeated for every shard.
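One option for that caching (shown here with the setup-node action’s built-in npm cache rather than a separate actions/cache step; adjust to your package manager) is a config sketch like:

```yaml
      - name: Setup Node.js
        uses: actions/setup-node@v3
        with:
          node-version: 18
          cache: 'npm'  # caches ~/.npm, keyed on package-lock.json
```

With the cache warm, each shard job’s npm install step reuses downloaded packages instead of fetching them from the registry again.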
After these jobs complete, you will have run the entire test suite across them. If you collect coverage or artifacts, you’ll need an extra step to combine results. For example, you might upload coverage from each shard as build artifacts and then have a follow-up job to download and merge them. GitHub Actions provides built-in artifact upload/download which you can use for this purpose (How to shard Jest tests in GitHub Actions | remarkablemark). (Merging test coverage is outside the scope of this post, but be aware that sharding splits coverage data by shard, requiring post-processing to get a unified report.)
Configuring Jest Shards in Jenkins Pipelines
Many teams use Jenkins for CI, which also supports parallel execution in its pipelines. To run Jest shards in parallel on Jenkins, you can define multiple parallel stages or use a loop to spawn stages dynamically. Here’s an example of a Jenkins declarative pipeline snippet that runs 3 shards in parallel:
pipeline {
    agent any
    stages {
        stage('Test in Parallel') {
            parallel {
                stage('Shard 1') {
                    steps {
                        sh 'npm test -- --shard=1/3'
                    }
                }
                stage('Shard 2') {
                    steps {
                        sh 'npm test -- --shard=2/3'
                    }
                }
                stage('Shard 3') {
                    steps {
                        sh 'npm test -- --shard=3/3'
                    }
                }
            }
        }
    }
}
In the above Jenkinsfile, the parallel block defines three sub-stages (Shard 1, 2, 3) that execute concurrently, each running the test command with a different shard index. Jenkins will schedule these in parallel, potentially on multiple build agents (nodes) if your Jenkins cluster is configured with multiple executors. This simplified example omits setup steps (checking out code, installing dependencies, etc.), assuming a prior stage prepared a shared workspace. In practice, if each shard runs on a separate agent, you’ll need to include steps to fetch the code and install dependencies in each branch, or use stash/unstash to transfer files.
This approach scales to any number of shards by adding more parallel branches. For a dynamic solution, you can generate the parallel branches in a scripted pipeline loop (as shown in the example from Satellytes (Maximizing CI/CD Performance with Nx Workspace, Jest or Vitest and Jenkins | Satellytes) where they loop from 1 to N and create a stage for each shard). The result in Jenkins UI (especially with Blue Ocean) is that you will see a group of parallel test stages, one per shard, all executing at the same time (Maximizing CI/CD Performance with Nx Workspace, Jest or Vitest and Jenkins | Satellytes). Once all finish, the pipeline can move on to subsequent stages (like combining reports or deploying).
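A sketch of that dynamic generation in scripted pipeline syntax (a hedged example, assuming 3 shards and agents that check out and install per branch; adapt the setup steps to your environment):

```groovy
def shardCount = 3
def branches = [:]
(1..shardCount).each { i ->
    branches["Shard ${i}"] = {
        node {
            checkout scm                                  // each agent needs the code
            sh 'npm ci'                                   // and the dependencies
            sh "npm test -- --shard=${i}/${shardCount}"   // run this branch's shard
        }
    }
}
parallel branches
```

Using .each with a closure parameter (rather than a plain for loop) matters here: each branch closure captures its own value of i, so every stage runs the shard it was created for.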
Just as with GitHub Actions, if you need to aggregate coverage or test result files (e.g., JUnit XML) from shards, you can use Jenkins plugins or pipeline steps. One common pattern is to have each shard write results with a shard identifier in the filename, then in a post step combine them or use the JUnit plugin with a file glob that picks up all results.
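For example, assuming the jest-junit reporter is installed and the CI job exports a SHARD_INDEX environment variable (both assumptions for this sketch), a jest.config.js fragment could tag each shard’s report file:

```javascript
// jest.config.js (sketch): give each shard's JUnit report a distinct name.
// Assumes the jest-junit reporter package and a SHARD_INDEX env var set by CI.
const shardIndex = process.env.SHARD_INDEX || '0';

module.exports = {
  reporters: [
    'default',
    ['jest-junit', { outputName: `junit-shard-${shardIndex}.xml` }],
  ],
};
```

A JUnit plugin glob such as junit-shard-*.xml then picks up every shard’s results in one aggregation step.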
Best Practices for Maximizing Jest Test Performance
Using --shard can dramatically reduce your test suite’s wall-clock time. To get the most out of it while avoiding pitfalls, consider these best practices:
- Upgrade to Jest 28+: The sharding feature is only available in Jest v28 and above. If you are on older versions, consider upgrading. (On older versions, you’d have to split tests manually via naming conventions or use external tools – not ideal compared to built-in sharding.)
- Choose the Right Number of Shards: Ideally, match the number of shards to the number of parallel runners you have available. For example, if your CI service allows 4 concurrent jobs, use 4 shards. More shards than available executors won’t speed up the overall time (they would just queue up). Conversely, using fewer shards than available capacity leaves performance on the table. In practice, start with a small number (2-4) and increase if the suite is still too slow. Keep in mind each shard carries some overhead (job startup, dependency install, etc.), so there is a point of diminishing returns.
- Balance Shard Workloads: Since Jest splits tests by file count (not runtime), there’s a risk one shard contains slower tests and becomes a bottleneck. Monitor your shard execution times; if one consistently runs much longer, you might need to rebalance. This could involve moving some tests into a different file, or implementing a custom sequencer that overrides the shard method to distribute tests more evenly by historical runtime. If you prefer not to custom-code this, some CI tools (e.g., Knapsack Pro or the Jenkins Parallel Test Executor plugin) can split tests by timing data, but those are external solutions. With pure Jest, you might manually adjust test groupings if needed.
- Combine Sharding with In-Process Parallelism: Sharding shines when you have multiple machines, but remember that each Jest process can also use multiple workers. Use the --maxWorkers option to control this. If each shard runs on an isolated 2-core container, for instance, setting --maxWorkers=2 (or leaving the default, which uses the number of cores minus one) will utilize both cores (How to run Jest tests faster in GitHub Actions). On the other hand, if your tests are very heavy and you’ve already maxed out the CPU, you can lower --maxWorkers to avoid straining the machine. The key is to utilize resources efficiently at both levels (within a machine and across machines).
- Minimize Per-Shard Overhead: The speedup from N shards can be undermined if each shard spends a lot of time on setup. Common setup tasks include installing packages, building the application, and seeding databases. Try to streamline these:
  - Use caching for dependencies (in GitHub Actions, cache ~/.npm or node_modules; in Jenkins, use a persistent workspace or a caching plugin).
  - If you build or compile before running tests, see whether that can happen once with the output shared to all shards (for example, build in a prior stage and distribute the build artifacts to the shard jobs).
  - Avoid very high shard counts if setup time is significant. For instance, 10 shards with 1 minute of overhead each means ~10 minutes of duplicated work (even though it runs in parallel). In such cases, fewer shards might be more efficient overall. Always measure to find the sweet spot.
- Test Independence: Ensure that each test file can run in isolation. Sharding may reveal implicit dependencies between tests if, for example, one test file seeds some global state that another relies on. Each shard runs a subset of tests with no knowledge of the others, so tests must be self-contained and order-independent (a general best practice for Jest or any test runner). If you find tests failing only when sharded, it likely indicates an order dependency that needs fixing.
- Handling Test Reports and Coverage: When running shards, each shard produces its own results. If you generate a coverage report in each shard, you’ll need to merge them to get a complete picture. Similarly, if you use JUnit XML or other test reports for CI, configure the outputs with distinct file names per shard and have the CI aggregate them. Some Jest reporters or CI plugins can merge results, or you can do it manually (for example, the coverage-final.json files can be merged using tools like istanbul-combine). The key point is that each shard is only partial: many CI systems will simply mark the build as passed if all shards pass, which is fine, but for coverage thresholds or reporting you need to combine the data. The Jest GitHub Action in the marketplace automates running shards and merging coverage (Jest GitHub Action with shard support – GitHub Marketplace) (GitHub – imadx/jest-action: Allows Jest to be run in shards, and to …) if you want a turnkey approach.
- Monitor and Tune: Treat the introduction of sharding as a performance experiment. Monitor your CI times and resource usage. You might find that 3 shards cuts time enough, or you might benefit from going to 5 or 6. Watch out for any shard that lags (as mentioned earlier) and for infrastructure issues (e.g., if you run too many heavy jobs at once, you could saturate network or memory on your CI hosts). Tweak the number of shards and maxWorkers based on empirical data.
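To make the workload-balancing idea concrete, here is a self-contained sketch (a hypothetical helper, not a Jest API) of a greedy partition by historical test durations; the same logic could back a custom sequencer’s shard method:

```javascript
// Greedy longest-processing-time partition: assign each test file, slowest
// first, to the shard with the least accumulated runtime. Hypothetical helper,
// not part of Jest; durations would come from your recorded timing data.
function partitionByDuration(durations, shardCount) {
  // durations: { 'a.test.js': seconds, ... }
  const shards = Array.from({ length: shardCount }, () => ({ total: 0, files: [] }));
  const sorted = Object.entries(durations).sort((a, b) => b[1] - a[1]);
  for (const [file, seconds] of sorted) {
    // Pick the currently lightest shard and add this file to it.
    const lightest = shards.reduce((min, s) => (s.total < min.total ? s : min));
    lightest.files.push(file);
    lightest.total += seconds;
  }
  return shards;
}

const shards = partitionByDuration(
  { 'a.test.js': 60, 'b.test.js': 30, 'c.test.js': 25, 'd.test.js': 10 },
  2
);
console.log(shards.map((s) => s.total)); // → [ 60, 65 ]
```

Compared to a plain file-count split (which could put the 60-second file alongside others), the greedy assignment keeps the two shards’ totals close, so neither becomes a bottleneck.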
By following these practices, you can maximize the speed gains from Jest’s sharding while maintaining reliability and controlling costs. Sharding is a powerful technique especially as your test suite grows — it allows your CI time to remain roughly constant even as you add more tests, by throwing more parallelism at the problem (How to run Jest tests faster in GitHub Actions).
Conclusion
Jest’s --shard option is a game-changer for speeding up large test suites. It enables you to easily split tests across multiple machines or CI jobs, drastically reducing the total runtime for feedback on your code changes. We discussed how it works (splitting test files into fractions) and walked through examples on GitHub Actions and Jenkins, two common CI platforms for Node.js and React projects. The setup involves orchestrating parallel jobs, each running a portion of the tests. With a proper configuration, teams have seen test durations drop from double-digit minutes to just a few minutes by using shards (Jest 28: Shedding weight and improving compatibility · Jest) (How to run Jest tests faster in GitHub Actions).
When implementing sharding, plan out your strategy: pick an appropriate number of shards, ensure each environment is prepared to run tests independently, and account for merging results if needed. Always remember that faster wall-clock time might mean more total compute time — which is usually a worthwhile trade-off for quicker iteration, as long as you optimize the process. By applying the best practices outlined above, you can achieve significant performance gains in your Jest test suites.
Leverage Jest’s built-in parallelization and sharding to keep your CI fast and efficient. Happy testing!
Sources:
- Jest 28 official blog – introduction of --shard and performance impact (Jest 28: Shedding weight and improving compatibility · Jest)
- Blacksmith Engineering – scaling Jest tests vertically vs horizontally (sharding) (How to run Jest tests faster in GitHub Actions)
- Jest Documentation – CLI options for sharding (Jest CLI Options · Jest)
- Satellytes Blog – Jenkins parallelization with Jest sharding (example and explanation) (Maximizing CI/CD Performance with Nx Workspace, Jest or Vitest and Jenkins | Satellytes)
- RemarkableMark Blog – GitHub Actions workflow for Jest shards and coverage merging (How to shard Jest tests in GitHub Actions | remarkablemark)
- Apidog Blog – note on test sharding for large test suites