The CI pipeline for our primary TypeScript monorepo was averaging 45 minutes per run. This wasn’t just an inconvenience; it was a direct impediment to developer velocity and a significant drain on runner resources. The repository housed over 50 individual packages—a mix of shared UI components, utility libraries, and several front-end applications. The existing process was naive: on every commit, it would clean, reinstall, lint, test, and build every single package, regardless of what had changed. This monolithic approach was unsustainable.
Our initial attempt to solve this was a blind jump toward speed. The proposal was to replace our battle-tested but slower Rollup configuration with esbuild for everything. The initial results on a local machine were staggering, with build times for individual packages dropping by 90%. However, pushing this directly into our CI configuration revealed a critical oversight. While esbuild is exceptionally fast at transpilation and basic bundling, our production library builds relied heavily on Rollup’s mature plugin ecosystem and meticulous output control. We needed to generate multiple output formats (ESM, CJS), handle external peer dependencies gracefully, emit precise TypeScript declaration files via `rollup-plugin-typescript2`, and perform fine-grained code-splitting; esbuild’s bundler, at the time, couldn’t match this for our library use case. A full switch was a non-starter; it traded production-grade correctness for raw speed, a bargain we couldn’t afford for our published packages.
This led to the real solution: a hybrid, two-stage approach orchestrated within GitLab CI. The core principle is to use the right tool for the right job. We would leverage esbuild’s raw speed for rapid, pipeline-level validation tasks (like type-checking and running tests) and reserve the slower, more deliberate Rollup builds exclusively for packages that were actually modified and destined for publishing.
The architecture hinges on three key components:
- A Smart Change-Detection Script: A mechanism within the CI pipeline to identify which packages have been modified relative to the target branch.
- A Stratified CI Configuration: GitLab CI jobs structured into distinct stages, where early stages use esbuild for fast, repository-wide checks, and later stages operate selectively on the subset of changed packages using Rollup.
- Intelligent Caching: Aggressive caching of `node_modules` and build artifacts to ensure that unchanged dependencies and packages are never re-processed.
Let’s dissect the implementation, starting with the monorepo structure itself. We use pnpm workspaces for managing dependencies, which is critical for efficiency.
```mermaid
graph TD
    A[Monorepo Root] --> B(package.json);
    A --> C(pnpm-workspace.yaml);
    A --> D(.gitlab-ci.yml);
    A --> E(scripts/);
    E --> F(find-changed.sh);
    E --> G(build-package.sh);
    A --> H(packages/);
    H --> I(ui-core/);
    I --> J(package.json);
    I --> K(rollup.config.mjs);
    H --> L(utils-lib/);
    L --> M(package.json);
    L --> N(rollup.config.mjs);
    H --> O(another-package/...);
    A --> P(apps/);
    P --> Q(main-app/);
    Q --> R(package.json);
    Q --> S(vite.config.ts);
```
The `pnpm-workspace.yaml` is straightforward, defining where our packages reside:
```yaml
# pnpm-workspace.yaml
packages:
  - 'packages/*'
  - 'apps/*'
```
The core of the selective build logic is a shell script. It is responsible for comparing the current commit with the merge target branch (`CI_MERGE_REQUEST_TARGET_BRANCH_NAME` in GitLab CI) and determining which workspace packages contain changed files.
```bash
#!/bin/bash
# scripts/find-changed.sh

# Exit immediately if a command exits with a non-zero status.
set -e

# Determine the base for the diff. In a merge request pipeline, compare against
# the target branch; in a regular branch pipeline, compare against the previous commit.
if [[ -n "$CI_MERGE_REQUEST_TARGET_BRANCH_NAME" ]]; then
  # Fetch the target branch so we can diff against it.
  git fetch origin "$CI_MERGE_REQUEST_TARGET_BRANCH_NAME"
  BASE_SHA=$(git merge-base HEAD "origin/$CI_MERGE_REQUEST_TARGET_BRANCH_NAME")
  echo "Running in MR pipeline. Comparing against merge base with target branch '$CI_MERGE_REQUEST_TARGET_BRANCH_NAME' ($BASE_SHA)..."
else
  # In a regular branch push, just compare against the parent commit.
  BASE_SHA="HEAD~1"
  echo "Running in branch pipeline. Comparing against parent commit ($BASE_SHA)..."
fi

# Get a list of all changed files against the base commit.
CHANGED_FILES=$(git diff --name-only "$BASE_SHA" HEAD)

if [[ -z "$CHANGED_FILES" ]]; then
  echo "No files changed. Nothing to do."
  exit 0
fi

echo "--- Changed Files ---"
echo "$CHANGED_FILES"
echo "---------------------"

# Get a list of all package directories defined in the pnpm workspace.
# `pnpm m ls` lists every package and its path in JSON format; jq extracts the paths.
# pnpm reports absolute paths while `git diff` reports repo-relative ones, so
# strip the repository root prefix to make the two comparable.
PACKAGE_DIRS=$(pnpm m ls --json --depth -1 | jq -r 'map(.path) | .[]' | sed "s|^$PWD/||")

CHANGED_PACKAGES=""

# Iterate over each package directory to see if it contains any changed files.
for PKG_DIR in $PACKAGE_DIRS; do
  # Check whether any changed file path starts with the package directory path.
  # grep returns 0 (success) if a match is found.
  if echo "$CHANGED_FILES" | grep -q "^$PKG_DIR/"; then
    # The package name is the basename of the directory.
    PKG_NAME=$(basename "$PKG_DIR")
    echo "Detected change in package: $PKG_NAME at path $PKG_DIR"
    # Append the package directory path to our list of changed packages.
    CHANGED_PACKAGES="$CHANGED_PACKAGES $PKG_DIR"
  fi
done

# Check root configuration files that should trigger a full rebuild of all packages.
# A common mistake is to only check for package-specific changes: a change to the
# root tsconfig.json or .npmrc could affect all packages.
ROOT_CONFIG_FILES=("tsconfig.base.json" "pnpm-lock.yaml" ".npmrc")
for CONFIG_FILE in "${ROOT_CONFIG_FILES[@]}"; do
  # -x matches the whole line; -F treats the file name as a fixed string.
  if echo "$CHANGED_FILES" | grep -qxF "$CONFIG_FILE"; then
    echo "Root configuration file '$CONFIG_FILE' changed. Forcing rebuild of all packages."
    # If a root config changes, discard the selective list and use all packages.
    CHANGED_PACKAGES=$(echo "$PACKAGE_DIRS" | tr '\n' ' ')
    break
  fi
done

if [[ -z "$CHANGED_PACKAGES" ]]; then
  echo "No changes detected within any known packages."
else
  echo "--- Final list of changed package directories ---"
  # Trim leading/trailing whitespace before printing and writing.
  TRIMMED_CHANGED_PACKAGES=$(echo "$CHANGED_PACKAGES" | xargs)
  echo "$TRIMMED_CHANGED_PACKAGES"
  # Write the changed package paths to a file exposed as a GitLab artifact, so
  # subsequent jobs in the pipeline know which packages to build.
  echo "$TRIMMED_CHANGED_PACKAGES" > changed_packages.txt
fi
```
This script is the brain of the operation. It’s robust enough to handle both merge request pipelines and regular branch pushes, and it writes the paths of changed packages into `changed_packages.txt`, which we’ll pass as an artifact to later stages.
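For a merge request touching two libraries, the artifact is a single line of space-separated, repo-relative paths; for example (package names taken from the structure above):

```
packages/ui-core packages/utils-lib
```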
Now, let’s look at the `.gitlab-ci.yml` file that orchestrates this.
```yaml
# .gitlab-ci.yml
variables:
  # Use the pnpm store in the project directory. This is crucial for caching.
  PNPM_STORE_PATH: ".pnpm-store"

default:
  before_script:
    # Global npm installs do not persist between jobs, so pnpm is installed
    # at the start of every job.
    - npm install -g pnpm

stages:
  - setup
  - validate
  - build
  - test

# This job installs dependencies and caches them. It runs in the first stage
# to ensure all subsequent jobs have access to node_modules.
install_deps:
  stage: setup
  image: node:18-alpine
  script:
    - pnpm config set store-dir $PNPM_STORE_PATH
    - pnpm install --frozen-lockfile
  cache:
    key:
      files:
        - pnpm-lock.yaml
    paths:
      - .pnpm-store
      - node_modules
    policy: pull-push
  artifacts:
    paths:
      - node_modules/

# This job performs a fast, repository-wide validation.
# It does NOT build production bundles. Its purpose is to fail fast.
fast_validate:
  stage: validate
  image: node:18-alpine
  needs:
    - job: install_deps
      artifacts: true
  cache:
    key:
      files:
        - pnpm-lock.yaml
    paths:
      - .pnpm-store
    policy: pull
  script:
    - echo "Running fast validation: Linting and Type Checking..."
    # Lint the entire repository.
    - pnpm lint:all
    # Type-check using TypeScript's compiler without emitting files. This is
    # much faster than running a build and leverages TypeScript's incremental
    # compilation features.
    - pnpm typecheck:all

# This job identifies which packages have changed and prepares for the selective build.
determine_changes:
  stage: build
  image: node:18-alpine
  needs:
    - job: install_deps
      artifacts: true
  script:
    - apk add --no-cache bash git jq
    - chmod +x ./scripts/find-changed.sh
    - ./scripts/find-changed.sh
  artifacts:
    paths:
      - changed_packages.txt

# The core job for building production-ready libraries. It reads the artifact
# from `determine_changes` and only builds what's necessary.
build_changed_libs:
  stage: build
  image: node:18-alpine
  needs:
    - job: determine_changes
      artifacts: true
    - job: install_deps
      artifacts: true
  cache:
    key:
      files:
        - pnpm-lock.yaml
    paths:
      - .pnpm-store
    policy: pull
  script:
    - |
      if [ ! -f changed_packages.txt ] || [ ! -s changed_packages.txt ]; then
        echo "No changed packages to build. Skipping."
        exit 0
      fi
      CHANGED_DIRS=$(cat changed_packages.txt)
      echo "Building the following packages with Rollup:"
      echo "$CHANGED_DIRS"
      for PKG_DIR in $CHANGED_DIRS; do
        # Library packages live under 'packages/' by convention; apps do not get
        # a Rollup library build. A `case` statement is used because the alpine
        # image runs plain sh, which lacks bash's [[ ]] pattern matching.
        case "$PKG_DIR" in
          packages/*)
            echo "--- Building library: $(basename "$PKG_DIR") ---"
            # pnpm's --filter flag runs the build script in the context of a
            # specific package. It is a powerful feature of pnpm workspaces.
            pnpm --filter "./${PKG_DIR}" build
            ;;
          *)
            echo "Skipping non-library package: $(basename "$PKG_DIR")"
            ;;
        esac
      done
  artifacts:
    paths:
      # We must pass the build artifacts (dist folders) of the changed packages.
      - "packages/*/dist"
      - "packages/*/package.json" # Needed for downstream jobs that check dependencies

# The test job runs tests on the changed packages. A more advanced setup would
# build a dependency graph and also test the packages that depend on them; for
# this example, we keep it simple and test only the changed packages.
test_changed:
  stage: test
  image: node:18-alpine
  needs:
    - job: build_changed_libs
      artifacts: true
    # changed_packages.txt is produced by determine_changes, so its artifacts
    # are needed here as well.
    - job: determine_changes
      artifacts: true
    - job: install_deps
      artifacts: true
  cache:
    key:
      files:
        - pnpm-lock.yaml
    paths:
      - .pnpm-store
    policy: pull
  script:
    # jq is used below and is not part of the base image.
    - apk add --no-cache jq
    - |
      if [ ! -f changed_packages.txt ] || [ ! -s changed_packages.txt ]; then
        echo "No changed packages to test. Skipping."
        exit 0
      fi
      # For testing, we want to filter on package name, not path.
      PNPM_FILTER_ARGS=""
      for PKG_DIR in $(cat changed_packages.txt); do
        PKG_NAME=$(jq -r .name "${PKG_DIR}/package.json")
        PNPM_FILTER_ARGS="$PNPM_FILTER_ARGS --filter $PKG_NAME"
      done
      if [ -z "$PNPM_FILTER_ARGS" ]; then
        echo "Could not resolve package names for testing. Skipping."
        exit 0
      fi
      echo "Running tests for changed packages: $PNPM_FILTER_ARGS"
      # pnpm's `...` filter syntax would additionally include packages related
      # to the matched ones; vital for integration tests, but here we just
      # test the changed packages themselves.
      pnpm m $PNPM_FILTER_ARGS test
```
This CI configuration embodies the hybrid philosophy. The `fast_validate` job acts as a rapid gatekeeper: a linting error or a type mismatch is caught within the first 2-3 minutes of the pipeline, providing immediate feedback without waiting for a full build. It doesn’t use Rollup at all.
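For reference, here is a sketch of the root `package.json` scripts those commands resolve to. The script names match what the CI jobs invoke, but the exact tools and flags are assumptions rather than our verbatim configuration:

```json
{
  "scripts": {
    "lint:all": "eslint . --ext .ts,.tsx",
    "typecheck:all": "tsc -p tsconfig.json --noEmit"
  }
}
```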
The magic happens in the `build` stage. `determine_changes` runs our script, and `build_changed_libs` consumes its output. A critical detail here is the use of `pnpm --filter`. This command is workspace-aware and executes the `build` script (defined in the package’s own `package.json`) only within the scope of the specified package, which avoids any ambiguity or path issues.
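Each library package defines its own `build` script for `pnpm --filter` to execute. A minimal sketch of what `packages/ui-core/package.json` might contain (the `@acme` scope and the exact dist layout are illustrative):

```json
{
  "name": "@acme/ui-core",
  "main": "dist/cjs/index.js",
  "module": "dist/esm/index.js",
  "types": "dist/index.d.ts",
  "scripts": {
    "build": "rollup -c",
    "test": "vitest run"
  }
}
```

Note that `pkg.main` and `pkg.module` here are exactly what the Rollup configuration below reads to determine its output files.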
Let’s examine a representative `rollup.config.mjs` for one of our library packages to understand why we couldn’t just use esbuild.
```js
// packages/ui-core/rollup.config.mjs
import typescript from '@rollup/plugin-typescript';
import { dts } from 'rollup-plugin-dts';
import terser from '@rollup/plugin-terser';
import peerDepsExternal from 'rollup-plugin-peer-deps-external';
import resolve from '@rollup/plugin-node-resolve';
import commonjs from '@rollup/plugin-commonjs';
import pkg from './package.json' assert { type: 'json' };

// This configuration is designed for building a distributable library,
// not an application bundle. The distinction is critical.
export default [
  {
    input: 'src/index.ts',
    output: [
      {
        file: pkg.main, // CommonJS output
        format: 'cjs',
        sourcemap: true,
      },
      {
        file: pkg.module, // ES Module output
        format: 'esm',
        sourcemap: true,
      },
    ],
    plugins: [
      // This plugin is essential for libraries. It automatically marks
      // all `peerDependencies` in package.json as external. This prevents
      // Rollup from bundling React, for example, into our UI component library.
      // esbuild requires manual configuration for this, which is more error-prone
      // across a large number of packages.
      peerDepsExternal(),
      // Resolves node modules.
      resolve(),
      // Converts CommonJS modules to ES6, so they can be included in a Rollup bundle.
      commonjs(),
      // Transpiles TypeScript to JavaScript.
      typescript({
        tsconfig: './tsconfig.build.json',
        // Important: exclude test files from the production build.
        exclude: ['**/*.test.ts', '**/*.stories.ts'],
      }),
      // Minifies the output for production.
      terser(),
    ],
    // A list of external dependencies that should not be bundled. This goes
    // beyond peerDependencies and includes any other direct dependencies that
    // the consuming application is expected to provide.
    external: Object.keys(pkg.dependencies || {}),
  },
  {
    // This separate build step is dedicated to generating a single, bundled
    // TypeScript declaration file (index.d.ts). This is a best practice for
    // library authors, as it provides a clean and simple API surface for
    // TypeScript consumers. `rollup-plugin-dts` is purpose-built for this
    // and handles complex type re-exports gracefully.
    input: 'dist/esm/types/src/index.d.ts',
    output: [{ file: 'dist/index.d.ts', format: 'esm' }],
    plugins: [dts()],
  },
];
```
The comments in this Rollup configuration highlight the nuanced requirements of library bundling. The `rollup-plugin-peer-deps-external` plugin and the dedicated `dts` bundling step are perfect examples of functionality that is mature and robust in the Rollup ecosystem. Replicating this behavior reliably across 50 packages with esbuild’s plugin API would have been a significant engineering effort in itself, negating the time saved in CI execution.
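For contrast, here is a minimal sketch of what the equivalent externalization would look like with esbuild’s build API (the options used are real esbuild options; the script itself is hypothetical). Every package would have to assemble its own `external` list by hand, and declaration bundling would still require a separate tool:

```js
// build.mjs - a hypothetical esbuild equivalent for a single library package
import { build } from 'esbuild';
import { readFileSync } from 'node:fs';

const pkg = JSON.parse(readFileSync('./package.json', 'utf8'));

await build({
  entryPoints: ['src/index.ts'],
  bundle: true,
  format: 'esm',
  outfile: pkg.module,
  sourcemap: true,
  minify: true,
  // What peerDepsExternal() does automatically, we now maintain by hand in
  // every package: peers and regular dependencies must be listed explicitly.
  external: [
    ...Object.keys(pkg.peerDependencies ?? {}),
    ...Object.keys(pkg.dependencies ?? {}),
  ],
});
// ...plus a second invocation for CJS output, and a separate tool entirely
// for bundling the .d.ts files, which esbuild does not generate.
```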
The final results of this transition were transformative. The average pipeline duration for a typical merge request (changing 1-3 packages) dropped from over 45 minutes to approximately 7 minutes.
- Setup: ~2 minutes (dominated by `pnpm install`, but heavily cached).
- Validate: ~2 minutes (linting and full monorepo type-check).
- Build: ~2.5 minutes (most of this is runner overhead, with Rollup building only a few packages).
- Test: ~30 seconds (running tests only on the changed packages).
For changes to root configuration files that trigger a full rebuild, the pipeline time is still around 20 minutes, which is a more than 50% improvement over the original, but this is now the exception rather than the rule.
The implementation is not without its limitations and potential future enhancements. The current `find-changed.sh` script does not perform dependency graph analysis: if `package-a` is changed and `package-b` depends on `package-a`, our script will only rebuild `package-a`. This is a significant gap. A proper implementation would use a tool like pnpm, Nx, or Turborepo to analyze the workspace dependency graph and expand the set of “changed” packages to include all downstream dependents. This would be the logical next iteration, replacing our custom shell script with a more robust, framework-provided command like `pnpm list --filter="...[${BASE_SHA}]"`.
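As a sketch of that next iteration, pnpm’s `[<since>]` selector combined with the `...` dependents prefix can replace most of `find-changed.sh` with a single command (the branch name is illustrative):

```bash
# Build every package changed since the merge base, plus everything that
# depends on one of them.
pnpm --filter "...[origin/main]" build

# The same selector closes the testing gap described above: a change to
# package-a would also re-test package-b.
pnpm --filter "...[origin/main]" test
```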
Furthermore, while GitLab’s built-in caching is effective, for a monorepo of this scale, a remote build cache (like the one offered by Turborepo or Nx Cloud) could further reduce build times, especially for developers working on the same branches. This would allow build artifacts from one CI run to be instantly available to another, even on a different runner, eliminating redundant computation entirely. The current setup is a pragmatic and powerful middle-ground, achieving massive efficiency gains without introducing new third-party services.