
Naive path filters miss cross-package dependencies. Dependency-aware selective builds with Turborepo, Nx, and conditional job execution fix monorepo CI without over-building or under-building.
Monorepo CI has two failure modes, and most teams hit both before they find the middle ground.
The first is over-building. Every push triggers lint, test, and build for every package. A one-line typo fix in a README kicks off 45 minutes of CI across five packages, four of which weren't touched. It works, technically. It's also burning through Actions minutes and making developers wait for feedback that isn't relevant to their change.
The second is under-building. Someone adds paths: ['packages/api/**'] filters to the workflow, and now the API only builds when its own files change. Except the API imports a shared utils package, and when someone changes a validation function in utils, the API's tests don't run. The bug ships.
The fix is dependency-aware selective builds: a CI strategy that knows which packages depend on each other and uses that graph to decide what needs to run. This article walks through how to set that up on GitHub Actions, from basic path filtering through Turborepo and Nx integration to dynamic matrix strategies that scale with your repo.
GitHub Actions has built-in path filtering at the workflow level. You can scope a workflow to only trigger when specific files change:
on:
  push:
    paths:
      - 'packages/api/**'
      - 'packages/shared/**'

This works for simple cases, but it has a fundamental limitation: it operates at the workflow level, not the job level. The entire workflow either runs or it doesn't. You can't say "run the API tests if the API changed, run the web tests if the web app changed, both in the same workflow."
The bigger problem is that path filters don't understand your dependency graph. Consider a typical monorepo:
packages/
  api/    # depends on utils, db
  web/    # depends on utils, ui
  utils/  # shared utilities, no deps
  db/     # database layer, depends on utils
  ui/     # component library, no deps

Change utils and you need to test api, web, and db too, because they all depend on it. A path filter on packages/api/** won't catch that. You could manually list every transitive dependency path, but that's fragile. Every time someone adds a dependency between packages, the path filters need updating. Someone will forget.
The first step toward selective builds is moving change detection from the workflow trigger into the workflow itself, where you can conditionally run individual jobs.
dorny/paths-filter is the standard tool for this, used by over 49,000 repositories including Sentry. It checks which files changed and sets output variables that downstream jobs can reference in if conditionals:
name: CI

on:
  pull_request:
  push:
    branches: [main]

jobs:
  detect-changes:
    runs-on: ubuntu-latest
    outputs:
      api: ${{ steps.filter.outputs.api }}
      web: ${{ steps.filter.outputs.web }}
    steps:
      - uses: actions/checkout@v4
      - uses: dorny/paths-filter@v4
        id: filter
        with:
          filters: |
            api:
              - 'packages/api/**'
              - 'packages/utils/**'
              - 'packages/db/**'
            web:
              - 'packages/web/**'
              - 'packages/utils/**'
              - 'packages/ui/**'

  test-api:
    needs: detect-changes
    if: needs.detect-changes.outputs.api == 'true'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm test --workspace=packages/api

  test-web:
    needs: detect-changes
    if: needs.detect-changes.outputs.web == 'true'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm test --workspace=packages/web

This is a big improvement over workflow-level filters. Jobs that don't match get skipped instantly, and you can see in the Actions UI exactly which ones ran. The changes output is a JSON array of matching filter names, which opens the door to dynamic matrix strategies.
But notice the duplication. I had to manually list packages/utils/** under both api and web because both import from utils. That mapping lives in YAML, not in the package manager's dependency graph. It's better than workflow-level filters, but it's still a manual mirror of your dependency tree.
Turborepo solves this by reading your package manager's workspace configuration. It already knows that api depends on utils because the relationship is declared in package.json. When you run tasks, Turborepo walks the dependency graph and only executes in packages that are affected by your changes.
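For example, with npm workspaces that relationship is just an ordinary dependency entry in the consuming package's manifest (package names here are illustrative):

```json
{
  "name": "api",
  "version": "0.0.0",
  "dependencies": {
    "utils": "*"
  }
}
```

With npm workspaces, a matching version range like `*` resolves to the local utils package; pnpm makes the link explicit with `workspace:*`. Either way, the graph Turborepo consumes is the one your package manager already maintains.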
The key flag is --affected. In GitHub Actions, Turborepo automatically detects the CI environment by reading GITHUB_BASE_REF (for PRs) or GITHUB_EVENT_PATH (for push events). It compares the current commit against the base, identifies changed files, maps them to packages, then walks the dependency graph upward to find every affected package.
name: CI

on:
  pull_request:
  push:
    branches: [main]

jobs:
  ci:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - uses: actions/setup-node@v4
        with:
          node-version: 22
          cache: 'npm'
      - run: npm ci
      - run: npx turbo run lint test build --affected

No manual path mapping. No filter config to maintain. Change a file in utils and Turborepo runs tasks for utils, api, web, and db. Change only ui, and only ui and web get tested.
One important detail: fetch-depth: 0 in the checkout step. Turborepo needs git history to compare commits. With a shallow clone (the default), there's nothing to diff against. If history isn't available, --affected falls back to running all tasks, which is safe but defeats the purpose.
--affected reduces which packages are touched. Remote cache reduces work within those packages. They complement each other. Even if a package is marked affected because its dependency changed, the remote cache can still return a hit if the inputs hash to the same value.
- run: npx turbo run lint test build --affected
  env:
    TURBO_TOKEN: ${{ secrets.TURBO_TOKEN }}
    TURBO_TEAM: ${{ vars.TURBO_TEAM }}

In large repos, this combination matters. --affected reduces the set of packages Turborepo even considers, which means fewer cache lookups across the network. Without it, Turborepo queries the cache for every package, and data transfer from cache restores can add up in repos with dozens of packages.
Nx takes a similar core approach but pushes it further. Like Turborepo, nx affected compares against a base ref and determines which projects need work. The difference is in how it picks the comparison point and what it does with the result.
Nx uses the nrwl/nx-set-shas action to find the last successful CI run on main and compare against that, rather than the PR base. This matters when multiple PRs merge in sequence: you want to test against what actually passed CI last, not just the branch point.
name: CI

on:
  push:
    branches: [main]
  pull_request:

permissions:
  actions: read
  contents: read

jobs:
  main:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          filter: tree:0
          fetch-depth: 0
      - uses: actions/setup-node@v4
        with:
          node-version: 22
          cache: 'npm'
      - run: npm ci
      - uses: nrwl/nx-set-shas@v4
      - run: npx nx affected -t lint test build

Nx Cloud adds task distribution on top. Instead of running all affected tasks on a single runner, Nx Cloud spreads them across multiple agents. That's a paid feature, but for large monorepos where "affected" still means 10+ packages, distributing work across machines cuts wall-clock time significantly.
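If you enable distribution, the usual pattern is to start a distributed run before the affected command so subsequent nx invocations fan out to agents. A sketch, assuming the workspace is already connected to Nx Cloud; the agent count and launch template name are illustrative:

```yaml
# Fan subsequent nx commands out across 3 Nx Cloud agents,
# and shut the agents down once the build target finishes
- run: npx nx-cloud start-ci-run --distribute-on="3 linux-medium-js" --stop-agents-after="build"
- run: npx nx affected -t lint test build
```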
In practice, the strongest setups use both layers. Workflow-level or job-level path filtering handles coarse gating, and Turborepo or Nx handles fine-grained dependency-aware filtering within the build.
Why bother with path filters if Turborepo handles it? Because dependency installation isn't free. Even with a warm npm cache, running npm ci in a large monorepo takes 30-90 seconds. If someone pushes a docs-only change, you don't want to spin up a runner and install dependencies just so Turborepo can decide there's nothing to do.
Turborepo's turbo-ignore is built for this. It runs before dependency installation and checks whether a package has changes worth building for. If not, it exits 0 and you skip the rest:
deploy-api:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
      with:
        fetch-depth: 2
    - name: Check for changes
      id: check
      run: |
        npx turbo-ignore api \
          && echo "skip=true" >> $GITHUB_OUTPUT \
          || echo "skip=false" >> $GITHUB_OUTPUT
    - if: steps.check.outputs.skip != 'true'
      run: npm ci
    - if: steps.check.outputs.skip != 'true'
      run: npx turbo run build deploy --filter=api

This pattern is especially useful for deployment jobs, where you definitely don't want to redeploy a service that hasn't changed.
For repos with many packages, hardcoding a separate job per package doesn't scale. Dynamic matrix strategies generate the job list at runtime based on what actually changed.
The changes output from dorny/paths-filter is a JSON array you can feed directly into a matrix:
jobs:
  detect:
    runs-on: ubuntu-latest
    outputs:
      packages: ${{ steps.filter.outputs.changes }}
    steps:
      - uses: actions/checkout@v4
      - uses: dorny/paths-filter@v4
        id: filter
        with:
          filters: |
            api:
              - 'packages/api/**'
            web:
              - 'packages/web/**'
            ui:
              - 'packages/ui/**'

  test:
    needs: detect
    if: needs.detect.outputs.packages != '[]'
    strategy:
      matrix:
        package: ${{ fromJson(needs.detect.outputs.packages) }}
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 22
          cache: 'npm'
      - run: npm ci
      - run: npm test --workspace=packages/${{ matrix.package }}

One job per changed package, each running in parallel. The if: needs.detect.outputs.packages != '[]' guard prevents the matrix from failing when no packages matched. Without it, an empty array causes a workflow error.
You can also build the matrix from a script if you want to incorporate dependency-aware logic without Turborepo or Nx:
- name: Determine affected packages
  id: affected
  run: |
    CHANGED=$(git diff --name-only ${{ github.event.before }} ${{ github.sha }} \
      | grep '^packages/' \
      | cut -d'/' -f2 \
      | sort -u \
      | jq -R -s -c 'split("\n") | map(select(. != ""))')
    echo "packages=$CHANGED" >> $GITHUB_OUTPUT

This won't catch transitive dependencies the way Turborepo does, but it's zero-dependency and works if your packages are mostly independent.
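To see what that pipeline produces, you can run the same transformation against a hand-written file list locally (the paths below are hypothetical; jq is preinstalled on GitHub-hosted runners):

```shell
# Simulated output of `git diff --name-only` (hypothetical paths)
changed='packages/api/src/index.ts
packages/utils/math.ts
docs/readme.md'

# Same transformation as the workflow step: keep packages/ paths,
# extract the package directory name, de-duplicate, emit a JSON array
packages=$(printf '%s\n' "$changed" \
  | grep '^packages/' \
  | cut -d'/' -f2 \
  | sort -u \
  | jq -R -s -c 'split("\n") | map(select(. != ""))')

echo "$packages"   # ["api","utils"]
```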
Monorepo caching has a wrinkle that single-package repos don't. If your matrix strategy runs separate jobs for each package, each job installs dependencies independently. In a monorepo with hoisted node_modules, that means every matrix leg downloads and installs the entire dependency tree.
actions/setup-node with cache: 'npm' handles this well enough for most teams. It caches the download cache, so npm ci still runs a clean install but reads packages from a local cache instead of downloading them. The key is based on your lockfile, so it's shared across all matrix legs and across PRs.
For tighter control, you can cache node_modules directly and skip npm ci on a hit. This saves 20-40 seconds per job. The tradeoff: npm ci guarantees your install matches the lockfile, while restoring a cached node_modules might not. Choose based on your tolerance for reproducibility drift.
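A sketch of that pattern, assuming a root-hoisted node_modules and npm; the step id is illustrative:

```yaml
- uses: actions/cache@v4
  id: modules-cache
  with:
    path: node_modules
    key: modules-${{ runner.os }}-${{ hashFiles('package-lock.json') }}

# Skip the install entirely when the cached tree matches the lockfile
- if: steps.modules-cache.outputs.cache-hit != 'true'
  run: npm ci
```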
With Turborepo's remote cache the calculus changes again. Turborepo caches task outputs, not dependencies. Multiple matrix legs might all install dependencies, but Turborepo skips the actual build/test work on cache hit. The install is duplicated but the expensive computation isn't.
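For those cache hits to happen, each task's outputs have to be declared in turbo.json so Turborepo knows what to restore instead of re-running the task. A minimal sketch using Turborepo 2.x syntax:

```json
{
  "$schema": "https://turbo.build/schema.json",
  "tasks": {
    "build": {
      "dependsOn": ["^build"],
      "outputs": ["dist/**"]
    },
    "test": {
      "dependsOn": ["build"],
      "outputs": []
    }
  }
}
```

The dependsOn: ["^build"] entry is what ties the cache to the dependency graph: a package's input hash includes its dependencies' outputs, so a change in utils invalidates api's cached build too.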
When you split CI into separate jobs per package, you often need one job's output as another job's input. The classic case: build produces compiled artifacts that deploy needs.
GitHub Actions artifacts handle this. In a monorepo context, name your artifacts by package to avoid collisions:
build:
  needs: detect
  if: needs.detect.outputs.packages != '[]'
  strategy:
    matrix:
      package: ${{ fromJson(needs.detect.outputs.packages) }}
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - uses: actions/setup-node@v4
      with:
        node-version: 22
        cache: 'npm'
    - run: npm ci
    - run: npm run build --workspace=packages/${{ matrix.package }}
    - uses: actions/upload-artifact@v4
      with:
        name: build-${{ matrix.package }}
        path: packages/${{ matrix.package }}/dist
        retention-days: 1

deploy:
  # detect must be listed in needs, or its outputs aren't
  # available to this job's matrix expression
  needs: [detect, build]
  strategy:
    matrix:
      package: ${{ fromJson(needs.detect.outputs.packages) }}
  runs-on: ubuntu-latest
  steps:
    - uses: actions/download-artifact@v4
      with:
        name: build-${{ matrix.package }}
        path: dist/
    - run: echo "Deploying ${{ matrix.package }}"

Set retention-days: 1 for artifacts that only exist to pass data between jobs. There's no reason to keep build outputs for 90 days (the default), and artifact storage counts against your GitHub storage quota.
Here's a complete workflow that ties everything together. Turborepo handles dependency-aware task execution, a detect step gates deployments, and quality checks are separate from build/deploy:
name: Monorepo CI

on:
  pull_request:
  push:
    branches: [main]

env:
  TURBO_TOKEN: ${{ secrets.TURBO_TOKEN }}
  TURBO_TEAM: ${{ vars.TURBO_TEAM }}

jobs:
  quality:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - uses: actions/setup-node@v4
        with:
          node-version: 22
          cache: 'npm'
      - run: npm ci
      - run: npx turbo run lint check-types test --affected

  detect-deployables:
    runs-on: ubuntu-latest
    outputs:
      api: ${{ steps.check-api.outputs.result }}
      web: ${{ steps.check-web.outputs.result }}
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 2
      - id: check-api
        run: |
          npx turbo-ignore api \
            && echo "result=skip" >> $GITHUB_OUTPUT \
            || echo "result=deploy" >> $GITHUB_OUTPUT
      - id: check-web
        run: |
          npx turbo-ignore web \
            && echo "result=skip" >> $GITHUB_OUTPUT \
            || echo "result=deploy" >> $GITHUB_OUTPUT

  build-api:
    needs: [quality, detect-deployables]
    if: >
      github.ref == 'refs/heads/main' &&
      needs.detect-deployables.outputs.api == 'deploy'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 22
          cache: 'npm'
      - run: npm ci
      - run: npx turbo run build --filter=api
      - uses: actions/upload-artifact@v4
        with:
          name: api-build
          path: packages/api/dist
          retention-days: 1

  build-web:
    needs: [quality, detect-deployables]
    if: >
      github.ref == 'refs/heads/main' &&
      needs.detect-deployables.outputs.web == 'deploy'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 22
          cache: 'npm'
      - run: npm ci
      - run: npx turbo run build --filter=web
      - uses: actions/upload-artifact@v4
        with:
          name: web-build
          path: packages/web/dist
          retention-days: 1

  deploy-api:
    needs: build-api
    runs-on: ubuntu-latest
    environment: production
    steps:
      - uses: actions/download-artifact@v4
        with:
          name: api-build
          path: dist/
      - run: echo "Deploy API to production"

  deploy-web:
    needs: build-web
    runs-on: ubuntu-latest
    environment: production
    steps:
      - uses: actions/download-artifact@v4
        with:
          name: web-build
          path: dist/
      - run: echo "Deploy web to production"

The quality job runs on every push and PR. It lints, type-checks, and tests all affected packages in a single job, leveraging Turborepo's parallelization and remote cache. This is the fast feedback loop.
Build and deploy jobs only run on main, and only for packages that changed. The turbo-ignore checks use fetch-depth: 2 (the parent commit is enough) and don't need npm ci, keeping the detection step fast.
The right strategy depends on your repo's size and how tangled the dependency graph is.
2-3 packages with minimal cross-dependencies? dorny/paths-filter with manual dependency paths is probably enough. Simple, no build tool dependency, and the manual mapping is manageable.
5-15 packages with shared libraries? Turborepo's --affected flag is the sweet spot. Minimal config, automatic dependency awareness, and remote cache for tasks that hit across PRs.
15+ packages or polyglot repos? Nx with its project graph, task distribution, and affected analysis handles scale. The tradeoff is more configuration and a steeper learning curve, but the tooling is built for enterprise-scale monorepos.
Regardless of which tool you pick, the principle is the same: let the dependency graph drive your CI decisions. Hardcoding paths in YAML is a maintenance liability. The closer your CI strategy is to your actual code structure, the less it breaks when that structure evolves.
