
A complete guide to building, tagging, caching, and pushing Docker images to GHCR, ECR, and Docker Hub from GitHub Actions workflows.
Getting a Docker build to work in GitHub Actions takes about ten minutes. Getting it to run fast, authenticate securely to multiple registries, produce multi-arch images, and reject vulnerable base layers before anything reaches production? That takes considerably longer. The gap between a hello-world docker build and a production-grade container CI pipeline is poorly documented end-to-end.
This guide covers all of it: Dockerfile context, layer caching strategies, multi-platform builds, registry authentication for GHCR, ECR, and Docker Hub, tagging conventions, and vulnerability scanning. Every example uses current action versions (build-push-action@v7, setup-buildx-action@v4) and the v2 GitHub cache API that became mandatory in April 2025.
Every container CI workflow in GitHub Actions revolves around three official Docker actions maintained by Docker, Inc.:
- docker/setup-buildx-action@v4 creates a BuildKit builder instance. Without it, you're stuck with the legacy builder and no cache export support.
- docker/build-push-action@v7 runs the actual build. It wraps docker buildx build with inputs for context path, Dockerfile location, build args, target platforms, cache backends, and push behavior.
- docker/login-action@v4 authenticates to a container registry before push.

Here's a minimal workflow that builds and pushes to Docker Hub on every commit to main:
name: Build and push
on:
  push:
    branches: [main]
jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v4
      - name: Login to Docker Hub
        uses: docker/login-action@v4
        with:
          username: ${{ vars.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Build and push
        uses: docker/build-push-action@v7
        with:
          push: true
          tags: myorg/myapp:latest

A few things to notice. The workflow doesn't need actions/checkout if you don't specify a context input; by default, build-push-action uses a Git context (it clones the repo itself via BuildKit). If you set context: ., you'll need a checkout step first. Both approaches work, but Git context ships only committed files, so untracked artifacts sitting in the runner's working directory never leak into the build.
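If you do want the checked-out working tree as the build context (say, because a previous step generates files the Dockerfile copies in), the path-context variant looks like this sketch (myorg/myapp is a placeholder image name):

```yaml
steps:
  - name: Checkout
    uses: actions/checkout@v4
  - name: Set up Docker Buildx
    uses: docker/setup-buildx-action@v4
  - name: Build and push
    uses: docker/build-push-action@v7
    with:
      # Use the runner's working directory instead of a Git context
      context: .
      push: true
      tags: myorg/myapp:latest
```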
You'll also want to gate pushes on pull requests. Building PR images is fine for testing, but pushing them to a registry pollutes your tags. A common pattern:
- name: Build and push
  uses: docker/build-push-action@v7
  with:
    push: ${{ github.event_name != 'pull_request' }}
    tags: myorg/myapp:latest

This still runs the full build on PRs (good for catching Dockerfile errors early) but only pushes on merge to main.
Each registry has a different authentication model. Here's how to handle all three without storing long-lived credentials as secrets.
GHCR is the easiest. The built-in GITHUB_TOKEN has packages: write permission, so you don't need any repository secrets at all:
permissions:
  packages: write
steps:
  - name: Login to GHCR
    uses: docker/login-action@v4
    with:
      registry: ghcr.io
      username: ${{ github.actor }}
      password: ${{ secrets.GITHUB_TOKEN }}

Your image tags follow the pattern ghcr.io/OWNER/IMAGE:TAG. No PATs, no secret rotation. The token is scoped to the workflow run and expires automatically.
For ECR, the modern approach uses OIDC federation to assume an IAM role. No AWS_ACCESS_KEY_ID or AWS_SECRET_ACCESS_KEY stored in GitHub secrets. You configure an IAM identity provider for token.actions.githubusercontent.com in your AWS account, create a role with ECR push permissions, then reference it:
permissions:
  id-token: write
  contents: read
steps:
  - name: Configure AWS credentials
    uses: aws-actions/configure-aws-credentials@v4
    with:
      role-to-assume: arn:aws:iam::123456789012:role/github-actions-ecr
      aws-region: us-east-1
  - name: Login to Amazon ECR
    id: ecr-login
    uses: aws-actions/amazon-ecr-login@v2
  - name: Build and push
    uses: docker/build-push-action@v7
    with:
      push: true
      tags: ${{ steps.ecr-login.outputs.registry }}/myapp:${{ github.sha }}

The id-token: write permission is what allows the runner to request an OIDC token. AWS validates it against the trust policy on the role, and the resulting session credentials last only for the duration of the job. No static keys to rotate, no secrets to leak.
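The trust policy on the IAM role is what locks the OIDC token to your repository. A minimal sketch (the account ID, org, and repo names are placeholders; tighten or loosen the sub condition to match your branch or environment strategy):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::123456789012:oidc-provider/token.actions.githubusercontent.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "token.actions.githubusercontent.com:aud": "sts.amazonaws.com"
        },
        "StringLike": {
          "token.actions.githubusercontent.com:sub": "repo:myorg/myapp:ref:refs/heads/main"
        }
      }
    }
  ]
}
```

The sub condition here only allows workflows running on the main branch of myorg/myapp to assume the role; a wildcard like repo:myorg/*:* trades that precision for convenience.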
Docker Hub still requires a username and access token stored in GitHub secrets. Create a read/write token on Docker Hub, then:
steps:
  - name: Login to Docker Hub
    uses: docker/login-action@v4
    with:
      username: ${{ vars.DOCKERHUB_USERNAME }}
      password: ${{ secrets.DOCKERHUB_TOKEN }}

Use a repository variable (vars) for the username and a secret for the token. The username isn't sensitive, and keeping it in vars means you can read it in the Actions UI without re-entering it each time you update the token.
To push to multiple registries in the same workflow, add a login step for each registry and list all the image refs under tags; build-push-action pushes to every registry in a single step.
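For example, a sketch of a single build step that pushes the same image to both Docker Hub and GHCR (image names are placeholders; each registry needs its own preceding login step):

```yaml
- name: Build and push to both registries
  uses: docker/build-push-action@v7
  with:
    push: true
    # One build, two destinations: same digest lands in both registries
    tags: |
      myorg/myapp:latest
      ghcr.io/myorg/myapp:latest
```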
Without caching, every build starts from scratch. For a Node.js or Go project with a multi-stage Dockerfile, that can mean three to eight minutes just reinstalling dependencies that haven't changed. BuildKit supports several cache backends through the cache-from and cache-to inputs. Here are the three that matter in GitHub Actions.
This is the simplest option and the right default for most teams. It stores cache blobs in GitHub's cache service (the same one actions/cache uses), requires no extra authentication, and works out of the box:
- name: Build and push
  uses: docker/build-push-action@v7
  with:
    push: true
    tags: myorg/myapp:latest
    cache-from: type=gha
    cache-to: type=gha,mode=max

The mode=max flag is critical. Without it, BuildKit only exports the layers from the final stage of a multi-stage Dockerfile. With mode=max, all intermediate layers get cached too, which is what you want for multi-stage builds where the build stage changes much less often than the final stage.
Limitations: GitHub gives each repository 10 GB of cache space. Older entries get evicted automatically. If your image layers are large (500 MB+), you may churn through that budget quickly. Also, cache is scoped to the branch by default, so a feature branch can't reuse the main branch's cache unless you adjust the scope.
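If branch scoping is the problem, the gha backend accepts a scope parameter, so every branch can read from and write to a shared cache key. A sketch (the scope name shared is arbitrary):

```yaml
- name: Build and push
  uses: docker/build-push-action@v7
  with:
    push: true
    tags: myorg/myapp:latest
    # Same scope on both sides lets feature branches reuse main's cache
    cache-from: type=gha,scope=shared
    cache-to: type=gha,mode=max,scope=shared
```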
The registry cache pushes cache layers as a separate image tag (like myapp:buildcache) to your container registry. This decouples cache from GitHub's storage limits and makes cache available outside GitHub Actions entirely.
- name: Build and push
  uses: docker/build-push-action@v7
  with:
    push: true
    tags: myorg/myapp:latest
    cache-from: type=registry,ref=myorg/myapp:buildcache
    cache-to: type=registry,ref=myorg/myapp:buildcache,mode=max

This is the best option if your images are large, you have multiple CI systems, or you're hitting GitHub's 10 GB limit. The downside is that it requires push access to the registry for the cache image, and the initial cache push adds a few seconds to each build. For most teams the tradeoff is worth it.
Inline cache embeds cache metadata directly in the image itself. It's the simplest registry-backed option, but it only supports mode=min (final stage only). For single-stage Dockerfiles this works fine. For multi-stage builds, it misses the intermediate layers that often take the longest, so you'll see poor cache hit rates.
- name: Build and push
  uses: docker/build-push-action@v7
  with:
    push: true
    tags: myorg/myapp:latest
    cache-from: type=registry,ref=myorg/myapp:latest
    cache-to: type=inline

Which should you pick? Start with type=gha. If you're hitting cache eviction issues or need shared cache across systems, switch to type=registry. Avoid type=inline unless you have a simple single-stage build and want zero configuration.
If you're deploying to AWS Graviton instances, Apple Silicon dev machines, or any ARM64 infrastructure, you need multi-platform images. BuildKit handles this with docker/setup-qemu-action for CPU emulation and the platforms input on build-push-action.
steps:
  - name: Set up QEMU
    uses: docker/setup-qemu-action@v4
  - name: Set up Docker Buildx
    uses: docker/setup-buildx-action@v4
  - name: Build and push
    uses: docker/build-push-action@v7
    with:
      platforms: linux/amd64,linux/arm64
      push: true
      tags: myorg/myapp:latest
      cache-from: type=gha
      cache-to: type=gha,mode=max

QEMU emulation is straightforward but slow. An ARM64 build that takes two minutes natively can take ten or more under QEMU on an AMD64 runner. For occasional builds that's acceptable. For high-frequency pushes, you have two better options.
Option 1: Split builds across runners. Use a matrix strategy with ubuntu-latest (AMD64) and ubuntu-24.04-arm (ARM64) runners to build each platform natively, then merge the manifests in a final job. Each architecture builds at native speed with no emulation overhead.
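A condensed sketch of the matrix approach, following the push-by-digest pattern from Docker's multi-platform documentation (image names are placeholders, and the digest handoff between jobs is elided for brevity; a real workflow passes digests via artifacts):

```yaml
jobs:
  build:
    strategy:
      matrix:
        include:
          - platform: linux/amd64
            runner: ubuntu-latest
          - platform: linux/arm64
            runner: ubuntu-24.04-arm
    runs-on: ${{ matrix.runner }}
    steps:
      - uses: docker/setup-buildx-action@v4
      - uses: docker/login-action@v4
        with:
          username: ${{ vars.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Build and push by digest
        uses: docker/build-push-action@v7
        with:
          platforms: ${{ matrix.platform }}
          # Push layers untagged; the merge job assembles the manifest
          outputs: type=image,name=myorg/myapp,push-by-digest=true,push=true
  merge:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - uses: docker/login-action@v4
        with:
          username: ${{ vars.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Create multi-platform manifest
        # <amd64-digest> / <arm64-digest> stand in for the digests
        # emitted by the build jobs above
        run: |
          docker buildx imagetools create -t myorg/myapp:latest \
            myorg/myapp@<amd64-digest> myorg/myapp@<arm64-digest>
```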
Option 2: Docker GitHub Builder. Docker recently released a reusable workflow (docker/github-builder/.github/workflows/build.yml@v1) that handles the matrix split and manifest merge for you. You pass in your platforms and registry credentials, and it does the rest:
jobs:
  build:
    uses: docker/github-builder/.github/workflows/build.yml@v1
    permissions:
      contents: read
      id-token: write
    with:
      output: image
      push: true
      platforms: linux/amd64,linux/arm64
      meta-images: myorg/myapp
      meta-tags: |
        type=ref,event=branch
        type=semver,pattern={{version}}
    secrets:
      registry-auths: |
        - registry: docker.io
          username: ${{ vars.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

With default settings, the builder distributes one platform per runner and assembles the final multi-platform manifest automatically.
Tagging :latest on every push works for development but breaks down fast. You can't trace a running container back to a commit, can't roll back to a known good version, and can't tell whether two environments are running the same code. Use docker/metadata-action to generate tags automatically from Git context:
steps:
  - name: Docker meta
    id: meta
    uses: docker/metadata-action@v6
    with:
      images: |
        myorg/myapp
        ghcr.io/myorg/myapp
      tags: |
        type=ref,event=branch
        type=ref,event=pr
        type=semver,pattern={{version}}
        type=semver,pattern={{major}}.{{minor}}
        type=sha
  - name: Build and push
    uses: docker/build-push-action@v7
    with:
      push: ${{ github.event_name != 'pull_request' }}
      tags: ${{ steps.meta.outputs.tags }}
      labels: ${{ steps.meta.outputs.labels }}

Here's what each rule gives you:

- type=sha produces a tag like sha-a1b2c3d. Every build gets a unique, immutable tag tied to its source commit. This is the tag you should deploy with.
- type=semver extracts version numbers from Git tags. If you push a v1.2.3 tag, you get 1.2.3 and 1.2 as Docker tags.
- type=ref,event=branch gives you a mutable tag like main. Useful for "always pull the latest from this branch" scenarios, but don't depend on it for production deployments.

The metadata action also injects OCI labels like org.opencontainers.image.source and org.opencontainers.image.revision into the image. These make it trivial to trace a running container back to its source code and build workflow.
Pushing a vulnerable image to a registry and then scanning it defeats the purpose. The scan should happen before the push, and the workflow should fail if HIGH or CRITICAL CVEs are found. Two tools dominate this space: Trivy (by Aqua Security) and Grype (by Anchore).
The trick is to build the image first without pushing, export it as a tarball, scan it, then push only if the scan passes. Here's the pattern with Trivy:
steps:
  - name: Set up Docker Buildx
    uses: docker/setup-buildx-action@v4
  - name: Build image (no push)
    uses: docker/build-push-action@v7
    with:
      load: true
      tags: myorg/myapp:scan
      cache-from: type=gha
      cache-to: type=gha,mode=max
  - name: Scan for vulnerabilities
    uses: aquasecurity/trivy-action@master
    with:
      image-ref: myorg/myapp:scan
      format: table
      exit-code: 1
      severity: HIGH,CRITICAL
  - name: Login and push
    if: success()
    uses: docker/login-action@v4
    with:
      registry: ghcr.io
      username: ${{ github.actor }}
      password: ${{ secrets.GITHUB_TOKEN }}
  - name: Push image
    if: success()
    uses: docker/build-push-action@v7
    with:
      push: true
      tags: ghcr.io/myorg/myapp:${{ github.sha }}
      cache-from: type=gha

The exit-code: 1 setting makes Trivy return a non-zero exit code when it finds vulnerabilities matching the severity filter. The subsequent push steps only run if the scan passes.
Grype works the same way but uses anchore/scan-action instead. Both produce SARIF output that you can upload to GitHub's code scanning dashboard for persistent tracking across builds.
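A sketch of the Grype equivalent, including the SARIF upload (inputs follow anchore/scan-action's documented interface; treat the action versions and image name as assumptions):

```yaml
- name: Scan with Grype
  id: grype
  uses: anchore/scan-action@v4
  with:
    image: myorg/myapp:scan
    # Fail the job on findings at or above the cutoff
    fail-build: true
    severity-cutoff: high
    output-format: sarif
- name: Upload SARIF to code scanning
  # Upload even when the scan fails, so findings reach the dashboard
  if: always()
  uses: github/codeql-action/upload-sarif@v3
  with:
    sarif_file: ${{ steps.grype.outputs.sarif }}
```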
Here's a production-grade workflow that builds a multi-platform image, caches layers via the GitHub Actions cache, generates proper tags from Git context, scans for vulnerabilities, and pushes to both GHCR and Docker Hub:
name: Container CI
on:
  push:
    branches: [main]
    tags: ['v*']
  pull_request:
    branches: [main]
permissions:
  contents: read
  packages: write
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v4
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v4
      - name: Docker meta
        id: meta
        uses: docker/metadata-action@v6
        with:
          images: |
            myorg/myapp
            ghcr.io/${{ github.repository }}
          tags: |
            type=ref,event=branch
            type=ref,event=pr
            type=semver,pattern={{version}}
            type=semver,pattern={{major}}.{{minor}}
            type=sha
      - name: Build image for scanning
        uses: docker/build-push-action@v7
        with:
          context: .
          load: true
          tags: myorg/myapp:scan
          cache-from: type=gha
          cache-to: type=gha,mode=max
      - name: Scan with Trivy
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: myorg/myapp:scan
          format: table
          exit-code: 1
          severity: HIGH,CRITICAL
      - name: Login to GHCR
        if: github.event_name != 'pull_request'
        uses: docker/login-action@v4
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Login to Docker Hub
        if: github.event_name != 'pull_request'
        uses: docker/login-action@v4
        with:
          username: ${{ vars.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Build and push
        if: github.event_name != 'pull_request'
        uses: docker/build-push-action@v7
        with:
          context: .
          platforms: linux/amd64,linux/arm64
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          cache-from: type=gha
          cache-to: type=gha,mode=max

A few things to note about this workflow. The scan step uses load: true to build a single-platform image into the local Docker daemon for Trivy to inspect. The final push step then rebuilds with platforms for multi-arch, but because of the GHA cache, it reuses all the layers from the scan build. The overhead of building "twice" is negligible.
PRs get the full build and scan, but nothing gets pushed. Pushes to main get a branch tag and a SHA tag. Pushing a v1.2.3 Git tag additionally produces semver Docker tags. All images land in both GHCR and Docker Hub with identical digests.
Swap type=gha for type=registry when you outgrow the 10 GB cache budget, add ECR login if you need a third registry, and consider splitting the multi-platform build across native runners once QEMU becomes the bottleneck. The workflow structure stays the same.
