
JavaScript actions start fast and run everywhere, Docker actions control the full environment, and composite actions glue shell steps together. Here's how to choose.
You've probably consumed hundreds of GitHub Actions. The checkout action, the setup-node action, that one caching action everyone copies from Stack Overflow. But at some point, a reusable workflow isn't cutting it anymore. You're copying the same 30-line shell block across twelve repositories, and every time someone finds a bug, they fix it in one place and forget the other eleven.
That's when you need a custom action. A self-contained, versioned, testable unit of CI logic that any workflow can reference with a single uses: line. GitHub gives you three ways to build one: JavaScript, Docker container, and composite. They solve the same problem but make very different tradeoffs around startup speed, environment control, and portability.
Reusable workflows are great for sharing entire job definitions across repos. But they operate at the job level, not the step level. You can't embed a reusable workflow as a step inside another job. You can't call one mid-job between your checkout and your test run. And the caller workflow can only pass inputs and secrets to it, not share the runner filesystem.
Custom actions fill that gap. They're step-level abstractions. They can read files from the workspace, set environment variables for subsequent steps, produce outputs that other steps consume, and run on the same runner as the rest of the job. If your shared logic needs to interact with the job's filesystem or needs to be a step in a larger sequence, an action is the right choice.
The other signal: you want versioned interfaces. An action has declared inputs and outputs in its action.yml metadata file. Consumers pin to a tag or SHA. You can change the internals without breaking callers, as long as the interface stays stable. That's a much cleaner contract than "copy this YAML block and hope it still works next month."
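From the consumer's side, that contract looks like any other step. A sketch with a hypothetical org action (`my-org/deploy-action` and its inputs are illustrative, not a real action):

```yaml
steps:
  - uses: actions/checkout@v4

  # Pin to the floating major tag (or a full SHA for stricter reproducibility)
  - uses: my-org/deploy-action@v1
    id: deploy
    with:
      target: staging

  # Consume the action's declared output in a later step
  - run: echo "Deployed to ${{ steps.deploy.outputs.artifact-url }}"
```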
All three action types share the same metadata structure. Every action needs an action.yml (or action.yaml) at the repository root. It declares the action's name, description, inputs, outputs, and how to run it. The runs section is where the three types diverge.
```yaml
name: 'My Custom Action'
description: 'Does something useful for the org'
inputs:
  target:
    description: 'Deployment target'
    required: true
  dry-run:
    description: 'Skip actual deployment'
    required: false
    default: 'false'
outputs:
  artifact-url:
    description: 'URL of the deployed artifact'
runs:
  # This section changes depending on action type
  using: 'node20'      # JavaScript
  # using: 'docker'    # Docker
  # using: 'composite' # Composite
```

Inputs are available to your action code via environment variables (`INPUT_TARGET`, `INPUT_DRY-RUN`) or through the Actions toolkit in JavaScript. Outputs are set by writing to `$GITHUB_OUTPUT`, or with `core.setOutput()` in JavaScript.
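You can see the whole protocol without any toolkit at all. The sketch below simulates what the runner does, with illustrative values (`out.txt` stands in for the temp file GitHub normally points `$GITHUB_OUTPUT` at):

```shell
# Inputs arrive as INPUT_* environment variables (uppercased, with
# spaces replaced by underscores; hyphenated names like INPUT_DRY-RUN
# can't be exported from a POSIX shell, which is one reason the toolkit exists).
export INPUT_TARGET='staging'

# Outputs are key=value lines appended to the file named by $GITHUB_OUTPUT.
export GITHUB_OUTPUT='out.txt'

echo "artifact-url=https://example.com/${INPUT_TARGET}" >> "$GITHUB_OUTPUT"
cat "$GITHUB_OUTPUT"
```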
JavaScript actions run directly on the runner's Node.js runtime. No container pull, no image build. The runner downloads your action's repository (or the specific ref), and immediately executes the entry point you specify. Cold start is effectively zero beyond the git clone.
The runs section for a JavaScript action looks like this:
```yaml
runs:
  using: 'node20'
  main: 'dist/index.js'
  pre: 'dist/setup.js'     # optional: runs before main
  post: 'dist/cleanup.js'  # optional: runs after the job completes
```

The `using: 'node20'` field tells the runner which Node.js version to use. As of early 2026, `node20` is the current supported runtime. GitHub has already deprecated `node12` and `node16`, so expect to update this field when Node 22 support arrives.
GitHub provides the @actions/toolkit set of npm packages for JavaScript actions. The two you'll use in every action are @actions/core (for reading inputs, setting outputs, logging, and failure handling) and @actions/github (for authenticated Octokit REST/GraphQL access and event context).
```javascript
import * as core from '@actions/core';
import * as github from '@actions/github';

// Exported so tests can import it; the bundled entry point just calls run().
export async function run() {
  try {
    const target = core.getInput('target', { required: true });
    const dryRun = core.getBooleanInput('dry-run');

    core.info(`Deploying to ${target}`);

    if (dryRun) {
      core.notice('Dry run mode — skipping actual deployment');
      core.setOutput('artifact-url', 'https://example.com/dry-run');
      return;
    }

    // Your deployment logic here
    const url = await deploy(target);
    core.setOutput('artifact-url', url);
  } catch (error) {
    core.setFailed(`Action failed: ${error.message}`);
  }
}
```

The toolkit also includes `@actions/exec` for running shell commands, `@actions/tool-cache` for downloading and caching binary tools, `@actions/cache` for the Actions cache API, and `@actions/artifact` for uploading workflow artifacts. You won't need all of these, but they save you from writing HTTP calls against GitHub's internal APIs.
Here's the catch that trips up first-time action authors: the runner doesn't run npm install for your action. It downloads the repository contents and executes the entry point directly. That means your node_modules need to either be committed (don't do this) or your code needs to be bundled into a single file.
The standard approach is to use @vercel/ncc or rollup to compile everything into a single dist/index.js file that includes all dependencies. You commit the dist/ directory to the repo. Yes, it feels wrong to commit build artifacts. It's how the ecosystem works.
```bash
# Using @vercel/ncc
npm install --save-dev @vercel/ncc
npx ncc build src/index.ts -o dist

# Using rollup
npm install --save-dev rollup @rollup/plugin-commonjs @rollup/plugin-node-resolve
npx rollup --config rollup.config.js
```

GitHub's official template repos (`actions/javascript-action` and `actions/typescript-action`) come pre-configured with this bundling setup, including a CI workflow that checks the `dist/` folder is up to date.
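That check is worth copying even if you skip the rest of the template. A minimal sketch (filenames match the ncc example above; adjust to your layout):

```yaml
# .github/workflows/check-dist.yml
on: [push, pull_request]

jobs:
  check-dist:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npx ncc build src/index.ts -o dist
      # Fail if the committed dist/ doesn't match a fresh build
      - run: git diff --exit-code dist/
```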
JavaScript actions run on all three GitHub-hosted runner OSes: Ubuntu, macOS, and Windows. That's the biggest advantage over Docker actions. If your org has teams running workflows on different platforms, JavaScript is the only action type that works everywhere without caveats.
The constraint is that your code must be pure JavaScript. You can't depend on native binaries that aren't already in the runner image. If you need ffmpeg or python3 or some specific version of terraform, you'd need to download it at runtime (the @actions/tool-cache package helps here) or accept that you're in Docker territory.
Docker container actions package the entire runtime environment. You define a Dockerfile, and GitHub builds (or pulls) the image at workflow runtime. Your action code can be written in any language: Python, Go, Rust, Bash, or anything else you can put in a container.
```yaml
runs:
  using: 'docker'
  image: 'Dockerfile'
  args:
    - ${{ inputs.target }}
    - ${{ inputs.dry-run }}
```

You can point `image` at a local Dockerfile (built fresh each run) or at a pre-built image on a registry, like `docker://ghcr.io/my-org/my-action:v1`. Using a pre-built image skips the build step and reduces cold-start time, but you have to maintain image publishing separately.
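The pre-built variant only changes the `image` line (the registry path here is illustrative):

```yaml
runs:
  using: 'docker'
  image: 'docker://ghcr.io/my-org/my-action:v1'
  args:
    - ${{ inputs.target }}
```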
This is the tradeoff you accept with Docker actions. When using a local Dockerfile, the runner has to build the image from scratch on every workflow run. For a slim Alpine-based image, that might be 10-15 seconds. For something based on a large base image with lots of dependencies, you're looking at 30 seconds to over a minute before your action code even starts.
Pre-built images avoid the build step but still need to be pulled. A 200MB image pull adds 5-10 seconds depending on network conditions. Compare that to a JavaScript action that starts in under a second.
For actions that run once in a 10-minute workflow, this overhead is noise. For actions that run on every push to a busy monorepo, it adds up.
Docker container actions only run on Linux runners. No macOS, no Windows. If your org needs cross-platform support, Docker isn't an option for the action itself. Self-hosted runners also need Docker installed and running, which is an additional infrastructure requirement.
The runner automatically mounts GITHUB_WORKSPACE into the container at /github/workspace. Files you write there persist to subsequent steps in the job. Outputs are written to $GITHUB_OUTPUT the same way as in any action.
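Inside the container, the same file-based protocol applies. Here's a shell sketch of an entrypoint that reads an input, writes into the workspace, and emits an output (the fallback defaults are only there so the script also runs locally, outside a runner):

```shell
#!/bin/sh
set -e

# The runner sets these; default them for local testing.
: "${GITHUB_WORKSPACE:=.}"
: "${GITHUB_OUTPUT:=output.txt}"
: "${INPUT_TARGET:=staging}"

# Anything written under the workspace persists to later steps in the job.
echo "deployed to ${INPUT_TARGET}" > "${GITHUB_WORKSPACE}/deploy.log"

# Outputs work exactly as in the other action types.
echo "artifact-url=https://example.com/${INPUT_TARGET}" >> "${GITHUB_OUTPUT}"
```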
```dockerfile
FROM python:3.12-slim
COPY requirements.txt /requirements.txt
RUN pip install --no-cache-dir -r /requirements.txt
COPY entrypoint.py /entrypoint.py
ENTRYPOINT ["python", "/entrypoint.py"]
```

Composite actions are the simplest type. They don't have their own runtime. Instead, they're a sequence of `run` steps (shell commands) and `uses` steps (other actions) defined directly in action.yml. They execute on the runner exactly like regular workflow steps, with zero overhead.
```yaml
runs:
  using: 'composite'
  steps:
    - name: Install dependencies
      shell: bash
      run: npm ci
    - name: Run linter
      shell: bash
      run: npm run lint
    - name: Run tests
      shell: bash
      run: npm test
    - name: Upload coverage
      uses: actions/upload-artifact@v4
      with:
        name: coverage
        path: coverage/
```

Every `run` step must declare a `shell` field. This is required in composite actions even though it's optional in regular workflows. You can use `bash`, `pwsh`, `python`, or `cmd` depending on the runner OS.
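One wrinkle the example above doesn't show: a composite action's outputs aren't picked up automatically. You have to map each one in action.yml to a specific step's output via a `value` expression (the step and output names here are illustrative):

```yaml
outputs:
  coverage-pct:
    description: 'Total line coverage percentage'
    value: ${{ steps.report.outputs.coverage-pct }}

runs:
  using: 'composite'
  steps:
    - id: report
      shell: bash
      run: echo "coverage-pct=87" >> "$GITHUB_OUTPUT"
```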
Composite actions that call other actions with uses: have a dependency versioning problem. If your composite action pins actions/checkout@v4 and a new major version drops, you have to update your action and cut a new release. Consumers of your action don't control which version of checkout runs internally. That can create surprising behavior if you let transitive dependency versions drift.
Best practice: pin all uses: references to full SHAs, not tags. Use Dependabot or Renovate to keep them updated. This gives you reproducibility without falling behind.
Composite actions don't support pre: or post: lifecycle hooks. If you need setup/teardown logic that runs before and after the job regardless of step failures, you'll need a JavaScript or Docker action. They also don't support conditionals (if:) at the action level, though individual steps within the composite action can use if: conditions. Error handling is also coarser: if one step fails, the whole action fails. There's no built-in try/catch equivalent like JavaScript's core.setFailed().
Here's how the three types stack up on the dimensions that matter most for platform teams:

| | JavaScript | Docker | Composite |
|---|---|---|---|
| Startup overhead | Near zero | Image build or pull (seconds to a minute) | None |
| OS support | Linux, macOS, Windows | Linux only | Wherever its steps run |
| Languages | JavaScript/TypeScript | Any | Shell plus other actions |
| pre/post hooks | Yes | Yes | No |
| Testing | Unit tests (Jest/Vitest) | Local `docker run` | Integration workflows only |
If you're building an action for the public, the GitHub Marketplace is the discovery mechanism. Publishing is straightforward: your repository must be public, contain a single action.yml at the root, and have a unique name. When you create a GitHub Release, you'll see an option to publish to the Marketplace. Actions are listed immediately without a review process.
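If you do list on the Marketplace, the tile's icon and color come from an optional `branding` block in action.yml (icon names are drawn from the Feather icon set):

```yaml
branding:
  icon: 'package'
  color: 'blue'
```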
For internal org actions, you have two distribution models. The first is a public repo that anyone can use. Simple, but your action code is visible to the world. The second is an internal or private repository with organization-level access controls. Since 2022, GitHub lets you share actions from private repositories with other repositories in the same organization through repository settings.
The versioning convention that's become standard: maintain a major version tag (like v1) that floats to the latest minor/patch release. Consumers reference my-org/my-action@v1 and get non-breaking updates automatically. For maximum security, consumers can pin to a full commit SHA instead, which Dependabot and Renovate will keep updated.
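Maintaining the floating tag is a force-move on every release. The commands below demonstrate the pattern in a throwaway repo (tag names are illustrative; on a real repo you'd also force-push the moved tag):

```shell
set -e
repo=$(mktemp -d) && cd "$repo" && git init -q

# Cut a release...
git -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m 'release v1.2.3'
git tag v1.2.3

# ...then float the major tag to it.
git tag -f v1 v1.2.3
# git push -f origin v1   # required on the real remote

git rev-parse --short v1  # v1 and v1.2.3 now point at the same commit
```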
Testing custom actions is where the three types diverge most in practice.
JavaScript actions have the easiest testing story. Your action logic is just Node.js code. You can unit test it with Jest or Vitest like any other module. Mock @actions/core to simulate inputs and capture outputs. The GitHub template repos include a test workflow that exercises the action end-to-end on actual runners.
```javascript
// Example test with mocked @actions/core
import { describe, it, expect, vi } from 'vitest';
import * as core from '@actions/core';

vi.mock('@actions/core');

import { run } from '../src/main.js';

describe('deploy action', () => {
  it('sets output on successful deploy', async () => {
    vi.mocked(core.getInput).mockReturnValue('staging');
    vi.mocked(core.getBooleanInput).mockReturnValue(false);

    await run();

    expect(core.setOutput).toHaveBeenCalledWith(
      'artifact-url',
      expect.stringContaining('staging')
    );
  });

  it('skips deploy in dry-run mode', async () => {
    vi.mocked(core.getInput).mockReturnValue('production');
    vi.mocked(core.getBooleanInput).mockReturnValue(true);

    await run();

    expect(core.notice).toHaveBeenCalledWith(
      expect.stringContaining('Dry run')
    );
  });
});
```

Docker actions can be tested locally by building and running the container with the same environment variables that GitHub would set. You can also test the underlying script independently. For Python actions, pytest works. For shell scripts, bats-core is a solid framework. The integration test is running the full container with `docker run` and checking exit codes and output file contents.
Composite actions are the hardest to test in isolation because they're just YAML referencing other steps. There's no code to unit test. Your best bet is an integration test workflow in the same repository that calls the action and verifies its outputs. A tool like act can run workflows locally, but it doesn't perfectly replicate the GitHub runner environment, especially for actions that depend on GitHub-specific context variables.
Regardless of action type, the most reliable end-to-end test is a workflow in the action's own repository that references the action from the current commit. GitHub's template repos include this pattern:
```yaml
# .github/workflows/test-action.yml
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: ./  # Test the action from the current commit
        id: test
        with:
          target: staging
          dry-run: 'true'

      - name: Verify output
        run: |
          if [ -z "${{ steps.test.outputs.artifact-url }}" ]; then
            echo "::error::No artifact-url output set"
            exit 1
          fi
```

Start with composite if your action is basically "run these shell commands in order" or "call these existing actions with specific inputs." Most internal platform actions fall into this category. You don't need a build step, you don't need a Dockerfile, and the YAML is readable by anyone on the team.
Move to JavaScript when you need real programming logic: complex conditionals, API calls, error handling with retries, or pre/post lifecycle hooks. The @actions/toolkit makes GitHub API interactions clean, the testing story is familiar to any Node.js developer, and you get cross-platform support.
Reach for Docker when you need a specific runtime environment that the runner doesn't provide: a particular Python version with specific native packages, a Go binary, a security scanner that requires its own OS-level dependencies. The cold-start cost is real but acceptable when the alternative is "install 15 things in the workflow and hope the versions are right."
One pattern I'd recommend for platform teams: keep a monorepo of your internal actions. Each action gets its own directory with an action.yml, and workflows reference them as my-org/actions/deploy@v1 (the subdirectory syntax). Mix and match types based on what each action needs. The deploy action might be a JavaScript action with lifecycle hooks, the lint-config action might be a simple composite, and the security-scan action might be Docker.
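Consumption with the subdirectory syntax looks like this (the org and action names are the hypothetical ones from above):

```yaml
steps:
  - uses: my-org/actions/lint-config@v1

  - uses: my-org/actions/deploy@v1
    with:
      target: production
```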
