Runners

Which Runner to use?

Learn which GitHub Actions runner CPU and memory combinations to use for each CI workload, and how to optimize performance and cost with the right runner specs.

Choosing the right runner is one of the simplest ways to improve CI performance, reliability, and cost efficiency. Yet it’s often treated as a guess: “bigger runner = faster pipeline”.

In reality, there is no universally “best” runner. The best runner is the one whose CPU and memory profile matches the way your jobs actually run.

This guide explains how to choose the right Tenki runner for your workload, based on how CPU cores and memory affect different types of GitHub Actions jobs.

The core principle: performance depends on your bottleneck

Every CI job is constrained by something:

  • CPU
  • Memory
  • Parallelism
  • Startup time
  • I/O

Runner performance improves only when you scale the resource that is limiting your job. Adding more of the wrong resource increases cost without reducing execution time.

Before choosing a runner, ask one simple question:

What is slowing my job down right now?

How CPU cores affect CI performance

CPU cores determine how much work can happen at the same time.

More cores improve performance when:

  • Tasks can run in parallel
  • The tooling supports concurrency

Common CPU-bound workloads

  • Compiling large codebases (C/C++, Rust, Go)
  • Java or Kotlin builds with Gradle/Maven parallel workers
  • Test suites that shard or run concurrently
  • Pipelines with multiple independent build steps

Rule of thumb

If your job can execute multiple tasks simultaneously, more cores will reduce total runtime.
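As a rough illustration, here is a minimal sketch assuming a project built with make on a Linux runner, and assuming Tenki labels are referenced directly in runs-on like any other runner label. The extra cores only help once the tool is told to use them:

```yaml
# Hypothetical job: parallel compilation only pays off if the build tool uses all cores.
build:
  runs-on: tenki-standard-large-8c-16g   # 8 cores, 16 GB (label from this guide)
  steps:
    - uses: actions/checkout@v4
    - name: Build with one worker per core
      # make -j1 would leave 7 of the 8 cores idle; -j"$(nproc)" scales with the runner.
      run: make -j"$(nproc)"
```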

When more cores don’t help

  • Single-threaded scripts
  • Sequential pipelines
  • Jobs waiting on network or I/O

In these cases, increasing CPU cores won’t make jobs faster; it only increases runner cost.

How memory impacts CI stability and speed

Memory affects how reliably and predictably jobs run.

More memory helps when:

  • Large dependency trees are loaded into RAM
  • Tools rely heavily on caching
  • Multiple processes run at the same time

Memory-intensive workloads

  • Node.js builds (npm, pnpm, Yarn)
  • Frontend builds (Webpack, Vite, Next.js)
  • JVM-based applications
  • Docker image builds
  • Integration and end-to-end tests

Rule of thumb

CPU makes jobs faster. Memory prevents jobs from being slow.

Insufficient memory leads to:

  • Random slowdowns
  • Out-of-memory errors
  • Process restarts
  • Flaky pipelines

Adding memory won’t speed up a CPU-bound job, but too little memory can slow down any job.

CPU vs memory: common CI workload profiles

Lightweight jobs (linting, formatting, scripts)

ESLint, Prettier, shell scripts, static analysis, etc.

Best runner

  • Small runners: tenki-standard-small-2c-4g
  • Low core count, minimal memory

Why

  • Startup time matters more than raw power
  • Over-provisioning wastes resources
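A minimal sketch for this profile, assuming the Tenki label is used directly in runs-on and that the repository lints with ESLint and Prettier:

```yaml
# Hypothetical lint job: a small runner is enough, and startup time dominates total duration.
lint:
  runs-on: tenki-standard-small-2c-4g
  steps:
    - uses: actions/checkout@v4
    - run: npm ci
    - run: npx eslint .
    - run: npx prettier --check .
```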

Build-heavy jobs (compilation, asset bundling)

Large backend builds, frontend production builds, monorepos

Best runner

  • Higher core count: tenki-standard-large-8c-16g or tenki-standard-large-plus-16c-32g

Why

  • Enough memory to avoid swapping
  • Builds scale well with parallelism
  • Memory ensures consistent performance
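A sketch for a Gradle-based build sized to the 8-core runner. The --parallel and --max-workers flags are standard Gradle options, but the worker count here is an assumption you should tune to your module graph:

```yaml
# Hypothetical monorepo build: parallel Gradle workers matched to the runner's core count.
build:
  runs-on: tenki-standard-large-8c-16g
  steps:
    - uses: actions/checkout@v4
    - name: Build all modules in parallel
      run: ./gradlew build --parallel --max-workers=8
```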

Test-heavy jobs (unit, integration, E2E)

Jest, PyTest, JUnit, Cypress, Playwright, service-based test suites, etc.

Best runner

  • Balanced CPU and memory: tenki-standard-medium-4c-8g or tenki-standard-large-8c-16g

Why

  • Tests benefit from parallel execution
  • Multiple processes consume RAM quickly
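A sketch of sharded end-to-end tests, assuming a Playwright suite; --shard is a Playwright CLI option, while the four-way split and runner size are assumptions to adjust based on suite length:

```yaml
# Hypothetical E2E job: four shards, each on a balanced runner, run in parallel by the matrix.
e2e:
  runs-on: tenki-standard-medium-4c-8g
  strategy:
    matrix:
      shard: [1, 2, 3, 4]
  steps:
    - uses: actions/checkout@v4
    - run: npm ci
    - run: npx playwright test --shard=${{ matrix.shard }}/4
```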

Docker and container workflows

Docker builds, multi-stage images, layer caching

Best runner

  • Memory-optimized runners: tenki-standard-large-plus-16c-32g

Why

  • Docker builds consume significant memory
  • Filesystem operations benefit from RAM
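A sketch of a multi-stage image build using the standard Docker Buildx actions with GitHub Actions layer caching; the specific actions and cache mode are one common setup, not the only option:

```yaml
# Hypothetical image build: plenty of memory for BuildKit, with layers cached between runs.
docker-build:
  runs-on: tenki-standard-large-plus-16c-32g
  steps:
    - uses: actions/checkout@v4
    - uses: docker/setup-buildx-action@v3
    - uses: docker/build-push-action@v6
      with:
        context: .
        push: false
        cache-from: type=gha
        cache-to: type=gha,mode=max
```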

Data processing and analytics

ETL jobs, large dataset validation, ML preprocessing

Best runner

  • High-memory runners: tenki-standard-xlarge-32c-128g or tenki-standard-xlarge-plus-64c-256g

Why

  • Dataset size is the primary bottleneck
  • CPU is secondary to memory availability
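A sketch of a memory-bound validation job, assuming a Python script that loads the dataset into RAM; the script path and the memory check step are hypothetical illustrations:

```yaml
# Hypothetical dataset-validation job: the dataset must fit in memory, so RAM is sized first.
validate-dataset:
  runs-on: tenki-standard-xlarge-32c-128g   # 128 GB of memory (label from this guide)
  steps:
    - uses: actions/checkout@v4
    - name: Check available memory before loading the dataset
      run: free -h
    - name: Run validation (loads the full dataset into RAM)
      run: python scripts/validate_dataset.py   # hypothetical script
```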

A practical way to choose the right Tenki runner

If you’re unsure where to start, follow this approach:

  1. Start with the smallest runner that supports your job
  2. Observe:
  • Job duration
  • Failures or retries
  • Variability between runs
  3. Increase CPU cores if jobs are consistently slow
  4. Increase memory if jobs are unstable or flaky
  5. Optimize costs by matching runner size to workload type

This approach ensures you get performance improvements without unnecessary cost increases.
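One way to put this into practice (a sketch, not a prescribed workflow) is to run the same job on two candidate runner sizes and compare duration and stability before settling on one:

```yaml
# Hypothetical comparison run: execute the same build on two runner sizes and compare the results.
compare-runners:
  strategy:
    matrix:
      runner: [tenki-standard-medium-4c-8g, tenki-standard-large-8c-16g]
  runs-on: ${{ matrix.runner }}
  steps:
    - uses: actions/checkout@v4
    - run: npm ci && npm test
```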

Final takeaway

There is no single “best” runner.

The right runner depends on:

  • How parallel your job is
  • How much memory your tools consume
  • Whether performance or cost is your priority

By aligning runner resources with workload behavior, Tenki helps you achieve:

  • Faster GitHub Actions pipelines
  • More predictable CI runs
  • Better cost-to-performance ratios

Choosing the right runner isn’t about using more; it’s about using what fits.