
Image Optimization · Nov 15, 2025 · 8 min read

Concurrency in Image Conversion: Faster Is Not Always Cleaner

Image conversion is a good candidate for parallel work. If a folder contains hundreds of independent images, multiple files can often be processed at the same time. But higher concurrency is not automatically better. Too many parallel conversions can overload CPU, memory, disk, CI runners, or the review process that follows.

The goal is not maximum speed at any cost. The goal is predictable output that your team can review and publish safely.

"Use more workers to go faster" is incomplete image-pipeline advice. A concurrency plan has to name the machine, the image mix, the tool limits, the log evidence, and the human review capacity after the run.

Know What Concurrency Changes

Concurrency controls how many conversion tasks run at once. More workers can reduce total wall-clock time when the machine has enough CPU, memory, and disk bandwidth. But image encoding is resource-heavy, especially for large photos, lossless modes, and modern formats such as AVIF.

When concurrency is too high, you may see:

  • slower individual tasks
  • high fan noise or thermal throttling
  • memory pressure
  • disk contention
  • CI runner instability
  • harder-to-read logs
  • partial failures under load

The fastest setting on one machine may be the wrong setting on another.

Concurrency also does not change the meaning of quality settings. If the command uses auto quality, each file still needs to meet the tool's quality decision. If the command uses a fixed --quality, the same value is applied no matter how many workers are active. More workers produce files sooner; they do not make edges cleaner, screenshots more readable, or crops more appropriate.

Know the Tool Boundary

In GetWebP CLI, custom conversion concurrency is a Pro feature. The CLI commands reference documents --concurrency <number> for the default getwebp <path> command, with a maximum of 32 workers. On the Free plan, --concurrency is ignored: processing is serial, includes the Free delay, and is limited to 20 files per run.

Use an explicit command when concurrency matters:

getwebp ./src/images \
  -o ./dist/images \
  --recursive \
  --concurrency 4 \
  --json > ./conversion.ndjson

That command gives you a stable output folder, recursive input handling, a visible worker setting, and machine-readable evidence. It is much better than relying on a terminal transcript that only says the run felt faster.

For watch mode, do not copy the same assumption blindly. The watch command has its own default worker calculation and is a long-running Pro workflow. This article focuses on batch conversion concurrency, where you can run a bounded job, inspect the final result, and repeat with a lower or higher worker count.

Start With the Environment

A developer laptop, build server, and GitHub Actions runner have different constraints. Before increasing concurrency, ask:

  • how many CPU cores are available?
  • how much memory is free?
  • is the input on a local disk, network drive, or mounted volume?
  • are other build steps running at the same time?
  • does the job also generate responsive variants?
  • does the format require expensive encoding settings?

For local work, a small increase may be useful. For CI, stability and repeatable logs may matter more than shaving a few seconds from one job.

Document the runner alongside the result. "Concurrency 8 was fastest" is not useful unless the reader knows whether it happened on a 12-core desktop, a laptop on battery, a shared CI runner, or a container with restricted CPU shares.
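
That context is cheap to capture with a short script. The filename and fields below are illustrative, not part of any tool:

```shell
# Record machine context next to the benchmark result so the numbers
# can be interpreted later. Filename and fields are illustrative.
{
  echo "date:  $(date -u +%Y-%m-%dT%H:%M:%SZ)"
  echo "uname: $(uname -srm)"
  echo "cores: $(getconf _NPROCESSORS_ONLN 2>/dev/null || echo unknown)"
} > ./bench-environment.txt
cat ./bench-environment.txt
```

Keep the file next to the benchmark numbers it describes.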

Protect Small Machines

Not every contributor has the same hardware. A command that runs well on a powerful workstation can make an older laptop unresponsive. If the team documents a conversion command, choose a setting that works for typical contributors, not only the fastest machine.

GetWebP CLI uses different limits depending on plan and context, including a single-worker free path and higher Pro concurrency with safety caps. Regardless of tool, the same principle applies: the default should be boring and reliable.

For repository scripts, prefer a conservative default and let power users override it locally:

getwebp ./src/images -o ./dist/images --recursive --concurrency 4
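
The local override can be a plain environment variable in the wrapper script. `GETWEBP_CONCURRENCY` here is a hypothetical variable name invented for this sketch, not a flag or variable the CLI itself reads:

```shell
# Conservative default; power users override per machine, e.g.:
#   GETWEBP_CONCURRENCY=8 sh ./scripts/convert-images.sh
CONCURRENCY="${GETWEBP_CONCURRENCY:-4}"
echo "using concurrency: $CONCURRENCY"

# The wrapper then passes the value through:
#   getwebp ./src/images -o ./dist/images --recursive --concurrency "$CONCURRENCY"
```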

Then document when to raise it:

| Situation | Safer choice |
| --- | --- |
| Shared CI runner | Keep concurrency low enough that the rest of the job remains stable |
| Developer laptops | Pick a setting that does not make the machine unusable |
| Network-mounted input | Lower concurrency and watch for slow reads or permission errors |
| Large HEIC or AVIF sources | Start low because decode memory can dominate |
| Mixed assets and uncertain quality | Split the run before tuning worker count |

Use Separate Runs for Different Asset Types

Mixed folders are harder to tune. A batch containing tiny thumbnails, large hero photos, screenshots, and AVIF tests may not have one ideal concurrency setting.

Split work by asset type:

photos/
screenshots/
transparent-assets/

Large photos can run with one setting. Screenshot review can use another. This also makes logs easier to inspect when something fails.

Example:

getwebp ./photos -o ./dist/photos --recursive --concurrency 4 --json > photos.ndjson
getwebp ./screenshots -o ./dist/screenshots --recursive --concurrency 2 --json > screenshots.ndjson
getwebp ./transparent-assets -o ./dist/transparent-assets --recursive --concurrency 2 --json > transparent.ndjson

This keeps the benchmark honest. A folder of screenshots may finish quickly at low concurrency and still need careful text review. A folder of large photos may benefit from more workers but also create a larger visual QA queue.

Watch Output Completeness

High concurrency can make partial failures harder to follow if logs from many files interleave. Use structured output where possible and inspect the result:

  • total files processed
  • total failures
  • skipped files
  • output paths
  • exit code

If a batch fails under high concurrency but passes at a lower setting, treat that as a workflow signal. The goal is not to force the high setting through; it is to find a stable setting.

With GetWebP CLI --json, parse the NDJSON stream instead of scanning human output. The first line is a version event, and the final conversion payload appears as convert.completed, convert.truncated, or convert.failed depending on the run.

Use a small inspection command after each benchmark:

jq -r '
  select(.type == "convert.completed")
  | .data
  | {
      processed,
      successCount,
      failedCount,
      worstSavings: ([.results[]? | select(.status == "success") | .savedRatio] | min),
      failures: [.results[]? | select(.status == "error") | {file, error}]
    }
' ./conversion.ndjson

The fields to watch are practical:

| Field | Why it matters |
| --- | --- |
| successCount and failedCount | A fast run with failures is not a clean run |
| results[].outputPath | Reviewers need to know what was actually written |
| results[].savedRatio | Negative values mean the output became larger than the input |
| results[].quality | Confirms the actual quality used for the file |
| results[].qualityMode | Distinguishes auto quality from fixed settings |
| results[].status | Separates success, skipped, and error states |
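
Files with negative savedRatio can be listed directly. The sample event below stands in for a real run; in practice, point the same filter at ./conversion.ndjson:

```shell
# List successful conversions whose output grew (negative savedRatio).
# The sample event is a stand-in for real --json output.
cat > ./sample.ndjson <<'EOF'
{"type":"convert.completed","data":{"results":[{"file":"a.png","status":"success","savedRatio":0.42},{"file":"b.webp","status":"success","savedRatio":-0.08}]}}
EOF

jq -r '
  select(.type == "convert.completed")
  | .data.results[]?
  | select(.status == "success" and .savedRatio < 0)
  | "\(.file) savedRatio=\(.savedRatio)"
' ./sample.ndjson
```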

If the Free plan limit is reached, the JSON event is convert.truncated. Do not treat a truncated run as a complete benchmark, because only the processed subset was measured.
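
A quick guard keeps truncated runs out of benchmark tables. `check_complete` is a helper name invented for this sketch; the event names follow the stream described above:

```shell
# Reject a run whose final convert.* event is not convert.completed.
# check_complete is a helper name invented for this sketch.
check_complete() {
  last=$(jq -r 'select(.type | startswith("convert.")) | .type' "$1" | tail -n 1)
  [ "$last" = "convert.completed" ]
}

printf '%s\n' '{"type":"convert.truncated","data":{}}' > ./truncated.ndjson
check_complete ./truncated.ndjson || echo "truncated run: do not use as a benchmark"
```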

Do Not Confuse Speed With Quality

Concurrency does not improve visual quality. It only changes how quickly files are produced. A faster run still needs review for artifacts, text readability, transparent edges, and responsive crops.

This is especially important when a faster job produces hundreds of outputs. The review burden may become the real bottleneck. Generate at a pace the team can inspect.

Set a review capacity before the run:

| Asset type | Review check |
| --- | --- |
| Product photos | Zoom detail, color shifts, edges, alternate backgrounds |
| Blog hero images | Desktop crop, mobile crop, visible focal point, LCP candidate size |
| Screenshots | Text readability, line sharpness, UI contrast |
| Transparent assets | Edges on dark and light backgrounds |
| Existing WebP files | Whether re-encoding increased size or added artifacts |

If the run creates more files than the team can review, the concurrency setting is operationally too high even if the machine handled it.
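
The review queue itself can be generated from the same NDJSON evidence instead of a directory listing. The sample event stands in for a real run; outputPath follows the fields described earlier:

```shell
# Write successful output paths to a review checklist.
# The sample event is a stand-in for real --json output.
cat > ./run.ndjson <<'EOF'
{"type":"convert.completed","data":{"results":[{"file":"hero.png","status":"success","outputPath":"dist/hero.webp"},{"file":"old.gif","status":"error","error":"decode failed"}]}}
EOF

jq -r '
  select(.type == "convert.completed")
  | .data.results[]?
  | select(.status == "success")
  | .outputPath
' ./run.ndjson > ./review-queue.txt
cat ./review-queue.txt
```

If the checklist is longer than the team can realistically inspect, that is the signal to lower concurrency or split the batch.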

Benchmark With Your Own Images

If conversion time matters, benchmark with your real image corpus. Include the formats, dimensions, and settings the team actually uses. Record:

  • input count
  • total input size
  • output format
  • quality settings
  • concurrency setting
  • total duration
  • failures
  • machine or runner type

Google's WebP documentation gives format background, while GitHub's Actions documentation is useful when CI runner behavior is part of the workflow.

Use a simple benchmark ladder:

getwebp ./bench-images -o ./bench-out/c1 --recursive --concurrency 1 --json > c1.ndjson
getwebp ./bench-images -o ./bench-out/c2 --recursive --concurrency 2 --json > c2.ndjson
getwebp ./bench-images -o ./bench-out/c4 --recursive --concurrency 4 --json > c4.ndjson
getwebp ./bench-images -o ./bench-out/c8 --recursive --concurrency 8 --json > c8.ndjson

Then compare the full result, not only elapsed time:

| Concurrency | Duration | Success | Failed | Worst savedRatio | Notes |
| --- | --- | --- | --- | --- | --- |
| 1 | 00:00 | 0 | 0 | 0.00 | Baseline |
| 2 | 00:00 | 0 | 0 | 0.00 | Usually safe first increase |
| 4 | 00:00 | 0 | 0 | 0.00 | Candidate default if logs stay clean |
| 8 | 00:00 | 0 | 0 | 0.00 | Keep only if the machine and review workflow can handle it |

Replace the placeholder values with real measurements. A table of "savings percentages" without failures, machine details, or visual review notes does not support a publishing workflow.
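
The Success and Failed columns can be filled from the ladder's NDJSON files rather than read off the terminal. The fixture events below stand in for real output from the ladder runs:

```shell
# Summarize each ladder run from its NDJSON file. The two fixture
# events stand in for real --json output from the ladder above.
printf '%s\n' '{"type":"convert.completed","data":{"successCount":10,"failedCount":0}}' > ./c1.ndjson
printf '%s\n' '{"type":"convert.completed","data":{"successCount":9,"failedCount":1}}' > ./c8.ndjson

for f in c1 c8; do
  jq -r --arg run "$f" '
    select(.type == "convert.completed")
    | "\($run): success=\(.data.successCount) failed=\(.data.failedCount)"
  ' "./$f.ndjson"
done
```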

Decide What Fails the Run

A concurrency policy should say when to stop or lower the setting:

| Signal | Response |
| --- | --- |
| Partial failures appear only at high concurrency | Lower the worker count and inspect disk, memory, and permissions |
| The machine becomes unresponsive | Lower the default, even if the final output technically succeeds |
| CI logs become hard to interpret | Use --json and keep a lower worker count for shared jobs |
| savedRatio is negative for important files | Review those files before accepting the batch |
| Reviewers cannot keep up | Split the batch or reduce the generated output per run |
| Free plan truncation occurs | Do not benchmark concurrency from a truncated Free run |

For CI, keep the script simple: fail hard for startup errors and authentication problems, treat partial conversion as a reviewable failure, and store the NDJSON artifact so a reviewer can see which files failed. The JSON output reference explains the event schema, and the CLI context document explains how scripts should react to the current process status model.
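
That policy fits in a small gate at the end of the CI script. `gate_conversion` is a helper name invented for this sketch; the event and field names follow the schema discussed above:

```shell
# Fail the job unless the final event is convert.completed with zero
# failures. gate_conversion is a helper name invented for this sketch.
gate_conversion() {
  last=$(jq -r 'select(.type | startswith("convert.")) | .type' "$1" | tail -n 1)
  failed=$(jq -r 'select(.type == "convert.completed") | .data.failedCount // 0' "$1")
  if [ "$last" != "convert.completed" ] || [ "${failed:-1}" -gt 0 ]; then
    echo "conversion not clean: event=${last:-none} failed=${failed:-unknown}" >&2
    return 1
  fi
}

# In CI: gate_conversion ./conversion.ndjson || exit 1
# Store conversion.ndjson as an artifact either way.
```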

Choose the Clean Setting

The best concurrency setting is the one that finishes reliably, produces readable logs, avoids unnecessary pressure on the machine, and leaves outputs ready for review. Sometimes that is not the highest available number.

Faster conversion is useful only when it does not make the pipeline harder to trust during real publishing work.


Jack

GetWebP Editor

Jack writes GetWebP guides about local-first image conversion, WebP workflows, browser compatibility, and practical performance checks for teams that publish images on the web.