Batch image conversion rarely fails in only one way. A folder may contain 300 valid images, 2 corrupt files, 1 unsupported export, and 4 files the process cannot write because of permissions. The job did useful work, but it did not fully succeed. That is a partial failure.
Handling partial failures well matters because the wrong response can waste time or create publishing risk. Ignoring the failure can ship missing images. Deleting all successful outputs can throw away useful work. Retrying everything without understanding the failed files may repeat the same error.
"Check the logs and retry" is a poor response when a batch partially fails. The workflow must separate startup errors from per-file failures, preserve successful outputs for review, identify missing image families, and leave evidence that another developer can audit.
## Treat Partial Failure as Its Own State
In the current GetWebP CLI exit-code model, partial failure is exit code 3: at least one file succeeded and at least one file failed. Exit code 2 is reserved for usage errors such as invalid arguments. That difference matters in CI because a bad command and a mixed per-file result need different responses.
Check the installed release before standardizing a script:
```shell
getwebp --version
```
For current CLI behavior, treat these states separately:
| State | Meaning | Typical response |
|---|---|---|
| Exit 0 | All matched files completed, or there was nothing to process | Continue to review or publish gate |
| Exit 2 | Usage error such as a bad flag, missing input, or invalid value | Fix the command, do not retry the same job |
| Exit 3 | Partial failure: mixed success and per-file errors | Block publishing, inspect failed files, keep successful outputs for review |
| Exit 6 | Free-tier truncation, with skipped files | Do not treat the batch as complete |
| Startup/auth/network errors | The command did not complete normally | Fix the environment before judging image outputs |
The GetWebP CLI LLM context document lists the current exit-code table and tier behavior. The JSON output reference explains the machine-readable events used below.
In CI, the safe default is usually to fail the build and ask for review:
```shell
getwebp ./images -o ./dist --json > conversion.ndjson
code=$?
if [ "$code" -eq 3 ]; then
  echo "Partial image conversion failure. Inspect conversion.ndjson." >&2
  exit 1
fi
```
This prevents incomplete output sets from slipping into a release.
If the repository supports older CLI versions, do not rely only on the numeric code. Parse the JSON result and block when failedCount is greater than zero.
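For older releases, the same gate can be expressed against the report instead of the exit code. Here is a minimal Python sketch of that check; the event shape (a `convert.completed` line whose `data` carries `failedCount`) is taken from the JSON output described in this article, and the sample report is hypothetical:

```python
import json

def should_block(ndjson_text):
    """Decide whether CI should block, based only on the NDJSON report.

    Assumes the event shape described in this article: one line with
    type == "convert.completed" whose data carries failedCount.
    """
    for line in ndjson_text.splitlines():
        if not line.strip():
            continue
        event = json.loads(line)
        if event.get("type") == "convert.completed":
            return event.get("data", {}).get("failedCount", 0) > 0
    # No completion event at all: the run did not finish normally.
    return True

# Hypothetical two-line report: version preamble, then the result event.
report = (
    '{"type":"version","data":{"cli":"0.0.0"}}\n'
    '{"type":"convert.completed","data":{"failedCount":3}}'
)
print(should_block(report))  # True: block the build
```

Treating a missing completion event as a failure is deliberate: a report with no result is evidence that the run itself broke, not that the batch succeeded.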
## Keep Successful Outputs Until Reviewed
Partial failure does not mean every output is bad. Some files may have converted correctly and passed mechanical checks. Keep those outputs in the review folder until you decide whether to use them, regenerate them, or rerun the whole batch.
Do not delete successful outputs automatically unless your workflow can recreate them exactly from preserved originals and documented settings.
Use a separate output folder so successful files are easy to inspect without mixing them into source images:
```shell
getwebp ./images \
  -o ./dist/images \
  --recursive \
  --json > ./reports/conversion.ndjson
```
Original files are not modified or deleted by the CLI. That is useful during partial failure: you can preserve the source folder, inspect generated outputs, and retry only the problem inputs.
## Identify the Failed Files
Use structured output when possible. A JSON or NDJSON report can list failed inputs and errors more reliably than a long console log.
Look for patterns:
- corrupt input files
- unsupported file types
- permission errors
- missing source paths
- output folder conflicts
- license or limit issues
If all failures share a cause, fix that cause before retrying. If each failure is different, handle them individually.
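A quick way to see whether failures share a cause is to group them by error message. A hedged Python sketch, where the field names (`type`, `data.results`, `status`, `error`) are assumptions based on the JSON output described in this article:

```python
import json
from collections import Counter

def failure_patterns(ndjson_text):
    """Count failed files per error message in a convert.completed event.

    The field names used here are assumptions taken from the report
    shape this article describes, not a guaranteed schema.
    """
    counts = Counter()
    for line in ndjson_text.splitlines():
        if not line.strip():
            continue
        event = json.loads(line)
        if event.get("type") != "convert.completed":
            continue
        for result in event.get("data", {}).get("results", []):
            if result.get("status") == "error":
                counts[result.get("error", "unknown")] += 1
    return counts

# Hypothetical report with two shared-cause failures and one outlier.
report = json.dumps({
    "type": "convert.completed",
    "data": {"results": [
        {"file": "a.png", "status": "error", "error": "decode failed"},
        {"file": "b.png", "status": "error", "error": "decode failed"},
        {"file": "c.png", "status": "error", "error": "permission denied"},
        {"file": "d.png", "status": "success"},
    ]},
})
print(failure_patterns(report).most_common())
# [('decode failed', 2), ('permission denied', 1)]
```

One dominant message points at a single fixable cause; a flat distribution means the files need individual attention.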
The conversion run emits a version preamble first and then a conversion event such as convert.completed, convert.truncated, or convert.failed. A mixed per-file result is represented inside convert.completed with failedCount and results[] entries whose status is "error".
Extract failed files:
```shell
jq -r '
  select(.type == "convert.completed")
  | .data.results[]
  | select(.status == "error")
  | [.file, .error]
  | @tsv
' ./reports/conversion.ndjson
```
Extract successful outputs for review:
```shell
jq -r '
  select(.type == "convert.completed")
  | .data.results[]
  | select(.status == "success")
  | [.file, .outputPath, .savedRatio, .quality, .qualityMode]
  | @tsv
' ./reports/conversion.ndjson
```
Those two lists answer different questions. Failed files show what must be fixed. Successful outputs show what may already be reviewable, including outputs that ended up larger than the original, which a negative savedRatio indicates.
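Successes with a negative savedRatio are worth pulling out on their own, since they are mechanically "successful" but may not belong in the release. A Python sketch under the same assumed result shape (field names as described in this article; the sample report is hypothetical):

```python
import json

def grew_after_conversion(ndjson_text):
    """Return (file, outputPath, savedRatio) for successes that got larger.

    savedRatio < 0 is read here as "output larger than source"; the
    field names are assumptions based on the report shape above.
    """
    grew = []
    for line in ndjson_text.splitlines():
        if not line.strip():
            continue
        event = json.loads(line)
        if event.get("type") != "convert.completed":
            continue
        for r in event.get("data", {}).get("results", []):
            if r.get("status") == "success" and r.get("savedRatio", 0) < 0:
                grew.append((r.get("file"), r.get("outputPath"), r["savedRatio"]))
    return grew

report = json.dumps({
    "type": "convert.completed",
    "data": {"results": [
        {"file": "hero.png", "status": "success",
         "outputPath": "dist/hero.webp", "savedRatio": -0.12},
        {"file": "logo.png", "status": "success",
         "outputPath": "dist/logo.webp", "savedRatio": 0.61},
    ]},
})
print(grew_after_conversion(report))
# [('hero.png', 'dist/hero.webp', -0.12)]
```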
## Classify the Failure Before Retrying
Not every failure deserves the same next step:
| Pattern | Likely meaning | Next step |
|---|---|---|
| Many files fail with the same decode message | Corrupt export, unsupported variant, or bad source folder | Inspect one representative file, then fix the source export |
| Only files in one directory fail | Permissions, path length, or output directory issue | Fix folder access before retrying |
| convert.failed appears before file results | Command could not start | Fix arguments or input path |
| convert.truncated appears | Free plan processed only the allowed subset | Do not publish; rerun under the intended plan or smaller scope |
| savedRatio is negative on important successes | Output is larger than source | Review whether that file should be skipped, kept in its original format, or encoded differently |
| Failures appear only at high concurrency | Resource pressure or filesystem contention | Retry with lower --concurrency and preserve the first report |
This classification keeps retries small and evidence-driven.
## Do Not Hide Failure Behind a High Success Count
A batch that converted 997 of 1000 files still failed for 3 files. The success rate may be high, but the failed files could be the most important images in the release.
Report both numbers clearly:
```text
Converted: 997
Failed: 3
Release impact: unknown until failed files are checked
```
This keeps the team from treating a large successful batch as automatic approval.
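That summary can come straight from the report rather than from memory. A minimal Python sketch, again assuming the convert.completed/results shape described in this article:

```python
import json

def summary_lines(ndjson_text):
    """Build the two-number summary from a convert.completed event."""
    for line in ndjson_text.splitlines():
        if not line.strip():
            continue
        event = json.loads(line)
        if event.get("type") == "convert.completed":
            results = event.get("data", {}).get("results", [])
            ok = sum(1 for r in results if r.get("status") == "success")
            failed = sum(1 for r in results if r.get("status") == "error")
            return [
                f"Converted: {ok}",
                f"Failed: {failed}",
                "Release impact: unknown until failed files are checked",
            ]
    return ["Run did not complete normally"]

# Hypothetical 1000-file batch: 997 successes, 3 failures.
report = json.dumps({
    "type": "convert.completed",
    "data": {"results": (
        [{"file": f"img-{i}.png", "status": "success"} for i in range(997)]
        + [{"file": f"bad-{i}.png", "status": "error"} for i in range(3)]
    )},
})
print("\n".join(summary_lines(report)))
```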
## Retry Narrowly
A narrow retry is often better than rerunning the full batch. Copy failed files into a small folder or target them explicitly after fixing the issue.
```text
failed-inputs/
  product-13.png
  team-photo-corrupt.jpg
```
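That staging folder can be built from the failed-files list extracted earlier. A sketch of the copy step; this is plain Python, not a CLI feature, and the folder name is just the convention used above:

```python
import shutil
from pathlib import Path

def stage_failed_inputs(failed_files, staging_dir="failed-inputs"):
    """Copy each failed source file into a small retry folder.

    Originals are copied, never moved, so the source tree stays intact
    for auditing the first run.
    """
    dest = Path(staging_dir)
    dest.mkdir(parents=True, exist_ok=True)
    staged = []
    for src in failed_files:
        target = dest / Path(src).name
        shutil.copy2(src, target)
        staged.append(target)
    return staged
```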
Then run conversion on that small set. This keeps the next result easy to inspect and avoids rewriting outputs that already passed.
If the retry requires different settings, write those outputs to a separate folder and review them before mixing them with the approved set.
Example narrow retry:
```shell
getwebp ./failed-inputs \
  -o ./retry-output \
  --json > ./reports/retry.ndjson
```
After the retry, merge only the outputs that pass review:
| Step | Check |
|---|---|
| Compare retry source list | Make sure every failed input was included |
| Inspect retry report | Confirm failedCount is 0 |
| Compare output paths | Make sure retry outputs land where the site expects them |
| Review visuals | Check artifacts, transparency, crops, and text readability |
| Update decision note | Record why the retry output replaced or joined the first batch |
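The first two checks in that table are easy to automate. A hedged Python sketch that confirms the retry report covers every previously failed input and shows zero failures; matching on bare file names and the field names used here are both assumptions:

```python
import json

def retry_is_clean(failed_inputs, retry_ndjson):
    """True when the retry report has failedCount 0 and a result entry
    for every previously failed input.

    Assumes results[] entries name files the same way the failed list
    does; adjust the matching if your report uses full paths.
    """
    for line in retry_ndjson.splitlines():
        if not line.strip():
            continue
        event = json.loads(line)
        if event.get("type") != "convert.completed":
            continue
        data = event.get("data", {})
        if data.get("failedCount", 0) != 0:
            return False
        seen = {r.get("file") for r in data.get("results", [])}
        return all(f in seen for f in failed_inputs)
    # No completion event: do not merge anything.
    return False

# Usage: retry_is_clean(["product-13.png"], open("reports/retry.ndjson").read())
```

The visual review and the decision note still need a human; this only gates the mechanical part.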
## Watch for Incomplete Responsive Sets
Partial failures are especially risky when each source image should produce multiple outputs. If a product image needs 480px, 960px, and 1400px variants, a failure in one size can leave an incomplete srcset.
Before publishing, check that every image family is complete:
```text
chair-480.webp
chair-960.webp
chair-1400.webp
```
If one variant is missing, the page may still render but load the wrong fallback or a larger-than-needed file.
Make completeness a mechanical check, not a memory task. For a product gallery, the expected output might be:
```text
product-13-480.webp
product-13-960.webp
product-13-1400.webp
product-13-thumbnail.webp
```
If any member of that family is absent, the page is not ready even if the conversion report shows a high success rate.
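A family check like this fits in a few lines. A Python sketch, where the variant suffixes are an assumption taken from the example family above:

```python
from pathlib import Path

# Hypothetical variant suffixes for this product gallery.
EXPECTED_SUFFIXES = ("-480.webp", "-960.webp", "-1400.webp", "-thumbnail.webp")

def missing_variants(stem, output_dir, suffixes=EXPECTED_SUFFIXES):
    """Return the expected file names that are absent for one image family."""
    out = Path(output_dir)
    return [stem + s for s in suffixes if not (out / (stem + s)).exists()]

# A family is publish-ready only when this list is empty, e.g.:
# missing_variants("product-13", "./dist/images") == []
```

Running this over every stem in the source folder turns completeness into a mechanical gate instead of a memory task.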
## Decide Whether to Block Release
Not every failed file has the same importance. A failed homepage hero should block release. A failed unused archive image may not. The policy should be based on whether the image is used and whether the missing output affects a real page.
Still, do not decide from file names alone. Check references, templates, and CMS records before marking a failed file as low risk.
Use a short release-impact table:
| Failed file type | Default release decision |
|---|---|
| Homepage hero, landing-page hero, checkout image | Block |
| Product gallery or product variant image | Block until the product page is checked |
| Documentation screenshot linked from current docs | Block or remove the reference |
| Unused archive export | May defer after confirming it is not referenced |
| Duplicate or intentionally skipped file | Document and proceed only if the output set is complete |
## Preserve the Evidence
Keep the conversion report and a short decision note:
```text
Batch date: 2025-11-09
Exit code: 3
Failed files: 3
Decision: release blocked until product variants are regenerated
```
GitHub's Actions documentation is useful for CI failure handling, and Google's WebP documentation explains the format used by many generated outputs.
The note should also include:
- input path
- output path
- exact command
- CLI version
- failed files
- retry command, if any
- final release decision
Partial failures are manageable when the workflow treats them as a review state. Keep successful outputs, inspect failed files, retry narrowly, and block publishing when the incomplete set affects real pages.

Jack
GetWebP Editor

Jack writes GetWebP guides about local-first image conversion, WebP workflows, browser compatibility, and practical performance checks for teams that publish images on the web.