Image optimization inside Docker can become harder than expected. A project starts with a simple goal: convert images to WebP during a build or release job. Then the container needs native libraries, system packages, image codecs, build tools, cache layers, and platform-specific fixes. What looked like a small optimization step becomes a source of CI instability.
Avoiding unnecessary native dependency setup can make the workflow easier to reproduce. The goal is not to avoid all native code in every situation. The goal is to choose an image conversion approach that fits Docker, CI, and team maintenance.
"Use Docker so it is reproducible" is only true when the conversion boundary is designed well. A container can still hide stale cache, install the latest package unexpectedly, carry build tools into production, or publish converted images that nobody reviewed.
## Know Where Native Dependencies Hurt
Native image libraries can be powerful, but they may introduce setup problems:
- missing system packages
- incompatible Linux distributions
- architecture differences between local and CI
- larger container images
- slower cold builds
- security patch obligations
- confusing runtime errors when codecs are absent
These issues are manageable for teams that already maintain native image stacks. They are frustrating when the only requirement is routine WebP or AVIF conversion for website assets.
Use this distinction before changing the Dockerfile:
| Requirement | Native stack may be justified | Narrow converter may be enough |
|---|---|---|
| Resize, crop, composite, watermark, and transform images | Yes | No |
| Convert approved source assets to WebP or AVIF | Sometimes | Yes |
| Run inside a minimal release job | Usually extra maintenance | Often simpler |
| Debug codec installation across architectures | Expected cost | Avoid if not needed |
| Produce structured conversion records | Depends on custom code | Use CLI --json if available |
The point is not that native tools are bad. The point is that a Docker image should not install a broad image stack when the build only needs a narrow conversion step.
## Prefer a Narrow Tool for a Narrow Job
If the build step only needs to convert approved images, a focused CLI can be easier than installing a broad image-processing environment. The tool should support the formats you need, write outputs to a known folder, and return clear exit codes.
For example:
```dockerfile
FROM node:22-slim
RUN npm install -g getwebp
WORKDIR /app
COPY . .
RUN getwebp ./public/images \
    -o ./public/images-optimized \
    --recursive \
    --format webp \
    --quality 82
```
This is only a starting pattern. In a real project, you may want conversion in CI rather than inside the production image build.
The GetWebP CLI command reference documents the relevant flags: `--output`, `--recursive`, `--format`, `--quality`, `--dry-run`, `--skip-existing`, and `--json`. It also states that original files are never modified or deleted. That preservation rule matters in Docker because build layers are disposable and generated files should not become the only copy of the asset.
For reproducible release work, pin the installation method you use. A tutorial can show `npm install -g getwebp`, but a production Dockerfile should decide whether it is installing a pinned npm version, downloading a pinned binary in CI, or using an already-approved internal base image. Otherwise a rebuild can change the converter while the Dockerfile appears unchanged.
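As a sketch of that decision, a pinned install can look like the fragment below. The version number is a placeholder, not a real release; substitute whatever version your team has approved.

```dockerfile
FROM node:22-slim AS assets
# Placeholder version: replace 1.2.3 with the release your team has approved.
RUN npm install -g getwebp@1.2.3
```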
## Keep Conversion Out of the Runtime Image
If optimized images are static build artifacts, the converter does not need to live in the final runtime container. Use a build stage or CI job to generate assets, then copy only approved outputs into the final image.
This keeps the runtime image smaller and reduces the number of tools available in production.
```dockerfile
FROM node:22-slim AS assets
RUN npm install -g getwebp
WORKDIR /app
COPY public/images ./images
RUN getwebp ./images -o ./optimized --quality 82

FROM nginx:alpine
COPY --from=assets /app/optimized /usr/share/nginx/html/images
```
Multi-stage builds are a standard Docker pattern for separating build tools from runtime artifacts. Docker's multi-stage build documentation explains the concept.
For image optimization, the boundary should be stricter than "the build passed":
| Stage | Should contain |
|---|---|
| Source repository | Original images and the conversion command |
| Asset build stage | Converter, source images, generated outputs, and report |
| Review artifact | Generated outputs plus structured report |
| Runtime image | Only approved assets and the app/server needed to serve them |
Do not copy the converter, license material, temporary reports, or unreviewed source folders into the runtime image unless the application actually needs them at runtime.
## Preserve Originals Outside the Container
Containers are disposable. Source images should live in the repository, asset storage, or an approved archive, not only inside a temporary build layer.
The container should read from a known source folder and write generated outputs. If the build fails, the team should still have the originals and the command needed to rerun conversion.
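That rerun guarantee can be checked mechanically. The sketch below uses a stub "conversion" (a file copy) and made-up paths so it is self-contained; a real job would run the converter at the marked line and keep the same before/after check.

```shell
#!/bin/sh
# Sketch: prove that a conversion step left the originals untouched.
# The paths and the stub "conversion" (a copy) are illustrative assumptions.
set -e
mkdir -p src out
printf 'fake-image-bytes' > src/hero.png
BEFORE=$(cksum < src/hero.png)

# Stand-in for the real step, e.g.: getwebp ./src -o ./out --format webp
cp src/hero.png out/hero.webp

AFTER=$(cksum < src/hero.png)
if [ -f src/hero.png ] && [ "$BEFORE" = "$AFTER" ]; then
  echo "originals preserved"
else
  echo "originals changed or missing" >&2
  exit 1
fi
```

A check like this can run after the conversion step in CI, so a misconfigured output path that clobbers a source folder fails loudly instead of silently.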
A useful release record looks like this:
```text
Source folder: public/images
Output folder: public/images-optimized
Format: webp
Quality: 82
Recursive: yes
Originals preserved: yes
Report artifact: image-report.ndjson
Visual review: approved for 12 representative pages
```
That record is better than "Docker optimized images" because it tells the next developer what actually happened.
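The record above can be generated from the same variables the job used, so it cannot drift from the actual command. The paths and values below are the examples from this article, not required names.

```shell
#!/bin/sh
# Sketch: write the release record from the job's own variables.
SRC=public/images
OUT=public/images-optimized
FORMAT=webp
QUALITY=82

cat > release-record.txt <<EOF
Source folder: $SRC
Output folder: $OUT
Format: $FORMAT
Quality: $QUALITY
Recursive: yes
Originals preserved: yes
Report artifact: image-report.ndjson
EOF
echo "wrote release-record.txt"
```

Uploading `release-record.txt` next to the converted assets gives reviewers the command context without digging through CI logs.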
## Be Careful With Build Cache
Docker cache can make image jobs look faster while hiding whether a conversion step actually reran. If source images change, the Dockerfile should copy them before the conversion step so the cache invalidates correctly.
For release work, inspect the output timestamps or generated report instead of assuming the cache behaved as intended.
Keep the cache boundary obvious:
```dockerfile
FROM node:22-slim AS assets
RUN npm install -g getwebp
WORKDIR /app
COPY public/images ./images
RUN getwebp ./images \
    -o ./optimized \
    --recursive \
    --format webp \
    --quality 82 \
    --json > ./image-report.ndjson
```
If `COPY public/images ./images` sits below unrelated application copies, a change to a source image invalidates only the copy layer and the conversion layer that follows it, not everything above. If the conversion command instead depends on files copied earlier, before the image inputs are isolated, the cache behavior becomes harder to explain.
For CI, add a cache check:
```shell
jq -r 'select(.type == "convert.completed") | .data | [.processed, .successCount, .failedCount] | @tsv' image-report.ndjson
```
The report proves whether the converter processed the expected batch. A cached Docker layer with no fresh report should not be treated as a new optimization run.
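One concrete freshness check: trust a cached layer only when the report is newer than every source image. The directory names below are assumptions, and the `touch`/`sleep` lines only set up demo files so the sketch is self-contained.

```shell
#!/bin/sh
# Sketch: a cached layer counts as a real run only if the report
# is newer than every file in the source image folder.
mkdir -p demo/public/images
touch demo/public/images/hero.png
sleep 1
printf '{"type":"convert.completed"}\n' > demo/image-report.ndjson

STALE=$(find demo/public/images -type f -newer demo/image-report.ndjson)
if [ -z "$STALE" ]; then
  STATUS=fresh
else
  STATUS=stale
fi
echo "report is $STATUS"
```

In CI, a `stale` result would fail the job and force the conversion step to rerun instead of shipping cached outputs.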
## Use Structured Output for CI
When image optimization runs in Docker as part of CI, use structured output where possible:
```shell
npx -y getwebp ./public/images \
  -o ./dist/images \
  --recursive \
  --format webp \
  --json > image-report.ndjson
```
Upload the report as an artifact or parse it in a later step. This helps distinguish conversion failures from Docker setup failures.
The GetWebP JSON output reference describes `--json` as NDJSON: one JSON object per line, not a single JSON array. The first line is a version event. Conversion results appear as `convert.completed`, `convert.truncated`, or `convert.failed`. A successful file record includes `outputPath`, `originalSize`, `newSize`, `savedRatio`, `quality`, `qualityMode`, and `status`.
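If `jq` is unavailable in the CI image, even a line-count summary shows whether the batch produced failures. The sample events below are fabricated stand-ins so the sketch is self-contained; real events carry the fields listed above.

```shell
#!/bin/sh
# Sketch: summarize an NDJSON report with only POSIX tools.
# The sample events are fabricated, not real converter output.
cat > sample-report.ndjson <<'EOF'
{"type":"convert.completed","data":{"outputPath":"a.webp","status":"success"}}
{"type":"convert.completed","data":{"outputPath":"b.webp","status":"success"}}
{"type":"convert.failed","data":{"outputPath":"c.webp","status":"failed"}}
EOF

COMPLETED=$(grep -c '"type":"convert.completed"' sample-report.ndjson)
FAILED=$(grep -c '"type":"convert.failed"' sample-report.ndjson)
echo "completed=$COMPLETED failed=$FAILED"
```

Because NDJSON keeps one event per line, line-oriented tools like `grep` and `wc` work without a JSON parser; anything deeper than counting should still use `jq`.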
Pair the report with the current exit-code model in the LLM context document:
| Exit code | Docker/CI decision |
|---|---|
| 0 | Continue to visual review and artifact upload |
| 1 | Treat as setup or command failure |
| 2 | Fix command arguments before rerunning |
| 3 | Parse per-file errors before deciding whether partial output is usable |
| 4 | Fix license or activation state |
| 5 | Retry with backoff if the job depends on network license checks |
| 6 | Treat the run as truncated and process the remaining files |
| 75 | Stop automation and refresh the license state |
| 76 | Free disk space before rerunning the container job |
| 130 | Treat the run as interrupted and incomplete |
| 143 | Treat the run as terminated and incomplete |
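The table above can be wired into a job script as a simple case statement. The stubbed exit code and the decision strings are illustrative; only the numeric codes come from the reference.

```shell
#!/bin/sh
# Sketch: route the converter's exit code to a CI decision.
# EXIT_CODE is stubbed; a real job captures it from the getwebp invocation.
EXIT_CODE=0   # e.g.: getwebp ... ; EXIT_CODE=$?

case "$EXIT_CODE" in
  0)       DECISION="continue to visual review and artifact upload" ;;
  2)       DECISION="fix command arguments before rerunning" ;;
  3)       DECISION="parse per-file errors in the report" ;;
  4)       DECISION="fix license or activation state" ;;
  5)       DECISION="retry with backoff" ;;
  6)       DECISION="rerun to process the remaining files" ;;
  75)      DECISION="stop automation and refresh the license state" ;;
  76)      DECISION="free disk space before rerunning" ;;
  130|143) DECISION="treat the run as incomplete" ;;
  *)       DECISION="treat as setup or command failure" ;;
esac
echo "$DECISION"
```

Keeping the mapping in one script means the retry, stop, and cleanup policies live in version control rather than in individual workflow files.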
The GetWebP CI integration guide is the better reference for workflow-level secret handling, artifact upload, and parsing patterns.
## Do Not Hide Visual Review
A Docker build can prove that a command ran. It cannot prove that the hero image, product photo, or screenshot looks acceptable. Keep visual review in the workflow:
- generate outputs in CI or locally
- expose them as artifacts or pull request changes
- review important pages or snapshots
- approve before publishing
Google's WebP documentation explains the output format, but the visual decision belongs to the project.
Use a review matrix:
| Asset role | Review requirement |
|---|---|
| Hero or likely LCP image | Check crop, detail, and selected responsive file |
| Product image | Check texture, color, edges, and zoom state |
| Screenshot | Check text, icons, borders, and small UI elements |
| Transparent logo | Check edges on light and dark backgrounds |
| Repeated thumbnail | Check one component visually and count aggregate transfer impact |
Only approved outputs should move from the asset stage into the runtime image or release artifact. Smaller files that fail visual review are not wins.
## Choose the Simplest Reliable Boundary
There are three common boundaries:
- run conversion locally before commit
- run conversion in CI and commit or artifact the outputs
- run conversion during Docker build
The third option is not always the best. If it makes every deployment depend on image conversion, native packages, or network installation, consider moving the step earlier.
Use this decision table:
| Boundary | Use it when | Watch for |
|---|---|---|
| Local before commit | Small teams manually review generated files | Developer machines may drift unless the command is documented |
| CI artifact | Reviewers need outputs and reports before merge | Artifacts must be retained long enough for review |
| CI commit-back | The repo should store generated variants | Avoid noisy diffs and make failures explicit |
| Docker build stage | The app image is the only release artifact | Cache, network installation, and review artifacts need careful handling |
| Runtime conversion | Images are user-generated after deploy | The runtime container now owns converter updates and resource limits |
Also keep the privacy boundary honest. The GetWebP security whitepaper separates the image-processing data plane from the licensing and account control plane. In a Docker or CI setup, image bytes can stay local to the runner or repository while license activation, status checks, or package downloads still use the network. Do not describe that as "zero network"; describe it as "no image upload to the converter vendor" when that is the claim you can support.
Docker image optimization is successful when the build remains repeatable, source files are preserved, outputs are reviewable, and the final container does not carry tools it does not need at runtime.

Jack, GetWebP Editor

Jack writes GetWebP guides about local-first image conversion, WebP workflows, browser compatibility, and practical performance checks for teams that publish images on the web.