
Docker · Dec 3, 2025 · 8 min read

Docker Image Optimization Without Native Dependencies

Image optimization inside Docker can become harder than expected. A project starts with a simple goal: convert images to WebP during a build or release job. Then the container needs native libraries, system packages, image codecs, build tools, cache layers, and platform-specific fixes. What looked like a small optimization step becomes a source of CI instability.

Avoiding unnecessary native dependency setup can make the workflow easier to reproduce. The goal is not to avoid all native code in every situation. The goal is to choose an image conversion approach that fits Docker, CI, and team maintenance.

"Use Docker so it is reproducible" is only true when the conversion boundary is designed well. A container can still hide stale cache, install the latest package unexpectedly, carry build tools into production, or publish converted images that nobody reviewed.

Know Where Native Dependencies Hurt

Native image libraries can be powerful, but they may introduce setup problems:

  • missing system packages
  • incompatible Linux distributions
  • architecture differences between local and CI
  • larger container images
  • slower cold builds
  • security patch obligations
  • confusing runtime errors when codecs are absent

These issues are manageable for teams that already maintain native image stacks. They are frustrating when the only requirement is routine WebP or AVIF conversion for website assets.

Use this distinction before changing the Dockerfile:

| Requirement | Native stack may be justified | Narrow converter may be enough |
| --- | --- | --- |
| Resize, crop, composite, watermark, and transform images | Yes | No |
| Convert approved source assets to WebP or AVIF | Sometimes | Yes |
| Run inside a minimal release job | Usually extra maintenance | Often simpler |
| Debug codec installation across architectures | Expected cost | Avoid if not needed |
| Produce structured conversion records | Depends on custom code | Use CLI --json if available |

The point is not that native tools are bad. The point is that a Docker image should not install a broad image stack when the build only needs a narrow conversion step.

Prefer a Narrow Tool for a Narrow Job

If the build step only needs to convert approved images, a focused CLI can be easier than installing a broad image-processing environment. The tool should support the formats you need, write outputs to a known folder, and return clear exit codes.

For example:

FROM node:22-slim

RUN npm install -g getwebp

WORKDIR /app
COPY . .

RUN getwebp ./public/images \
  -o ./public/images-optimized \
  --recursive \
  --format webp \
  --quality 82

This is only a starting pattern. In a real project, you may want conversion in CI rather than inside the production image build.

The GetWebP CLI command reference documents the relevant flags: --output, --recursive, --format, --quality, --dry-run, --skip-existing, and --json. It also states that original files are never modified or deleted. That preservation rule matters in Docker because build layers are disposable and generated files should not become the only copy of the asset.

For reproducible release work, pin the installation method you use. A tutorial can show npm install -g getwebp, but a production Dockerfile should decide whether it is installing a pinned npm version, downloading a pinned binary in CI, or using an already-approved internal base image. Otherwise a rebuild can change the converter while the Dockerfile appears unchanged.
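One way to make that decision explicit is to pin the version at the top of the Dockerfile. A minimal sketch; the version value here is a placeholder, not a real release, and should be resolved to an exact version your team has approved:

```dockerfile
# Pinning sketch: GETWEBP_VERSION is a placeholder, not a real release.
# A rebuild now installs the same converter until someone changes this ARG.
FROM node:22-slim

ARG GETWEBP_VERSION=0.0.0
RUN npm install -g "getwebp@${GETWEBP_VERSION}"
```

Alternatives with the same property are downloading a pinned binary in CI and copying it into the image, or building FROM an internal base image that already contains the approved converter.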

Keep Conversion Out of the Runtime Image

If optimized images are static build artifacts, the converter does not need to live in the final runtime container. Use a build stage or CI job to generate assets, then copy only approved outputs into the final image.

This keeps the runtime image smaller and reduces the number of tools available in production.

FROM node:22-slim AS assets
RUN npm install -g getwebp
WORKDIR /app
COPY public/images ./images
RUN getwebp ./images -o ./optimized --quality 82

FROM nginx:alpine
COPY --from=assets /app/optimized /usr/share/nginx/html/images

Multi-stage builds are a standard Docker pattern for separating build tools from runtime artifacts. Docker's multi-stage build documentation explains the concept.

For image optimization, the boundary should be stricter than "the build passed":

| Stage | Should contain |
| --- | --- |
| Source repository | Original images and the conversion command |
| Asset build stage | Converter, source images, generated outputs, and report |
| Review artifact | Generated outputs plus structured report |
| Runtime image | Only approved assets and the app/server needed to serve them |

Do not copy the converter, license material, temporary reports, or unreviewed source folders into the runtime image unless the application actually needs them at runtime.

Preserve Originals Outside the Container

Containers are disposable. Source images should live in the repository, asset storage, or an approved archive, not only inside a temporary build layer.

The container should read from a known source folder and write generated outputs. If the build fails, the team should still have the originals and the command needed to rerun conversion.

A useful release record looks like this:

Source folder: public/images
Output folder: public/images-optimized
Format: webp
Quality: 82
Recursive: yes
Originals preserved: yes
Report artifact: image-report.ndjson
Visual review: approved for 12 representative pages

That record is better than "Docker optimized images" because it tells the next developer what actually happened.
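The job itself can emit that record instead of relying on someone to write it by hand. A minimal sketch in shell; the folder, quality, and report values are illustrative and would come from the real job's variables:

```shell
#!/bin/sh
# Write a release record next to the converted assets so the next
# developer can see exactly what ran. Values here are illustrative.
write_release_record() {
  cat > "$1" <<EOF
Source folder: $SRC_DIR
Output folder: $OUT_DIR
Format: webp
Quality: $QUALITY
Recursive: yes
Originals preserved: yes
Report artifact: $REPORT
EOF
}

SRC_DIR=public/images
OUT_DIR=public/images-optimized
QUALITY=82
REPORT=image-report.ndjson
write_release_record release-record.txt
cat release-record.txt
```

Uploading `release-record.txt` alongside the conversion report keeps the "what actually happened" answer attached to the artifacts it describes.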

Be Careful With Build Cache

Docker cache can make image jobs look faster while hiding whether a conversion step actually reran. If source images change, the Dockerfile should copy them before the conversion step so the cache invalidates correctly.

For release work, inspect the output timestamps or generated report instead of assuming the cache behaved as intended.

Keep the cache boundary obvious:

FROM node:22-slim AS assets
RUN npm install -g getwebp
WORKDIR /app

COPY public/images ./images
RUN getwebp ./images -o ./optimized --recursive --format webp --quality 82 --json > ./image-report.ndjson

If COPY public/images ./images sits below unrelated application COPY steps, a change to a source image invalidates only the conversion layers that actually need to rerun. If the conversion command instead depends on files copied earlier, before the image inputs are isolated, the cache behavior becomes harder to explain.

For CI, add a cache check:

jq -r 'select(.type == "convert.completed") | .data | [.processed, .successCount, .failedCount] | @tsv' image-report.ndjson

The report proves whether the converter processed the expected batch. A cached Docker layer with no fresh report should not be treated as a new optimization run.
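That check can run as a guard in the job itself. A minimal sketch, assuming the convert.completed event name from the CLI's NDJSON output and compact JSON serialization; the grep pattern would need adjusting if the report were pretty-printed:

```shell
#!/bin/sh
# Guard against a cached Docker layer being mistaken for a fresh run:
# require a non-empty NDJSON report with at least one completed event.
report_is_fresh() {
  report="$1"
  [ -s "$report" ] || return 1
  completed=$(grep -c '"type":"convert.completed"' "$report") || true
  [ "${completed:-0}" -gt 0 ]
}

# Demonstration with a hypothetical two-line report.
printf '%s\n' \
  '{"type":"version","data":{"version":"0.0.0"}}' \
  '{"type":"convert.completed","data":{"processed":3,"successCount":3,"failedCount":0}}' \
  > sample-report.ndjson

if report_is_fresh sample-report.ndjson; then
  echo "fresh run: report present"
else
  echo "stale or cached: no report evidence"
fi
```

Failing the step when the report is missing turns "the layer was cached" from a silent assumption into an explicit decision.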

Use Structured Output for CI

When image optimization runs in Docker as part of CI, use structured output where possible:

npx -y getwebp ./public/images \
  -o ./dist/images \
  --recursive \
  --format webp \
  --json > image-report.ndjson

Upload the report as an artifact or parse it in a later step. This helps distinguish conversion failures from Docker setup failures.
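A later step can make that distinction from the report alone. A sketch under the same assumptions about event names and compact NDJSON serialization as above:

```shell
#!/bin/sh
# Classify a run: a missing report suggests the converter never ran
# (a Docker or setup problem); convert.failed events are converter
# problems and deserve per-file inspection.
classify_run() {
  report="$1"
  if [ ! -s "$report" ]; then
    echo "setup-failure"
    return 0
  fi
  failed=$(grep -c '"type":"convert.failed"' "$report") || true
  if [ "${failed:-0}" -gt 0 ]; then
    echo "conversion-failure"
  else
    echo "ok"
  fi
}

printf '{"type":"convert.failed","data":{"error":"example"}}\n' > bad-report.ndjson
classify_run bad-report.ndjson   # prints "conversion-failure"
```

The labels here are our own naming; the useful part is that the branch is driven by report evidence, not by guessing from a generic nonzero exit.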

The GetWebP JSON output reference describes --json as NDJSON: one JSON object per line, not a single JSON array. The first line is a version event. Conversion results appear as convert.completed, convert.truncated, or convert.failed. A successful file record includes outputPath, originalSize, newSize, savedRatio, quality, qualityMode, and status.

Pair the report with the current exit-code model in the LLM context document:

| Exit code | Docker/CI decision |
| --- | --- |
| 0 | Continue to visual review and artifact upload |
| 1 | Treat as setup or command failure |
| 2 | Fix command arguments before rerunning |
| 3 | Parse per-file errors before deciding whether partial output is usable |
| 4 | Fix license or activation state |
| 5 | Retry with backoff if the job depends on network license checks |
| 6 | Treat the run as truncated and process the remaining files |
| 75 | Stop automation and refresh the license state |
| 76 | Free disk space before rerunning the container job |
| 130 | Treat the run as interrupted and incomplete |
| 143 | Treat the run as terminated and incomplete |
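A job script can turn that mapping into a single dispatch point. The exit-code values mirror the table above; the action labels are our own naming, not part of the CLI:

```shell
#!/bin/sh
# Map documented GetWebP exit codes to a CI decision. The code values
# come from the article's table; the action names are illustrative.
decide() {
  case "$1" in
    0)       echo "review-and-upload" ;;
    1)       echo "setup-or-command-failure" ;;
    2)       echo "fix-arguments" ;;
    3)       echo "inspect-per-file-errors" ;;
    4)       echo "fix-license-state" ;;
    5)       echo "retry-with-backoff" ;;
    6)       echo "process-remaining-files" ;;
    75)      echo "refresh-license" ;;
    76)      echo "free-disk-space" ;;
    130|143) echo "rerun-incomplete" ;;
    *)       echo "unknown-code" ;;
  esac
}

getwebp_status=0   # in a real job this would be "$?" after the getwebp step
decide "$getwebp_status"   # prints "review-and-upload"
```

Keeping the dispatch in one function means the CI workflow has exactly one place to update when the exit-code model changes.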

The GetWebP CI integration guide is the better reference for workflow-level secret handling, artifact upload, and parsing patterns.

Do Not Hide Visual Review

A Docker build can prove that a command ran. It cannot prove that the hero image, product photo, or screenshot looks acceptable. Keep visual review in the workflow:

  • generate outputs in CI or locally
  • expose them as artifacts or pull request changes
  • review important pages or snapshots
  • approve before publishing

Google's WebP documentation explains the output format, but the visual decision belongs to the project.

Use a review matrix:

| Asset role | Review requirement |
| --- | --- |
| Hero or likely LCP image | Check crop, detail, and selected responsive file |
| Product image | Check texture, color, edges, and zoom state |
| Screenshot | Check text, icons, borders, and small UI elements |
| Transparent logo | Check edges on light and dark backgrounds |
| Repeated thumbnail | Check one component visually and count aggregate transfer impact |

Only approved outputs should move from the asset stage into the runtime image or release artifact. Smaller files that fail visual review are not wins.

Choose the Simplest Reliable Boundary

There are three common boundaries:

  1. run conversion locally before commit
  2. run conversion in CI and commit or artifact the outputs
  3. run conversion during Docker build

The third option is not always the best. If it makes every deployment depend on image conversion, native packages, or network installation, consider moving the step earlier.

Use this decision table:

| Boundary | Use it when | Watch for |
| --- | --- | --- |
| Local before commit | Small teams manually review generated files | Developer machines may drift unless the command is documented |
| CI artifact | Reviewers need outputs and reports before merge | Artifacts must be retained long enough for review |
| CI commit-back | The repo should store generated variants | Avoid noisy diffs and make failures explicit |
| Docker build stage | The app image is the only release artifact | Cache, network installation, and review artifacts need careful handling |
| Runtime conversion | Images are user-generated after deploy | The runtime container now owns converter updates and resource limits |

Also keep the privacy boundary honest. The GetWebP security whitepaper separates the image-processing data plane from the licensing and account control plane. In a Docker or CI setup, image bytes can stay local to the runner or repository while license activation, status checks, or package downloads still use the network. Do not describe that as "zero network"; describe it as "no image upload to the converter vendor" when that is the claim you can support.

Docker image optimization is successful when the build remains repeatable, source files are preserved, outputs are reviewable, and the final container does not carry tools it does not need at runtime.


Jack

GetWebP Editor

Jack writes GetWebP guides about local-first image conversion, WebP workflows, browser compatibility, and practical performance checks for teams that publish images on the web.