
Image Optimization · Aug 23, 2025 · 5 min read

Why Original Images Should Stay Untouched in Conversion Pipelines

The fastest image conversion workflow is not always the safest one. A command that overwrites JPEGs, PNGs, or source WebP files in place may look efficient, but it removes the evidence you need when quality problems appear later. Once the original is gone, every review becomes harder: Was the blur already in the source? Did resizing soften the product edge? Did the new encoder remove useful metadata? Did a CMS transform the file after upload?

A reliable conversion pipeline treats originals as source material, not disposable input. New WebP, AVIF, or resized files should be written beside the originals or into a separate output directory. That small discipline makes optimization easier to audit, easier to roll back, and less stressful for teams that publish client, ecommerce, or editorial assets.

"Keep backups" is not a pipeline policy by itself. A real pipeline proves that originals were not overwritten, records the generated output path, and keeps enough conversion evidence to explain a later rollback or quality complaint.

Originals Are the Quality Reference

Compression review needs a reference point. If the source file has been overwritten, reviewers can only compare the output with memory, screenshots, or a production copy that may already have been transformed by another system.

Keep the original file available for direct comparison. This is especially important for:

  • product photos with texture or fine edges
  • screenshots with small text
  • transparent graphics
  • brand assets
  • photography portfolios
  • legal, medical, or financial documents used as page images

Visual review should answer a specific question: did the conversion introduce visible damage in the context where the image will be used? Answering that question requires both the source and the output to be available.
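When the original is kept, that comparison can even be scripted. A hedged sketch, assuming ImageMagick is installed and using illustrative paths:

```shell
# Compare a kept original against its generated output.
# Assumes ImageMagick's `compare` is available; paths are illustrative.
orig=./images/originals/hero.jpg
out=./images/optimized/hero.webp

if command -v compare >/dev/null 2>&1 && [ -f "$orig" ] && [ -f "$out" ]; then
  # PSNR: higher means closer to the original. Also eyeball diff.png.
  compare -metric PSNR "$orig" "$out" diff.png
  status=compared
else
  status=skipped  # install ImageMagick and adjust paths to run this
fi
echo "$status"
```

A numeric metric does not replace looking at the image in context, but it makes regressions between conversion runs easy to spot.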

Rollback Should Not Depend on Backups Alone

Backups are useful, but they are not a good everyday rollback mechanism for image optimization. Restoring from backups may require a developer, may restore too much, or may not preserve the exact file that was overwritten.

A safer pattern is:

images/
  originals/
    hero.jpg
    product-boot.png
  optimized/
    hero.webp
    product-boot.webp

With this structure, rollback can be as simple as changing a reference, removing a generated file, or re-running conversion with different settings. The team does not need to reconstruct the original state from a server snapshot.
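That rollback story can be sketched end to end in a few lines of shell. This is a simulation only: the `touch` stands in for a real converter writing into optimized/.

```shell
# Simulate the originals/optimized layout and a one-file rollback.
# No real conversion happens; `touch` stands in for the converter.
set -eu
root=$(mktemp -d)
mkdir -p "$root/images/originals" "$root/images/optimized"
touch "$root/images/originals/hero.jpg"

# A converter such as getwebp would write the generated file here.
touch "$root/images/optimized/hero.webp"

# Rollback: remove the generated file. The original is never touched.
rm "$root/images/optimized/hero.webp"
ls "$root/images/originals"
```

Because the generated file and the source live in different directories, undoing a bad conversion never puts the original at risk.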

Conversion Is Often a Multi-Step Process

Format conversion is only one part of image optimization. A production pipeline may also include resizing, metadata removal, filename normalization, responsive image generation, CDN upload, and CMS registration.

If every step overwrites the previous file, it becomes difficult to understand what caused a problem. A blurry image might come from resizing. A color shift might come from a color profile issue. A larger-than-expected file might come from re-encoding an already compressed WebP. Separate outputs make those stages easier to inspect.

For a small team, the structure can stay simple:

  1. Store originals in one folder.
  2. Generate resized working files in another folder.
  3. Generate WebP or AVIF outputs from the working files.
  4. Publish only the approved outputs.

That chain is easier to explain than a folder full of overwritten assets.
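The four steps above can be sketched as a staged layout. The resize and encode commands here are placeholders; a real pipeline might use an ImageMagick resize for step 2 and getwebp on the working files for step 3.

```shell
# Staged layout for the four steps above. `cp` and `touch` are
# placeholders for the real resize and encode commands.
set -eu
root=$(mktemp -d)
mkdir -p "$root/originals" "$root/working" "$root/outputs"
touch "$root/originals/hero.jpg"

for src in "$root"/originals/*.jpg; do
  name=$(basename "$src" .jpg)
  cp "$src" "$root/working/$name.jpg"   # step 2: resize would go here
  touch "$root/outputs/$name.webp"      # step 3: WebP/AVIF encode here
done

ls "$root/outputs"   # step 4: publish only from outputs/
```

When a stage misbehaves, each folder holds the input that stage received, so a blurry or oversized result can be traced to the step that produced it.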

Metadata Decisions Need Review

Image metadata can be useful or risky depending on the file. Camera EXIF data may include location, device, time, and orientation information. Editorial teams may also rely on copyright, caption, or asset-management metadata.

Overwriting originals during optimization can remove metadata before anyone decides what should be kept. Keeping source files untouched gives the team time to choose a policy: remove sensitive metadata from public outputs while preserving archival metadata internally.
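One hedged way to implement that policy is to write a metadata-stripped public copy while leaving the archival original alone. This sketch assumes exiftool is installed; the paths are illustrative.

```shell
# Write a metadata-stripped public copy; the original keeps its EXIF.
# Assumes exiftool is installed; paths are illustrative.
src=./images/originals/hero.jpg
pub=./images/optimized/hero.jpg

if command -v exiftool >/dev/null 2>&1 && [ -f "$src" ]; then
  exiftool -GPS:all "$src"         # review location data before deciding
  exiftool -all= -o "$pub" "$src"  # public copy with metadata removed
  status=stripped
else
  status=skipped  # install exiftool and adjust paths to run this
fi
```

Because the stripped copy is a new file, the team can still change its mind later about which fields the archival original should keep.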

MDN's image file type guide is a useful reference for understanding common web image formats, while Google's WebP documentation explains the WebP format and encoder family. Neither source removes the need for a team-level policy about source preservation.

Re-Encoding Can Make Results Worse

An already optimized image is not raw material. Re-encoding it repeatedly can create new artifacts or sometimes even increase the file size. This happens when a pipeline treats every input as if it were a high-quality source.

The safest workflow records whether a file is original, resized, converted, or previously optimized. If a WebP file is already the approved output, do not run it through another lossy conversion just because it is in the folder.

This matters for content teams that reuse assets across launches. Without source tracking, a hero image may be compressed once by a designer, again by a website build step, and again by a CMS plugin.
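A minimal guard is to keep already-encoded files out of the lossy input set before conversion runs at all. A sketch with illustrative filenames:

```shell
# Collect conversion candidates, skipping files that are already WebP.
set -eu
root=$(mktemp -d)
touch "$root/hero.jpg" "$root/banner.png" "$root/hero.webp"

candidates=""
for f in "$root"/*; do
  case "$f" in
    *.webp) echo "skip (already WebP): $f" ;;
    *)      candidates="$candidates $f" ;;
  esac
done
echo "to convert:$candidates"
```

An extension check is crude compared with real provenance tracking, but it already prevents the most common double-compression mistake: re-encoding an approved output just because it sits in the input folder.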

A Good Pipeline Is Reproducible

A local conversion tool should make it easy to write outputs without touching inputs. For example:

mkdir -p ./reports

getwebp ./images/originals \
  --recursive \
  --output ./images/optimized \
  --dry-run

getwebp ./images/originals \
  --recursive \
  --output ./images/optimized \
  --quality 82 \
  --json \
  --manifest ./reports/images-manifest.json \
  > ./reports/images-conversion.ndjson

The dry run previews the input set. The real run writes generated files to ./images/optimized, keeps the original files in place, and leaves a structured report for review.

jq -r '
  select(.type == "convert.completed")
  | .data.results[]
  | [.status, .file, .outputPath, .originalSize, .newSize, .quality, .qualityMode, (.error // "")]
  | @tsv
' ./reports/images-conversion.ndjson

The manifest records successful output fingerprints, while the NDJSON report is still needed for failures, skipped files, or truncation events. The GetWebP CLI command reference documents --output, --recursive, --dry-run, --quality, --json, and --manifest; the JSON output guide explains the per-file fields.

If you omit --output, converted files are written next to the source files as new output files. That still preserves the original bytes, but it can make a source folder harder to audit. For client handoff, CMS upload, and rollback, a separate output directory is usually cleaner.

If a team later decides that product zoom images need a higher quality setting, the originals are still available and the prior conversion report explains what changed.

Preserving originals is not just a cautious habit. It is an engineering control. It keeps quality review grounded, keeps rollback practical, and gives the team a cleaner record of how each published image was produced.


Jack

GetWebP Editor

Jack writes GetWebP guides about local-first image conversion, WebP workflows, browser compatibility, and practical performance checks for teams that publish images on the web.