
WebP · Jan 14, 2026 · 7 min read

Why Re-Encoding WebP Can Increase File Size

It feels logical that running an image through a modern optimizer should make it smaller. In practice, re-encoding an existing WebP file can produce a larger file, reduce visual quality, or both. That does not mean WebP is unreliable. It means the input, settings, and asset role matter.

Before treating "optimize all WebP files" as a maintenance task, understand why a second conversion pass can disappoint.

"WebP is already compressed, so do not re-compress it" is not practical enough. Some existing WebP files should be left alone, some should be regenerated from a better source, some should be resized, and a small number may benefit from a new encode. The hard part is deciding which case you have before running a library-wide job.

Compression Is Not a One-Way Ratchet

Compression does not keep finding free savings forever. A lossy WebP file may already be near the useful limit for its image content. When you decode it and encode it again, the encoder has to describe the already-compressed pixels. Those pixels may contain artifacts, softened edges, or altered color transitions that are harder to compress efficiently.

The second file can be larger because the new encoder settings preserve details that were introduced by the first compression pass. It can also be smaller but visibly worse. Neither result is a useful win.

The right baseline is the best available source file, not the current published derivative.

Use a decision table before conversion:

| Current file | Best available source | Likely action |
| --- | --- | --- |
| Existing lossy WebP | none | inspect and usually skip unless page evidence justifies a trial |
| Existing WebP exported too large | original, if recoverable | resize from source if possible; do not only re-encode |
| Existing WebP from old product screenshot | current screenshot | regenerate from current screenshot source |
| Existing lossless WebP logo | PNG source | keep lossless or compare against PNG source; avoid lossy edges |
| Existing WebP with visible artifacts | cleaner source | recover cleaner source before another encode |

This avoids the common mistake of treating format name as the optimization strategy.
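As a sketch, the table above can be folded into a small decision helper. The function name, its flags, and the returned strings are illustrative editorial policy, not part of any GetWebP API:

```python
# Hypothetical helper mirroring the decision table above.
# The category flags and action strings are illustrative policy,
# not a GetWebP API.
def plan_action(has_source: bool, lossless: bool, oversized: bool,
                visible_artifacts: bool) -> str:
    if visible_artifacts:
        # Artifacts compound on re-encode; fix the input first.
        return "recover cleaner source before another encode"
    if oversized and has_source:
        # Resizing from source beats re-compressing excess pixels.
        return "resize from source; do not only re-encode"
    if lossless:
        return "keep lossless or compare against PNG source"
    if not has_source:
        return "inspect and usually skip"
    return "regenerate from current source"
```

A team could extend the flags as new cases show up in audits; the point is that the branch order encodes priorities, with visible damage outranking everything else.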

Settings May Not Match the Original

WebP encoding has many choices: quality level, lossless mode, near-lossless behavior, method, alpha handling, metadata, and more. Google's cwebp documentation shows how many options can influence output.

If the original WebP was created with one set of choices and the new pass uses another, file size can move in either direction. For example, a new setting may preserve alpha more carefully, keep metadata, use a different tradeoff between speed and compression, or avoid damage that the old file accepted.

That does not mean the new setting is wrong. It means the result needs visual and byte review, not an assumption.

If you run a controlled trial with GetWebP, write outputs beside the originals or into a staging folder so the comparison is reversible:

npx -y getwebp ./published-webp -o ./reencode-trial --recursive --format webp --json

The GetWebP CLI commands reference documents WebP as a supported input, separate output directories with --output, recursive scans with --recursive, and structured output with --json. It also states that original files are never modified or deleted, which is the right default for a re-encoding audit.

Lossless and Lossy Are Different Jobs

A lossless WebP and a lossy WebP should not be pushed through the same pipeline without inspection. Google's WebP documentation describes both compression modes. A transparent logo, UI graphic, screenshot, and product photo can have very different requirements.

If a lossless transparent asset is re-encoded as lossy, it may develop edge artifacts. If a lossy photo is re-encoded as lossless, the output may preserve compression artifacts very faithfully and become much larger.

Classification comes before conversion. Ask what kind of content the image contains and why it was WebP in the first place.

Classify the file by visual role, not only by extension:

File: product-ui-dashboard.webp
Role: product screenshot
Current use: pricing page hero
Risk: small UI labels and chart lines
Source available: Figma export exists
Decision: regenerate from source at required dimensions; do not re-encode the published WebP

That level of record keeping turns a simple warning into a repeatable workflow.

Metadata Can Change the Result

Metadata is not the main reason most image files are large, but it can still affect comparisons. If one workflow strips metadata and another preserves it, file size comparisons become less meaningful. Color profile handling can also affect whether the rendered image still matches expectations.

For product, brand, and editorial images, do not remove metadata blindly if the workflow depends on it. Instead, decide what should be preserved and make that consistent across the conversion process.

If metadata handling changes, report it as part of the optimization result.

For most web delivery assets, the visible result matters more than preserving every source-side field. For regulated, editorial, or brand workflows, though, metadata and color handling may be part of the approval path. A re-encoding report should say whether metadata policy changed instead of presenting size savings as the only outcome.

Dimensions May Be the Real Problem

Sometimes a WebP file looks too large because it has too many pixels. Re-encoding at the same dimensions may save little or nothing. Resizing to the dimensions the page actually needs may create the real improvement.

Check:

  • intrinsic image width and height
  • rendered size in the browser
  • responsive candidates available
  • file selected on mobile and desktop
  • whether the CMS generated oversized derivatives

If the page displays a 2200-pixel WebP in a 500-pixel slot, compression settings are not the first issue.

A better report separates byte savings from right-sizing:

Asset: hero-dashboard.webp
Published intrinsic size: 2200 x 1400
Rendered slot: 720 x 458 desktop, 360 x 229 mobile
Re-encode at same dimensions: +8.6% larger
Regenerate candidate sizes: 720w and 1440w from source
Decision: do not re-encode original dimensions; fix responsive candidates

This explains why "larger after optimization" may be the wrong headline. The real finding may be that the page is serving the wrong dimensions.
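The right-sizing arithmetic above can be checked in a few lines. The 2x device-pixel-ratio and the 1.25 slack factor are assumptions for illustration, not GetWebP behavior:

```python
# Minimal right-sizing check. Assumes the CSS slot width and the
# intrinsic pixel width are known; dpr=2.0 covers typical
# high-DPI screens, and `slack` tolerates small overshoot.
def needed_width(slot_css_px: int, dpr: float = 2.0) -> int:
    return round(slot_css_px * dpr)

def is_oversized(intrinsic_px: int, slot_css_px: int,
                 dpr: float = 2.0, slack: float = 1.25) -> bool:
    # Flag files carrying meaningfully more pixels than the
    # densest target screen can actually use.
    return intrinsic_px > needed_width(slot_css_px, dpr) * slack
```

For the hero above, a 720-pixel desktop slot needs at most 1440 intrinsic pixels at 2x, so the published 2200-pixel file is flagged before any compression setting is discussed.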

Source Quality Can Be the Limiting Factor

If the current WebP came from a low-quality JPEG, re-encoding it may preserve the JPEG's artifacts plus introduce new WebP artifacts. The file may not get meaningfully smaller because there is not much clean information left to compress well.

In that situation, the best improvement is often to recover a higher-quality source, export an appropriate size, and encode once. If the source is unavailable, keep the existing file unless visual review and measurement show a clear benefit.

Measure More Than the Final Size

A useful re-encoding test records:

  • input format and size
  • input dimensions
  • source availability
  • output settings
  • output size
  • visual review notes
  • rendered page context
  • decision: keep, replace, resize, or skip

This prevents a larger output from being misread as failure when it is actually preserving quality, and it prevents a smaller output from being accepted when it damages the asset.
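One way to keep that record is a small per-file structure. The field names below follow the list above and are illustrative, not a GetWebP schema:

```python
# Illustrative audit record for one re-encoding trial.
# Field names mirror the checklist above, not any GetWebP output.
from dataclasses import dataclass

@dataclass
class ReencodeTrial:
    input_format: str
    input_size_kb: float
    input_dimensions: tuple[int, int]   # (width, height)
    source_available: bool
    output_settings: str
    output_size_kb: float
    visual_notes: str
    rendered_context: str
    decision: str  # "keep", "replace", "resize", or "skip"
```

Storing one of these per file makes later questions ("why was this kept larger?") answerable without re-running the trial.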

With GetWebP --json, the conversion stream is newline-delimited JSON, not one large JSON array. The JSON output reference documents per-file fields such as originalSize, newSize, savedRatio, saved, quality, qualityMode, status, and outputPath.

For this topic, pay special attention to negative savedRatio values. A negative saving is not automatically bad, but it is a review trigger:

File: logo-strip.webp
Status: success
Original size: 42 KB
New size: 47 KB
Saved ratio: -0.119
Visual review: edges cleaner than old file
Decision: keep larger output only if brand reviewer approves; otherwise restore original

The opposite case also needs review:

File: product-grid-01.webp
Status: success
Saved: 38.0%
Visual review: fabric texture smeared in category grid
Decision: reject smaller output

That is the core quality point: size movement is evidence, not a verdict.
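Because the --json stream is newline-delimited, a review pass can flag both cases line by line. This sketch assumes only the per-file fields the article names (status, savedRatio, outputPath); the flagging thresholds are editorial, not GetWebP defaults:

```python
# Sketch of a review pass over newline-delimited --json output.
# Assumes each line is one JSON record with status, savedRatio,
# and outputPath fields; reasons and thresholds are illustrative.
import json

def review_triggers(ndjson_text: str) -> list[dict]:
    flagged = []
    for line in ndjson_text.splitlines():
        if not line.strip():
            continue
        rec = json.loads(line)
        if rec.get("status") != "success":
            flagged.append({**rec, "reason": "conversion not successful"})
        elif rec.get("savedRatio", 0) < 0:
            # Larger output: not automatically bad, but a review trigger.
            flagged.append({**rec, "reason": "output larger than input"})
    return flagged
```

Note that this only surfaces byte-level triggers; the smeared-texture case above would pass this filter, which is exactly why visual review stays in the loop.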

A Practical Rule

Do not re-encode WebP files just because they are WebP files. Re-encode from the best source when you have a clear reason: wrong dimensions, known poor settings, missing responsive variants, or a measured opportunity on important pages. If the only input is an existing lossy WebP, test a small sample before touching a whole library.

Use this approval rule:

Approve replacement only when:
- the page or asset role is known
- the best available source has been identified
- output dimensions match the rendered need
- JSON conversion data is stored
- a reviewer has checked the image in context
- rollback is possible because originals were not overwritten

If those facts are missing, the safe answer is usually "audit first, convert later."
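The approval rule can be enforced as a simple gate in a review script. The fact names are illustrative and would come from whatever audit record the team keeps:

```python
# Hypothetical approval gate encoding the checklist above.
# Fact keys are illustrative, not a GetWebP schema.
REQUIRED_FACTS = (
    "asset_role_known",
    "best_source_identified",
    "dimensions_match_rendered_need",
    "json_data_stored",
    "reviewed_in_context",
    "originals_preserved",
)

def approve_replacement(facts: dict) -> bool:
    # Every fact must be affirmatively true; missing means not approved.
    return all(facts.get(key) is True for key in REQUIRED_FACTS)
```

Requiring `is True` rather than truthiness means an absent or unknown fact fails closed, which matches the "audit first, convert later" default.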

Image optimization is most reliable when it works from clean sources and clear acceptance criteria. Re-encoding can help, but it can also add another layer of loss or produce larger files for defensible reasons.

Jack

GetWebP Editor

Jack writes GetWebP guides about local-first image conversion, WebP workflows, browser compatibility, and practical performance checks for teams that publish images on the web.