
WebP · Sep 10, 2025 · 5 min read

How to Test Visual Quality After WebP Conversion

WebP conversion should not be approved by file size alone. A 60 percent reduction can still be a bad outcome if a product edge looks dirty, a face looks waxy, a screenshot becomes hard to read, or a transparent shadow turns into a visible outline. Visual quality has to be tested in the context where the image will be used.

A good review process is simple enough for a small team but strict enough to catch real problems. It compares the converted file against the source, checks high-risk details, and confirms that the published page still looks right across breakpoints.

"Look at the image and make sure it looks good" is too subjective to guide a conversion rollout. A defensible QA process records the sample, the exact conversion command, the quality mode, the files that failed, and the page context where approval happened.

Use a Representative Sample First

Do not test only one image and apply the setting to every asset. Build a sample that reflects the actual site or project:

  • one large hero photo
  • one product or portfolio image
  • one portrait
  • one screenshot with text
  • one transparent PNG replacement
  • one thumbnail
  • one image with gradients or shadows

This sample surfaces the risky image types and settings before a full migration. If the site has many product categories, include the difficult ones: glossy items, fabric, food, jewelry, cosmetics, or anything with fine texture.
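
For example, a sample folder for the test batch in the next section might be assembled like this. Every path here is illustrative; substitute real assets from your own project:

mkdir -p ./qa-sample
cp ./assets/hero/home-hero.jpg         ./qa-sample/
cp ./assets/products/leather-bag.jpg   ./qa-sample/
cp ./assets/team/portrait-anna.jpg     ./qa-sample/
cp ./assets/docs/settings-screen.png   ./qa-sample/
cp ./assets/ui/logo-shadow.png         ./qa-sample/
cp ./assets/thumbs/card-thumb.jpg      ./qa-sample/
cp ./assets/marketing/sky-gradient.jpg ./qa-sample/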

Run a Reproducible Test Batch

Create a small QA folder and convert it with an explicit setting before reviewing the full site. If you want to compare fixed WebP quality levels, pass --quality so the report does not mix auto-quality output with a fixed-number test.

mkdir -p ./reports

getwebp ./qa-sample \
  --recursive \
  --output ./qa-results/q82 \
  --quality 82 \
  --json > ./reports/qa-q82.ndjson

If you prefer GetWebP's default WebP auto-quality mode, omit --quality and label the review as auto mode. Do not describe an auto-quality run as "q82" just because another batch used that number.
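
A matching auto-mode batch uses the same flags minus --quality. The auto output directory and report name below are labels we chose for the review, not CLI defaults:

getwebp ./qa-sample \
  --recursive \
  --output ./qa-results/auto \
  --json > ./reports/qa-auto.ndjson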

Summarize the report before visual approval:

jq -r '
  select(.type == "convert.completed")
  | .data.results[]
  | [.status, .file, .outputPath, .originalSize, .newSize, .savedRatio, .quality, .qualityMode, (.error // "")]
  | @tsv
' ./reports/qa-q82.ndjson

This table gives reviewers the file list, output paths, size change, quality value, and quality mode. A large savedRatio is not an approval. It is only context for the visual decision.
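
One way to prioritize the review is to list the biggest reductions first, since those files are the most likely to show visible loss. This sketch assumes savedRatio is stored as a 0-to-1 fraction; if your report stores a percentage, adjust the threshold:

jq -r '
  select(.type == "convert.completed")
  | .data.results[]
  | select(.savedRatio > 0.6)
  | [.file, .savedRatio]
  | @tsv
' ./reports/qa-q82.ndjson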

Also check whether the run was truncated or had file-level failures:

jq -r 'select(.type=="convert.truncated" or .type=="convert.failed") | .data' ./reports/qa-q82.ndjson

The GetWebP CLI command reference covers --quality, --recursive, --output, and --json; the JSON output guide documents the NDJSON fields used in the review table.

Compare at Normal Viewing Size

Start review at the size users will actually see. Open the source and WebP output in the page layout or in a viewer set to the rendered dimensions. If the difference is not visible in normal use, the conversion may be acceptable even if a pixel-level inspection shows tiny changes.

Then zoom in only after the normal-size check. The goal is not to reject every mathematical difference. Lossy compression changes pixels by design. The goal is to reject visible damage that affects trust, readability, or brand quality.
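
A throwaway HTML page is often the quickest way to see both files at the rendered size. This is a minimal sketch; the filenames and the 720px width are placeholders for your own layout:

cat > ./reports/compare.html <<'HTML'
<!doctype html>
<!-- set width to the size the image actually renders at -->
<div style="display:flex; gap:16px">
  <img src="../qa-sample/home-hero.jpg" width="720" alt="source">
  <img src="../qa-results/q82/home-hero.webp" width="720" alt="webp q82">
</div>
HTML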

Inspect High-Risk Areas

Some image areas reveal compression problems faster than others. Check:

  • faces, hands, and skin tones
  • product texture and material grain
  • fine text in screenshots or packaging
  • gradients in skies, shadows, and backgrounds
  • sharp edges around objects
  • transparent alpha edges
  • dark areas with subtle detail
  • brand colors and logos

If an image contains small text, be conservative. Text artifacts are noticed quickly and can make a page feel careless even when the rest of the image looks fine.
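
For the zoom pass, cropping the same region from source and output makes side-by-side inspection easier. This assumes ImageMagick is installed (it is not part of GetWebP), and the crop geometry is a placeholder for whichever risky region you identified:

# same WxH+X+Y region from both files
magick ./qa-sample/settings-screen.png       -crop 480x240+620+180 ./reports/crop-src.png
magick ./qa-results/q82/settings-screen.webp -crop 480x240+620+180 ./reports/crop-q82.png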

Test Against the Real Background

Transparent and cutout images should be reviewed on the same background where they will render. A file may look clean on a checkerboard and show a halo on white, black, or a brand color.

For assets used in multiple contexts, test the common backgrounds:

  • light page background
  • dark mode background
  • card background
  • hover or selected state
  • mobile layout

Do not approve a transparent WebP from the export window alone. The edge quality is part of the page design.
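
A small test page makes this review repeatable. The filename and the brand color below are placeholders; use your real asset and palette:

cat > ./reports/backgrounds.html <<'HTML'
<!doctype html>
<style>div { display: inline-block; padding: 24px; }</style>
<!-- light, dark, and brand-color backgrounds -->
<div style="background:#ffffff"><img src="../qa-results/q82/logo-shadow.webp"></div>
<div style="background:#111111"><img src="../qa-results/q82/logo-shadow.webp"></div>
<div style="background:#0a5cff"><img src="../qa-results/q82/logo-shadow.webp"></div>
HTML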

Keep a Rejection Folder

When a file fails review, keep the failed output in a rejected folder with a short note. This is useful evidence. It shows why a setting was changed and prevents the team from repeating the same failed experiment later.

Example note:

File: product-bag-q74.webp
Issue: visible edge halo on white background
Decision: retry q84 and compare lossless WebP

The rejected examples also help designers and developers agree on what "too compressed" means for the project.
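
A minimal sketch of that workflow, reusing the note above (the q74 results directory is illustrative):

mkdir -p ./qa-rejected
mv ./qa-results/q74/product-bag.webp ./qa-rejected/product-bag-q74.webp
cat > ./qa-rejected/product-bag-q74.txt <<'NOTE'
Issue: visible edge halo on white background
Decision: retry q84 and compare lossless WebP
NOTE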

Record the Approved Settings

Once a sample passes, record the settings that were used. Include the input type, resize rule, format, quality value, and reviewer. For example:

Product photos: max 1600px, WebP q82 fixed, report qa-q82.ndjson, reviewed on PDP and zoom view by merchandising
Blog images: max 1200px, WebP auto-quality, report blog-auto.ndjson, reviewed in article template by editorial
Screenshots: max 1400px, WebP q90 fixed or keep PNG if text artifacts appear, reviewed by documentation

Quality values are not universal. They are only meaningful when tied to image type, source quality, and review context.

Record failed settings with the same discipline:

File: checkout-ui-q82.webp
Issue: sidebar labels blur at article width
Report: qa-q82.ndjson
Decision: retest q88 and compare PNG fallback
Reviewer: docs owner

Use Objective Tools as Support, Not the Verdict

Metrics and visual diff tools can help find suspicious files, but they should not replace human review. A tool may flag harmless differences or miss a detail that matters in a brand image.
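
As one screening pass, a metric loop can surface files worth a closer look. This assumes an ImageMagick build that can decode WebP; treat the PSNR numbers as a triage signal, not a pass/fail gate:

for src in ./qa-sample/*; do
  [ -f "$src" ] || continue
  name=$(basename "${src%.*}")
  out="./qa-results/q82/$name.webp"
  [ -f "$out" ] || continue
  # compare prints the metric on stderr; lower PSNR means a larger difference
  score=$(magick compare -metric PSNR "$src" "$out" null: 2>&1)
  printf '%s\t%s\n' "$name" "$score"
done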

Google's WebP documentation explains the format and encoder background, while MDN's image file type guide gives broader context for choosing web image formats. Those references are useful, but final approval should still happen on your real images.

WebP quality testing is not complicated. Sample honestly, compare in context, inspect the risky areas, document rejections, and keep the source files available so every decision can be repeated.


Jack

GetWebP Editor

Jack writes GetWebP guides about local-first image conversion, WebP workflows, browser compatibility, and practical performance checks for teams that publish images on the web.