
WebP · Aug 5, 2025 · 5 min read

How to Choose WebP Quality Without Guessing

The least useful answer to “What WebP quality should I use?” is a single number with no context. You will see recommendations like 75, 80, 82, or 90, and some of them may work for a particular image set. The problem is that your site probably contains more than one kind of image.

A compressed product photo, a transparent badge, a UI screenshot, and a full-width hero image fail in different ways. If you choose one quality value by habit, you may get good savings on one group and visible damage on another.

The better approach is to build a small quality test and make the decision from real assets.

Swapping one magic number for another is still guessing. A defensible quality choice documents the corpus, the exact commands, the quality mode, the file-size evidence, the visual approval rule, and the exceptions that should not inherit the default.

Build a Representative Sample

Start with a sample set of 12 to 20 images from the site or product you actually maintain. Do not use random stock photos unless your site is mostly stock photography. Include the images that represent your real risk:

  • large hero photos
  • small thumbnails
  • product detail images
  • UI screenshots with text
  • transparent PNG graphics
  • blog featured images
  • images with gradients or shadows
  • dark images and bright images

This sample becomes your review corpus. Keep the originals in one folder and write converted outputs to another folder so comparison stays simple.

mkdir -p ./reports

getwebp ./quality-sample \
  --recursive \
  --output ./quality-output/q76 \
  --quality 76 \
  --json > ./reports/webp-q76.ndjson

getwebp ./quality-sample \
  --recursive \
  --output ./quality-output/q82 \
  --quality 82 \
  --json > ./reports/webp-q82.ndjson

getwebp ./quality-sample \
  --recursive \
  --output ./quality-output/q88 \
  --quality 88 \
  --json > ./reports/webp-q88.ndjson

Run more than one quality value if the decision matters. For example, compare 76, 82, and 88 on the same files. You are not looking for the smallest output at any cost. You are looking for the lowest setting that still survives visual review.
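The three invocations above can be driven from a single ladder list. A minimal Python sketch, assuming getwebp is on your PATH and using only the flags shown above (build_command and run_ladder are illustrative helpers, not part of GetWebP):

```python
import subprocess
from pathlib import Path

QUALITY_LADDER = [76, 82, 88]

def build_command(quality):
    """One fixed-quality getwebp invocation, mirroring the shell commands above."""
    return [
        "getwebp", "./quality-sample",
        "--recursive",
        "--output", f"./quality-output/q{quality}",
        "--quality", str(quality),
        "--json",
    ]

def run_ladder():
    """Run every rung and capture each NDJSON report, as the shell version does."""
    Path("./reports").mkdir(exist_ok=True)
    for q in QUALITY_LADDER:
        with open(f"./reports/webp-q{q}.ndjson", "w") as report:
            subprocess.run(build_command(q), stdout=report, check=True)
```

Keeping the ladder in one list means adding a fourth rung later is a one-line change rather than a fourth copied command.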

Use fixed --quality values when comparing a quality ladder. If you omit --quality, GetWebP uses WebP auto-quality mode, which can be a good production default but is not the same as a fixed q82 comparison.

Summarize each report:

for report in ./reports/webp-q*.ndjson; do
  jq -r --arg report "$report" '
    select(.type == "convert.completed")
    | .data.results[]
    | [$report, .status, .file, .outputPath, .originalSize, .newSize, .savedRatio, .quality, .qualityMode, (.error // "")]
    | @tsv
  ' "$report"
done

The GetWebP CLI command reference documents --quality, --recursive, --output, and --json; the JSON output guide explains savedRatio, quality, and qualityMode.
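If you prefer a script over jq, the same aggregation can be sketched in Python. This assumes only the NDJSON event shape used in the jq filter above (a convert.completed event carrying data.results entries with originalSize and newSize); the helper name and the overall ratio it derives are illustrative:

```python
import json
from pathlib import Path

def summarize_report(path):
    """Aggregate one NDJSON report into totals a reviewer can compare across rungs."""
    total_original = total_new = files = 0
    for line in Path(path).read_text().splitlines():
        if not line.strip():
            continue
        event = json.loads(line)
        if event.get("type") != "convert.completed":
            continue
        for result in event["data"]["results"]:
            files += 1
            total_original += result["originalSize"]
            total_new += result["newSize"]
    # Overall ratio across the whole report, derived from the size fields.
    overall = 1 - total_new / total_original if total_original else 0.0
    return {"report": Path(path).name, "files": files,
            "originalBytes": total_original, "newBytes": total_new,
            "overallSavedRatio": round(overall, 3)}

reports_dir = Path("./reports")
if reports_dir.is_dir():
    for report in sorted(reports_dir.glob("webp-q*.ndjson")):
        print(summarize_report(report))
```

Comparing the overall ratio per report makes the ladder decision concrete: if q82 saves nearly as much as q76 and survives review, q82 wins.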

Review at Real Display Size First

Open the page or component where the image appears. A small artifact that is obvious at 200 percent zoom may be irrelevant on a 320-pixel card. The reverse is also true: a hero image can look acceptable in a file preview and weak when it fills the first viewport.

Review the converted image in its real layout before you inspect pixels. Ask practical questions:

  • Does the page still feel sharp?
  • Are faces, products, or text still clear?
  • Are gradients or shadows noisy?
  • Do transparent edges show halos?
  • Does the image still support the content?

If the answer is yes, then inspect the risky area at full size. The two-step review prevents you from approving bad output while also avoiding unnecessary pixel-level nitpicking.

Treat Screenshots Differently From Photos

Screenshots often need a higher quality threshold than natural photos. Text, thin borders, icons, and flat color areas make compression artifacts easier to spot. A setting that is acceptable for a landscape photo may blur small UI labels or make a documentation screenshot look careless.

For documentation, SaaS marketing, and tutorials, review screenshots as editorial assets. If the reader needs to inspect UI text, keep quality conservative, or retain PNG/SVG for assets where the tested WebP output makes text or edges worse.

Measure Savings, But Do Not Worship Them

File-size savings matter. Google’s WebP documentation describes meaningful reductions compared with JPEG and PNG, and MDN lists WebP as a strong web image format because it supports compression, transparency, and animation. But savings are only useful when the image still works for users.

Track each candidate setting in a small table:

| Image type | Quality tested | Visual result | Size result | Decision |
| --- | --- | --- | --- | --- |
| Hero photo | 82 | Good | Strong reduction | Use |
| UI screenshot | 82 | Text slightly soft | Moderate reduction | Retest higher |
| Transparent badge | 82 | Edge halo | Good reduction | Try lossless or PNG |

This kind of table gives the team a defensible default and records where the default should not apply.
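One lightweight way to keep that record machine-readable is to store the rows as data. A sketch; the field names and helper are illustrative, not a GetWebP format:

```python
from dataclasses import dataclass

@dataclass
class QualityReview:
    image_type: str
    quality: int
    visual: str
    size: str
    decision: str

# The review table from above, recorded as data the team can query later.
REVIEWS = [
    QualityReview("Hero photo", 82, "Good", "Strong reduction", "Use"),
    QualityReview("UI screenshot", 82, "Text slightly soft", "Moderate reduction", "Retest higher"),
    QualityReview("Transparent badge", 82, "Edge halo", "Good reduction", "Try lossless or PNG"),
]

def exceptions(reviews):
    """Rows where the tested default should not be inherited as-is."""
    return [r for r in reviews if r.decision != "Use"]
```

Checking in this file next to the reports means the exceptions list travels with the default instead of living in one reviewer's memory.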

Set Rules, Not Just a Number

After review, write a short rule set:

  • Photos: use the chosen default quality.
  • Screenshots with small text: use a higher setting and manual review.
  • Transparent graphics: test against real backgrounds.
  • Tiny icons and SVG-like artwork: keep SVG or PNG when it is the right format.
  • High-value hero or product images: require human approval before publishing.

That rule set is more useful than a magic number because it survives new asset types.
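The rule set above can also be expressed as a small lookup so the default never silently applies to an exception class. A sketch; the asset classes and numbers are illustrative placeholders chosen for this example, not GetWebP settings:

```python
DEFAULT_QUALITY = 82  # illustrative default chosen from a ladder review

# Hypothetical per-class overrides; None means "do not convert by default".
QUALITY_RULES = {
    "photo": DEFAULT_QUALITY,
    "screenshot": 90,               # higher floor, plus manual review
    "transparent": DEFAULT_QUALITY, # but test against real backgrounds
    "icon": None,                   # keep SVG/PNG when that is the right format
}

def quality_for(asset_class):
    """Return the quality to use, or None when the asset should stay in its source format."""
    if asset_class not in QUALITY_RULES:
        raise KeyError(f"no rule for {asset_class!r}; review before converting")
    return QUALITY_RULES[asset_class]
```

Raising on an unknown class is deliberate: a new asset type forces a review instead of quietly inheriting the photo default.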

Final Review

Before adopting a WebP quality setting, confirm that the sample included your real image mix, outputs were reviewed in context, high-risk assets were inspected closely, and exceptions were documented. A good quality choice is not the most aggressive setting. It is the setting your team can reuse without silently damaging the site.



Jack, GetWebP Editor

Jack writes GetWebP guides about local-first image conversion, WebP workflows, browser compatibility, and practical performance checks for teams that publish images on the web.