Developers increasingly use AI agents inside code editors, terminals, and project workspaces. Image optimization fits that model when the agent can call a local tool with clear limits. The agent can inspect a docs folder, propose candidates, convert a sample, and produce a report without manually scripting every step.
The key is to expose image conversion as a narrow, auditable workflow rather than a vague "optimize everything" command.
A pitch like "connect an agent and automate images" misses the review risk. An auditable workflow should preserve the tool name, exact arguments, returned statuses, skipped files, manifest path, and the human quality decision so a developer can audit what the agent actually changed.
## Treat Conversion as a Tool
The Model Context Protocol lets applications expose tools to AI systems. The official MCP tools specification describes the server-side tool primitive that a model can invoke. GetWebP's implementation is documented in the MCP server guide, and its local-processing boundary is described in the security overview.
For image conversion, a tool should do a specific job:
- scan images
- convert selected files
- return status
- report skipped files
- avoid modifying originals
- expose errors clearly
Small tools are easier for an agent to use correctly. A broad tool that reads and writes anywhere in a repository is harder to trust.
## Start With Scan Before Convert
A developer workflow should usually begin with a scan. The scan tells the agent what exists before any file changes happen.
Useful scan output includes:
```json
{
  "total": 1,
  "files": [
    {
      "path": "/project/docs/images/setup-flow.png",
      "size": 842120,
      "format": "png",
      "has_webp": false
    }
  ]
}
```
The tool output should stay factual. Candidate labels such as "large screenshot" or "already optimized" can be added by the agent in its summary, but they should be traceable to fields returned by scan_images. That is safer than converting a folder blindly and easier to audit after the run.
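A factual scan step can be sketched in a few lines. This is an illustrative implementation, not GetWebP's: the function name and output fields simply mirror the example above, and it reports only facts it can verify on disk.

```python
from pathlib import Path

# Illustrative sketch of a scan tool: report facts only, never modify files.
# Field names (path, size, format, has_webp) follow the example output above.
SCANNABLE = {".png", ".jpg", ".jpeg"}

def scan_images(root: str) -> dict:
    files = []
    for p in sorted(Path(root).rglob("*")):
        if p.suffix.lower() not in SCANNABLE:
            continue
        files.append({
            "path": str(p),
            "size": p.stat().st_size,
            "format": p.suffix.lstrip(".").lower().replace("jpeg", "jpg"),
            # A verifiable fact the agent can cite: a sibling .webp exists.
            "has_webp": p.with_suffix(".webp").exists(),
        })
    return {"total": len(files), "files": files}
```

Because every field is read straight from the filesystem, any label the agent adds ("large screenshot", "already optimized") can be traced back to `size` or `has_webp` rather than guessed.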
## Convert Into a Predictable Output Path
Local conversion should write outputs somewhere reviewable. For example:
```text
docs/images/
docs/images/optimized/
docs/images/optimization-manifest.json
```
Original files should remain untouched unless the developer explicitly requests replacement. This makes diffs easier to review and preserves source assets for future changes.
For teams, this is also useful in pull requests. Reviewers can see new WebP files and a report instead of wondering which originals changed.
A bounded GetWebP MCP call should make that output path explicit:
```json
{
  "input": "docs/images",
  "output": "docs/images/optimized",
  "quality": 82,
  "recursive": true,
  "manifest_path": "docs/images/optimization-manifest.json"
}
```
The important detail is not the exact directory name. It is that the agent cannot silently replace source files while the developer thinks it is only creating reviewable outputs.
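That guard can be made explicit in code. The sketch below is a hypothetical helper, not GetWebP's implementation: it refuses to write outputs into the source directory unless replacement was explicitly requested.

```python
from pathlib import Path

# Hypothetical output-path guard: outputs go to a reviewable directory,
# and writing next to the originals requires an explicit opt-in.
def resolve_output(input_path: str, output_dir: str, replace: bool = False) -> Path:
    src = Path(input_path)
    dst = Path(output_dir) / (src.stem + ".webp")
    # Without replace=True, never write into the directory the source lives in,
    # so the agent cannot silently shadow or replace original assets.
    if not replace and dst.parent.resolve() == src.parent.resolve():
        raise ValueError(f"refusing to write into the source directory: {src.parent}")
    return dst
```

With a check like this, "convert in place" becomes a deliberate parameter in the execution log instead of an accident of path handling.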
## Return Structured Results
The agent should not parse a wall of text if the tool can return structured data. A conversion result should include counts, per-file status, original bytes, output bytes, savings, warnings, and structured error details when something fails.
The broader MCP architecture documentation describes how clients and servers communicate around tools and other primitives. For image workflows, structured tool responses make agent summaries more accurate and less speculative.
A useful conversion result:
```json
{
  "success": true,
  "total": 1,
  "succeeded": 1,
  "failed": 0,
  "skipped": 0,
  "warnings": [],
  "results": [
    {
      "file": "/project/docs/images/setup-flow.png",
      "status": "success",
      "original_size": 842120,
      "new_size": 284410,
      "saved_ratio": 0.6623
    }
  ],
  "manifest": {
    "path": "docs/images/optimization-manifest.json",
    "entries": 1,
    "generated_at": "2026-04-02T10:15:00.000Z"
  }
}
```
That shape gives the agent enough information to say what changed without inventing details. It can report "one file converted, 66% smaller, manifest written" and link the changed files in the pull request.
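The summary numbers in that shape should be derived, never estimated. Here is a small sketch, assuming per-file results shaped like the example above (status strings and field names are illustrative):

```python
# Sketch: derive summary counts and the savings ratio from per-file results.
# Assumes the result shape shown above; not the real GetWebP implementation.
def summarize(results: list[dict]) -> dict:
    succeeded = [r for r in results if r["status"] == "success"]
    failed = [r for r in results if r["status"] == "error"]
    skipped = [r for r in results if r["status"] == "skipped"]
    for r in succeeded:
        # saved_ratio comes from the two byte counts, so the agent's
        # "66% smaller" claim is arithmetic, not speculation.
        r["saved_ratio"] = round(1 - r["new_size"] / r["original_size"], 4)
    return {
        "success": not failed,
        "total": len(results),
        "succeeded": len(succeeded),
        "failed": len(failed),
        "skipped": len(skipped),
    }
```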
## Keep Tool Names Honest
Tool names should describe the real operation. For example:
- `scan_images`
- `convert_images`
- `get_status`
Avoid names that imply broader ability than the tool has. If a tool does not watch folders, authenticate users, or rewrite markdown references, do not name or describe it as if it does.
Clear names help the agent choose the right action and help developers understand what happened.
Descriptions matter too. A tool description should mention whether paths are relative to the workspace, whether glob patterns are supported, and whether outputs overwrite existing files. That detail reduces agent mistakes and makes review easier when a conversion call appears in an execution log.
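Those details belong in the tool definition itself. The MCP tools specification describes tools with a name, a description, and a JSON Schema `inputSchema`; the metadata below is an illustrative example of answering those questions in the description, not GetWebP's actual schema.

```python
# Illustrative MCP-style tool definition (not the real GetWebP schema).
# The description answers the questions above: path base, glob support,
# and overwrite behavior, so both the agent and the reviewer know the rules.
CONVERT_IMAGES_TOOL = {
    "name": "convert_images",
    "description": (
        "Convert images under `input` to WebP. Paths are relative to the "
        "workspace root. Glob patterns are not supported. Existing files in "
        "`output` are never overwritten, and originals are left untouched."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "input": {"type": "string"},
            "output": {"type": "string"},
            "quality": {"type": "integer", "minimum": 1, "maximum": 100},
        },
        "required": ["input", "output"],
    },
}
```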
For GetWebP, the documented tool surface is deliberately small:
| Tool | Purpose |
|---|---|
| `scan_images` | discover convertible images without modifying files |
| `convert_images` | convert images to WebP or AVIF with bounded parameters |
| `get_status` | report license status, plan limits, and cooldown state |
That is enough surface area for a developer workflow without giving the model a general file-rewriting API.
## Add Human Review Gates
Image conversion can be automated. Quality approval should not be fully automated for important assets.
The agent should flag images that need review:
- screenshots with small text
- transparent graphics
- product photos
- hero images
- existing WebP files
- outputs with low savings
- files that grew after conversion
- unsupported inputs
The developer can then inspect a smaller set instead of reviewing every generated file manually.
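Some of those flags can be computed mechanically from the structured result. The sketch below covers the measurable cases (failures, files that grew, low savings); the threshold and field names are assumptions for illustration, and content-based flags like "screenshot with small text" still come from the agent or the developer.

```python
# Hedged sketch: mechanically flag results for manual review.
# min_saved_ratio is an assumed threshold, not a GetWebP default.
def needs_review(result: dict, min_saved_ratio: float = 0.05) -> bool:
    if result["status"] != "success":
        return True  # errors, skips, and unsupported inputs always surface
    if result["new_size"] >= result["original_size"]:
        return True  # the file grew after conversion
    saved = 1 - result["new_size"] / result["original_size"]
    return saved < min_saved_ratio  # low savings: likely already optimized
```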
## Be Explicit About Limits
Every MCP server should document its boundaries:
- supported input formats
- supported output formats
- free or paid limits if relevant
- maximum files per call
- rate limits
- whether original files are changed
- whether image bytes leave the machine
This reduces bad assumptions. If a free tier limits conversion volume, the agent can handle that gracefully instead of retrying the same call repeatedly.
For GetWebP's MCP server, the boundaries worth showing in the prompt or runbook are concrete:
- Free plan: 20 files per `convert_images` call; extra files are returned as `skipped_by_limit`, not treated as a failed conversion
- Free plan rate limit: after three calls in a rolling 60-second window, the server returns `rate_limited` instead of sleeping
- Stable error codes: `rate_limited`, `input_not_found`, and `io_error`
- Supported outputs: WebP and AVIF
- State sharing: the MCP server shares license and rate-limit state with the CLI
- Originals: conversion output should be directed to a reviewable path unless replacement is explicitly requested
Those details make the workflow more trustworthy because the agent can explain whether it stopped because of quality review, plan limits, or a real conversion failure.
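Agent-side handling of those documented states can be sketched briefly. The error-code and status strings below come from the boundaries listed above; the function shape and retry policy are assumptions for illustration.

```python
# Sketch of agent-side limit handling: rate_limited and skipped_by_limit
# are documented states to route on, not failures to retry blindly.
def handle_result(result: dict, batch: list[str]) -> tuple[str, list[str]]:
    if result.get("error") == "rate_limited":
        # The server returns instead of sleeping, so the client decides:
        # wait out the 60-second window once, then retry the same batch.
        return ("wait", batch)
    leftover = [r["file"] for r in result.get("results", [])
                if r.get("status") == "skipped_by_limit"]
    if leftover:
        # Over the per-call file limit: queue the remainder as a new batch.
        return ("continue", leftover)
    return ("done", [])
```

Routing on stable codes like this is what lets the agent's summary say "stopped at the plan limit" instead of misreporting a limit as a conversion failure.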
## A Practical Developer Flow
Use a simple loop:
- scan target folder
- review candidate summary
- convert a small sample
- inspect sensitive outputs
- convert approved batch
- commit optimized outputs and manifest
- update references only as a separate reviewed step
This keeps image conversion visible in the development process.
A good agent report should be specific enough for code review:
```text
Scanned: docs/images
Converted: 12 files to docs/images/optimized
Skipped by limit: 0
Failed: 0
Largest savings: setup-flow.png, 842120 bytes to 284410 bytes
Manual review: hero images, transparent logos, screenshots with small text
Manifest: docs/images/optimization-manifest.json
Next reviewed step: update markdown references to optimized files
```
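A report like that can be rendered directly from the structured result, so every line is backed by a returned field. This is a minimal sketch assuming the result shape from earlier; the function and wording are illustrative.

```python
# Sketch: render a review-ready report from a structured conversion result.
# Assumes the result fields shown earlier; not the real GetWebP output.
def render_report(scan_dir: str, result: dict) -> str:
    best = max(result["results"], key=lambda r: r.get("saved_ratio", 0))
    return "\n".join([
        f"Scanned: {scan_dir}",
        f"Converted: {result['succeeded']} files",
        f"Skipped by limit: {result['skipped']}",
        f"Failed: {result['failed']}",
        f"Largest savings: {best['file']}, "
        f"{best['original_size']} bytes to {best['new_size']} bytes",
        f"Manifest: {result['manifest']['path']}",
    ])
```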
MCP image conversion works best when tools are narrow, local, structured, and honest. Give the agent the ability to inspect and convert, but keep output paths controlled and human review in the path for quality-sensitive images.

Jack, GetWebP Editor

Jack writes GetWebP guides about local-first image conversion, WebP workflows, browser compatibility, and practical performance checks for teams that publish images on the web.