Moderate Image
Analyze images for inappropriate visual content, including nudity, violence, hate symbols, self-harm, drug content, and gore. Images are processed by Claude Vision (Tier 2) for high-accuracy classification.
Request
Provide the image as either a URL or base64-encoded data. Exactly one of `url` or `data` is required.
`url` (string): A publicly accessible URL pointing to the image. Supported formats: JPEG, PNG, GIF, WebP. Maximum file size: 10 MB.

`data` (string): Base64-encoded image data. Include the data URI prefix (e.g., `data:image/png;base64,...`). Maximum decoded size: 10 MB.

`categories` (array of strings): Optional list of categories to evaluate. Defaults to all available categories if omitted. Supported values: `sexual`, `violence`, `hate_symbols`, `self_harm`, `drugs`, `gore`.
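For example, a request body can restrict moderation to a subset of categories via the `categories` field. This is a sketch only; the endpoint and bearer token are the placeholders used in this page's examples:

```python
# Sketch: request body restricting moderation to selected categories.
# When `categories` is omitted, all six categories are evaluated.
payload = {
    "url": "https://example.com/user-upload.jpg",
    "categories": ["sexual", "violence", "gore"],
}
# Sent like the examples below:
# requests.post("https://api.getsieve.dev/v1/moderate/image",
#               headers={"Authorization": "Bearer mod_live_abc123..."},
#               json=payload)
```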
Response
`request_id` (string): Unique identifier for this moderation request.

`action` (string): Recommended action: `allow`, `flag`, or `block`.

`flagged` (boolean): `true` if any category score exceeded its threshold.

`categories` (object): Per-category moderation results. Each entry contains:

- `score` (number): Confidence score from 0.0 to 1.0.
- `threshold` (number): The configured threshold for this category.
- `flagged` (boolean): `true` if the score exceeds the threshold.

`pipeline_tier` (integer): Always `2` (Claude Vision) for image moderation.

`latency_ms` (integer): Total processing time in milliseconds.
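As a sketch of consuming these fields, the helper below collects the names of categories that crossed their thresholds. The sample dict is hypothetical but follows the response schema above:

```python
def flagged_categories(result: dict) -> list[str]:
    """Return the names of categories whose score crossed the threshold."""
    return [name for name, c in result["categories"].items() if c["flagged"]]

# Hypothetical response, shaped like the schema above.
sample = {
    "action": "flag",
    "flagged": True,
    "categories": {
        "violence": {"score": 0.91, "threshold": 0.7, "flagged": True},
        "gore": {"score": 0.12, "threshold": 0.7, "flagged": False},
    },
}
names = flagged_categories(sample)  # only "violence" crossed its threshold
```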
Examples
Moderate by URL

```shell
curl -X POST https://api.getsieve.dev/v1/moderate/image \
  -H "Authorization: Bearer mod_live_abc123..." \
  -H "Content-Type: application/json" \
  -d '{ "url": "https://example.com/user-upload.jpg" }'
```

```javascript
const response = await fetch("https://api.getsieve.dev/v1/moderate/image", {
  method: "POST",
  headers: {
    "Authorization": "Bearer mod_live_abc123...",
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    url: "https://example.com/user-upload.jpg",
  }),
});
const result = await response.json();
```

```python
import requests

response = requests.post(
    "https://api.getsieve.dev/v1/moderate/image",
    headers={"Authorization": "Bearer mod_live_abc123..."},
    json={"url": "https://example.com/user-upload.jpg"},
)
result = response.json()
```

Moderate by Base64
```shell
curl -X POST https://api.getsieve.dev/v1/moderate/image \
  -H "Authorization: Bearer mod_live_abc123..." \
  -H "Content-Type: application/json" \
  -d '{ "data": "data:image/png;base64,iVBORw0KGgo..." }'
```

```javascript
const response = await fetch("https://api.getsieve.dev/v1/moderate/image", {
  method: "POST",
  headers: {
    "Authorization": "Bearer mod_live_abc123...",
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    data: "data:image/png;base64,iVBORw0KGgo...",
  }),
});
```

```python
import requests

response = requests.post(
    "https://api.getsieve.dev/v1/moderate/image",
    headers={"Authorization": "Bearer mod_live_abc123..."},
    json={"data": "data:image/png;base64,iVBORw0KGgo..."},
)
```

Example Response

```json
{
  "request_id": "req_img_9d8e7f",
  "action": "allow",
  "flagged": false,
  "categories": {
    "sexual": { "score": 0.03, "threshold": 0.8, "flagged": false },
    "violence": { "score": 0.01, "threshold": 0.7, "flagged": false },
    "hate_symbols": { "score": 0.0, "threshold": 0.7, "flagged": false },
    "self_harm": { "score": 0.0, "threshold": 0.7, "flagged": false },
    "drugs": { "score": 0.02, "threshold": 0.8, "flagged": false },
    "gore": { "score": 0.0, "threshold": 0.7, "flagged": false }
  },
  "pipeline_tier": 2,
  "latency_ms": 420
}
```
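As a usage note, the `data` value shown in the base64 example can be produced from raw image bytes with the Python standard library. This is a sketch; the MIME type in the data URI prefix must match the actual image format (PNG is assumed here):

```python
import base64

def to_data_uri(raw: bytes, mime: str = "image/png") -> str:
    """Encode raw image bytes as a data URI suitable for the `data` field."""
    return f"data:{mime};base64," + base64.b64encode(raw).decode("ascii")

# The 8-byte PNG signature stands in for real file contents here;
# in practice: to_data_uri(open("image.png", "rb").read())
uri = to_data_uri(b"\x89PNG\r\n\x1a\n")  # → "data:image/png;base64,iVBORw0KGgo="
```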