# Multi-Modal Content Moderation
Vettly supports moderation across multiple content types, allowing you to protect your platform from harmful text, images, and videos with a unified API.
## Supported Content Types
### Text Moderation
Analyze text content for toxicity, hate speech, harassment, and other policy violations.
```typescript
import { ModerationClient } from '@nextauralabs/vettly-sdk';

const client = new ModerationClient({ apiKey: 'your-api-key' });

const result = await client.check({
  content: 'User submitted text here',
  policyId: 'moderate',
  contentType: 'text'
});

if (result.flagged) {
  console.log('Content flagged:', result.categories);
  console.log('Action:', result.action); // 'allow' | 'warn' | 'flag' | 'block'
}
```

### Image Moderation
Detect inappropriate imagery, NSFW content, violence, and other visual policy violations.
```typescript
const result = await client.check({
  content: 'https://example.com/image.jpg', // URL or base64 data URI
  policyId: 'moderate',
  contentType: 'image'
});

if (result.flagged) {
  console.log('Image flagged:', result.categories);
}
```
### Video Moderation

Analyze video content frame-by-frame for policy violations. Video moderation is asynchronous due to processing time.
```typescript
// For videos, use the REST API directly or the async batch endpoint
const result = await client.batchCheckAsync({
  policyId: 'moderate',
  items: [{
    id: 'video-1',
    content: 'https://example.com/video.mp4',
    contentType: 'video'
  }],
  webhookUrl: 'https://your-app.com/webhooks/moderation'
});

// Results will be delivered to your webhook
console.log('Batch ID:', result.batchId);
```
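On the receiving side, your webhook endpoint needs to decide what to do with each item in the batch. The payload shape below is an assumption for illustration (a `batchId` plus per-item results mirroring the synchronous response); check the webhook documentation for the authoritative schema:

```typescript
// Assumed webhook payload shape -- per-item results keyed by the `id`
// values you submitted in the batch request.
interface WebhookItemResult {
  id: string;                                  // e.g. 'video-1'
  flagged: boolean;
  action: 'allow' | 'warn' | 'flag' | 'block';
}

interface WebhookPayload {
  batchId: string;
  results: WebhookItemResult[];
}

// Return the ids that need attention so the caller can queue manual
// review or takedowns; 'allow' and 'warn' items pass through.
function itemsNeedingAction(payload: WebhookPayload): string[] {
  return payload.results
    .filter((r) => r.action === 'flag' || r.action === 'block')
    .map((r) => r.id);
}
```

Mount the handler at the `webhookUrl` you registered, and verify the request's authenticity (e.g. a signature header) before trusting the body.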
## Multi-Modal Check (REST API)

For posts containing multiple content types, use the `/v1/check/multimodal` endpoint:
```typescript
const response = await fetch('https://api.vettly.dev/v1/check/multimodal', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${apiKey}`,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    text: 'Caption for the post',
    images: ['https://example.com/image1.jpg'],
    policyId: 'moderate',
    context: {
      useCase: 'social_post',
      userId: 'user-123'
    }
  })
});

const result = await response.json();

// Result includes analysis for all content types
console.log('Overall safe:', result.safe);
console.log('Overall action:', result.action); // Most severe action
console.log('Per-content results:', result.results);
```
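The endpoint's overall `action` is described as the most severe per-content action. If you run separate text and image checks yourself instead, you can reproduce that roll-up client-side; the severity ordering below (allow < warn < flag < block) is an assumption that matches the descriptions in this document:

```typescript
type Action = 'allow' | 'warn' | 'flag' | 'block';

// Assumed severity ordering, least to most severe.
const SEVERITY: Record<Action, number> = { allow: 0, warn: 1, flag: 2, block: 3 };

// Combine per-content actions into one overall action, mirroring how the
// multimodal endpoint reports the "most severe action" for a post.
function mostSevere(actions: Action[]): Action {
  return actions.reduce<Action>(
    (worst, a) => (SEVERITY[a] > SEVERITY[worst] ? a : worst),
    'allow'
  );
}
```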
## Response Structure

All moderation responses include:
```typescript
interface CheckResponse {
  decisionId: string;    // Unique ID for this decision
  safe: boolean;         // True if content passes all checks
  flagged: boolean;      // True if any category was triggered
  action: 'allow' | 'warn' | 'flag' | 'block';
  categories: Array<{
    category: string;    // e.g., 'hate_speech', 'violence'
    score: number;       // 0-1 confidence score
    triggered: boolean;  // Whether this category exceeded threshold
  }>;
  provider: string;      // Which AI provider was used
  latency: number;       // Processing time in ms
  cost: number;          // Cost in USD
}
```
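A common way to consume the `categories` array is to pull out only the triggered categories, ordered by confidence, for logs or a reviewer UI. A small helper sketch (the function is our own, not part of the SDK):

```typescript
interface Category {
  category: string;   // e.g. 'hate_speech', 'violence'
  score: number;      // 0-1 confidence score
  triggered: boolean; // whether this category exceeded its threshold
}

// Return only the triggered categories, most confident first, so logs
// and review queues show the strongest signal at the top.
function triggeredCategories(categories: Category[]): Category[] {
  return categories
    .filter((c) => c.triggered)
    .sort((a, b) => b.score - a.score);
}
```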
## Best Practices

- Moderate on upload - Check content before it's stored or displayed
- Use appropriate policies - Different content types may need different strictness levels
- Handle async for videos - Video moderation takes longer; use webhooks for results
- Cache decisions - Store `decisionId` to avoid re-analyzing the same content
- Check the action field - Use `action` to determine how to handle content:
  - `allow`: Content is safe
  - `warn`: Show content with a warning
  - `flag`: Queue for manual review
  - `block`: Reject immediately
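The action handling described above can be sketched as a single exhaustive switch. The return values are illustrative placeholders; wire them to your own publish, warning-banner, review-queue, and rejection logic:

```typescript
type Action = 'allow' | 'warn' | 'flag' | 'block';
type Disposition = 'publish' | 'publish_with_warning' | 'queue_review' | 'reject';

// Map a moderation action to what your app does with the content.
// The disposition names are hypothetical, not part of the SDK.
function handleAction(action: Action): Disposition {
  switch (action) {
    case 'allow':
      return 'publish';
    case 'warn':
      return 'publish_with_warning';
    case 'flag':
      return 'queue_review';
    case 'block':
      return 'reject';
  }
}
```

Because `Action` is a closed union, TypeScript will flag this switch if a new action value is ever added to the type, which keeps the handling logic in sync.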
## Next Steps
- Learn about Moderation Policies to customize detection rules
- See the SDK Reference for complete API documentation
- Check out Examples for implementation patterns