EDGE API Reference
Real-time video processing and AI inference at the edge
API Overview
Base URL
https://api.wave.dev/v1/edge
Authentication
Bearer {api_key}
Rate Limit
50,000 frame processing requests/minute
Processing SLA
<15ms latency per frame
POST /processors
Deploy new frame processor
Request Body
{
"name": "content-moderation",
"description": "Detects and blurs inappropriate content",
"code": "export default async function...",
"runtime": "javascript",
"model": "moderation-v2",
"gpu": true,
"regions": ["us-east", "us-west", "eu-west"],
"timeout": 12,
"resources": {
"cpu": "4-core",
"memory": "8GB",
"gpu": "nvidia-t4"
}
}
Response (201 Created)
{
"success": true,
"data": {
"id": "processor_abc123",
"name": "content-moderation",
"status": "deploying",
"version": 1,
"deploymentUrl": "https://console.wave.dev/edge/processor_abc123",
"regions": ["us-east", "us-west", "eu-west"],
"createdAt": "2024-11-20T15:45:00Z"
}
}
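Putting the request and response together, a deployment call might look like the sketch below. The API key is a placeholder and the processor code string is abbreviated exactly as in the request body above.
// Deploy a new frame processor (field values are illustrative)
const response = await fetch('https://api.wave.dev/v1/edge/processors', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer sk_live_abc123',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    name: 'content-moderation',
    description: 'Detects and blurs inappropriate content',
    code: 'export default async function...',
    runtime: 'javascript',
    model: 'moderation-v2',
    gpu: true,
    regions: ['us-east', 'us-west', 'eu-west'],
    timeout: 12,
    resources: { cpu: '4-core', memory: '8GB', gpu: 'nvidia-t4' },
  }),
});

const { data } = await response.json();
console.log(data.id, data.status); // e.g. "processor_abc123", "deploying"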
GET /processors
List deployed processors
Query Parameters
Response
{
"success": true,
"data": [
{
"id": "processor_abc123",
"name": "content-moderation",
"runtime": "javascript",
"status": "active",
"model": "moderation-v2",
"version": 1,
"gpu": true,
"regions": ["us-east", "us-west", "eu-west"],
"metrics": {
"framesProcessed": 125400,
"avgLatency": 8.2,
"p99Latency": 12.5,
"errorRate": 0.01,
"gpuUtilization": 0.65
},
"createdAt": "2024-11-15T08:30:00Z",
"updatedAt": "2024-11-20T14:22:15Z"
}
],
"pagination": {
"total": 3,
"offset": 0,
"limit": 50
}
}
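A listing call might look like the following sketch; the offset and limit query parameters are assumptions that mirror the pagination fields in the response above.
// List deployed processors (offset/limit assumed to match the pagination object)
const response = await fetch(
  'https://api.wave.dev/v1/edge/processors?offset=0&limit=50',
  { headers: { 'Authorization': 'Bearer sk_live_abc123' } }
);

const { data, pagination } = await response.json();
for (const processor of data) {
  console.log(processor.name, processor.status, processor.metrics.avgLatency);
}
console.log(`Showing ${data.length} of ${pagination.total} processors`);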
POST /processors/:id/process-frame
Process a single video frame
Request Body
{
"frame": {
"data": "base64_encoded_frame_data",
"width": 1920,
"height": 1080,
"format": "rgb24",
"timestamp": 1700500123456
},
"metadata": {
"streamId": "stream_xyz789",
"region": "us-east"
},
"options": {
"priority": "high",
"skipCache": false
}
}
Response
{
"success": true,
"data": {
"processorId": "processor_abc123",
"timestamp": 1700500123456,
"latency": 8.2,
"result": {
"action": "blur",
"confidence": 0.92,
"regions": [
{
"x": 100,
"y": 50,
"width": 200,
"height": 150,
"label": "adult",
"confidence": 0.95
}
],
"metadata": {
"sceneChange": false,
"brightness": 0.72
}
},
"processedFrame": "base64_optional_processed_frame"
}
}
Example Request
// captureFrame() stands in for your application's frame source
const frameData = await captureFrame();
const base64Frame = Buffer.from(frameData).toString('base64');
const response = await fetch(
'https://api.wave.dev/v1/edge/processors/processor_abc123/process-frame',
{
method: 'POST',
headers: {
'Authorization': 'Bearer sk_live_abc123',
'Content-Type': 'application/json',
},
body: JSON.stringify({
frame: {
data: base64Frame,
width: 1920,
height: 1080,
format: 'rgb24',
timestamp: Date.now(),
},
metadata: {
streamId: 'stream_xyz789',
region: 'us-east',
},
}),
}
);
const { data } = await response.json();
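Continuing the example, the returned result object can drive per-frame handling. The blurRegions helper referenced in the comment is hypothetical; the snippet only illustrates consuming the documented response fields.
// Act on the processor result (blurRegions would be an application-side helper)
if (data.result.action === 'blur') {
  for (const region of data.result.regions) {
    console.log(
      `Blur ${region.label} at (${region.x}, ${region.y}), confidence ${region.confidence}`
    );
    // blurRegions(frameData, region);
  }
}
console.log(`Processed in ${data.latency}ms`);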
GET /processors/:id/metrics
Get processor performance metrics
Query Parameters
Response
{
"success": true,
"data": {
"processorId": "processor_abc123",
"period": "5m",
"metrics": {
"framesProcessed": 125400,
"latency": {
"avg": 8.2,
"p50": 7.1,
"p95": 12.5,
"p99": 14.8,
"max": 15.0
},
"errorRate": 0.0089,
"throughput": 418,
"byRegion": {
"us-east": {
"latency": 5.1,
"framesProcessed": 52100,
"errorRate": 0.005
},
"us-west": {
"latency": 8.9,
"framesProcessed": 48200,
"errorRate": 0.012
}
},
"gpu": {
"utilization": 0.65,
"memory": 0.82,
"temperature": 72
}
}
}
}
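A metrics request might look like the sketch below; the period query parameter is an assumption based on the "period" field in the response above.
// Fetch recent performance metrics (the "period" parameter is assumed from the response shape)
const response = await fetch(
  'https://api.wave.dev/v1/edge/processors/processor_abc123/metrics?period=5m',
  { headers: { 'Authorization': 'Bearer sk_live_abc123' } }
);

const { data } = await response.json();
if (data.metrics.latency.p99 > 14) {
  console.warn('P99 latency is approaching the 15ms SLA');
}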
POST /processors/:id/test
Test processor with sample frame
Request Body
{
"frameUrl": "https://example.com/test-frame.jpg",
"expectedResult": {
"action": "blur",
"minConfidence": 0.8
}
}
Response
{
"success": true,
"data": {
"testPassed": true,
"latency": 10.2,
"result": {
"action": "blur",
"confidence": 0.94,
"matchesExpected": true
}
}
}
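A test run against a hosted sample frame might look like this sketch, using the request and response bodies documented above.
// Test the processor with a sample frame before routing live traffic to it
const response = await fetch(
  'https://api.wave.dev/v1/edge/processors/processor_abc123/test',
  {
    method: 'POST',
    headers: {
      'Authorization': 'Bearer sk_live_abc123',
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      frameUrl: 'https://example.com/test-frame.jpg',
      expectedResult: { action: 'blur', minConfidence: 0.8 },
    }),
  }
);

const { data } = await response.json();
console.log(data.testPassed ? 'Test passed' : 'Test failed', `(${data.latency}ms)`);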
DELETE /processors/:id
Delete processor
Response (204 No Content)
Returns no body on successful deletion
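Deletion is a single call; a 204 status with no body indicates success, as in this sketch.
// Delete a processor; expect 204 No Content on success
const response = await fetch(
  'https://api.wave.dev/v1/edge/processors/processor_abc123',
  {
    method: 'DELETE',
    headers: { 'Authorization': 'Bearer sk_live_abc123' },
  }
);

if (response.status === 204) {
  console.log('Processor deleted');
}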
Common Error Codes
400 Bad Request
Invalid frame format or missing required fields
408 Processing Timeout
Frame processing exceeded the 15ms SLA. Check processor complexity.
429 Too Many Requests
Rate limit exceeded. Max 50,000 frames/minute (see the retry sketch after this list).
507 Insufficient Resources
GPU/CPU unavailable. Check resource availability in regions.
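Because the per-minute frame budget can be exhausted under burst load, a minimal retry-with-backoff wrapper such as the sketch below can smooth over transient 429 and 408 responses. It assumes the same fetch call shapes shown in the examples above.
// Minimal retry with exponential backoff for 429/408 responses (illustrative only)
async function fetchWithRetry(url, options, maxRetries = 3) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const response = await fetch(url, options);
    if (response.status !== 429 && response.status !== 408) {
      return response;
    }
    // Back off 200ms, 400ms, 800ms, ... before retrying
    await new Promise((resolve) => setTimeout(resolve, 200 * 2 ** attempt));
  }
  throw new Error(`Request to ${url} failed after ${maxRetries + 1} attempts`);
}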
Best Practices
🎯 Sample frames strategically
Don't process every frame. Sample at 1-5 fps for analysis, full rate for inference.
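One way to sample is to gate calls by frame index, as in this sketch; the 30 fps source and 5 fps analysis rate are illustrative.
// Forward roughly 5 fps to the processor from a 30 fps source (illustrative sampling)
const SOURCE_FPS = 30;
const ANALYSIS_FPS = 5;
const SAMPLE_EVERY = Math.round(SOURCE_FPS / ANALYSIS_FPS); // every 6th frame

let frameIndex = 0;
function shouldProcess() {
  return frameIndex++ % SAMPLE_EVERY === 0;
}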
📊 Monitor latency metrics closely
Set alerts if P99 latency exceeds 14ms to stay within the 15ms SLA.
🔄 Cache model inference results
Avoid redundant processing of similar frames with intelligent caching
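A simple in-memory cache keyed by a frame fingerprint can avoid re-sending identical frames; the hashing and lack of eviction here are placeholders for a real strategy.
// Cache inference results by frame fingerprint (hashing/eviction are placeholders)
import { createHash } from 'node:crypto';

const resultCache = new Map();

function cacheKey(frameBuffer) {
  return createHash('sha1').update(frameBuffer).digest('hex');
}

async function processWithCache(frameBuffer, processFrame) {
  const key = cacheKey(frameBuffer);
  if (resultCache.has(key)) {
    return resultCache.get(key); // reuse the result for an identical frame
  }
  const result = await processFrame(frameBuffer);
  resultCache.set(key, result);
  return result;
}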
💾 Use RUNTIME for complex logic
Keep frame processing simple; move complex decisions to RUNTIME functions
🚀 Deploy to key regions first
Test with 2-3 regions before global rollout to validate performance