Deploy EDGE Processing
Real-time video processing in 5 minutes
Get started with WAVE EDGE for real-time video processing. Process frames at <15ms latency, run AI inference models, and apply custom effects across 200+ global edge locations.
📋 Prerequisites
WAVE CLI installed
Install via: npm install -g @wave/cli
API Key configured
Get your key from dashboard.wave.dev and run: wave auth:login
Node.js 18+ or Python 3.9+
For local development and testing
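If either prerequisite is missing, the two setup commands from the checklist above can be run from a terminal:

Terminal

# Install the WAVE CLI globally
npm install -g @wave/cli

# Authenticate the CLI with the API key from dashboard.wave.dev
wave auth:login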
1. Understanding EDGE Processing
EDGE processes video frames at the network edge with <15ms latency. Choose your processing type:
🎬 Frame Analysis
Detect scenes, extract metadata, identify objects, analyze motion
🤖 AI Inference
Run TensorFlow Lite models, object detection, facial recognition
🎨 Custom Effects
Apply overlays, watermarks, transitions, branded filters
💡 Tip: Most users start with frame analysis to understand their content, then add AI inference for moderation, then custom effects for viewer engagement.
2. Create Your Processing Module
EDGE supports JavaScript for quick deployment or WASM (WebAssembly) for high-performance processing. Start with JavaScript:
edge-processor.js

// Frame analysis with scene detection
export default async function processFrame(frame, ctx) {
  // frame.data = raw video frame (Uint8Array)
  // frame.timestamp = milliseconds since stream start
  // ctx = execution context with stream metadata
  const startedAt = Date.now(); // wall-clock start, used to measure processing time

  try {
    // Scene change detection (comparing frame histograms)
    const histogram = analyzeFrame(frame.data);
    const isSceneChange = histogram.distance > 0.3;

    // Extract frame metadata
    const metadata = {
      timestamp: frame.timestamp,
      width: frame.width,
      height: frame.height,
      sceneChange: isSceneChange,
      dominantColors: extractColors(histogram),
      histogramDistance: histogram.distance,
    };

    // Send metadata to PULSE analytics
    await ctx.emit('frame:analyzed', {
      streamId: ctx.stream.id,
      metadata,
      processingMs: Date.now() - startedAt,
    });

    // Return processed frame or null to skip
    return {
      action: isSceneChange ? 'capture_thumbnail' : 'skip',
      metadata,
    };
  } catch (error) {
    console.error('Frame processing error:', error);
    ctx.metrics.increment('edge.processing_errors');
    throw error;
  }
}

function analyzeFrame(frameData) {
  // Simplified histogram analysis (assumes tightly packed RGB bytes)
  const histogram = new Array(256).fill(0);
  for (let i = 0; i < frameData.length; i += 3) {
    const gray = frameData[i] * 0.299 + frameData[i + 1] * 0.587 + frameData[i + 2] * 0.114;
    histogram[Math.floor(gray)]++;
  }
  // Placeholder distance; a real implementation compares against the previous frame's histogram
  return { data: histogram, distance: 0.25 };
}

function extractColors(histogram) {
  // Placeholder; derive dominant colors from the histogram in a real implementation
  return { r: 100, g: 150, b: 200 };
}
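Before deploying, you can exercise the module locally with Node.js 18+ (see the prerequisites). The sketch below is a smoke test, not an emulation of the EDGE runtime: the synthetic frame and the stubbed ctx (emit, metrics, stream) only mirror the calls this particular processor makes.

test-local.js

// Local smoke test for edge-processor.js.
// Run with: node test-local.js (requires "type": "module" in package.json, or rename to .mjs)
import processFrame from './edge-processor.js';

// Synthetic 640x360 RGB frame filled with random bytes.
const width = 640;
const height = 360;
const frame = {
  data: new Uint8Array(width * height * 3).map(() => Math.floor(Math.random() * 256)),
  timestamp: 1200, // ms since stream start
  width,
  height,
};

// Minimal ctx stand-in covering exactly what the processor calls.
const ctx = {
  stream: { id: 'stream-local-test' },
  emit: async (event, payload) => console.log(`emit(${event})`, payload),
  metrics: { increment: (name) => console.log(`metric++ ${name}`) },
};

const result = await processFrame(frame, ctx);
console.log('Processor returned:', result);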
3. Deploy to EDGE Network
Deploy your processing module to the EDGE network using the CLI:
Terminal

# Step 1: Create EDGE processor project
wave edge:init my-frame-processor

# Step 2: Navigate to project
cd my-frame-processor

# Step 3: Deploy to staging (100% safe to test)
wave edge:deploy --stage=staging

# Step 4: Get deployment details
wave edge:status

# Step 5: Deploy to production (replicate to 200+ POPs)
wave edge:deploy --stage=production

# View real-time logs from edge locations
wave edge:logs --follow --stage=production
✓ Deployment: Your module is now running on all 200+ edge locations with automatic failover and geographic distribution.
4. Monitor Frame Processing
Monitor real-time processing metrics and frame analysis results:
monitoring.js

// Subscribe to frame processing events in real-time
const wave = require('@wave/sdk');
const client = new wave.Client({ apiKey: process.env.WAVE_API_KEY });

// Listen for frame analysis events
client.edge.onFrameAnalyzed(async (event) => {
  const {
    streamId,
    timestamp,
    metadata,
    processingMs,
    region,
  } = event;

  console.log(`
✓ Frame processed in ${processingMs}ms at ${region}
├─ Scene change: ${metadata.sceneChange}
├─ Dominant colors: RGB(${metadata.dominantColors.r}, ${metadata.dominantColors.g}, ${metadata.dominantColors.b})
└─ Histogram distance: ${metadata.histogramDistance}`);

  // Real-time metrics dashboard
  await client.metrics.record('edge.frame.processed', {
    value: 1,
    tags: {
      'stream_id': streamId,
      'region': region,
      'processing_ms': processingMs,
    },
  });
});

// Monitor processing errors across all regions
client.edge.onProcessingError(async (error) => {
  console.error(`⚠️ Error in ${error.region}: ${error.message}`);
  await client.metrics.record('edge.processing.error', {
    region: error.region,
    errorType: error.type,
    timestamp: Date.now(),
  });
});

// Fetch aggregated metrics
setInterval(async () => {
  const metrics = await client.edge.getMetrics({
    period: '5m',
    rollup: 'avg',
  });

  console.log(`
📊 EDGE Metrics (Last 5 minutes):
├─ Frames processed: ${metrics.framesProcessed}
├─ Avg latency: ${metrics.avgLatencyMs}ms (target: <15ms)
├─ Error rate: ${metrics.errorRate}%
├─ Throughput: ${metrics.framesPerSecond} fps
└─ Geographic coverage: ${metrics.activeRegions} regions
`);
}, 30000);
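If you want the poll above to do more than log, a simple threshold check against the same metrics object can flag SLA drift. This is a minimal sketch that reuses the client from monitoring.js and the field names shown above:

// Sketch: flag when the 5-minute average latency drifts past the <15ms target.
async function checkLatencySla(client, thresholdMs = 15) {
  const metrics = await client.edge.getMetrics({ period: '5m', rollup: 'avg' });

  if (metrics.avgLatencyMs > thresholdMs) {
    console.error(`❌ Avg latency ${metrics.avgLatencyMs}ms exceeds the ${thresholdMs}ms target`);
    // Record a counter so a dashboard alert can fire (same metrics API as above).
    await client.metrics.record('edge.latency.sla_breach', {
      value: 1,
      tags: { 'avg_latency_ms': metrics.avgLatencyMs },
    });
  }
  return metrics;
}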
Advanced: GPU Acceleration
For AI inference at scale, enable GPU acceleration on NVIDIA T4 and A100 hardware:
edge.config.json

{
  "name": "ai-moderation-processor",
  "runtime": "python3.11",
  "resources": {
    "cpu": "4-core",
    "memory": "8GB",
    "gpu": "nvidia-t4",
    "gpuMemory": "16GB"
  },
  "models": [
    {
      "name": "moderation-model-v2",
      "path": "s3://wave-models/moderation-v2.tflite",
      "accelerator": "gpu",
      "cacheFrames": 300
    }
  ],
  "regions": {
    "primary": ["us-east", "us-west", "eu-west"],
    "secondary": ["apac-ne", "apac-se"],
    "tertiary": "*"
  },
  "timeouts": {
    "frameSoftLimit": 12,
    "frameHardLimit": 14
  }
}

⚡ Performance: GPU-accelerated models process frames 10-50x faster than CPU, enabling real-time analysis for complex AI workloads.
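To show where a configured model could plug into a processor, here is a minimal JavaScript sketch. The ctx.models.get() and model.run() helpers and the shape of the inference result are assumptions made for illustration only, not a confirmed EDGE API; the config above declares a Python 3.11 runtime, which would use the equivalent Python interface.

// Illustrative sketch only: ctx.models.get() / model.run() and the result shape are assumed.
export default async function processFrame(frame, ctx) {
  // Look up the model declared in edge.config.json ("moderation-model-v2").
  const model = await ctx.models.get('moderation-model-v2');

  // Run GPU-accelerated inference on the raw frame bytes.
  const { label, confidence } = await model.run(frame.data, {
    width: frame.width,
    height: frame.height,
  });

  // Surface likely violations; the event name and fields mirror the
  // 'edge:moderation-violation' events consumed in the monitoring section below.
  if (label !== 'safe' && confidence > 0.9) {
    await ctx.emit('edge:moderation-violation', {
      streamId: ctx.stream.id,
      label,
      confidence,
      timestamp: frame.timestamp,
    });
  }

  return { action: 'skip', metadata: { label, confidence } };
}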
Advanced: Custom Effects
Apply branded overlays, watermarks, and real-time effects directly on the video stream:
custom-effects.js

// Apply real-time effects to video stream
export default async function applyEffects(frame, ctx) {
  const canvas = ctx.createCanvas(frame.width, frame.height);
  const imageData = canvas.createImageData(frame.data);

  // 1. Apply brand watermark (bottom-right corner)
  applyWatermark(imageData, {
    text: 'LIVE',
    x: frame.width - 150,
    y: frame.height - 50,
    color: '#FF6B35',
  });

  // 2. Add engagement overlay (viewer count, live badge)
  applyOverlay(imageData, {
    type: 'engagement',
    viewerCount: ctx.stream.viewerCount,
    isLive: true,
  });

  // 3. Apply cinematic letterbox effect during highlights
  if (ctx.metadata.isHighlight) {
    applyLetterbox(imageData, {
      height: 60,
      color: '#000000',
    });
  }

  // 4. Apply color correction (brightness, saturation)
  applyColorCorrection(imageData, {
    brightness: 1.1,
    saturation: 1.2,
    contrast: 1.0,
  });

  // 5. Render frame back to video
  return canvas.toBuffer('video/h264');
}

function applyWatermark(imageData, options) {
  // Watermark implementation
}

function applyOverlay(imageData, options) {
  // Engagement overlay with live badge
}

function applyLetterbox(imageData, options) {
  // Black bars for cinematic effect
}

function applyColorCorrection(imageData, options) {
  // Color grade the entire frame
}
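The helper functions above are intentionally left as stubs. As one example of how a stub could be filled in, here is a minimal letterbox sketch; it assumes imageData follows the standard ImageData layout (RGBA bytes in imageData.data, plus width and height), which may differ from the EDGE canvas API.

// Minimal letterbox sketch (assumes standard ImageData: RGBA bytes, width, height).
function applyLetterbox(imageData, options) {
  const { width, height, data } = imageData;
  const barHeight = options.height; // bar thickness in pixels, e.g. 60

  const paintRow = (y) => {
    for (let x = 0; x < width; x++) {
      const i = (y * width + x) * 4;
      data[i] = 0;       // R
      data[i + 1] = 0;   // G
      data[i + 2] = 0;   // B (opaque black, matching the '#000000' used above)
      data[i + 3] = 255; // A
    }
  };

  // Fill the top and bottom bars.
  for (let y = 0; y < barHeight; y++) paintRow(y);
  for (let y = height - barHeight; y < height; y++) paintRow(y);
}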
Monitoring: Frame Events
Listen for real-time frame processing events and feed data to PULSE analytics:
event-listeners.js

// Subscribe to frame analysis events in real-time
import { WaveClient } from '@wave/sdk';

const wave = new WaveClient();

// Listen for scene changes detected in EDGE
wave.on('edge:scene-change', (event) => {
  console.log(`🎬 Scene change detected at ${event.timestamp}ms`);

  // Emit to PULSE for analytics
  wave.emit('viewer:scene-change', {
    streamId: event.streamId,
    timestamp: event.timestamp,
    duration: event.duration,
    sceneIndex: event.sceneIndex,
  });
});

// Monitor moderation violations in real-time
wave.on('edge:moderation-violation', (event) => {
  console.warn(`⚠️ Violation detected: ${event.label} (confidence: ${event.confidence})`);

  // Trigger RUNTIME function to blur frame or notify moderator
  wave.runtime.execute('handle-violation', {
    streamId: event.streamId,
    violation: event.label,
    action: 'blur',
  });
});

// Track processing latency per region
const latencyByRegion = {};

wave.on('edge:frame-processed', (event) => {
  const region = event.region;
  if (!latencyByRegion[region]) {
    latencyByRegion[region] = [];
  }
  latencyByRegion[region].push(event.latencyMs);

  // Alert if latency exceeds 15ms SLA
  if (event.latencyMs > 15) {
    console.error(`❌ SLA violation in ${region}: ${event.latencyMs}ms`);
    wave.metrics.increment('edge.sla_violations', { region });
  }
});
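latencyByRegion above only accumulates samples. If you also want a periodic per-region summary, a small helper like the following works; it is a sketch that reuses the latencyByRegion object from event-listeners.js, and the 30-second window is an arbitrary choice.

// Summarize per-region latency every 30 seconds (sketch; reuses latencyByRegion above).
setInterval(() => {
  for (const [region, samples] of Object.entries(latencyByRegion)) {
    if (samples.length === 0) continue;

    const avg = samples.reduce((sum, ms) => sum + ms, 0) / samples.length;
    const worst = Math.max(...samples);
    console.log(`📍 ${region}: avg ${avg.toFixed(1)}ms, max ${worst.toFixed(1)}ms over ${samples.length} frames`);

    // Reset the window so each summary covers only the last 30 seconds.
    latencyByRegion[region] = [];
  }
}, 30000);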
Troubleshooting
❓ Processing latency exceeds 15ms target
Check if your processing logic is too heavy (e.g., unnecessary loops, network calls).
- Reduce model inference complexity or use quantized models
- Move non-critical processing to RUNTIME instead
- Enable GPU acceleration for AI models
- Check CPU/memory metrics: wave edge:metrics
❓ GPU not available in specific regions
GPU hardware is available in primary regions (us-east, us-west, eu-west) but not in secondary or tertiary regions.
- Configure GPU as optional in edge.config.json
- Use CPU fallback for non-critical processing
- Check regional availability: wave edge:regions
- Contact support for GPU availability in other regions
❓ Out of memory errors during model inference
Large AI models (2GB+) may exceed allocated memory on edge devices.
- Increase memory allocation in edge.config.json (see the sketch after this list)
- Use model quantization to reduce size (quantized TFLite models are roughly 20% of the original size)
- Implement frame batching to process multiple frames at once
- Split large models across regions
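For the memory fix, raise the allocation in the resources block of the edge.config.json shown earlier; 16GB below is only an example value, so size it to your model.

edge.config.json (fragment)

{
  "resources": {
    "cpu": "4-core",
    "memory": "16GB",
    "gpu": "nvidia-t4",
    "gpuMemory": "16GB"
  }
}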
Next Steps
Integrate with RUNTIME orchestration
Use RUNTIME functions to handle violations, send notifications, or trigger downstream workflows based on frame analysis
Read RUNTIME quickstart →
Route processed streams with MESH
Use MESH to route video streams based on processing results or geographic location
Read MESH quickstart →
View full API documentation
See all EDGE endpoints, request/response schemas, error codes, and rate limits
Read API docs →
Explore EDGE models marketplace
Browse pre-built TensorFlow Lite models for moderation, detection, and analysis
Browse models →
Start Processing Video at the Edge
Deploy real-time processing in 5 minutes. <15ms latency, 200+ global locations, automatic failover.