
What is VibeStream?

VibeStream monitors live video streams and triggers webhooks when conditions you specify are met. No watching required.
Live Demo: Try the API at https://vibestream-production-64f3.up.railway.app
curl -X POST https://vibestream-production-64f3.up.railway.app/live-monitor \
  -H "Content-Type: application/json" \
  -d '{
    "youtube_url": "https://youtube.com/watch?v=...",
    "condition": "Is it snowing?",
    "webhook_url": "https://your-app.com/webhook"
  }'
When snow is detected:
Webhook Payload
{
  "type": "live_monitor_triggered",
  "timestamp": "2024-01-26T10:30:00Z",
  "data": {
    "condition": "Is it snowing?",
    "triggered": true,
    "explanation": "Heavy snowfall visible, accumulating on surfaces",
    "frame_b64": "..."
  }
}
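
On your side, handling this payload takes only a small HTTP endpoint. The sketch below is an illustrative receiver, assuming Flask and a base64-encoded JPEG frame; the route name and file handling are examples for this sketch, not part of VibeStream.

Webhook Receiver (illustrative sketch)
# Minimal receiver for the payload above. Assumes Flask is installed and
# that frame_b64 is a base64-encoded JPEG (an assumption for this sketch).
import base64
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def handle_vibestream_event():
    event = request.get_json(force=True)
    if event.get("type") == "live_monitor_triggered":
        data = event["data"]
        print(f"Condition met: {data['condition']} -> {data['explanation']}")
        # Save the triggering frame for later review.
        with open("triggered_frame.jpg", "wb") as f:
            f.write(base64.b64decode(data["frame_b64"]))
    return jsonify({"ok": True}), 200

if __name__ == "__main__":
    app.run(port=8000)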

Why VibeStream?

Cost-Optimized

Pre-filter pipeline (motion + YOLO) reduces VLM API costs by 70-90%

Multi-Provider

Gemini, OpenAI, Anthropic with automatic fallback on rate limits

Real-time

Webhook notifications the moment conditions are met

Self-Hostable

Deploy anywhere - Railway, Docker, or bare metal
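
The multi-provider fallback can be pictured as an ordered loop over clients: if one provider reports a rate limit, the next one gets the frame. The sketch below is a rough illustration; the Provider protocol and RateLimitError are hypothetical stand-ins, not VibeStream's internal classes.

Provider Fallback (illustrative sketch)
# Hypothetical shape of fallback-on-rate-limit; not VibeStream's actual code.
from typing import Protocol

class RateLimitError(Exception):
    """Raised by a provider client when its API rate limit is hit."""

class Provider(Protocol):
    def analyze(self, frame_bytes: bytes, condition: str) -> dict: ...

def analyze_with_fallback(providers: list[Provider], frame_bytes: bytes, condition: str) -> dict:
    """Try each provider in order, skipping any that is rate-limited."""
    last_error: Exception | None = None
    for provider in providers:
        try:
            return provider.analyze(frame_bytes, condition)
        except RateLimitError as err:
            last_error = err  # rate-limited: fall through to the next provider
    raise RuntimeError("All VLM providers are rate-limited") from last_error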

Use Cases

  • Weather monitoring - Get alerts when it starts raining at outdoor events
  • Traffic analysis - Detect accidents or congestion on highway cams
  • Security - Monitor for specific activities in public spaces
  • Wildlife tracking - Alert when animals appear on nature cams
  • Event detection - Know when a concert starts or a game-winning play happens

How It Works

YouTube Live → yt-dlp → Stream URL Cache → Frames → Pre-filter (motion detection) → VLM (Gemini/Claude) → Webhook

VibeStream supports YouTube Live and Twitch streams. The pre-filter uses motion detection to skip static frames. YOLO object detection is optional (disabled in Railway deployment to reduce image size).
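As a rough illustration, motion-based pre-filtering can be done with plain frame differencing in OpenCV. The thresholds below are arbitrary assumptions for the sketch, not VibeStream's tuned defaults.

Motion Pre-filter (illustrative sketch)
# Frame-differencing motion check; the 25-level pixel threshold and the
# 0.5% changed-pixel ratio are illustrative guesses, not project defaults.
import cv2
import numpy as np

def has_motion(prev_frame: np.ndarray, frame: np.ndarray, ratio: float = 0.005) -> bool:
    """Return True if enough pixels changed between consecutive frames."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(prev_gray, gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    return np.count_nonzero(mask) / mask.size >= ratio
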
  1. Stream URL caching - Resolved stream URLs are cached and auto-refreshed, avoiding redundant yt-dlp calls (sketched below)
  2. Frame capture - Extracts frames from the live stream via OpenCV
  3. Pre-filtering - Motion detection skips static frames; the optional YOLO stage drops frames without relevant objects
  4. VLM analysis - Only the frames that survive pre-filtering hit the vision model API (Gemini 2.0 Flash by default)
  5. Webhook delivery - Instant notification when your condition is met (sketched below)
The pre-filter pipeline typically reduces VLM API calls by 70-90%, making continuous monitoring affordable.
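
Steps 1 and 2 can be sketched as a yt-dlp call behind a small cache, plus an OpenCV capture. The 4-hour TTL and in-process cache below are assumptions for illustration, not the project's actual refresh policy.

Stream Resolution and Frame Capture (illustrative sketch)
# Step 1: resolve and cache the direct stream URL; step 2: grab a frame.
# The TTL value and in-memory cache are illustrative assumptions.
import subprocess
import time
import cv2

_url_cache: dict[str, tuple[str, float]] = {}   # youtube_url -> (stream_url, fetched_at)
CACHE_TTL_SECONDS = 4 * 3600                    # assumed TTL; the real policy may differ

def resolve_stream_url(youtube_url: str) -> str:
    """Return a direct stream URL, reusing the cached one while it is fresh."""
    cached = _url_cache.get(youtube_url)
    if cached and time.time() - cached[1] < CACHE_TTL_SECONDS:
        return cached[0]
    # `yt-dlp -g` prints the direct media URL(s) without downloading anything.
    output = subprocess.run(["yt-dlp", "-g", youtube_url],
                            capture_output=True, text=True, check=True).stdout
    stream_url = output.strip().splitlines()[0]
    _url_cache[youtube_url] = (stream_url, time.time())
    return stream_url

def capture_frame(youtube_url: str):
    """Grab a single frame from the live stream via OpenCV."""
    cap = cv2.VideoCapture(resolve_stream_url(youtube_url))
    ok, frame = cap.read()
    cap.release()
    return frame if ok else None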
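
Steps 4 and 5 then reduce to a vision-model call on the surviving frame and a POST to the registered webhook. The google-generativeai usage, prompt wording, and YES/NO parsing below are illustrative assumptions, not VibeStream's actual prompt or response handling; the payload mirrors the example shown earlier.

VLM Analysis and Webhook Delivery (illustrative sketch)
# Step 4: ask the VLM whether the condition holds; step 5: notify the webhook.
# The SDK choice, prompt format, and YES/NO parsing are assumptions.
import base64
from datetime import datetime, timezone
import cv2
import google.generativeai as genai
import requests

genai.configure(api_key="YOUR_GEMINI_API_KEY")
model = genai.GenerativeModel("gemini-2.0-flash")

def check_condition(frame, condition: str) -> tuple[bool, str]:
    """Return (triggered, explanation) for one frame."""
    _, jpeg = cv2.imencode(".jpg", frame)
    response = model.generate_content([
        {"mime_type": "image/jpeg", "data": jpeg.tobytes()},
        f"{condition} Answer YES or NO, then explain briefly.",
    ])
    text = response.text.strip()
    return text.upper().startswith("YES"), text

def deliver_webhook(webhook_url: str, condition: str, triggered: bool,
                    explanation: str, frame) -> None:
    """POST the result payload to the caller's webhook."""
    _, jpeg = cv2.imencode(".jpg", frame)
    requests.post(webhook_url, json={
        "type": "live_monitor_triggered",
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "data": {
            "condition": condition,
            "triggered": triggered,
            "explanation": explanation,
            "frame_b64": base64.b64encode(jpeg.tobytes()).decode(),
        },
    }, timeout=10)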