POST /check-once
curl -X POST https://vibestream-production-64f3.up.railway.app/check-once \
  -H "Content-Type: application/json" \
  -d '{
    "youtube_url": "https://youtube.com/watch?v=abc123",
    "condition": "Are there more than 5 people visible?"
  }'

{
  "triggered": true,
  "explanation": "Approximately 8-10 people are visible in the frame.",
  "model": "gemini-2.0-flash",
  "frame_b64": null
}

Request

youtube_url
string
required
Live stream URL to check. Supports YouTube (youtube.com/watch?v=, youtu.be/) and Twitch (twitch.tv/channel) formats.
condition
string
required
Natural language condition to evaluate. Be specific for better accuracy.
model
string
default:"gemini-2.5-flash"
VLM model to use. Options: gemini-2.5-flash, gemini-2.0-flash, gpt-4o-mini, gpt-4o
include_frame
boolean
default:"false"
Include the captured frame as base64 in the response.
skip_validation
boolean
default:"false"
Skip livestream validation when the URL has already been validated via /validate-url. Use this to avoid redundant validation calls when pre-validating URLs in the frontend (see the example request after this list).
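
For reference, a request that sets every optional field explicitly might look like the sketch below; the video ID and condition are placeholders, and skip_validation: true assumes the URL was already validated via /validate-url.

# check-once with every optional field set explicitly
curl -X POST https://vibestream-production-64f3.up.railway.app/check-once \
  -H "Content-Type: application/json" \
  -d '{
    "youtube_url": "https://youtube.com/watch?v=abc123",
    "condition": "Is the checkout line longer than 10 people?",
    "model": "gpt-4o-mini",
    "include_frame": true,
    "skip_validation": true
  }'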

Response

triggered
boolean
Whether the condition was met.
explanation
string
VLM explanation of what was observed in the frame.
model
string
The model that was used for analysis.
frame_b64
string
Base64-encoded frame (null unless include_frame was true); see the decoding sketch below.
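
When include_frame is true, the returned frame_b64 can be decoded back to an image file. A minimal sketch using jq and base64; saving with a .jpg extension is an assumption, since this page does not state the frame's encoding.

# one-shot check that also returns the frame, decoded to disk (requires jq)
curl -s -X POST https://vibestream-production-64f3.up.railway.app/check-once \
  -H "Content-Type: application/json" \
  -d '{"youtube_url": "https://youtube.com/watch?v=abc123", "condition": "Is anyone on camera?", "include_frame": true}' \
  | jq -r '.frame_b64' | base64 -d > frame.jpg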

Use Cases

Quick Checks: Use /check-once for one-off queries without starting a continuous job.
# Is it currently raining?
curl -X POST https://vibestream-production-64f3.up.railway.app/check-once \
  -H "Content-Type: application/json" \
  -d '{"youtube_url": "https://youtube.com/watch?v=WEATHER_CAM", "condition": "Is it raining?"}'
Unlike /live-monitor, this endpoint is synchronous and blocks until the VLM returns a response. Typical response time is 2-5 seconds depending on the model.
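
Because the call blocks until the result is available, the response can be consumed directly in a script. A minimal sketch that branches on the triggered flag, assuming jq is installed; WEATHER_CAM is a placeholder video ID.

# run a one-off check and branch on the result (requires jq)
result=$(curl -s -X POST https://vibestream-production-64f3.up.railway.app/check-once \
  -H "Content-Type: application/json" \
  -d '{"youtube_url": "https://youtube.com/watch?v=WEATHER_CAM", "condition": "Is it raining?"}')

if [ "$(echo "$result" | jq -r '.triggered')" = "true" ]; then
  echo "Rain detected: $(echo "$result" | jq -r '.explanation')"
fi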