Request

youtube_url
string
Live stream URL to check. Supports YouTube (youtube.com/watch?v=, youtu.be/) and Twitch (twitch.tv/channel) formats.

condition
string
Natural language condition to evaluate. Be specific for better accuracy.

model
string
default:"gemini-2.5-flash"
VLM model to use. Options: gemini-2.5-flash, gemini-2.0-flash, gpt-4o-mini, gpt-4o.

include_frame
boolean
Include the captured frame as base64 in the response.

Skip livestream validation if the URL was already validated by /validate-url. Use this to avoid redundant validation calls when pre-validating URLs in the frontend.
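For illustration, here is a sketch of a request that overrides the default model; the URL and condition are placeholders, and gpt-4o-mini is simply one of the options listed above.

# Sketch: same endpoint, with an explicit model override
curl -X POST https://vibestream-production-64f3.up.railway.app/check-once \
  -H "Content-Type: application/json" \
  -d '{
    "youtube_url": "https://youtube.com/watch?v=abc123",
    "condition": "Is anyone currently on stage?",
    "model": "gpt-4o-mini"
  }'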
Response

triggered
boolean
Whether the condition was met.

explanation
string
VLM explanation of what was observed in the frame.

model
string
The model that was used for analysis.

frame_b64
string | null
Base64-encoded frame (only if include_frame was true).
curl -X POST https://vibestream-production-64f3.up.railway.app/check-once \
  -H "Content-Type: application/json" \
  -d '{
    "youtube_url": "https://youtube.com/watch?v=abc123",
    "condition": "Are there more than 5 people visible?"
  }'
{
  "triggered": true,
  "explanation": "Approximately 8-10 people are visible in the frame.",
  "model": "gemini-2.5-flash",
  "frame_b64": null
}
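If include_frame is set to true, frame_b64 comes back as a base64 string instead of null. As a rough sketch of saving it to disk — assuming jq and base64 are available locally, and assuming the frame decodes to an ordinary image file (the exact image format is not specified here):

# Sketch: request the frame and write it to disk (jq and base64 are assumed local tools)
curl -s -X POST https://vibestream-production-64f3.up.railway.app/check-once \
  -H "Content-Type: application/json" \
  -d '{
    "youtube_url": "https://youtube.com/watch?v=abc123",
    "condition": "Are there more than 5 people visible?",
    "include_frame": true
  }' | jq -r '.frame_b64' | base64 -d > frame.jpg  # .jpg extension is a guess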
Use Cases
Quick Checks: Use /check-once for one-off queries without starting a continuous job.
# Is it currently raining?
curl -X POST https://vibestream-production-64f3.up.railway.app/check-once \
-H "Content-Type: application/json" \
-d '{"youtube_url": "https://youtube.com/watch?v=WEATHER_CAM", "condition": "Is it raining?"}'
Unlike /live-monitor, this endpoint is synchronous and blocks until the VLM
returns a response. Typical response time is 2-5 seconds depending on the
model.
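Because the call blocks, it can help to set a client-side timeout comfortably above the typical latency, for example with curl's --max-time flag. The 30-second value below is an arbitrary illustrative choice, not a documented server limit.

# Sketch: bound the blocking call with a client-side timeout
curl -X POST https://vibestream-production-64f3.up.railway.app/check-once \
  --max-time 30 \
  -H "Content-Type: application/json" \
  -d '{"youtube_url": "https://youtube.com/watch?v=abc123", "condition": "Is the stream live and showing video?"}'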