Model Selection
Q: How do I choose the right AI model?
A: Model choice depends on your task; we recommend testing candidate models against your specific use cases. The following models are available:
- deepseek-ai/DeepSeek-V3-0324
- Qwen/Qwen3-30B-A3B
- deepseek-ai/DeepSeek-R1-0528
- Qwen/Qwen3-Coder-480B-A35B-Instruct
- zai-org/GLM-4.5
- openai/gpt-oss-120b
- google/gemma-3-12b-it
- mistralai/Mistral-Nemo-Instruct-2407
- Qwen/Qwen3-235B-A22B-Instruct-2507
- meta-llama/Llama-3.3-70B-Instruct
- google/gemma-3-27b-it
- moonshotai/Kimi-K2-Instruct-0905
- openai/gpt-oss-20b
- meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8
Q: How do different models compare in response speed?
A: Response speed varies with model size, architecture, network conditions, and server load. We recommend benchmarking candidate models against your specific use cases, since actual latency depends on current conditions.
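One way to benchmark response speed is to time the same prompt against each candidate model. A minimal sketch using only the Python standard library; `send_prompt` is a placeholder for your provider's chat-completion call, and the model names passed to it are illustrative:

```python
import time
from typing import Callable

def time_call(fn: Callable[[], object]) -> tuple[object, float]:
    """Run fn once and return (result, elapsed_seconds)."""
    start = time.perf_counter()
    result = fn()
    return result, time.perf_counter() - start

def benchmark(send_prompt, models, prompt):
    """Time the same prompt across several models.

    send_prompt(model, prompt) is a stub you replace with your
    provider's actual chat call; returns {model: elapsed_seconds}.
    """
    timings = {}
    for model in models:
        _, elapsed = time_call(lambda: send_prompt(model, prompt))
        timings[model] = elapsed
    return timings
```

Running each model several times and comparing median latency gives a more stable picture than a single call, since server load fluctuates.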
Q: How can I check which models are supported?
A: Call the GET /v1/models endpoint to get a list of all currently available models.
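A minimal sketch of calling that endpoint with the Python standard library. The base URL here is hypothetical (replace it with your provider's), the API key is read from an assumed `API_KEY` environment variable, and the response shape (`{"data": [{"id": ...}]}`) follows the common OpenAI-style convention, which may differ for your provider:

```python
import json
import os
import urllib.request

BASE_URL = "https://api.example.com"  # hypothetical; use your provider's base URL

def parse_model_ids(payload: dict) -> list[str]:
    """Extract model IDs from an OpenAI-style /v1/models response."""
    return [entry["id"] for entry in payload.get("data", [])]

def list_models(base_url: str = BASE_URL) -> list[str]:
    """Call GET /v1/models and return the currently available model IDs."""
    req = urllib.request.Request(
        f"{base_url}/v1/models",
        headers={"Authorization": f"Bearer {os.environ.get('API_KEY', '')}"},
    )
    with urllib.request.urlopen(req) as resp:
        return parse_model_ids(json.load(resp))
```

You can then check whether a specific ID, such as meta-llama/Llama-3.3-70B-Instruct, appears in the returned list before using it.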