LLM Configuration

Manage your Large Language Model providers, API keys, and generation parameters.

AI Provider
OpenAI
DeepSeek
Moonshot
Zhipu AI
Qwen
Ollama
Custom

API Base URL
The base endpoint for API requests.

Connection successful. Latency: 124ms
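A connection test like the one reported above can be sketched as a timing wrapper around any lightweight probe call. The `check_connection` helper below is a hypothetical shape, not a provider SDK function; the probe it wraps might, for instance, list the provider's models.

```python
import time

def check_connection(probe):
    """Time a zero-argument probe call and report success plus
    round-trip latency in milliseconds. `probe` might, for example,
    issue a GET to `<base_url>/models` with the configured API key."""
    start = time.perf_counter()
    try:
        probe()
    except Exception as exc:
        return {"ok": False, "latency_ms": None, "error": str(exc)}
    latency_ms = round((time.perf_counter() - start) * 1000)
    return {"ok": True, "latency_ms": latency_ms, "error": None}
```

With an OpenAI-style client this would be called as `check_connection(lambda: client.models.list())`; any exception from the probe is reported as a failed connection rather than raised.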
Model Settings

Select the specific model version to use for generation.

System Health
Status: Operational
Last Check: 10:42:35 AM
Response Time: 240ms
Parameters
Temperature: 0.7

Controls randomness: lower values produce less random, more deterministic completions.

Max Tokens: 4096
Top P: 1.0
Frequency Penalty: 0.0
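The parameters above map directly onto the sampling fields of an OpenAI-style chat completion request. The sketch below assumes the unlabeled 1.0 and 0.0 values are top-p and frequency penalty (their usual defaults); the function name is illustrative, and the validation bounds follow the OpenAI API reference, so other providers may accept narrower ranges.

```python
def build_generation_params(temperature=0.7, max_tokens=4096,
                            top_p=1.0, frequency_penalty=0.0):
    """Validate the panel's sampling settings and return them as an
    OpenAI-style request fragment. Bounds follow the OpenAI API
    reference; other providers may differ."""
    if not 0.0 <= temperature <= 2.0:
        raise ValueError("temperature must be between 0.0 and 2.0")
    if max_tokens < 1:
        raise ValueError("max_tokens must be at least 1")
    if not 0.0 <= top_p <= 1.0:
        raise ValueError("top_p must be between 0.0 and 1.0")
    if not -2.0 <= frequency_penalty <= 2.0:
        raise ValueError("frequency_penalty must be between -2.0 and 2.0")
    return {
        "temperature": temperature,
        "max_tokens": max_tokens,
        "top_p": top_p,
        "frequency_penalty": frequency_penalty,
    }
```

Validating before sending keeps out-of-range slider values from reaching the provider, which would otherwise reject the whole request.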
Advanced
Stream Output
Receive partial results as they are generated.
Enable Caching
Cache responses to reduce latency.
Retry on Failure
Automatically retry failed requests.
Debug Mode
Log full request/response payloads.
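The retry and caching toggles above can be sketched with plain-Python helpers: exponential backoff for "Retry on Failure" and an in-memory dictionary for "Enable Caching". Both are minimal illustrations under assumed defaults, not the panel's actual implementation.

```python
import time

def with_retry(call, attempts=3, base_delay=0.5):
    """Run a zero-argument request callable, retrying on any exception
    with exponential backoff (0.5s, 1s, ... with the assumed defaults)."""
    for attempt in range(attempts):
        try:
            return call()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the last error
            time.sleep(base_delay * (2 ** attempt))

class ResponseCache:
    """Cache completed responses by request key so identical requests
    skip the network entirely."""
    def __init__(self):
        self._store = {}

    def get_or_fetch(self, key, fetch):
        if key not in self._store:
            self._store[key] = fetch()
        return self._store[key]
```

A cache key would typically be derived from the provider, model, prompt, and sampling parameters together, since any of them changes the response.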