babylon.config.llm_config
LLM API configuration for text generation and embeddings.
Supports hybrid setup:

- Chat: DeepSeek API (cloud) or OpenAI (fallback)
- Embeddings: Ollama (local, default) or OpenAI (cloud)
In production Babylon, we aim for full offline operation using local models (Ollama). Cloud APIs are transitional tools.
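A minimal usage sketch of the hybrid setup, assuming configuration is read from environment variables (DEEPSEEK_API_KEY and OLLAMA_BASE_URL are assumed names, not confirmed by this page); only the classmethods documented below are part of the actual API:

```python
import os

# Assumed environment variables for the hybrid setup (names are hypothetical):
# DeepSeek handles chat in the cloud, local Ollama handles embeddings.
os.environ.setdefault("DEEPSEEK_API_KEY", "sk-placeholder")
os.environ.setdefault("OLLAMA_BASE_URL", "http://localhost:11434")

from babylon.config.llm_config import LLMConfig

# Fail fast if either half of the configuration is incomplete.
LLMConfig.validate()             # chat: DeepSeek primary, OpenAI fallback
LLMConfig.validate_embeddings()  # embeddings: Ollama by default

if LLMConfig.is_ollama_embeddings():
    # Local Ollama needs no Authorization header.
    headers = LLMConfig.get_embedding_headers()
```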
Classes
- LLMConfig: Configuration for LLM API integration.
- class babylon.config.llm_config.LLMConfig[source]
Bases: object

Configuration for LLM API integration.
Supports hybrid setup:

- Chat: DeepSeek (primary) / OpenAI (fallback)
- Embeddings: Ollama (local, default) / OpenAI (cloud)
The bourgeois cloud API is a transitional tool until local compute infrastructure is fully established.
- classmethod is_ollama_embeddings()[source]
Check if using Ollama for embeddings (local, no API key needed).
- Return type: bool
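A sketch of how this check could work, assuming a hypothetical EMBEDDING_PROVIDER class attribute selects the backend (the attribute name is not documented on this page):

```python
import os


class LLMConfig:
    # Hypothetical selector: "ollama" (local, default) or "openai" (cloud).
    EMBEDDING_PROVIDER: str = os.getenv("EMBEDDING_PROVIDER", "ollama")

    @classmethod
    def is_ollama_embeddings(cls) -> bool:
        # Local Ollama requires no API key, so callers can skip auth setup.
        return cls.EMBEDDING_PROVIDER.lower() == "ollama"
```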
- classmethod get_embedding_headers()[source]
Get HTTP headers for Embedding API requests.
Ollama doesn’t need auth headers; OpenAI does.
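Under the same assumptions (EMBEDDING_PROVIDER and OPENAI_API_KEY are hypothetical attribute names), the header logic might look like:

```python
import os


class LLMConfig:
    EMBEDDING_PROVIDER: str = os.getenv("EMBEDDING_PROVIDER", "ollama")  # assumed name
    OPENAI_API_KEY: str = os.getenv("OPENAI_API_KEY", "")                # assumed name

    @classmethod
    def is_ollama_embeddings(cls) -> bool:
        return cls.EMBEDDING_PROVIDER.lower() == "ollama"

    @classmethod
    def get_embedding_headers(cls) -> dict[str, str]:
        headers = {"Content-Type": "application/json"}
        if not cls.is_ollama_embeddings():
            # Only the cloud backend needs a Bearer token.
            headers["Authorization"] = f"Bearer {cls.OPENAI_API_KEY}"
        return headers
```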
- classmethod validate()[source]
Validate the Chat LLM configuration.
- Raises:
ValueError – If required configuration is missing
- Return type: None
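One plausible implementation of the documented behavior, assuming hypothetical DEEPSEEK_API_KEY and OPENAI_API_KEY attributes for the two chat providers:

```python
import os


class LLMConfig:
    DEEPSEEK_API_KEY: str = os.getenv("DEEPSEEK_API_KEY", "")  # assumed name
    OPENAI_API_KEY: str = os.getenv("OPENAI_API_KEY", "")      # assumed name

    @classmethod
    def validate(cls) -> None:
        # At least one chat provider must be configured: DeepSeek is primary,
        # OpenAI is the fallback.
        if not (cls.DEEPSEEK_API_KEY or cls.OPENAI_API_KEY):
            raise ValueError(
                "Chat LLM configuration missing: set DEEPSEEK_API_KEY "
                "(primary) or OPENAI_API_KEY (fallback)."
            )
```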
- classmethod validate_embeddings()[source]
Validate embedding configuration.
For Ollama: Just check the base URL is set. For OpenAI: Check API key is available.
- Raises:
ValueError – If required configuration is missing
- Return type: None
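A sketch of the documented rule (Ollama: check that the base URL is set; OpenAI: check that an API key is available); OLLAMA_BASE_URL is an assumed attribute name:

```python
import os


class LLMConfig:
    EMBEDDING_PROVIDER: str = os.getenv("EMBEDDING_PROVIDER", "ollama")            # assumed
    OLLAMA_BASE_URL: str = os.getenv("OLLAMA_BASE_URL", "http://localhost:11434")  # assumed
    OPENAI_API_KEY: str = os.getenv("OPENAI_API_KEY", "")                          # assumed

    @classmethod
    def validate_embeddings(cls) -> None:
        if cls.EMBEDDING_PROVIDER.lower() == "ollama":
            # Local backend: only the base URL must be set.
            if not cls.OLLAMA_BASE_URL:
                raise ValueError("Ollama embeddings require OLLAMA_BASE_URL.")
        elif not cls.OPENAI_API_KEY:
            # Cloud backend: an API key is required.
            raise ValueError("OpenAI embeddings require OPENAI_API_KEY.")
```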