babylon.config.llm_config

LLM API configuration for text generation and embeddings.

Supports hybrid setup:

- Chat: DeepSeek API (cloud) or OpenAI (fallback)
- Embeddings: Ollama (local, default) or OpenAI (cloud)

In production Babylon, we aim for full offline operation using local models (Ollama). Cloud APIs are transitional tools.

Classes

LLMConfig()

Configuration for LLM API integration.

OpenAIConfig

alias of LLMConfig

class babylon.config.llm_config.LLMConfig[source]

Bases: object

Configuration for LLM API integration.

Supports hybrid setup:

- Chat: DeepSeek (primary) / OpenAI (fallback)
- Embeddings: Ollama (local, default) / OpenAI (cloud)

The bourgeois cloud API is a transitional tool until local compute infrastructure is fully established.

API_KEY: Final[str] = ''
API_BASE: Final[str] = 'https://api.deepseek.com'
ORGANIZATION_ID: Final[str] = ''
CHAT_MODEL: Final[str] = 'deepseek-chat'
EMBEDDING_PROVIDER: Final[str] = 'ollama'
EMBEDDING_API_BASE: Final[str] = 'http://localhost:11434'
EMBEDDING_MODEL: Final[str] = 'embeddinggemma:latest'
MAX_RETRIES: Final[int] = 3
RETRY_DELAY: Final[float] = 1.0
RATE_LIMIT_RPM: Final[int] = 60
BATCH_SIZE: Final[int] = 8
REQUEST_TIMEOUT: Final[float] = 30.0
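Taken together, the retry and rate-limit constants imply a client-side request schedule. A minimal sketch using stand-in copies of the constants (exponential backoff is an assumption here; the real client may simply wait a fixed RETRY_DELAY between attempts):

```python
# Stand-in copies of the documented constants; the real values live on
# babylon.config.llm_config.LLMConfig.
MAX_RETRIES = 3
RETRY_DELAY = 1.0
RATE_LIMIT_RPM = 60

def retry_schedule(retries: int = MAX_RETRIES, base: float = RETRY_DELAY) -> list[float]:
    """Delay before each retry attempt; exponential backoff is assumed."""
    return [base * (2 ** attempt) for attempt in range(retries)]

# Minimum spacing between requests to stay under the requests-per-minute cap.
min_interval = 60.0 / RATE_LIMIT_RPM

print(retry_schedule())  # [1.0, 2.0, 4.0]
print(min_interval)      # 1.0
```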
classmethod is_configured()[source]

Check if Chat LLM API is properly configured.

Return type:

bool
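Since chat always goes through a cloud API, is_configured() presumably reduces to an API-key check. A hedged stand-in (the real method may validate more fields):

```python
def is_configured(api_key: str) -> bool:
    # Chat uses a cloud API (DeepSeek or OpenAI), so a non-empty key is required.
    return bool(api_key.strip())

print(is_configured(""))         # False
print(is_configured("sk-test"))  # True
```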

classmethod is_ollama_embeddings()[source]

Check if using Ollama for embeddings (local, no API key needed).

Return type:

bool
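This check lets callers skip auth handling entirely when embeddings are local. A likely shape, with the provider passed in explicitly for illustration:

```python
def is_ollama_embeddings(provider: str) -> bool:
    # Local Ollama needs no API key, so auth checks can be bypassed.
    return provider == "ollama"

print(is_ollama_embeddings("ollama"))  # True
print(is_ollama_embeddings("openai"))  # False
```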

classmethod get_headers()[source]

Get HTTP headers for Chat API requests.

Return type:

dict[str, str]
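Both DeepSeek and OpenAI authenticate with a Bearer token, so the chat headers likely look as follows. The exact header set is an assumption; the OpenAI-Organization header only matters when falling back to OpenAI:

```python
def get_headers(api_key: str, organization_id: str = "") -> dict[str, str]:
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    if organization_id:
        # Only meaningful when routing through OpenAI.
        headers["OpenAI-Organization"] = organization_id
    return headers
```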

classmethod get_embedding_headers()[source]

Get HTTP headers for Embedding API requests.

Ollama doesn’t need auth headers; OpenAI does.

Return type:

dict[str, str]
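The provider branch described above can be sketched as follows (a stand-in, not the actual implementation):

```python
def get_embedding_headers(provider: str, api_key: str = "") -> dict[str, str]:
    if provider == "ollama":
        # Local server: no auth header required.
        return {"Content-Type": "application/json"}
    # OpenAI path needs Bearer-token auth.
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
```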

classmethod get_embedding_url()[source]

Get the embedding API URL based on provider.

Return type:

str
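A plausible sketch, assuming Ollama's standard /api/embeddings endpoint and OpenAI's /v1/embeddings endpoint (the exact paths Babylon uses are not shown in this reference):

```python
def get_embedding_url(provider: str, api_base: str) -> str:
    if provider == "ollama":
        # Ollama serves embeddings from the local server's /api/embeddings.
        return f"{api_base.rstrip('/')}/api/embeddings"
    # Cloud fallback: OpenAI's embeddings endpoint.
    return "https://api.openai.com/v1/embeddings"

print(get_embedding_url("ollama", "http://localhost:11434"))
```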

classmethod validate()[source]

Validate the Chat LLM configuration.

Raises:

ValueError – If required configuration is missing

Return type:

None
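A stand-in showing the raise-on-missing contract, with the settings passed in as parameters for illustration (which settings count as required is an assumption):

```python
def validate(api_key: str, chat_model: str) -> None:
    missing = [name for name, value in
               (("API_KEY", api_key), ("CHAT_MODEL", chat_model)) if not value]
    if missing:
        raise ValueError(f"Missing required LLM configuration: {', '.join(missing)}")

try:
    validate("", "deepseek-chat")
except ValueError as exc:
    print(exc)  # Missing required LLM configuration: API_KEY
```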

classmethod validate_embeddings()[source]

Validate embedding configuration.

For Ollama: only check that the base URL is set. For OpenAI: check that an API key is available.

Raises:

ValueError – If required configuration is missing

Return type:

None
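The two validation rules above can be sketched as a provider branch (a stand-in mirroring the documented behavior):

```python
def validate_embeddings(provider: str, api_base: str, api_key: str = "") -> None:
    if provider == "ollama":
        # Local path: only the base URL matters, no key required.
        if not api_base:
            raise ValueError("EMBEDDING_API_BASE must be set for Ollama embeddings")
    elif not api_key:
        # Cloud path: OpenAI needs an API key.
        raise ValueError("API_KEY is required for OpenAI embeddings")

validate_embeddings("ollama", "http://localhost:11434")  # passes without a key
```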

classmethod get_model_dimensions()[source]

Get the embedding dimensions for the configured model.

Return type:

int

Returns:

Number of dimensions for the embedding model
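A likely shape for the dimension lookup, using publicly documented sizes (EmbeddingGemma outputs 768-dimensional vectors; OpenAI's text-embedding-3-small outputs 1536). The table contents and the fallback default are assumptions:

```python
# Publicly documented embedding sizes; entries beyond the configured model
# are illustrative.
MODEL_DIMENSIONS = {
    "embeddinggemma:latest": 768,
    "text-embedding-3-small": 1536,
}

def get_model_dimensions(model: str, default: int = 768) -> int:
    # Fall back to a default when the model is not in the table (assumed policy).
    return MODEL_DIMENSIONS.get(model, default)

print(get_model_dimensions("embeddinggemma:latest"))  # 768
```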

babylon.config.llm_config.OpenAIConfig

alias of LLMConfig