babylon.ai.llm_provider

LLM Provider strategy pattern for text generation.

This module provides the “Mouth” of the AI Observer - the interface through which the NarrativeDirector speaks. It follows the same Protocol pattern as SimulationObserver for loose coupling.

Components:
  • LLMProvider: Protocol defining the text generation interface

  • MockLLM: Deterministic mock for testing

  • DeepSeekClient: Production client using the DeepSeek API

SYNC API: All providers implement a synchronous generate() to match the SimulationObserver pattern. Providers that are asynchronous under the hood wrap their async calls internally.
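The event-loop conflict the SYNC API note refers to can be reproduced directly: asyncio.run() refuses to start when another loop is already running in the same thread, which is why providers expose a synchronous generate() instead of calling asyncio.run() internally. The coroutine names below are illustrative, not part of the module:

```python
import asyncio

async def rag_query() -> str:
    # Stand-in for an async subsystem (e.g. RAG) that owns the event loop.
    return "retrieved context"

async def narrative_step() -> str:
    # If generate() were implemented by calling asyncio.run() internally,
    # this is what would happen inside an already-running loop:
    try:
        asyncio.run(rag_query())
    except RuntimeError as err:
        return f"conflict: {err}"
    return "no conflict"

print(asyncio.run(narrative_step()))
# conflict: asyncio.run() cannot be called from a running event loop
```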

Classes

DeepSeekClient([config])

DeepSeek LLM client using an OpenAI-compatible API.

LLMProvider(*args, **kwargs)

Protocol for LLM text generation providers.

MockLLM([responses, default_response])

Deterministic mock LLM for testing.

class babylon.ai.llm_provider.LLMProvider(*args, **kwargs)[source]

Bases: Protocol

Protocol for LLM text generation providers.

Follows the same pattern as SimulationObserver - loose coupling via Protocol enables easy testing and provider swapping.

SYNC API: All implementations use synchronous interfaces to avoid event loop conflicts with other asyncio.run() callers (e.g., RAG).

property name: str

Provider identifier for logging.

generate(prompt, system_prompt=None, temperature=0.7)[source]

Generate text from prompt (synchronous).

Parameters:
  • prompt (str) – User prompt / context

  • system_prompt (str | None) – Optional system instructions

  • temperature (float) – Sampling temperature (0.0-1.0)

Return type:

str

Returns:

Generated text response

Raises:

LLMGenerationError – On API or generation failure

__init__(*args, **kwargs)

class babylon.ai.llm_provider.MockLLM(responses=None, default_response='Mock LLM response')[source]
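Because LLMProvider is a Protocol, conformance is structural: any class with a matching name property and generate() method satisfies it without inheriting from it. The sketch below mirrors the documented interface for illustration; the real Protocol lives in babylon.ai.llm_provider and may differ in detail (e.g. it may not be runtime_checkable), and EchoProvider is a hypothetical example class:

```python
from __future__ import annotations

from typing import Protocol, runtime_checkable

@runtime_checkable
class LLMProvider(Protocol):
    """Illustrative copy of the documented interface."""

    @property
    def name(self) -> str: ...

    def generate(
        self, prompt: str, system_prompt: str | None = None, temperature: float = 0.7
    ) -> str: ...


class EchoProvider:
    """Toy provider: satisfies the Protocol structurally, no base class needed."""

    @property
    def name(self) -> str:
        return "echo"

    def generate(self, prompt, system_prompt=None, temperature=0.7):
        return f"echo: {prompt}"


provider = EchoProvider()
print(isinstance(provider, LLMProvider))  # True (structural check)
print(provider.generate("hello"))         # echo: hello
```

Note that isinstance() with a runtime_checkable Protocol only checks that the attributes exist, not that their signatures match.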

Bases: object

Deterministic mock LLM for testing.

Returns pre-configured responses in queue order, or a fixed default response. Synchronous API.

This is the primary testing tool for NarrativeDirector - it allows tests to verify behavior without network calls.

Parameters:
  • responses (list[str] | None)

  • default_response (str)

__init__(responses=None, default_response='Mock LLM response')[source]

Initialize MockLLM.

Parameters:
  • responses (list[str] | None) – Queue of responses to return in FIFO order

  • default_response (str) – Response when queue is empty

Return type:

None

property name: str

Provider identifier for logging.

property call_count: int

Number of times generate() was called.

property call_history: list[dict[str, Any]]

History of all calls with arguments.

Returns a copy to prevent external modification.

generate(prompt, system_prompt=None, temperature=0.7)[source]

Generate response synchronously.

Parameters:
  • prompt (str) – User prompt / context

  • system_prompt (str | None) – Optional system instructions

  • temperature (float) – Sampling temperature (ignored by mock)

Return type:

str

Returns:

Next queued response or default response

class babylon.ai.llm_provider.DeepSeekClient(config=None)[source]

Bases: object

DeepSeek LLM client using an OpenAI-compatible API.

Primary LLM provider for Babylon narrative generation. Uses the openai Python package with custom base_url.

SYNC API: Uses synchronous OpenAI client to avoid event loop conflicts with RAG queries that use asyncio.run().

Parameters:

config (type[LLMConfig] | None)

__init__(config=None)[source]

Initialize DeepSeekClient.

Parameters:

config (type[LLMConfig] | None) – LLM configuration class (defaults to LLMConfig)

Raises:

LLMGenerationError – If API key is not configured

Return type:

None

property name: str

Provider identifier for logging.

generate(prompt, system_prompt=None, temperature=0.7)[source]

Generate text synchronously.

Uses the sync OpenAI client directly, avoiding event loop conflicts with other code that uses asyncio.run().

Parameters:
  • prompt (str) – User prompt / context

  • system_prompt (str | None) – Optional system instructions

  • temperature (float) – Sampling temperature (0.0-1.0)

Return type:

str

Returns:

Generated text response

Raises:

LLMGenerationError – On API or generation failure
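An OpenAI-compatible chat endpoint expects the prompt and optional system instructions as a list of role-tagged messages. The helper below, build_messages, is a hypothetical sketch of that assembly step; the actual DeepSeekClient internals may differ:

```python
from __future__ import annotations

def build_messages(prompt: str, system_prompt: str | None = None) -> list[dict[str, str]]:
    """Assemble OpenAI-style chat messages (hypothetical helper, for illustration)."""
    messages = []
    if system_prompt is not None:
        # System instructions, when present, lead the conversation.
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": prompt})
    return messages

print(build_messages("Narrate the scene.", system_prompt="You are a narrator."))
# [{'role': 'system', 'content': 'You are a narrator.'},
#  {'role': 'user', 'content': 'Narrate the scene.'}]
```

A list like this is what the underlying sync OpenAI client's chat-completion call would receive on each generate() invocation.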