babylon.engine.observers.metrics

MetricsCollector observer for unified simulation metrics.

Implements SimulationObserver protocol to collect comprehensive metrics during simulation runs. Supports two modes:

  • “interactive”: Rolling window of recent ticks (for dashboard)

  • “batch”: Accumulates all history (for parameter sweeps)

Sprint 4.1: Phase 4 Dashboard/Sweeper unification. Sprint 4.1C: Add JSON export for DAG structure preservation.

Classes

MetricsCollector([mode, rolling_window])

Observer that collects simulation metrics for analysis.

class babylon.engine.observers.metrics.MetricsCollector(mode='interactive', rolling_window=50)[source]

Bases: object

Observer that collects simulation metrics for analysis.

Implements SimulationObserver protocol. Extracts entity and edge metrics at each tick, with optional rolling window for memory efficiency in interactive mode.

Parameters:
  • mode (Literal['interactive', 'batch'])

  • rolling_window (int)

__init__(mode='interactive', rolling_window=50)[source]

Initialize the collector.

Parameters:
  • mode (Literal['interactive', 'batch']) – “interactive” uses rolling window, “batch” keeps all history

  • rolling_window (int) – Maximum ticks to keep in interactive mode

Return type:

None

property name: str

Return observer identifier.

property latest: TickMetrics | None

Return most recent tick metrics, or None if empty.

property history: list[TickMetrics]

Return metrics history as a list.

property summary: SweepSummary | None

Return sweep summary, or None if no data collected.

on_simulation_start(initial_state, config)[source]

Called when simulation begins. Clears history and records tick 0.

Return type:

None

Parameters:
  • initial_state (WorldState) – World state at tick 0

  • config (SimulationConfig) – Configuration for the run

on_tick(previous_state, new_state)[source]

Called after each tick completes. Records new state.

Return type:

None

Parameters:
  • previous_state (WorldState)

  • new_state (WorldState)

on_simulation_end(final_state)[source]

Called when simulation ends. No-op for MetricsCollector.

Return type:

None

Parameters:

final_state (WorldState)

to_csv_rows()[source]

Export metrics history as list of dicts for CSV output.

Return type:

list[dict[str, Any]]
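A list of per-tick dicts like this plugs directly into `csv.DictWriter`. The row fields below are hypothetical placeholders, since the actual TickMetrics fields are not shown here:

```python
import csv
import io

# Hypothetical rows in the shape to_csv_rows() returns: one dict per tick.
rows = [
    {"tick": 0, "entity_count": 10, "edge_count": 4},
    {"tick": 1, "entity_count": 12, "edge_count": 5},
]

out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=list(rows[0].keys()))
writer.writeheader()   # column names come from the dict keys
writer.writerows(rows)
print(out.getvalue())
```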

to_json(defines, config, csv_path=None)[source]

Export run metadata as structured JSON for reproducibility.

Captures the causal DAG hierarchy:

  • Level 1 (Fundamental): GameDefines parameters

  • Level 2 (Config): SimulationConfig settings

  • Level 3 (Emergent): SweepSummary computed from the simulation

Parameters:
  • defines (GameDefines) – GameDefines with fundamental parameters

  • config (SimulationConfig) – SimulationConfig with run settings

  • csv_path (Path | None) – Optional path to associated CSV time-series file

Return type:

dict[str, Any]

Returns:

Structured dict ready for JSON serialization
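The three-level hierarchy suggests a nested payload along these lines. All field names here are illustrative assumptions, not the real serialized schema:

```python
import json

# Hypothetical payload mirroring the causal DAG levels described above.
payload = {
    "defines": {"base_growth_rate": 0.02},   # Level 1: fundamental parameters
    "config": {"ticks": 100, "seed": 42},    # Level 2: run settings
    "summary": {"final_entity_count": 37},   # Level 3: emergent results
    "csv_path": "run_001.csv",               # optional link to the time-series file
}
print(json.dumps(payload, indent=2))
```

Keeping the levels as separate top-level keys preserves the DAG structure: a sweep analysis can recompute Level 3 from Levels 1 and 2 to verify reproducibility.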

export_json(path, defines, config, csv_path=None)[source]

Write JSON metadata to file.

Parameters:
  • path (Path) – Output path for JSON file

  • defines (GameDefines) – GameDefines with fundamental parameters

  • config (SimulationConfig) – SimulationConfig with run settings

  • csv_path (Path | None) – Optional path to associated CSV time-series file

Return type:

None
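Writing such a payload to disk is a thin wrapper over `json.dump`. A minimal sketch of what export_json might do internally, using a temporary directory and a placeholder payload:

```python
import json
import tempfile
from pathlib import Path

# Placeholder payload standing in for the dict produced by to_json().
payload = {"defines": {}, "config": {}, "summary": None}

# Write the metadata next to wherever the caller points `path`.
path = Path(tempfile.mkdtemp()) / "run_meta.json"
path.write_text(json.dumps(payload, indent=2))
```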