A minimal, deterministic demo showing how decisions can diverge solely due to internal history, even with identical input and a frozen policy. Designed as a causal audit artifact, not a learning or AI system.

The app is a read-only Streamlit demo for inspecting causal decision divergence under a fixed input and seed. It is intentionally simple and deterministic, and is shared as a community demo, not as a support request.
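The core idea can be sketched in a few lines. This is a hypothetical illustration, not the actual sia-lab code: two agents share an identical, frozen decision rule and receive the identical input, yet their decisions diverge because each carries a different internal history accumulated beforehand.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class HistoryAgent:
    """Deterministic agent: the policy is frozen; only internal history differs."""
    history: List[int] = field(default_factory=list)

    def decide(self, x: int) -> str:
        # Frozen policy: decide by the parity of the running sum of all
        # observations, past and present. No randomness is involved.
        self.history.append(x)
        return "A" if sum(self.history) % 2 == 0 else "B"

a = HistoryAgent(history=[1])  # one prior observation
b = HistoryAgent(history=[2])  # a different prior observation
same_input = 3
print(a.decide(same_input))  # "A" (1 + 3 = 4, even)
print(b.decide(same_input))  # "B" (2 + 3 = 5, odd)
```

Identical input, identical rule, no stochasticity anywhere, yet the outputs differ; the divergence is attributable entirely to internal history, which is the property the demo makes auditable.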

Source code (public): GitHub - rysawaki/sia-lab