Hi everyone,
I’ve built an experimental Streamlit dashboard that visualizes outputs from a seismic data analysis pipeline.
The setup is intentionally split into two parts:
- An offline Python pipeline processes multistation waveform data
- It applies a fixed, fully reproducible workflow (normalization, spectral whitening, coherence metrics, etc.)
- The same pipeline is applied across all regions without parameter tuning, enabling direct comparison
- Results are stored and then explored through a lightweight Streamlit frontend
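To make the workflow concrete, here is a rough sketch of the kind of fixed processing chain described above. This is a minimal illustration assuming NumPy arrays of single-station traces; the function names, and the zero-lag correlation used as a stand-in coherence metric, are illustrative rather than the pipeline's actual internals:

```python
import numpy as np

def normalize(trace):
    """Zero-mean, unit-variance normalization of a 1-D waveform."""
    trace = trace - trace.mean()
    std = trace.std()
    return trace / std if std > 0 else trace

def spectral_whiten(trace):
    """Flatten the amplitude spectrum, keeping only phase information."""
    spectrum = np.fft.rfft(trace)
    amplitude = np.abs(spectrum)
    amplitude[amplitude == 0] = 1.0  # avoid dividing empty bins by zero
    return np.fft.irfft(spectrum / amplitude, n=len(trace))

def coherence(a, b):
    """Stand-in metric: zero-lag correlation of two preprocessed traces."""
    return float(np.corrcoef(a, b)[0, 1])

# Identical steps for every station; no per-region parameter tuning
rng = np.random.default_rng(0)
traces = {f"STA{i}": rng.normal(size=1024) for i in range(3)}
processed = {name: spectral_whiten(normalize(t)) for name, t in traces.items()}
pairwise = {(a, b): coherence(processed[a], processed[b])
            for a in processed for b in processed if a < b}
```

Because every region goes through the same chain, the pairwise coherence values are directly comparable across regions, which is the whole point of avoiding per-region tuning.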
The Streamlit app itself is purely a visualization layer:
- browsing different regions
- inspecting temporal evolution
- comparing coherence patterns across stations
This is not a predictive system, just an exploratory monitoring tool.
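Since the app never computes anything itself, the boundary between the two halves can be a plain file contract: the offline pipeline writes one self-contained results file per region, and the frontend only reads. A minimal sketch, assuming one JSON file per region (the file layout and names here are illustrative, not the exact format I use):

```python
import json
import tempfile
from pathlib import Path

def write_region_results(out_dir, region, pairwise_coherence):
    """Offline pipeline side: persist one self-contained file per region."""
    path = Path(out_dir) / f"{region}.json"
    path.write_text(json.dumps({
        "region": region,
        "pairs": {f"{a}-{b}": c for (a, b), c in pairwise_coherence.items()},
    }))
    return path

def load_region_results(out_dir, region):
    """Frontend side: read-only access; the app never recomputes anything."""
    return json.loads((Path(out_dir) / f"{region}.json").read_text())

# Round-trip demo of the contract
out = tempfile.mkdtemp()
write_region_results(out, "region_a", {("STA0", "STA1"): 0.42})
loaded = load_region_results(out, "region_a")
```

In the Streamlit app this pairs naturally with `st.cache_data` on the loader, so switching regions in the UI is just a cached file read rather than a recomputation.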
I’d be especially interested in feedback on:
- whether this offline pipeline + Streamlit frontend architecture makes sense long-term
- how the UI/UX could be improved for exploring this kind of time-series data
Also open to suggestions on:
- deployment (currently batch updates + static hosting)
- scaling to more regions
Pipeline / reproducibility:
Thanks!