Hello guys, I developed a Streamlit app that pulls about 200k records from a database. It loads for a while, then disconnects and closes before displaying a summary of the data. I'm not sure what's wrong, because a 200k-record dataset seems too small to crash the app. Has anyone experienced this, and how can I solve it?
Here is a snippet of the code that pulls and displays the data. I personally don't see any problem with it:
# import packages
import streamlit as st
import pandas as pd
from redcap import Project

st.title("Monthly Mortality Reports")

# add selector for hospitals (selectbox expects a list of options)
add_selector = st.sidebar.selectbox("Select hospital facility to view report", ["MMLY"])

# cache the export so reruns don't hit the REDCap API again
@st.cache
def pull_data():
    api_url = "add database url"
    api_key = "add project data key"
    project = Project(api_url, api_key)
    df = project.export_records(format='df')
    return df

data = pull_data()
st.write(data)
Are you trying to display all 200k records, with dozens of columns, in your browser with Streamlit? I'm not sure the browser will be happy rendering that many rows. Does it also crash if you display less, say st.write(data.head(100))?
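A minimal sketch of that idea: aggregate on the server and send only a small summary table to the browser, so st.write receives a handful of rows instead of 200k. The column names ("hospital", "outcome") are made up for illustration; in the real app you would run this on the frame returned by pull_data().

```python
import numpy as np
import pandas as pd


def summarize(df: pd.DataFrame) -> pd.DataFrame:
    """Build a tiny per-column summary (one row per column) instead of
    shipping the whole frame to the browser."""
    return pd.DataFrame({
        "non_null": df.notna().sum(),   # filled-in values per column
        "nulls": df.isna().sum(),       # missing values per column
        "unique": df.nunique(),         # distinct non-null values per column
    })


# Synthetic data standing in for the 200k-record REDCap export
df = pd.DataFrame({
    "hospital": np.random.choice(["MMLY", "OTHER"], size=200_000),
    "outcome": np.random.choice(["alive", "dead", None], size=200_000),
})

print(summarize(df))
```

In the Streamlit app you would then call st.write(summarize(data)) for the summary, and st.dataframe(data.head(100)) if you also want a preview of the raw rows.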