Streamlit app disconnects and stops before displaying data

Hello guys, I developed a Streamlit app to pull about 200k records from a database, but it loads for a while and, when done, it disconnects and closes before displaying a summary of the data. I'm not sure what's wrong, because a 200k dataset seems too small to crash the app. Has anyone experienced this, and how can I solve the problem?

Here is a snippet of the code that pulls and displays the data. I personally don't see any problem with it:

# import packages
import streamlit as st
from redcap import Project
import pandas as pd

st.title("Monthly Mortality Reports")

# selector for hospital facility (options must be a sequence, not a bare string)
add_selector = st.sidebar.selectbox("Select hospital facility to view report", ["MMLY"])

@st.cache
def pull_data():
    api_url = "add database url"
    api_key = "add project data key"
    project = Project(api_url, api_key)
    df = project.export_records(format='df')
    return df


data = pull_data()

st.write(data)

Hey @Livingstone90, it’s been a while :slight_smile:

Are you trying to display 200k records with dozens of columns in your browser with Streamlit? I'm not sure the browser is going to be happy rendering that many :confused: . Does it also crash if you display fewer, say st.write(data.head(100))?
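To check this quickly, you can cap how many rows reach the browser before calling st.write — a minimal sketch (MAX_ROWS is a hypothetical cap, and the DataFrame here is just a stand-in for the exported records):

```python
import pandas as pd
import numpy as np

MAX_ROWS = 100  # hypothetical display cap, tune to taste

# stand-in for the ~200k records exported from REDCap
df = pd.DataFrame({"x": np.arange(200_000)})

# only send a small slice to the browser for rendering
preview = df.head(MAX_ROWS)
# in the app: st.write(preview)
```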

Cheers,
Fanilo

Amazingly, with fewer records, like 100, it displays well. I will definitely be using fewer records for the table.

1 Like

Adding to @andfanilo's answer: when I have to show a large dataframe to the user, I do something like this:

import pandas as pd
import numpy as np
import streamlit as st
from math import ceil


df = pd.DataFrame({"x": np.arange(1_000_000), "y": np.arange(1_000_000)})

page_size = 1000
page_number = st.number_input(
    label="Page Number",
    min_value=1,
    max_value=ceil(len(df) / page_size),
    step=1,
)
# number_input is 1-based, so shift to 0-based row offsets
current_start = (page_number - 1) * page_size
current_end = page_number * page_size
st.write(df.iloc[current_start:current_end])
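If you want the slice arithmetic reusable (and testable outside Streamlit), it can be factored into a small helper — a sketch assuming the same 1-based page convention as the snippet above (`paginate` is a name I made up):

```python
import pandas as pd
import numpy as np
from math import ceil


def paginate(df: pd.DataFrame, page_number: int, page_size: int = 1000) -> pd.DataFrame:
    """Return the rows for a 1-based page number, sliced by position."""
    start = (page_number - 1) * page_size
    end = page_number * page_size
    return df.iloc[start:end]


df = pd.DataFrame({"x": np.arange(1_000_000), "y": np.arange(1_000_000)})
n_pages = ceil(len(df) / 1000)

first = paginate(df, 1)        # rows 0..999
last = paginate(df, n_pages)   # the final page
```

In the app, you would feed `page_number` from the `st.number_input` widget shown above and pass the result to `st.write`.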

Hope you find it useful for showing big dataframes! :slight_smile:

1 Like