Why doesn't my app show data?

Hi,
I don't know Streamlit very well; this is my first time using it.
1. My app runs fine in Streamlit locally, but not on Streamlit Community Cloud, where I tried to deploy it.
2. Here is my app's link: https://mbtest1.streamlit.app/.
3. Here is a piece of my logs:

WARNING: You are using pip version 22.0.3; however, version 24.0 is available.
You should consider upgrading via the '/home/adminuser/venv/bin/python -m pip install --upgrade pip' command.
Checking if Streamlit is installed
Found Streamlit version 1.31.1 in the environment

────────────────────────────────────────────────────────────────────────────────────────

[14:37:01] 🐍 Python dependencies were installed from /mount/src/projet2/requirements.txt using pip.
Check if streamlit is installed
Streamlit is already installed
[14:37:05] 📦 Processed dependencies!

I don't think there is a problem running the app; the problem is showing the data. I just want to show 5 rows, but it's impossible. My data is a .csv file.
Here is what it displays in the first row:
oid sha256:d702a8629d4de2914055b819013539640063636aa7d74859a7a4dda6fc709a34

Could somebody please help me? Please, please.

Hi @MariaB, this is because that file is too large for GitHub, so it has been uploaded using GitHub LFS. Unfortunately, I don't know of a good way to fetch files from GitHub LFS on Community Cloud.

The good news is, there are some workarounds:

  1. Zip your csv before adding it to your repo – pandas can read zipped csvs just fine
  2. Save your file as a more compressed format, like parquet
  3. Break your file into multiple parts
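A minimal sketch of preparing the file for each option, using a small stand-in DataFrame (the file names `test.csv`, `test.zip`, etc. are placeholders for your own files):

```python
import pandas as pd

# Small stand-in DataFrame; in practice this would be your large CSV
# loaded on your local machine before committing to the repo.
df = pd.DataFrame({"a": range(10), "b": range(10)})

# Option 1: write a zipped CSV. pandas infers the compression from the
# .zip extension; archive_name sets the CSV's name inside the archive.
df.to_csv("test.zip", index=False,
          compression={"method": "zip", "archive_name": "test.csv"})

# Option 2: df.to_parquet("test.parquet")  # usually far smaller than
# CSV, but requires pyarrow or fastparquet to be installed.

# Option 3: split into parts that each fit under GitHub's size limit.
half = len(df) // 2
df.iloc[:half].to_csv("test_part1.csv", index=False)
df.iloc[half:].to_csv("test_part2.csv", index=False)
```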

Hope that helps!

Hi Blackary, thank you for your advice. I tried number 1, but it's not OK because the compressed file is still not under 25 MB (the size GitHub accepts).
If I manage to zip my file and upload it to GitHub, how can Streamlit unzip and read it? Or how do I unzip it on GitHub?
For example, df = pd.read_csv('test.zip') or df = pd.read_csv('test.csv'): how should I write my program to read the data?

If I have to break my file into parts, should I read each file into a dataframe and then merge them, is that right?

Yes, df = pd.read_csv('test.zip') should work great,
and if you break it into multiple files, then reading each one and combining them with pd.concat is probably what you want.
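To illustrate both answers, here is a minimal sketch (the sample data and file names are made up for the example):

```python
import pandas as pd

# Small sample files so the sketch runs on its own; on Community Cloud
# these would already be committed to your repo.
sample = pd.DataFrame({"a": range(6), "b": range(6)})
sample.to_csv("test.zip", index=False,
              compression={"method": "zip", "archive_name": "test.csv"})
sample.iloc[:3].to_csv("part1.csv", index=False)
sample.iloc[3:].to_csv("part2.csv", index=False)

# Zipped CSV: pandas opens the archive directly, no manual unzipping.
df = pd.read_csv("test.zip")

# Split files: read each part, then stitch them back into one DataFrame.
df_parts = pd.concat(
    [pd.read_csv(p) for p in ["part1.csv", "part2.csv"]],
    ignore_index=True,
)

print(df.head())  # first 5 rows; in the app, st.dataframe(df.head())
```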

Hi @MariaB

If you'd like to go the Git LFS route of uploading a large file to your GitHub repo, we actually have an FAQ article on this:

Hope this helps!

