Can Streamlit handle big data analysis?

Hi Community,

I am new to streamlit.io. Can Streamlit handle big data analysis? If anybody is using Streamlit at a production level, kindly comment. Also, can we build multi-user login web applications with it, like we can with HTML, CSS, and JavaScript?


I don't have the hard experience with it at scale that you're looking for, but it's so slow and unstable even with small data that I don't see how it could be used with big data at scale, unless you want to pump a ton of money into paying for custom support.

Multi-user production use with logins and enterprise-grade security? You saw the zero-day that just happened this week, right? I don't think you'd want to use it for that unless it's just doing visualization of publicly available data (which is my use case, but I am on the verge of giving up on it even for that).

I think it's mostly good for making little tutorial posts and proof-of-concept examples for those who can't be bothered to learn HTML, CSS, and JS… I sound bitter, right? Don't go by me…


I have been personally using Streamlit in an enterprise setting, processing DataFrames with hundreds of thousands of rows. We use Ag-Grid to display the data, and we use Redis caching for our DataFrames, which are generated via some Jupyter notebooks. Some things to keep in mind (a sketch of the caching setup follows the list):

  1. If possible, avoid doing things like groupby inside Streamlit, as this is rather expensive.
  2. Compress your DataFrames when storing them. In our case we use compressed, pickled DataFrames that we load from Redis, falling back to files only when the data is unavailable in the cache.
  3. We don't use memoization or Streamlit caching, as we ran into problems when our app was deployed in the cloud. We instead rely on Redis caching.
  4. If we need to load DataFrames from files, we use the compressed Parquet format, which seems to make loading and parsing much faster in Streamlit.
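
Here is a minimal sketch of that caching pattern, assuming a local Redis instance, zlib-compressed pickles, and a compressed Parquet file as the fallback source (names and paths are illustrative, not our actual setup):

import pickle
import zlib

import pandas as pd
import redis

# Assumed local Redis instance; host and port are illustrative.
r = redis.Redis(host="localhost", port=6379)

def load_df(key, parquet_path):
    # Try the cache first: values are zlib-compressed, pickled DataFrames.
    blob = r.get(key)
    if blob is not None:
        return pickle.loads(zlib.decompress(blob))
    # Cache miss: fall back to the compressed Parquet file on disk,
    # then repopulate the cache for the next request.
    df = pd.read_parquet(parquet_path)
    r.set(key, zlib.compress(pickle.dumps(df)), ex=3600)  # 1-hour TTL
    return df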

Hi @hack-r ,

You mean to say it's better to go with HTML, CSS, and JS instead of Streamlit? Do you have any knowledge of Databricks? How do I connect Streamlit to get data from Databricks?


Hi @Steven_Atkin,

Thanks for the support. Do you have knowledge of Databricks? I am looking for help connecting Databricks and Streamlit: we are doing computation in Databricks, and those results need to be shown in Streamlit.


@sridharr Correct. Streamlit is just a Python library, so it's the same as connecting Python to Databricks:
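
For example, a minimal sketch using the databricks-sql-connector package (pip install databricks-sql-connector); the hostname, HTTP path, token, and table name below are placeholders you'd replace with your own:

import pandas as pd
import streamlit as st
from databricks import sql  # databricks-sql-connector

with sql.connect(
    server_hostname="<workspace>.cloud.databricks.com",
    http_path="<sql-endpoint-http-path>",
    access_token="<personal-access-token>",
) as connection:
    # Pull only what you need into pandas, then render it in the app.
    df = pd.read_sql("SELECT * FROM my_table LIMIT 1000", connection)

st.dataframe(df)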


@Steven_Atkin Quick question about your use of Ag-Grid: did you figure out how to set the default column widths and show/hide columns using code?

It's really worth $750 per app for you? Why not just use something free?


@hack-r Would love any examples you could share of Streamlit being slow and unstable with small data. Also, can you elaborate on what you mean by slow? There are a few tricks under the hood to help speed up data processing, and we are always looking for ways to improve Streamlit, especially in terms of speed, so any examples you can provide will help!

In terms of Ag-Grid, you can use the Streamlit component for free if you're not using enterprise features. It's also one of our highest priorities this year to upgrade st.dataframe to provide a more native data-interaction experience.
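
For example, here's a minimal sketch of setting default column widths and hiding columns with the community streamlit-aggrid component (the DataFrame and column names are made up for illustration):

import pandas as pd
from st_aggrid import AgGrid, GridOptionsBuilder

df = pd.DataFrame({"name": ["a", "b"], "price": [1.0, 2.0], "internal_id": [1, 2]})

# Build grid options from the DataFrame, then override per-column settings.
gb = GridOptionsBuilder.from_dataframe(df)
gb.configure_column("price", width=120)        # default column width in pixels
gb.configure_column("internal_id", hide=True)  # hidden by default
AgGrid(df, gridOptions=gb.build())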

@sridharr In terms of production, thousands of companies run Streamlit apps in production and host the apps themselves. Streamlit doesn't offer an enterprise deployment option on our Cloud, as Streamlit Cloud is specifically meant to help users get started and share apps with the world, in order to help the broader open-source data community, researchers, students, and hobbyists spread their data work. Many large companies (including over half of the Fortune 50) host Streamlit themselves, doing their own deployments and adding their own logins and other features there. We are currently working with Snowflake to provide an enterprise deployment solution for companies that don't want to do their own deployments.


@hack-r Thanks, I am using the same docs. In this case, I send the SQL query, get the required DataFrame, and show it in Streamlit. But since in Databricks we have a workspace where all the SQL queries are run and the results are available as CSV, I don't know how to fetch that CSV, like a GET API for Databricks (the way we write APIs for CRUD operations in Node.js).


@Amanda_Kelly Thanks for your reply. My company uses Databricks to handle very large datasets (e.g., 1 to 10 million rows, around a 50 GB CSV file). If your team has expertise in Databricks, I would like to discuss further with them to learn about connecting Streamlit to it.


I don't have specific knowledge of Databricks, but maybe someone else in the community has experience they can share?


@Amanda_Kelly Thank you


@Steven_Atkin This goes a bit off track, but can you elaborate on "We don't use memoization or Streamlit caching, as we ran into problems when our app is deployed in the cloud. We instead rely on Redis caching"? We're revisiting caching right now, so I would love to get any feedback you have!


@Amanda_Kelly Regarding speed: thanks for asking. I do have some examples and would love to hear the tips and tricks. Let me know if it's appropriate for this thread or if I should open another question.

On that note, is there a recommended method for measuring page load times? There are some third-party web apps I normally use, but in at least some cases (if not all) I'm not sure they correctly measure the full load time for Streamlit; they may instead measure when Streamlit itself returns a page, before it loads the content.


@sridharr Are you actually trying to load 50 GB into a browser?

I assume that's not the case. So it must be that you just want to transfer or read 50 GB of raw data from Databricks to whatever server you're running Streamlit on, then let your data scientists (or whoever) run some Python scripts against it and display some of the results in Streamlit?

The details of how you're processing the data may matter here. If you just need an example of how to get 50 GB (or whatever size) of data from Databricks to a server via Python, I can find that for you, but I assume you probably already have it. So maybe it's just that you were thinking of this as a Streamlit operation when really the data pull doesn't need to be a Streamlit step.

Are you trying to read this data dynamically, with it updating all the time and the Streamlit app always hitting the most recent version of the data? Maybe you can give an example. If the real use case is too secretive, you can explain by analogy, like: "The 50 GB is a large dataset of transactions; the Streamlit app will use menus to let users graphically create database queries, which will then be summarized in a word cloud and a table. The data is refreshed daily. The upstream database type is Snowflake."

@hack-r You are right. I am working with 50 GB of data on Databricks, and the computed results are in the Databricks workspace. I want to connect Streamlit to those Databricks results; I don't want to read the entire table. Is there any option, or can a function be written, to fetch the results from Databricks?

Currently I am using this function:

import pandas as pd
from databricks import sql  # databricks-sql-connector package

def run_query(query):
    # Connect to the Databricks SQL endpoint (credentials redacted).
    with sql.connect(server_hostname="dbc-xxxxx-xxxx.cloud.databricks.com",
                     http_path="sql/protocolv1/o/123456789/0614-xxxxx-yyyyyy",
                     access_token="dapia12cdfertfg45467yghu9hjghdfher3") as connection:
        # Run the query and return the result as a pandas DataFrame.
        df = pd.read_sql(query, connection)
        return df

Here is the problem: I am sending the SQL query and getting the result from Databricks as a table, but I am only getting the existing computed results. Is there any workaround?

I would suggest that you don't attempt to use the caching feature that is available in Streamlit and instead rely on external sources of caching. The built-in Streamlit caching will not work with 50 GB of data.


@Steven_Atkin Thanks for the reply. As per your suggestion, I will check out Redis caching.
