Using hundreds of model weight files

Hi, I have just finished reading some of the main documentation and am fairly new to making apps.

I am trying to distribute an app that needs to load scikit-learn model weight files and use them for inference. The problem is that there are thousands of them, and they're quite heavy (over 3 GB).

My questions are:

  1. Is there a way to keep the weight files on my local server and return only the inference results? A user would upload a CSV file to my app, and all I need to return is the inference results.
    The computation takes quite a while.

  2. I need to save the CSV file that gets uploaded to the app. Does Streamlit support this?

Any advice would be of real help! Thanks!

Hi @Jaeho_Kim, welcome to the Streamlit community!

The first thing I would question is why there are thousands of files. Is this a single model or multiple models?

In general, yes. You can load the model files in the Streamlit app, have the user upload their data, and return the results to them; see the sketch below.
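Here's a rough sketch of that flow, assuming a single joblib-serialized scikit-learn model (the path and the CSV column layout are placeholders; with thousands of files you'd want to load only the models a given request actually needs). It uses `st.cache_resource` from recent Streamlit versions so the model isn't reloaded on every rerun:

```python
import joblib
import pandas as pd
import streamlit as st

MODEL_PATH = "models/my_model.joblib"  # hypothetical path, for illustration only

@st.cache_resource  # load once per server process, reuse across reruns
def load_model(path: str):
    return joblib.load(path)

model = load_model(MODEL_PATH)

uploaded = st.file_uploader("Upload a CSV file", type="csv")
if uploaded is not None:
    # Assumes the CSV columns match the features the model was trained on
    df = pd.read_csv(uploaded)
    predictions = model.predict(df)
    st.dataframe(pd.DataFrame({"prediction": predictions}))
```

Because the weights live in the app's memory on the server, the user only ever sees the results, never the model files themselves.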

Yes, but this is more a function of Python than of Streamlit. When you upload a file, it goes into the equivalent of a BytesIO buffer. How exactly you save it will depend on your setup, but the sketch below shows the general idea.
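A minimal sketch, assuming you just want a copy written to disk on the server (saving under the uploaded file's original name here is only for illustration; pick whatever path makes sense for you):

```python
import streamlit as st

uploaded = st.file_uploader("Upload a CSV file", type="csv")
if uploaded is not None:
    # The UploadedFile object behaves like a BytesIO buffer, so its raw
    # bytes can be written straight to a file on the server.
    with open(uploaded.name, "wb") as f:
        f.write(uploaded.getbuffer())
    st.success(f"Saved {uploaded.name}")
```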

Best,
Randy
