I have created a data science app using the awesome Streamlit and PyCaret packages.
I was unable to deploy it on Heroku due to their slug size limit.
I am eagerly waiting for the Streamlit sharing invitation so that I can deploy it on the web.
Hi @ashishgopal1414, when did you sign up?
Hi @randyzwitch,
For Streamlit sharing, I signed up on 19 Oct, and 5 people have also signed up via my referral link to reduce the waiting time.
As a workaround for the slug limitation, you could store whatever data is too big on an object storage service like S3 or B2 and then download it to the local file system when the dyno spins up. It should be as simple as: check whether the data exists locally, and if it doesn't, download it.
You could also separate the data out into multiple objects that are accessed only when needed to speed up downloading.
This is of course assuming you aren’t trying to use some enormous single data object.
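A minimal sketch of that check-then-download pattern using boto3 and S3 (the bucket, key, and local path names here are placeholders, not anything from the thread):

```python
import os
import boto3  # assumes boto3 is installed and AWS credentials are configured

# Hypothetical names for illustration only
BUCKET = "my-app-data"
KEY = "data/training.parquet"
LOCAL_PATH = "/tmp/training.parquet"

def ensure_data(bucket: str = BUCKET, key: str = KEY, local_path: str = LOCAL_PATH) -> str:
    """Download the object from S3 only if it isn't already on local disk."""
    if not os.path.exists(local_path):
        s3 = boto3.client("s3")
        s3.download_file(bucket, key, local_path)
    return local_path
```

You would call `ensure_data()` once at app startup, so repeated dyno restarts only pay the download cost when the file is actually missing.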
Hi @theimposingdwarf,
Thanks for the information, but actually there is no heavy data involved; it's the packages themselves that are huge. PyCaret v2 and Streamlit, when listed in the requirements.txt file, push the build past the slug size limit.
PyCaret v1 and Streamlit do work, but that version had a few bugs, which were resolved in v2.
Have you tried clearing the build cache for the application's git repository? The author of the article in the post below mentioned it reduced their slug size by 100 MB.
Yes, I tried all of these, but to no avail.
Even for a very simple app whose only requirements are the two packages PyCaret v2 and Streamlit, the slug exceeds 500 MB.
I have the exact same experience using PyCaret 2.2.3 and Streamlit 0.73.1. I built a super simple app to serve a PyCaret CatBoost model, added a .slugignore, and pared down all the dependencies in requirements.txt based on the PyCaret example, and I still get a size of 574 MB, which exceeds the free dyno limit. I opened an issue on the PyCaret repo here:
https://github.com/pycaret/deployment-heroku/issues/3.
From looking at the Heroku build log, I was amazed at the footprint and how many libraries the combination of PyCaret and Streamlit depends on. I cleared the build cache a few times, but no joy.