I have an app that scrapes a certain website in batches.
I input a start and end number, and the app iterates through that range, fetching data and updating my spreadsheet.
It runs slowly but reliably for up to a few hundred pages; however, it stops mid-process when fetching more than a thousand pages.
It doesn't seem to be a memory issue, because if it were, the app would crash and I would have to reboot it.
In my case, no error was displayed and I could run it a second time; the scraping process just stops.
I couldn't find anything officially stating that there is a runtime limit on Streamlit Cloud, but judging from my app's behavior, I suspect there might be one.
I'd like to know if anyone is having a similar issue.
Or it could simply be my code's fault, so I am sharing my app and code below.
Deployed App: https://quusais-test.streamlit.app/CrawlingAI_for_Moyo
Github Link: QUUSai/pages/04_CrawlingAI_for_Moyo.py at master · M8chaa/QUUSai · GitHub
Hi @Minsoo_Cha. Streamlit Cloud offers only 1 GB of memory. If your app reaches that limit, the server stops and you will see a message saying the Streamlit server has failed.
Hello @Guna_Sekhar_Venkata. Thanks for your reply, but my app didn't show the message that usually appears when a memory outage occurs. Once I input 3,000 URLs and left it overnight: it fetched around 2,000 URLs and then stopped, yet the app could still process additional input URLs without a reboot. That is why I suspect there might be an additional resource limit, such as a runtime or CPU quota.
If you have any quota-related issues, you can email the Snowflake community. If they approve your request, they will provide resources for your application.
I'm also wondering: is your application storing, downloading, or scraping anything?
Oh, I should reach out to the Snowflake community as well.
My app does scrape websites via Selenium.
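For what it's worth, long-lived Selenium sessions can accumulate memory over thousands of pages, and one common mitigation is recycling the browser every N pages. This is only a sketch under that assumption: `make_driver` is a hypothetical factory (in a real app it might be `lambda: webdriver.Chrome(options)`), written generically here so it runs without a browser installed:

```python
def fetch_all(urls, make_driver, batch_size=100):
    """Fetch each URL, recycling the driver every `batch_size` pages
    so one long-lived browser process cannot grow without bound.

    `make_driver` is a hypothetical factory returning an object with
    `get(url)` and `quit()` methods, like a Selenium WebDriver.
    """
    results = []
    driver = make_driver()
    try:
        for i, url in enumerate(urls):
            if i and i % batch_size == 0:
                driver.quit()           # release the old browser process
                driver = make_driver()  # start a fresh one
            # With Selenium, get() returns None and you would read
            # driver.page_source afterwards; this sketch just collects
            # whatever the stand-in driver returns.
            results.append(driver.get(url))
    finally:
        driver.quit()  # always release the last browser
    return results
```

If the silent stop turns out to be browser memory rather than a platform quota, bounding the driver's lifetime like this is a cheap thing to try.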
Ok. Then it's better to email the Snowflake community and ask them to provide resources for your application.