I am writing an app that takes a sentence, runs named entity recognition on it with a model, and returns the output. It works perfectly as a plain Python app, but when I run the Streamlit main file, it hangs at "Stopping...". Can someone help me out?
This does look like odd behavior, but it's going to be hard for us to debug without a bit of source code. If possible, could you provide us with a smaller reproducible code snippet showing the problem, or at the very least point to the line of code where it seems to stop in your source on GitHub, plus a link to the project?
Hey @nemesis, did you happen to find a solution to this one? I'm having similar issues (also using simpletransformers, oddly enough). The only difference in my case is that I cached the function that loads the model, so it won't reload each time we want to make a prediction.
EDIT2: GOT IT
After reading the source code of the predict function, I decided to check whether multiprocessing was being used and what would happen if I omitted it. And it worked:
model = T5Model("t5", "t5-small", use_cuda=False)  # model_type, then model_name
print(model.args.use_multiprocessed_decoding)
model.args.use_multiprocessed_decoding = False
This made a lot of sense, because there were new worker processes in the "ps aux | grep streamlit" output.
And now, the previous message:
Having the same problem right now. I've also cached the model (simpletransformers.t5), and it stops after decoding the outputs. Two different PCs, Ubuntu on both.
EDIT: I found one of the problems on my side with a try/except block:
Linux has an open-file limit (ulimit, 1024 by default) and mine was set low. After raising it I still have the "Stopping..." issue, but maybe you guys will have more luck.
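As a quick sanity check, you can read (and try to raise) the per-process open-file limit from inside Python using the standard-library resource module instead of editing shell ulimits; this is a stdlib sketch of that check, not anything specific to simpletransformers or Streamlit:

```python
import resource

# Query the current soft/hard limits on open file descriptors
# (the same number that "ulimit -n" reports).
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"open-file limit: soft={soft}, hard={hard}")

# Raise the soft limit if it is low; 1024 is the common default
# that caused trouble above. A process may raise its soft limit
# up to the hard limit without extra privileges.
if soft != resource.RLIM_INFINITY and soft < 4096:
    new_soft = 4096 if hard == resource.RLIM_INFINITY else min(4096, hard)
    resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, hard))
```

Note that resource is Unix-only, and the change lasts only for the current process and its children, so it needs to run before the model spawns any workers.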
I don't know if this issue has been solved, but I encountered the same problem in my code and fixed it by changing the model args of my BERT model to:
model.args.use_multiprocessing_for_evaluation=False
I am guessing that when predict is launched, Streamlit doesn't like the multiprocessing part.
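That guess fits how Streamlit works: it re-executes the script from the top on every interaction, and worker processes spawned mid-script can end up hanging the run. A minimal stdlib sketch of the workaround pattern the posts above describe, using plain multiprocessing (no simpletransformers), with a flag that routes around the process pool entirely:

```python
import multiprocessing as mp

def decode(token_id):
    # Stand-in for the per-item decoding work that a library
    # might farm out to worker processes.
    return token_id * 2

def decode_batch(ids, use_multiprocessing=True):
    # Mirrors flags like use_multiprocessed_decoding: when False,
    # do the work sequentially in the current process, which is
    # the safe choice inside a Streamlit script.
    if not use_multiprocessing:
        return [decode(i) for i in ids]
    with mp.Pool(processes=2) as pool:
        return pool.map(decode, ids)

if __name__ == "__main__":
    # In a Streamlit app you would call this with use_multiprocessing=False.
    print(decode_batch([1, 2, 3], use_multiprocessing=False))  # → [2, 4, 6]
```

The names decode and decode_batch are hypothetical; the point is only that the sequential path avoids creating child processes, which is what the model-args fixes in this thread accomplish.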
Hi. I'm having the same problem. I built the app on my M1 Mac, where multiprocessing worked perfectly. Now I have deployed it to an AWS EC2 instance (only 2 cores), but I expected it to at least work. "Stopping..." is all that happens in the backend.