I am writing an app that takes a sentence, performs named entity recognition on it with a model, and returns the output. It works perfectly as a plain Python app, but when I run the Streamlit main file it just stops. Can someone help me out?
This does look like odd behavior, but it's going to be hard for us to debug without a bit of source code. If possible, could you provide us with a smaller reproducible code snippet showing the problem, or at the very least point to the line of code where it seems to stop, along with a link to the project on GitHub?
Hey @nemesis, did you happen to find a solution to this one? I'm having similar issues (also using simpletransformers, oddly enough). The only difference in my case is that I cached the function that loads the model, so it won't reload each time we make a prediction.
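For anyone following along, the caching idea can be sketched without Streamlit: the point is that the expensive model load runs once and later calls reuse the same object. In a real Streamlit app the loader would be wrapped in Streamlit's cache decorator instead; the names `load_model` and `FakeModel` below are illustrative stand-ins, not the actual app code.

```python
from functools import lru_cache

class FakeModel:
    """Stand-in for an expensive-to-load model (e.g. a simpletransformers model)."""
    load_count = 0  # how many times the "model" has been constructed

    def __init__(self):
        FakeModel.load_count += 1

    def predict(self, sentence):
        # Toy "NER": tag capitalized words as entities.
        return [w for w in sentence.split() if w[0].isupper()]

@lru_cache(maxsize=1)  # in Streamlit, this role is played by the cache decorator
def load_model():
    return FakeModel()

# Two predictions, but the model is only constructed once.
print(load_model().predict("Alice met Bob in Paris"))
print(load_model().predict("Streamlit reruns the script top to bottom"))
print("loads:", FakeModel.load_count)  # → 1
```

The benefit under Streamlit is the same as with `lru_cache` here: every rerun of the script hits the cache instead of reloading the weights.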
EDIT2: GOT IT
After reading the source code of the predict function, I decided to check whether multiprocessing was being used and what would happen if I omitted it. And it worked:
model = T5Model('t5-small', use_cuda=False)
print(model.args.use_multiprocessed_decoding)
model.args.use_multiprocessed_decoding = False
This made a lot of sense, because there were new processes in the `ps aux | grep streamlit` output.
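The conflict can be reproduced in miniature without Streamlit or simpletransformers. The sketch below (all names hypothetical) mirrors what such a decode step roughly does: with the flag on, the work fans out to a `multiprocessing.Pool`, spawning the extra child processes that show up in `ps aux`; with the flag off, it falls back to a plain sequential loop, which is the path that plays nicely with Streamlit's script runner.

```python
from multiprocessing import Pool

def decode_one(token_id):
    # Stand-in for decoding a single model output.
    return f"token-{token_id}"

def decode_outputs(token_ids, use_multiprocessing):
    """Toggle between pooled and sequential decoding (hypothetical helper)."""
    if use_multiprocessing:
        # This branch forks worker processes - the ones visible in
        # `ps aux | grep streamlit` - and is what can hang under Streamlit.
        with Pool(2) as pool:
            return pool.map(decode_one, token_ids)
    # Sequential fallback: same result, no child processes.
    return [decode_one(t) for t in token_ids]

if __name__ == "__main__":
    print(decode_outputs([1, 2, 3], use_multiprocessing=False))
    # → ['token-1', 'token-2', 'token-3']
```

Both branches return the same result; disabling the flag only trades a bit of decoding speed for not forking under Streamlit.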
And now the previous message:
Having the same problem right now. I've also used the cache for caching the model (simpletransformers.t5), and it stops after decoding outputs. Two different PCs, Ubuntu on both.
EDIT: I did find one of the problems on my side with a try/catch code block:
Linux has an open-file limit (ulimit, set to 1024 by default) and mine was low. After raising it I still have the 'Stopping…' issue, but maybe you guys will have more luck.
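If you want to check or raise the open-file limit from inside the Python process itself (rather than with the shell's `ulimit -n`), the stdlib `resource` module exposes it. This is just a generic sketch of that check, not code from the thread's app:

```python
import resource

# Query the current open-file limits for this process.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"open-file limit: soft={soft}, hard={hard}")

# Raise the soft limit to the hard limit (shell equivalent: `ulimit -n <hard>`).
# An unprivileged process may raise its soft limit up to hard, but not beyond.
if hard != resource.RLIM_INFINITY:
    resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))

new_soft, _ = resource.getrlimit(resource.RLIMIT_NOFILE)
print("soft limit now:", new_soft)
```

Note that `resource` is Unix-only, which matches the Ubuntu setups in this thread.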
I don't know if this issue has been solved, but I encountered the same problem in my code and fixed it by changing the model args of my BERT model to:
model.args.use_multiprocessing_for_evaluation = False
I am guessing that when predict is launched, Streamlit doesn't like the multiprocessing part.
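Putting the fixes from this thread together: simpletransformers exposes several multiprocessing switches on `model.args`, and turning them all off before calling `predict` is a reasonable first thing to try under Streamlit. This is a config-style sketch, not a verified recipe; adjust the model class and arguments to your setup, and note that `use_multiprocessed_decoding` only applies to seq-to-seq models like T5:

```python
from simpletransformers.t5 import T5Model

model = T5Model("t5", "t5-small", use_cuda=False)

# Disable the multiprocessing paths before predicting under Streamlit.
model.args.use_multiprocessing = False                 # data preparation
model.args.use_multiprocessing_for_evaluation = False  # eval-time data preparation
model.args.use_multiprocessed_decoding = False         # decoding (T5/seq2seq only)
```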
Hi. I'm having the same problem. I built the app on my M1 Mac, where multiprocessing worked perfectly. Now I have deployed it to an AWS EC2 instance (only 2 cores), but I expected it to at least work. The 'Stopping…' message is everything that happens in the backend.