Streamlit is stopping


I am writing an app that takes a sentence, performs named entity recognition on it with a model, and returns the output. It works perfectly as a plain Python app, but when I run the Streamlit main file it just stops. Can someone help me out?

I can provide a link to the source on GitHub.

Hello @nemesis, welcome to the community :slight_smile:

This does look like odd behavior, but it's going to be hard for us to debug without a bit of source code. If possible, could you provide a smaller reproducible code snippet with the problem, or at the very least point to the line of code where it seems to stop in your source code on GitHub, plus a link to the project?

Fanilo

Here is a link to the main file on my GitHub:

Unfortunately the model file is too big to upload to GitHub; I can attach a Drive link if required.

Hey @nemesis, did you happen to find a solution to this one? I'm having similar issues (also using simpletransformers, oddly enough). The only difference in my case was that I cached the function that loads the model, so it won't reload every time we want to make a prediction.

EDIT2: GOT IT :slight_smile:
After reading the source code of the predict function, I decided to check whether multiprocessing was being used and what would happen if I disabled it. And it worked:

    from simpletransformers.t5 import T5Model

    model = T5Model("t5-small", use_cuda=False)
    print(model.args.use_multiprocessed_decoding)
    model.args.use_multiprocessed_decoding = False

This made a lot of sense, because new processes were showing up in the "ps aux | grep streamlit" list.
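Putting the whole workaround together, here is a minimal sketch of the app. The model name, prompt prefix, and widget labels are just placeholders; I'm also using the old st.cache API here (newer Streamlit versions have st.cache_resource for this):

    import streamlit as st
    from simpletransformers.t5 import T5Model

    # Cache the expensive model load so it runs once, not on every rerun.
    # allow_output_mutation=True stops Streamlit from trying to hash the model.
    @st.cache(allow_output_mutation=True)
    def load_model():
        model = T5Model("t5-small", use_cuda=False)
        # Disable multiprocessed decoding so predict() doesn't fork
        # worker processes under Streamlit's script runner.
        model.args.use_multiprocessed_decoding = False
        return model

    model = load_model()

    sentence = st.text_input("Sentence")
    if sentence:
        # T5 expects a task prefix on its inputs; "summarize:" is just an example.
        st.write(model.predict([f"summarize: {sentence}"]))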

And now the previous message :wink:

Having the same problem right now. I've also used the cache for the model (simpletransformers.t5), and it stops after decoding outputs. Two different PCs, Ubuntu on both.

EDIT: I did find one of the problems on my side with a try/except block:

    try:
        # prediction code goes here
        ...
    except Exception as e:
        # Surface the real error instead of a silent "Stopping..."
        print(getattr(e, 'message', repr(e)))

Linux has an open-file limit (ulimit, 1024 by default) and mine was set low. After raising it I still have the "Stopping…" issue, but maybe you guys will have more luck.
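If anyone wants to check this from inside Python instead of the shell, here is a minimal sketch (Linux-only; the 4096 target is arbitrary, and you can't raise the soft limit above the hard limit without root):

    import resource

    # Current open-file limits for this process: (soft, hard).
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    print(f"soft={soft}, hard={hard}")

    # Raise the soft limit, capped at the hard limit.
    new_soft = 4096 if hard == resource.RLIM_INFINITY else min(4096, hard)
    resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, hard))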

Hey @nemesis!!

I am also encountering the same issue. Did you find any fix yet? It would be great if you could share it.

Thanks


Hello,

I don't know if this issue has been solved, but I encountered the same problem in my code and fixed it by changing the model args of my BERT model to:

    model.args.use_multiprocessing_for_evaluation = False

I am guessing that when predict is launched, Streamlit doesn't like the multiprocessing part :slight_smile:
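For anyone else hitting this, a minimal sketch of what I mean, assuming a simpletransformers ClassificationModel (the checkpoint name is a placeholder; adjust for your own model class):

    from simpletransformers.classification import ClassificationArgs, ClassificationModel

    # Turn off multiprocessing so predict() doesn't spawn worker
    # processes under Streamlit.
    args = ClassificationArgs(
        use_multiprocessing=False,                 # data prep for train/predict
        use_multiprocessing_for_evaluation=False,  # the flag that fixed it for me
    )

    # "bert-base-uncased" is a placeholder; use your own checkpoint.
    model = ClassificationModel("bert", "bert-base-uncased", args=args, use_cuda=False)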

Hi. I'm having the same problem. I built the app on my M1 Mac, where multiprocessing worked perfectly. Now I have deployed it to an AWS EC2 instance (only 2 cores), but I expected it to at least work. "Stopping…" is everything that happens in the backend.