Streamlit is stopping

I am writing an app that takes a sentence, performs named entity recognition on it with a model, and returns the output. It works perfectly as a plain Python app, but when I run the Streamlit main file, it just stops. Can someone help me out?

I can provide a link to the source on GitHub.

Hello @nemesis, welcome to the community :slight_smile:

This does look like odd behavior, but it’s going to be hard for us to debug without a bit of source code. If possible, could you provide a smaller reproducible snippet that shows the problem, or at the very least a link to the project on GitHub and a pointer to the line where it seems to stop?

Fanilo

Here is a link to the main file on my GitHub:

Unfortunately the model file is too big to upload to GitHub; I can attach a Drive link if required.

Hey @nemesis, did you happen to find a solution to this one? I’m having similar issues (also using simpletransformers, oddly enough). The only difference in my case is that I cached the function that loads the model, so it isn’t reloaded every time we make a prediction.
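For reference, the caching pattern looks roughly like this (a sketch using the legacy @st.cache with allow_output_mutation=True so Streamlit doesn’t try to hash the model object; substitute your own model):

import streamlit as st
from simpletransformers.t5 import T5Model

# Build the model once and reuse it across Streamlit reruns instead of
# reloading it for every prediction.
@st.cache(allow_output_mutation=True)
def load_model():
    return T5Model("t5-small", use_cuda=False)

model = load_model()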

EDIT2: GOT IT :slight_smile:
After reading the source code of the predict function, I decided to check whether multiprocessing was being used and what would happen if I omitted it. And it worked:

model = T5Model("t5-small", use_cuda=False)
print(model.args.use_multiprocessed_decoding)  # check whether it is enabled
model.args.use_multiprocessed_decoding = False

This made a lot of sense, because there were extra worker processes in the "ps aux | grep streamlit" output.

And now the previous message :wink:

Having the same problem right now. I’ve also used the cache for the model (simpletransformers.t5), and it stops after decoding the outputs. Two different PCs, Ubuntu on both.

EDIT: I did find one of the problems on my side with a try/except block:

try:
    ...  # the code that was failing silently
except Exception as e:
    # Print the actual error instead of letting it disappear;
    # either repr() or str() works as the fallback.
    print(getattr(e, 'message', repr(e)))
    print(getattr(e, 'message', str(e)))

Linux has an open-file limit (ulimit -n, 1024 by default) and I had it set low. After raising it I still get the "Stopping…" issue, but maybe you guys will have more luck.
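In case it helps, you can also check and raise the limit from inside the Python process (a sketch; the resource module is Unix-only, so this won’t run on Windows):

import resource

# Current soft/hard limits on open file descriptors (what ulimit -n shows).
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(soft, hard)

# Raise the soft limit as far as the hard limit allows for this process.
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))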

Hey @nemesis!!

I am also encountering the same issue. Did you find a fix yet? It would be great if you could share it.

Thanks

Hello,

I don’t know if this issue has been solved, but I encountered the same problem in my code and fixed it by changing the model args of my BERT model to:
model.args.use_multiprocessing_for_evaluation = False
I am guessing that when predict is launched, Streamlit doesn’t like the multiprocessing part :slight_smile:
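In context, the change looks roughly like this (a sketch; the model name and input are placeholders, and use_cuda depends on your machine):

from simpletransformers.classification import ClassificationModel

model = ClassificationModel("bert", "bert-base-cased", use_cuda=False)
# Turn off the worker processes that prediction/evaluation spawn;
# those forked workers seem to be what hangs under Streamlit.
model.args.use_multiprocessing = False
model.args.use_multiprocessing_for_evaluation = False

predictions, raw_outputs = model.predict(["an example sentence"])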

Hi. I’m having the same problem. I built the app on my M1 Mac, where multiprocessing worked perfectly. Now I have deployed it to an AWS EC2 instance (only 2 cores), and I expected it to at least work. "Stopping…" is all that happens in the backend.