Connection timeout while waiting for model to return the result

Hi guys!

My Streamlit app is crashing because my model takes a long time to run (approximately 5-6 minutes), and the long wait causes the connection to time out and the app to shut down.

My main LGB model is a combination of 3 models (BiLSTM, BERT & GPT2).

Here is the code:

```python
sample = title + ' ' + article_text if title is not None else article_text
lgb = LGB()
output_label = lgb.predict(sample)

st.markdown('**Analysis based on:** Artificial intelligence')
st.markdown(f'Predicted label: {output_label}', unsafe_allow_html=True)
```

Any idea what might be causing this, or do you know how I should improve the performance?

I thought of using asyncio for this, but I don't know whether it would actually help resolve the issue.
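To be concrete, this is roughly what I had in mind (just a sketch around my existing `lgb.predict` call, not something I've tested or confirmed works with Streamlit's connection handling):

```python
import asyncio

async def predict_async(sample):
    loop = asyncio.get_running_loop()
    # Run the blocking predict() call in a worker thread so the event loop
    # isn't blocked while the model runs for several minutes.
    # lgb and sample are the objects from my snippet above.
    return await loop.run_in_executor(None, lgb.predict, sample)

output_label = asyncio.run(predict_async(sample))
```

My worry is that the Streamlit script still has to wait for the result before it can render anything, so I'm not sure this actually prevents the timeout. Any guidance would be appreciated.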