Check out this awesome Streamlit app I built
Hey @Uchenna, use @st.cache before the function definition that loads the model. On the first run it loads the model from file and stores it in the cache; on every run after that it reuses the cached model instead of reloading it, which speeds up prediction.
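A minimal sketch of what that could look like, assuming the model is a pickled file at `model.pkl` (hypothetical filename) and using `allow_output_mutation=True`, which is commonly added so Streamlit doesn't try to hash the returned model object:

```python
import pickle
import streamlit as st

# Cached loader: reads the model from disk only on the first run;
# later reruns reuse the cached object instead of reloading it.
@st.cache(allow_output_mutation=True)
def load_model(path="model.pkl"):  # hypothetical path
    with open(path, "rb") as f:
        return pickle.load(f)

model = load_model()
# model.predict(...) now runs without the load cost on reruns
```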
Thank you @Guna_Sekhar_Venkata, this is noted.