How do I use my trained model such that the app doesn't have to train on each input?

Hi,

I got my app to work (well sort of) - shoutout to @snehankekre and @blackary for the help on a previous post.

Good stuff to know

I started learning machine learning and coding in Python roughly 3 months ago, and this is my first Streamlit app. I am trying to create a simple license plate detector that returns the license plate as a string, looks it up in a license plate database, and performs a Google search of the given car brand, model, and year.

The link for the app is here: https://iipeteii-car-license-plate-recognition-clean-app-xom5kv.streamlit.app/

My repo from which the app runs is here: GitHub - IIPeteII/car-license-plate-recognition-clean

The app file from which the app runs is: car-license-plate-recognition-clean/app.py at main · IIPeteII/car-license-plate-recognition-clean · GitHub

The app right now shows an error when no picture is uploaded (that’s okay because the functionality is there once the picture is uploaded).

My issue

However, when a picture is uploaded, the model starts training, and that takes 10-12 minutes every time. This obviously isn't ideal, and I would like to know whether there is a way to load the trained model into the app once so it can simply predict on new pictures coming in.

Let me know of your ideas and solutions to this.


Hi @IIPeteII,

Thanks for sharing your question and congrats on getting your app live! :partying_face:

Streamlit Community Cloud is a free resource for getting started with Streamlit; it lets you deploy 1 private app plus unlimited public apps. It is not designed for ML training jobs, so it is expected behavior that model training takes a long time there.

I would like to know if there is a solution such that the model could be uploaded into the app and then predict on new pictures coming in.

Yes, that is possible. You could upload the model to GitHub (using Git LFS if your model exceeds 25 MB) and, in your app, cache the loaded model with @st.experimental_singleton. The solution you propose would work too (although it might not be the ideal user experience): users can upload a model with st.file_uploader, and the app can use that model for inference.

Ideally you want to train the model offline (not on Community Cloud), save it, perhaps even compress/distill the Keras model, upload it to a service like GitHub, AWS, or GCP, and then download/load the model in your app. Running ML training jobs on Community Cloud would exhaust your app's 1 GB of RAM fairly quickly.

Figuring out how to refactor your code though is your prerogative :balloon: All the best, we can’t wait to see what you build!

re: AttributeError

You can first check that uploaded_file is not None and only then run the rest of your app's functionality.

Hey again @snehankekre

Thanks for the insightful perspective! I figured there’d be a smarter way of doing this.

Seems like uploading the model to my GitHub repo is the way to go - but then how do I save the model? I have it functioning locally and the app works as well. How do I go about exporting my model as a file? Uploading it is not the issue, but I’m curious as to what type of file a machine learning model is.

Thanks for your help - you have been an awesome source of ideas/solutions during this!


Uploading it is not the issue, but I’m curious as to what type of file a machine learning model is.

Ah, I see! Thanks for clarifying! From your code, it looks like you're using Keras/TensorFlow. I'd suggest poring over documentation and tutorials on saving/loading models, plus searching GitHub for examples of loading saved TF/Keras models. It should be just 1-3 lines of extra code.
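For instance, with a tiny stand-in model (in your case this would be the trained license-plate model, and the file name here is just an example):

```python
from tensorflow import keras

# Stand-in model; in practice this is your trained license-plate model
model = keras.Sequential([keras.Input(shape=(4,)), keras.layers.Dense(1)])

# One line to save the whole model (architecture + weights) to a file;
# saving to a plain folder path instead uses TF's SavedModel format
model.save("license_plate_model.h5")

# ...and one line to load it back, e.g. at the top of app.py
restored = keras.models.load_model("license_plate_model.h5")
```

So "what type of file" depends on the format you choose: an HDF5 file like the above, or a SavedModel directory, either of which you can commit to your repo.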

You probably should not rely on the following since it’s from 2 years ago, but here’s an example from one of my repos on saving TF models.


Awesome, thank you so much! I will take a look :smile:

Fantastic! Give it a couple of tries and let us know how you ended up solving the issues and/or if you’ve run into new ones :smiley:


Hey @snehankekre

insert Frankenstein meme: IT’S ALIIIIIIVE

Solution:

Just as you said, it was quite simple.

I saved the model from my .ipynb notebook to a folder named Model with:

model.save(r'Model')

and in the app.py file I loaded it with

model = keras.models.load_model(r'Model')

and now it works at the prototype level I need; when I insert a picture I get the following output:

I cannot thank you enough for the help you have given during my first baby steps toward learning data science and machine learning. It feels marvelous to know that I started three to four months ago and am building this now thanks to your help :pray: :raised_hands: :clap:


:heart_eyes: Glad you got it to work! It's all you! You can pay it forward by dropping by the forums any time to help other community members get unstuck, or by sharing the cool Streamlit apps you've built with those who don't know Streamlit yet :wink:


100% will do! I will for sure build on what I’ve learned with this project and share with others!
