Guide: How to build and deploy a Streamlit app for an ML model

Hi everyone! I figured it would be nice to build a Streamlit app automatically for an ML model. Since you always know the input and output data schema, the app can be generated automatically. We implemented this as part of MLEM, an ML deployment tool, and it takes just a couple of steps:

  1. Install MLEM: pip install mlem
  2. Save your model from your Python script, providing sample data: mlem.api.save(clf, "mymodel", sample=data)
  3. Serve the model from the CLI: $ mlem serve streamlit --model mymodel

This will spin up something like this:

[GIF: the auto-generated Streamlit app]

The app is quite customizable: you can change the title, description, and other things with CLI options for $ mlem serve.

MLEM can build apps for the most popular ML frameworks, such as PyTorch, TensorFlow, Scikit-learn, and LightGBM, or even for plain Python functions (in fact, the GIF above shows a Streamlit app for a Python function). You can save your model from a Python script or a Jupyter notebook; it doesn't matter.

You can also build a Docker image with the Streamlit app and your model baked in:

$ mlem build docker --image.name myimage --model mymodel --server streamlit
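Once the image is built, you can try it locally before deploying. The port mapping below is an assumption (8501 is Streamlit's default port; the generated server may listen elsewhere, so check its startup logs):

```shell
# Run the image built above and expose the app locally.
# myimage matches the --image.name used in the build command;
# the 8501 port is an assumption, not something MLEM guarantees.
docker run --rm -p 8501:8501 myimage
```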

And deploy it to Fly.io, Heroku, Kubernetes, or AWS SageMaker with something like:

$ mlem deploy heroku --model mymodel --app_name mlem-mymodel

The last command does all the steps for you: it takes your model, wraps it in a Streamlit app, builds a Docker image, and releases it as a Heroku application.

Here are a couple of examples:

I’m curious to see whether this will be useful for you, so I’m happy to hear any thoughts or feedback!

