Hi everyone! I figured it would be nice to build a Streamlit app automatically for an ML model. Since you always know the input and output data schema, the app can be generated automatically. We implemented this as part of MLEM, our ML deployment tool, and it takes just a couple of steps:
- Install MLEM:
pip install mlem
- Save your model from your Python script, providing sample data along with it (a complete sketch follows after these steps):
import mlem.api
mlem.api.save(clf, "mymodel", sample=data)
- Serve the model from the CLI:
$ mlem serve streamlit --model mymodel
This will spin up something like this:

(GIF: the auto-generated Streamlit app)

The app is quite customizable: you can change the title, description, and other details with CLI options for $ mlem serve.
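To make the save step concrete, here's a minimal end-to-end sketch. The scikit-learn dataset and estimator are purely illustrative; the sample argument is the same one shown above, and it's what lets MLEM record the input/output schema:

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
import mlem.api

# Train any model as usual (the dataset and estimator here are illustrative)
X, y = load_iris(return_X_y=True, as_frame=True)
clf = RandomForestClassifier()
clf.fit(X, y)

# MLEM inspects `sample` to capture the data schema,
# which is what makes the auto-generated Streamlit app possible
mlem.api.save(clf, "mymodel", sample=X)

After that, $ mlem serve streamlit --model mymodel picks up the saved schema to build the app's input form.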
MLEM can build apps for most popular ML frameworks, such as PyTorch, TensorFlow, scikit-learn, LightGBM, etc., or even for plain Python functions (in fact, the GIF above shows the Streamlit app for a Python function). You can save your model from a Python script or even a Jupyter notebook; it doesn't matter.
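Since plain functions work too, here's a minimal sketch of saving one. The function and its sample input are hypothetical, and I'm assuming the same sample argument applies to functions:

import mlem.api

def to_fahrenheit(celsius: float) -> float:
    # A hypothetical plain Python function; MLEM wraps it just like a model
    return celsius * 9 / 5 + 32

# Assumption: `sample` gives an example input so MLEM can infer the schema
mlem.api.save(to_fahrenheit, "to_fahrenheit", sample=21.0)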
You can also build a Docker image with the Streamlit app and your model baked in:
$ mlem build docker --image.name myimage --model mymodel --server streamlit
And deploy it to Fly.io, Heroku, Kubernetes, or AWS SageMaker with something like
$ mlem deploy heroku --model mymodel --app_name mlem-mymodel
The last command does all the steps: it takes your model, wraps it in a Streamlit app, builds a Docker image, and releases it as a Heroku application.
Here's a couple of examples:
- An NLP model using Hugging Face, trained in a Jupyter notebook and deployed to Fly.io: Deploying ML models straight from Jupyter Notebooks - DEV Community
- A PyTorch computer vision model, trained in a Python script and deployed to Fly.io: Deploy Computer Vision Models Faster and Easier
- Tabular data with scikit-learn, building a Docker image and deploying it to Heroku and Kubernetes: Get Started. That tutorial builds a REST API (FastAPI) app; to get a Streamlit app instead, just use mlem serve streamlit, mlem build --server streamlit, and mlem deploy --server streamlit.
I'm curious to see whether this will be useful for you, so I'm happy to hear any thoughts or feedback!