So I trained some ML models and deployed them on Streamlit. The user can input new data (text) and the classification models return their predictions (Yes/No). Since I have multiple models, I simply iterate over them in a for loop and print the decisions one after another.
Now I was thinking of using this app to make the annotation life of my colleagues easier by putting noisy labels on new documents. So the user uploads a document or inputs text, the models return their predictions, and then the user can indicate whether each prediction is correct or not.
The app then stores the document/text as a datapoint and all the user feedback as labels. This way someone could generate new gold-standard training data, I could re-train my models, and so on. You get it.
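To make the idea concrete, here is a minimal sketch of what I have in mind. The model names, the `feedback.csv` path, and the assumption that models expose a scikit-learn-style `predict()` are all placeholders, not a working implementation:

```python
import csv
import os

FEEDBACK_FILE = "feedback.csv"  # placeholder path for the collected labels


def record_feedback(path, text, model_name, prediction, is_correct):
    """Append one (document, model, prediction, user verdict) row to a CSV,
    writing a header line the first time the file is created."""
    is_new = not os.path.exists(path)
    with open(path, "a", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(["text", "model", "prediction", "user_says_correct"])
        writer.writerow([text, model_name, prediction, is_correct])


def render_app(models, feedback_path=FEEDBACK_FILE):
    """Streamlit UI sketch: show each model's prediction and let the user
    mark it as correct or wrong. `models` maps name -> fitted classifier."""
    import streamlit as st  # imported here so record_feedback stays importable

    text = st.text_area("Paste the document text here")
    if text:
        for name, model in models.items():
            prediction = model.predict([text])[0]  # assumes sklearn-style API
            st.write(f"{name}: {prediction}")
            verdict = st.radio(
                f"Is {name}'s prediction correct?",
                ["correct", "wrong"],
                key=f"verdict_{name}",
            )
            if st.button("Save feedback", key=f"save_{name}"):
                record_feedback(
                    feedback_path, text, name, prediction, verdict == "correct"
                )
                st.success("Feedback stored")


# In a real app: call render_app({"model_a": my_model, ...}) at module level
# and launch with `streamlit run app.py`.
```

A flat CSV is just the simplest sink; the same `record_feedback` hook could write to SQLite or a proper annotation backend instead.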
Any hints on how to build this in Streamlit? Pointers to other tools are welcome too!