I also have a mini side project similar to Google’s Quick, Draw!, where I interact with a canvas to draw doodles, export the result as an array, send it through a TensorFlow.js model, and recognize the drawing (you can test here).
It would be much faster if, instead of maintaining a JS project next to my Python TF pipeline, I could just use Streamlit to load the TF model and draw on a canvas that returns numpy arrays. I think that if you could load a canvas with your image as the background, both of our use cases would be covered.
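To show what I mean by "a canvas which returns numpy arrays": once the drawing comes back as a grayscale array, the only glue code needed before the model is a small preprocessing step. Here's a rough sketch, assuming a hypothetical 280×280 uint8 canvas export and a Quick, Draw!-style 28×28 model input (the `preprocess_drawing` helper and the sizes are just illustrative, not any real API):

```python
import numpy as np

def preprocess_drawing(canvas: np.ndarray, out_size: int = 28) -> np.ndarray:
    """Downsample a square grayscale drawing to the model's input size
    by block averaging, then scale pixel values to [0, 1]."""
    side = canvas.shape[0]
    assert side % out_size == 0, "canvas side must be a multiple of out_size"
    block = side // out_size
    # Split the image into (block x block) tiles and average each tile
    small = canvas.reshape(out_size, block, out_size, block).mean(axis=(1, 3))
    return (small / 255.0).astype(np.float32)

# Fake drawing standing in for a canvas export: a white square "doodle"
drawing = np.zeros((280, 280), dtype=np.uint8)
drawing[100:180, 100:180] = 255

x = preprocess_drawing(drawing)
print(x.shape)   # (28, 28)
print(x.max())   # 1.0
```

The resulting array could then go straight into `model.predict(x[None, ..., None])` or similar, all in the same Python process.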
I haven’t tested it yet, but have you looked at ipycanvas? Do the API and examples look good to you? Maybe we could have a similar one:
```python
canvas = st.canvas(size=(200, 200))

bg = load_image('test.png')
canvas.draw_image(bg, 50, 50)
canvas.fill_rect(0, 0, 50, 50)

def handle_mouse_down(x, y):
    # Do something else
    pass

canvas.on_mouse_down(handle_mouse_down)
```
And then eventually it becomes its own input widget and we can interact with the drawn polygons?
Now I see there’s already an issue on this, and I have to agree that it would be pretty hard to maintain a new JS dependency for only one Streamlit method (there is no Canvas component in the frontend library Streamlit uses), unless more people need this for their DL pipelines. So the foreseen way would be to wait for a plugin architecture, publish our own plugin, and then merge it into the project if more people see it as a necessary API.