Access Webcam Video from a hosted Streamlit application

Hello,
I am running Streamlit to process webcam video. I hosted this application on the web on a server (Heroku). The problem is that I couldn’t get the application to work because it cannot get access to the webcam. (I use OpenCV to access the webcam video on my local machine: cap = cv.VideoCapture(0).)

Can any one of you help me with that?


Hi @amineHY, and welcome to Streamlit!

Streamlit doesn’t currently have any browser-based camera support, so this isn’t possible right now. It seems like a really interesting use-case that we’d potentially like to support - but it also presents some real architectural challenges.

In particular, it’s hard to imagine how this could be done in a way that’s easily compatible with Streamlit’s execution model, which involves rerunning the app script from top to bottom each time an input changes. (We probably wouldn’t want to re-run an entire Streamlit app 30 times per second for each attached video stream.)

I’ve opened a feature request here, so that we can track this internally. In the meantime, can you describe your use case so that we have a better sense of how this might be implemented? (What sort of processing are you doing on the video frames? Is the processing done in real-time, or batched? Do you use every frame from a video stream, or select samples at some frequency?)

Thanks!


Hi @tim,

Thank you for the reply. At the moment I am working on a project to perform computer vision (in general). I successfully built the app, and it works great locally at processing a video from a webcam, URL, or file. After hosting the app on a web server, the app works fine when using a video file from a URL or a file copied onto the server for testing, but I cannot get the video from the webcam to run on the server. I am not an expert, but I’ve read that I need to create a server/client communication protocol, using Flask/gunicorn for instance. At the moment Streamlit does not support that, and this might not be a feature request anyway.
You made a good point when you mentioned re-running the Streamlit app 30 times per second; that shows my lack of experience in streaming video to the browser, although I am not sure about that.

A typical use case is that a user opens the Streamlit application’s web page and chooses the source of the video, for instance the webcam; then the user can choose a computer vision model (say, object detection or classification) and see the result in real time. Another use case is to apply the same processing to video that comes from an IP camera instead of the webcam. These are the kinds of use cases I can think of when I want to visualize the result in real time.

I don’t know whether Streamlit is meant to perform this kind of task, or whether it is better to use a separate solution.

Thank you again @tim for creating the FR.

A workaround solution to this problem is welcome.

Hi @amineHY,

Regarding Flask, there is another topic that seems to be related to this: Streamlit restful app

Please let us know if this helps you.

Is there still no way to open the camera using Streamlit?

Not currently.


We’re working on a solution that will allow the webcam to be used in Streamlit Components (this isn’t currently possible for sandboxing reasons). There’s no specific date on this, but it should be happening soon!


Hello,

I created streamlit-webrtc, which might be useful in such use cases.
Here is the topic about it: New Component: streamlit-webrtc, a new way to deal with real-time media streams

With this component, I confirmed it was possible to host a Streamlit app with computer vision models on a remote server (Heroku, in my sample case) and use it with a local webcam via web browser in real time, even on a smartphone.

I posted a working example in the topic linked above, including object detection with MobileNet SSD, simple computer vision processing with OpenCV, and so on.
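For reference, a minimal sketch of the basic usage (no processing at all; see the linked topic for the full examples) looks roughly like this:

import streamlit as st
from streamlit_webrtc import webrtc_streamer

st.title("Webcam demo")
# Negotiates a WebRTC connection with the browser and displays the webcam stream.
webrtc_streamer(key="demo")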


Hello, can you please share the YouTube link to this tutorial?

It will be very useful…

@whitphx I wanted to build an Emotion Based Music Player app. I used WebRtcMode.SENDONLY with key="loopback". The system works perfectly fine on my localhost, but on Streamlit sharing the line below throws a queue.Empty error:

webrtc_ctx.video_receiver.get_frame(timeout=1)

Please help Senpai @whitphx
GitHub repo: https://github.com/warrenferns/Emotion-Based-Music-Player/blob/main/main.py

Hello, I am stuck using streamlit-webrtc in my project. I have read all the articles by @whitphx, but I still don’t get it.
OpenCV’s VideoCapture gets frames one by one with the following commands:
cap = cv.VideoCapture(0)
ret, img = cap.read()
classify(img)  # I have to pass the frame to this function
But how can I achieve this using streamlit-webrtc (how do I capture frames one by one)?
My project is face recognition using dlib.
Please help me to solve this.
Thank you,
regards
Pavan Sai

I think that when the app runs on a remote host, establishing the connection takes longer, and webrtc_ctx.video_receiver.get_frame(timeout=1) throws a queue.Empty error because of the timeout.

However, your code linked above already wraps this line in a try-except, and the error is caught so the loop can continue. So the current code looks like it should work, doesn’t it?

Even so, if you want to avoid the error, please try setting a longer timeout.
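For illustration, a minimal sketch of that pattern: SENDONLY mode, a longer timeout, and queue.Empty handled so the loop keeps polling. process_frame() here is a hypothetical placeholder for the actual model.

import queue

import streamlit as st
from streamlit_webrtc import WebRtcMode, webrtc_streamer


def process_frame(img):
    # Hypothetical placeholder for the emotion/face model inference.
    return img


webrtc_ctx = webrtc_streamer(
    key="loopback",
    mode=WebRtcMode.SENDONLY,
    media_stream_constraints={"video": True, "audio": False},
)
image_place = st.empty()

while webrtc_ctx.state.playing:
    if webrtc_ctx.video_receiver is None:
        continue
    try:
        # A longer timeout gives the remote WebRTC connection time to deliver a frame.
        video_frame = webrtc_ctx.video_receiver.get_frame(timeout=10)
    except queue.Empty:
        continue  # No frame yet; keep polling instead of stopping.
    img = video_frame.to_ndarray(format="bgr24")
    image_place.image(process_frame(img), channels="BGR")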

Try starting with the tutorial above and extending the sample code in it. Note that cv2.VideoCapture is no longer used once you switch to streamlit-webrtc; instead, find where your computer vision code should go within streamlit-webrtc.
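As a rough sketch of where that code goes (using the same class-based API as the example further down in this thread; classify() below is just a placeholder stub for your own dlib-based code):

from streamlit_webrtc import VideoTransformerBase, webrtc_streamer


def classify(img):
    # Placeholder for the dlib-based face recognition (hypothetical stub).
    return img


class FaceRecognitionTransformer(VideoTransformerBase):
    def transform(self, frame):
        img = frame.to_ndarray(format="bgr24")  # this replaces ret, img = cap.read()
        return classify(img)  # per-frame computer vision code goes here


webrtc_streamer(key="face-recognition", video_transformer_factory=FaceRecognitionTransformer)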

BTW, please format your sample code in the post because non-formatted code is hard to read.


@whitphx I changed the timeout value to 10, but it’s still not working: the face is not detected within 20 seconds, and I don’t want to extend the detection time further, to keep it user-friendly.
This is the link to the app I’ve hosted on Streamlit sharing:
https://share.streamlit.io/warrenferns/emotion-based-music-player/main/main.py
Please help :pray:

@whitphx I’m working on an age verification system using face detection, where I capture an image of the user and predict their age. The problem is I can’t use the camera from the stream. Can you please help me out?

Now we can access the webcam from Streamlit, thanks to streamlit-webrtc. Just install the module:

pip install streamlit-webrtc

With the module installed, we can access the webcam. Let’s see an example that runs Haar-cascade face detection on every frame:

import cv2
from streamlit_webrtc import VideoTransformerBase, webrtc_streamer

faceCascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')


class VideoTransformer(VideoTransformerBase):
    def transform(self, frame):
        # Convert the incoming av.VideoFrame to a BGR numpy array.
        img = frame.to_ndarray(format="bgr24")
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        faces = faceCascade.detectMultiScale(gray, 1.3, 5)

        # Draw a box and a label ("F-1", "F-2", ...) for each detected face.
        for i, (x, y, w, h) in enumerate(faces, start=1):
            cv2.rectangle(img, (x, y), (x + w, y + h), (95, 207, 30), 3)
            cv2.rectangle(img, (x, y - 40), (x + w, y), (95, 207, 30), -1)
            cv2.putText(img, 'F-' + str(i), (x, y - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 0, 0), 2)

        return img

webrtc_streamer(key="example", video_transformer_factory=VideoTransformer)



Does a streamlit-webrtc web app hosted on a server support multiple client connections? I recently created a face anti-spoofing web app using streamlit-webrtc (webrtc_streamer), and I had the bad experience that the application would close on one client’s machine and start on the other client’s machine when they both accessed it. Aren’t there any workarounds? I have gone through a lot of WebRTC multi-peer connection articles, but they all seem to suggest that JavaScript is the way to go, although my face anti-spoofing libraries are only supported in Python.

streamlit_webrtc supports multiple connections as long as the server has enough CPU cores.

The video chat sample is an example of this: multiple users can access the server at the same time.

Separately, in my experience multiple sessions do not work when the streamlit_webrtc server uses MediaPipe, and I suspect that only a single MediaPipe object can exist in a single process. I solved this problem by using multiprocessing to create a process-isolated MediaPipe object for each user, as below.
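A rough, hypothetical illustration of that approach (not the original code; create_detector() stands in for the actual MediaPipe setup) might look like:

import multiprocessing as mp

import av
from streamlit_webrtc import webrtc_streamer


def create_detector():
    # Hypothetical stand-in for constructing a MediaPipe solution object.
    def detect(img):
        return img
    return detect


def worker(conn):
    detector = create_detector()  # the detector lives only in this child process
    while True:
        img = conn.recv()
        if img is None:  # shutdown signal
            break
        conn.send(detector(img))


class ProcessIsolatedProcessor:
    def __init__(self):
        # One worker process per browser session, so each user gets an isolated detector.
        self.parent_conn, child_conn = mp.Pipe()
        self.proc = mp.Process(target=worker, args=(child_conn,), daemon=True)
        self.proc.start()

    def recv(self, frame):
        img = frame.to_ndarray(format="bgr24")
        self.parent_conn.send(img)  # hand the frame to the worker process
        return av.VideoFrame.from_ndarray(self.parent_conn.recv(), format="bgr24")


webrtc_streamer(key="isolated", video_processor_factory=ProcessIsolatedProcessor)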

Hey @whitphx, I’m facing the same issue. This solution, though, is for the older class-based API.

Can you share a code snippet or the steps for doing the same thing with the latest (callback functions) API?