New Component: streamlit-webrtc, a new way to deal with real-time media streams

Hello everyone,

I made streamlit-webrtc, which sends and receives video (and audio, though audio is only partially supported for now) streams between the frontend and backend via WebRTC.
This should be helpful for building, for example, performant real-time computer vision web apps.

This week I tried Streamlit, found it interesting and useful, and then wanted to run real-time computer vision apps on it, where users can try computer vision models with video input from their webcams.
So I developed a component to achieve this using WebRTC. Fortunately, there is a great WebRTC library for Python, aiortc.

One interesting thing about this component is that the input video streams are sourced from users’ webcams and transferred to the server-side Python process over the network. Therefore, the server does not need access to the camera, unlike the usual approach using OpenCV (cv2.VideoCapture). This means the Python code can be hosted on a remote server (in fact, I hosted a sample app on Heroku and it worked, as described below).

In addition, WebRTC provides good performance when sending and receiving video/audio frames.

Here is a demo movie:
(A full version is hosted on YouTube)

You can see that

  • The app consumes, processes, and renders video frames in real time.
  • Streamlit’s interactive controls still work well in combination with the WebRTC code. For example, the threshold for object detection can be changed interactively during execution.

You can try out the sample app using the following commands.

$ pip install streamlit-webrtc opencv-python
$ streamlit run

I also deployed it to Heroku and confirmed it worked; however, it’s running on a very small instance on a free plan and may not work well if multiple users access it at the same time.
I recommend running it in your own environment.

It’s still a prototype: the API is not finalized and the documentation has not been written yet. Please refer to the sample code when you use the component.

I welcome any feedback!


Technical note:

One technical challenge was to combine WebRTC and real-time processing with Streamlit’s execution model. This problem has already been stated in Access Webcam Video from a hosted Streamlit application.
In the case of streamlit-webrtc, additionally, aiortc does not work in normal Streamlit scripts, which are executed from top to bottom and terminated on each user interaction (more fundamentally, this is because of WebRTC’s async nature).

This library approaches this problem by creating threads independent of Streamlit’s execution.
When the component is used for the first time, it forks a thread, launches aiortc’s code on an event loop in that thread, and keeps the thread alive across all script executions until the WebRTC session is closed.

As a result of this approach, developers have to shift their programming style when using this library, from Streamlit’s declarative style to an event-and-callback-based one. streamlit-webrtc/ at 477416da84e77d76201d97269589fe5ff5d17d37 · whitphx/streamlit-webrtc · GitHub is an example. In this code, the computer vision part resides in a callback function, which is called from the forked thread, triggered by the media stream, regardless of Streamlit’s execution timing.
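To illustrate the event-and-callback style (with simplified stand-ins, not the actual streamlit-webrtc API): the transform() callback holds the computer vision logic and is invoked from the media thread whenever a frame arrives, independently of when the Streamlit script runs.

```python
import queue
import threading

# Hypothetical stand-ins for the component's transformer pattern: the
# transform() callback would contain the computer vision code.
class DetectionTransformer:
    def __init__(self) -> None:
        self.threshold = 0.5  # may be updated from the Streamlit side

    def transform(self, frame):
        # Real code would run a CV model here; we just annotate the frame.
        return {"frame": frame, "threshold": self.threshold}

def media_thread(transformer, frames, out: queue.Queue) -> None:
    # Simulates the forked thread feeding incoming frames to the callback.
    for frame in frames:
        out.put(transformer.transform(frame))

results: queue.Queue = queue.Queue()
t = threading.Thread(
    target=media_thread, args=(DetectionTransformer(), ["f1", "f2"], results)
)
t.start()
t.join()
print(results.get())  # -> {'frame': 'f1', 'threshold': 0.5}
```

The key point is that transform() is driven by the media stream, not by Streamlit's top-to-bottom execution.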

To bridge this forked WebRTC execution and normal Streamlit controls, streamlit-webrtc exposes a context object, which is an interface to the objects running in WebRTC’s async world.
Using it, values from Streamlit’s controls can be passed to the computer vision code running in the forked thread, as in streamlit-webrtc/ at 477416da84e77d76201d97269589fe5ff5d17d37 · whitphx/streamlit-webrtc · GitHub.
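A minimal sketch of this bridging, with illustrative names (WebRtcContext and ThresholdTransformer are hypothetical stand-ins, not the exact streamlit-webrtc API):

```python
# Hypothetical stand-in for the real context object, which hands
# Streamlit-side values over to the transformer in the WebRTC thread.
class ThresholdTransformer:
    def __init__(self) -> None:
        self.threshold = 0.5

    def transform(self, frame):
        # Drop "frames" below the threshold (frames are plain numbers here).
        return frame if frame >= self.threshold else None

class WebRtcContext:
    def __init__(self, video_transformer) -> None:
        self.video_transformer = video_transformer

ctx = WebRtcContext(ThresholdTransformer())

# In the Streamlit script, a control's value is written onto the transformer,
# e.g. threshold = st.slider("Threshold", 0.0, 1.0, 0.5):
if ctx.video_transformer:
    ctx.video_transformer.threshold = 0.8  # stand-in for the slider value

# The forked thread sees the updated value on its next callback:
print(ctx.video_transformer.transform(0.9))  # -> 0.9
print(ctx.video_transformer.transform(0.7))  # -> None
```

Because the transformer object is shared, the callback picks up the new attribute value on its next invocation without restarting the WebRTC session.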


Welcome to the community @whitphx, looks awesome !

Feel free to add it to our community tracker so we don’t lose it from our radars :grin::grin::grin:


@andfanilo Thank you! I edited the wiki :smile:


Whoa - this is really great. Nice work, @whitphx!


Hello @whitphx,

Thanks for a great project!
I tried to test it, but I couldn’t install streamlit-webrtc.

(venv) (base) admin@Youjins-MacBook-Pro mqtt-camera-streamer-master % pip install streamlit-webrtc              
ERROR: Could not find a version that satisfies the requirement streamlit-webrtc
ERROR: No matching distribution found for streamlit-webrtc

Can I install it if I download the github repo?

Hello @MalanG ,

I think it’s because you are using Python < 3.8, while streamlit-webrtc currently supports only >= 3.8.
Please try with Python 3.8.

I will also fix the package to be compatible with older versions of Python.

EDIT: I updated the package to be compatible with Python 3.6 and 3.7. Please try again.


I got an invitation to Streamlit Sharing and deployed the sample app there:
but it didn’t work, showing a “permission denied” error when the app requires capturing video or audio from the user’s device.

This means many interesting applications of this component, including real-time object detection and image transformation, cannot be used on Streamlit Sharing…
The only page of the sample app that works is the third one, where the app plays video and audio files on the server side and transmits them to the client.

This is because, on Streamlit Sharing, the app runs inside an iframe.
getUserMedia, which is used to get access to the user’s device, requires a Feature Policy when called inside an iframe.
Details are here: MediaDevices.getUserMedia() - Web APIs | MDN

This is out of my control :crying_cat_face:

EDIT: This problem has also been stated in Sharing blocks custom webcam component due to featurePolicy



Well this is an interesting discussion to raise with the product team, thanks for taking the time to track this down! I’ll ping them on my side too :wink:



Wow this is awesome ! Great work @whitphx !! :scream::clap::clap::clap:

1 Like

Hey thanks for raising this, @whitphx - we’re discussing this!

1 Like

Wow, it is working now on Sharing. Thank you, Streamlit team!

The performance is much better than the sample hosted on Heroku :laughing:


Very useful tool, but is it possible to access, for example, the rear camera of a mobile phone with this library?

Yes, you can choose the camera with the “SELECT DEVICE” button.

I wrote a tutorial about how to use this library:


Thanks, very useful

Is it possible to store the results frame by frame, through each iteration, in a file such as JSON?

Check out this object detection demo, for example.
In it, the result of each frame is obtained in the main loop, as in streamlit-webrtc/ at 63a2e0278b6ef76cd075d8ff4251d0b5d97e2eef · whitphx/streamlit-webrtc · GitHub,
so you can write it into files as you would in normal Python scripts.

It can be like this, for example:

import json

with open("results.txt", "w") as f:
    while True:
        result = webrtc_ctx.video_transformer.result_queue.get()
        f.write(json.dumps(result) + "\n")

(This example writes one JSON object per line, in a format called ndjson.)

1 Like

Hi community,

I forked a Neural Style Transfer app by Harsh Malra, added real-time webcam video streaming functionality using streamlit-webrtc,
and deployed the forked app to Sharing :smile_cat: .
(Select the “Webcam” radio button for the real-time video mode.)

You can try Neural Style Transfer with a video stream sourced from your local webcam or smartphone camera, interactively switching the model and parameters.


I created a PR against the upstream repo and hope it will be merged: Live WebCam feed via streamlit-webrtc by whitphx · Pull Request #2 · malraharsh/style-transfer-web-app · GitHub

(I wanted to post this message in the app’s forum thread but couldn’t find it, so I’m announcing it here. Please let me know if there is somewhere better to post it :smiley: )


This is awesome @whitphx!!