New Component: streamlit-webrtc, a new way to deal with real-time media streams


Actually I need to get a media file from the user.

So I think it’s not what streamlit-webrtc is for.
You can use st.file_uploader to let users upload their media files. Then you can process the uploaded video files, save the results, and serve the files via
I think there are some existing Streamlit apps, samples, or related threads in this forum that do something like that, so please search for them.

And do you have code for detecting social distancing for a video from the user

I don’t.

Actually, it’s not showing video when using it.
Can I please see example code for this?

Thank you very much.


import cv2

cap = cv2.VideoCapture(0)  # open the default camera
ret, img = cap.read()      # grab a single frame
if ret:
    cv2.imshow("frame", img)
    cv2.waitKey(0)
cap.release()

How can I use the streamlit-webrtc component to show the video captured by cv2 on a Streamlit page?
Thank you.

It’s not supported now, and is tracked in this issue:

A new version of streamlit-webrtc, v0.40.0 has been released!

This release includes a big new feature: class-less callbacks.
To define callbacks, you no longer need to create processor classes. Instead, just pass a function!

import av
from streamlit_webrtc import webrtc_streamer

def video_frame_callback(frame):
    img = frame.to_ndarray(format="bgr24")

    # ... Image processing, or whatever you want ...

    return av.VideoFrame.from_ndarray(img, format="bgr24")

webrtc_streamer(key="example", video_frame_callback=video_frame_callback)

See the samples in the tutorial in the README below.




Found this solution to capture images from webrtc, but I have to use classes (which is okay :D)

However, I could never make the increased resolution work; changing media_stream_constraints={"video": {"width": 1280}} didn’t affect the resolution.

def live_camera(play_state):
    c1, c2 = st.columns(2)

    class BarcodeProcessor(VideoProcessorBase):
        def __init__(self) -> None:
            self.pure_img = None

        def recv(self, frame: av.VideoFrame) -> av.VideoFrame:
            image = frame.to_ndarray(format="bgr24")
            self.pure_img = image  # keep the latest raw frame
            return av.VideoFrame.from_ndarray(image, format="bgr24")

    stream = webrtc_streamer(
        key="barcode",
        video_processor_factory=BarcodeProcessor,
        media_stream_constraints={"video": True, "audio": False},
    )

    if st.button("Take photo") and stream.video_processor:
        return stream.video_processor.pure_img
    return None

possible_barcode = live_camera(True)

@ZKLO I think this is the same issue (it’s not solved though):

Thanks a lot! I think I managed to solve it. The issue for me was that the dimensions I put inside media_stream_constraints have to exactly match the maximum resolution of the source.

The reason I prefer using your webrtc even for photos is that I can’t change the source for the native camera input. So thanks a lot for the module :-)!
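For reference, a hedged sketch of resolution constraints using `"ideal"` instead of a bare number: `"ideal"` asks the browser for the closest supported mode, while `"exact"` fails unless the camera matches precisely. The key name `"hires"` is illustrative:

```python
# getUserMedia-style constraints: "ideal" lets the browser pick the closest
# supported capture mode; "exact" would fail on an unsupported resolution.
MEDIA_STREAM_CONSTRAINTS = {
    "video": {
        "width": {"ideal": 1280},
        "height": {"ideal": 720},
    },
    "audio": False,
}
# Pass it as:
# webrtc_streamer(key="hires", media_stream_constraints=MEDIA_STREAM_CONSTRAINTS)
```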


Hello, thank you for your ongoing efforts!

Just wanted to ask if there’s a way to access the play/pause button of the video. I’m making an exercise app where when a user starts the video, mediapipe pose estimation model runs and counts the reps and sets as the user performs the exercise. As I click the pause button, the video stops but the model continues to run, and accordingly the reps and sets update which is not what I aim for. I want it to stop running that code when I click the pause button and resume running it upon clicking the play button. Any suggestions?

Thank you so much!


As I click the pause button,

Does this mean the STOP button on the WebRTC component?
The following text depends on this assumption.

If you are using a class-based callback (defining a class with recv(self, frame)), you can achieve this by initializing the model in __init__() and destroying it in the on_ended(self) callback or __del__(self) on the same class.

from streamlit_webrtc import VideoProcessorBase, webrtc_streamer

class VideoProcessor(VideoProcessorBase):
    def __init__(self) -> None:
        print("Initialize your model here")

    def recv(self, frame):
        # Your image processing code
        return frame

    def on_ended(self):
        print("Destroy your model here")

webrtc_streamer(key="sample", video_processor_factory=VideoProcessor)

This sample app, which uses MediaPipe pose estimation and manages the model in __init__() and __del__(), may also help.
Note that this app uses multiprocessing to run the MediaPipe model in another process for concurrent access by multiple users; that part might be noise for you, as it is not directly related to this topic.

Another way, which I think can also be used with class-less callbacks (New Component: streamlit-webrtc, a new way to deal with real-time media streams - #129 by whitphx), is to use the on_change callback on the webrtc_streamer component.
(I have not tested it with any real-world examples though. Let me know if this does not work.)

def on_change():
    if "sample" in st.session_state:
        if st.session_state["sample"].state.playing:
            print("Initialize your model")
        else:
            print("Destroy your model if it is already initialized")

webrtc_streamer(key="sample", on_change=on_change)

@whitphx Is there any way to put the webrtc_streamer in the sidebar, just like Streamlit’s st.sidebar.camera_input('My webcam', key='cam')?

See the doc: st.sidebar - Streamlit Docs

with st.sidebar:
    webrtc_streamer(key="cam")

this is exactly what I want! thank you so much!


Does the session state work inside the callbacks?

Hello guys, when I change “My Device” while using webrtc in my application (audio-to-text transcription), I get this problem: “”. The problem also occurs when testing on other browsers. Help please :slight_smile:

Streamlit, version 1.10.0

from streamlit_webrtc import webrtc_streamer
import av

def video_frame_callback(frame):
    img = frame.to_ndarray(format="bgr24")

    flipped = img[::-1, :, :]  # flip the frame vertically

    return av.VideoFrame.from_ndarray(flipped, format="bgr24")

webrtc_streamer(key="example", video_frame_callback=video_frame_callback)

No, because the callback runs outside the Streamlit session context.
Even if it appears to work in some situations, I cannot guarantee that it works as expected.
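As a workaround, a common pattern (a sketch, not an official API of the library) is to share values between the callback thread and the script thread through a lock-protected container instead of st.session_state:

```python
import threading

lock = threading.Lock()
shared = {"latest_img": None}  # visible to both threads

def video_frame_callback(frame):
    img = frame.to_ndarray(format="bgr24")
    with lock:
        shared["latest_img"] = img  # hand the frame to the main script
    return frame

# Back in the main script, read it under the same lock:
# with lock:
#     img = shared["latest_img"]
```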

Dear Whitphx, thanks once again for the amazing library. :partying_face:

My PC struggles a lot with streaming, so the video sometimes becomes very compressed to compensate (I guess). Is it possible to lower the frame rate in order to get less compressed images? :melting_face:

All the best
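If it helps, the frame rate can be requested through the same media_stream_constraints mechanism (a sketch; whether the browser honors it depends on the camera, and the key name "lowfps" is illustrative):

```python
# Ask the browser for roughly 10 fps instead of the camera default.
MEDIA_STREAM_CONSTRAINTS = {
    "video": {"frameRate": {"ideal": 10}},
    "audio": False,
}
# Pass it as:
# webrtc_streamer(key="lowfps", media_stream_constraints=MEDIA_STREAM_CONSTRAINTS)
```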

Oh! That’s great, but I wonder: what if I don’t use the camera, but instead split a local video into frame-by-frame pictures, and then use this component to display the pictures continuously like a video (at the same frame rate)? Recently I tried to use cv2.VideoCapture and the st.image component to process the video and realize this function, but I hit a problem: st.image can only show about 8 pictures per second when the original video is 30 fps. I really want to solve this, thank you soooooo much!!! My English is poor, so I don’t know if I expressed myself clearly; I would really appreciate it if you could help me with this problem!!!
In short, I wonder if this component can show the continuous pictures (which I already processed with YOLO) like a video, at the same frame rate as the original. Thank you!!!