New Component: streamlit-webrtc, a new way to deal with real-time media streams

Thanks for your answer!

It appears to work now :D. Seems like I needed to introduce a little delay (time.sleep(0.10)), and now it is able to update the barcode very smoothly :smiley: :partying_face:

As you can see, the barcode st.subheaders get stacked. So I tried to use placeholder = st.empty() to clear out the old prints, but got the following error:

The final loop actually works quite well; only a few things remain to sort out.

I’m looking for a way to either a) collapse the streaming window following barcode detection, with the option of re-spawning the window after tapping a st.button(“Scan new barcode”), or b) continuously update the last detected barcode with st.subheader() without stacking (see image above). By putting break in the while-loop (see code below), I can successfully grab the detected barcode and use it for various actions - but this method doesn’t allow for automatic updates. However, the st.subheader is correctly updated if I hit a st.button() after scanning a new barcode :sweat_smile:

   while True:
      if (stream.video_processor.barcode_val != False) and stream.video_processor.barcode_val != old_barcode:
         old_barcode = stream.video_processor.barcode_val
         place_holder.empty()  ## THIS FAILS
         stream.desired_playing_state = False  ## THIS DOESN'T SEEM TO CLOSE THE STREAM

Thank you again so much for the amazing library. :partying_face: :partying_face: :partying_face:
Will share the final project if I can get everything working.


@lorenzweb Congrats! It’s fantastic :smile:

I have some comments:

  • As you wrote in the comment, setting stream.desired_playing_state does not work because this library does not support it. To affect the webrtc running state, we have to set the desired_playing_state argument on the webrtc_streamer() function.
    • So, if you want to programmatically stop the stream after the loop ends, you may have to call st.experimental_rerun() to execute the next run, where you will pass desired_playing_state=False to webrtc_streamer().
  • As you wrote in the comment, break prevents the “RUNNING” indicator because it stops the while-loop. However, stopping the loop also means c1.subheader() is no longer called, so its content stops updating in the current run. So, if you want the content of the subheader to be updated continuously, you should not use break, and should keep the loop running.
    • For now, to fetch data from the webrtc worker and process it continuously in the app script, using loops is the only way. And in that case, we have to accept the “RUNNING” indicator to show during it.
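Putting those two points together, a minimal sketch of the loop might look like the following. This is untested; the key `"barcode-scanner"` is an arbitrary placeholder, and the `barcode_val` attribute mirrors the code quoted above, so treat the details as assumptions rather than the library's documented pattern:

```python
import time


def is_new_barcode(value, old_value):
    """True when the processor produced a barcode that differs from the last one."""
    return bool(value) and value != old_value


def main():
    # Streamlit imports live inside main() so the helper above stays
    # importable without Streamlit; run this file with `streamlit run app.py`.
    import streamlit as st
    from streamlit_webrtc import webrtc_streamer

    if "playing" not in st.session_state:
        st.session_state["playing"] = True

    ctx = webrtc_streamer(
        key="barcode-scanner",  # hypothetical key
        desired_playing_state=st.session_state["playing"],
    )

    placeholder = st.empty()
    old_barcode = None
    while ctx.state.playing:
        if ctx.video_processor and is_new_barcode(
            ctx.video_processor.barcode_val, old_barcode
        ):
            old_barcode = ctx.video_processor.barcode_val
            placeholder.subheader(old_barcode)  # updates in place, no stacking
            # To stop the stream instead, flip the flag and rerun; the next
            # run then passes desired_playing_state=False to webrtc_streamer():
            # st.session_state["playing"] = False
            # st.experimental_rerun()
        time.sleep(0.1)  # the small delay mentioned earlier in the thread
```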

Hi, appreciate your work!

I’m new to Python and Streamlit. I’m trying to build a pose estimation project where I have to provide the user with both textual and audio feedback as they perform an exercise. I have achieved the textual feedback part using VideoProcessor, where I take the camera feed frame by frame and, with the help of a trained MediaPipe model, display the name of the performed pose to the user. Now I have to also output audio for this. I cannot figure out how to add audio feedback alongside video frame processing.

Is this even possible with this technology to give audio feedback along with processing video frames?

Thank you so much.

Can I access the Start/Stop button? I want to perform some action upon button click.

When you initialize your stream object, you can enable audio like this:
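A minimal sketch of enabling audio via webrtc_streamer()'s media_stream_constraints argument (the key name is an arbitrary placeholder):

```python
# Request both tracks from the browser; this dict is passed to
# webrtc_streamer() as media_stream_constraints.
MEDIA_STREAM_CONSTRAINTS = {"video": True, "audio": True}


def main():
    # The import lives inside main() so the constraints above stay
    # importable without Streamlit; run with `streamlit run app.py`.
    from streamlit_webrtc import webrtc_streamer

    webrtc_streamer(
        key="audio-video",  # arbitrary key
        media_stream_constraints=MEDIA_STREAM_CONSTRAINTS,
    )
```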


Dear Whitphx,

Thanks for the cool library!

I managed to run the webrtc at home without any issue. I then tried to show my colleagues at work (using a company PC), but it just started freezing/hanging after I hit Start (note: I am able to choose the webcam preview in the device selector).


After hitting Start, I noticed that the ICE connection refuses to reach “ICE connection state is completed” :smiling_face_with_tear:

Instead it closes and throws this message:

I feel like these issues could relate to our IT / network security, but I’m a bit stuck debugging. What is your best bet?


audio feedback

It may be achieved by using AudioProcessor.

In the official example, there are currently only samples that transform input audio into output; however, overriding the audio inside AudioProcessor.recv() also seems possible - I haven’t tried it and am not sure, though.

One drawback of this approach is that it requires input audio even though that audio is not used. This is a limitation of the current API, which only supports custom filters, not custom sources.
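To make the “override the audio” idea concrete, here is a sketch I have not run end-to-end (per the caveat above): it replaces every incoming frame with a generated tone. It assumes PyAV's AudioFrame API and the audio_processor_factory hook; the tone parameters are arbitrary.

```python
import numpy as np


def make_tone(freq_hz, n_samples, sample_rate=48000, t0=0):
    """A mono sine-wave chunk as int16 samples (the s16 format PyAV frames use)."""
    t = (np.arange(n_samples) + t0) / sample_rate
    return (0.2 * 32767 * np.sin(2 * np.pi * freq_hz * t)).astype(np.int16)


class ToneAudioProcessor:
    """Replaces each incoming audio frame with a 440 Hz tone.

    Attach via webrtc_streamer(..., audio_processor_factory=ToneAudioProcessor).
    """

    def __init__(self):
        self._t0 = 0  # running sample counter keeps the tone phase-continuous

    def recv(self, frame):
        import av  # PyAV, the frame type streamlit-webrtc passes to recv()

        samples = make_tone(440, frame.samples, frame.sample_rate, self._t0)
        self._t0 += frame.samples

        # Build a replacement frame; s16/mono expects shape (1, n_samples).
        new_frame = av.AudioFrame.from_ndarray(
            samples.reshape(1, -1), format="s16", layout="mono"
        )
        new_frame.sample_rate = frame.sample_rate
        new_frame.pts = frame.pts
        new_frame.time_base = frame.time_base
        return new_frame
```

For real pose feedback, the tone generation would be swapped for whichever audio cue you want to play.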

Can I access the Start/Stop button?

An event handler on button click is not supported yet (it’s under development: feature/callback by whitphx · Pull Request #695 · whitphx/streamlit-webrtc · GitHub. Please wait for the release.)

Instead, you can refer to ctx.state.playing for status checking.
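A small sketch of that status-checking pattern (the key and messages are placeholders):

```python
def status_message(playing: bool) -> str:
    """Text shown for each state of the Start/Stop button."""
    return "Stream started" if playing else "Stream stopped"


def main():
    # Imports live inside main() so the helper above stays importable
    # without Streamlit; run with `streamlit run app.py`.
    import streamlit as st
    from streamlit_webrtc import webrtc_streamer

    ctx = webrtc_streamer(key="example")  # arbitrary key

    # The script reruns whenever Start/Stop is pressed, so branching on
    # ctx.state.playing behaves much like a click handler.
    st.write(status_message(ctx.state.playing))
```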

That error is being tracked in Connection is shutdown and errors appear in some network environment · Issue #552 · whitphx/streamlit-webrtc · GitHub, but has not been resolved yet.

I think, however, that this error message itself is a subsequent event following network trouble, and the fundamental problem may reside in the network environment, as that comment suggests. For example, some firewalls drop WebRTC packets; this often occurs in office or mobile networks.

So GitHub - whitphx/streamlit-webrtc: Real-time video and audio streams over the network, with Streamlit. may be of help.

This post seems to be related too.

I released streamlit-webrtc v0.37.0 :v:

This new version supports


Continued amazing work @whitphx!! :balloon:


@whitphx When deploying object detection with webrtc_streamer, how do I show the FPS in another column? As I understand, st.markdown can’t be run inside recv().

col_monitor, col_result = st.columns((6, 2))  # webcam here
col_cam, col_rep_img, col_detail = st.columns((6, 1, 1))
col_fr, col_dt, col_iw, _ = st.columns((2, 2, 2, 2))
kpi1_text = None
with col_fr:
    st.markdown('**Frame Rate of Camera**')  # I want to show fps here
    kpi1_text = st.markdown("0")

Hi whitphx,
Is there any way to get the video from the user, process it, and show it in the app? The writer is not working… Can you please help me out with this?

Feel free to check out my Streamlit Audio Recorder custom component, which implements functionality to record audio from the user’s microphone via the Web Media API (browser audio API access, built on top of “audio-react-recorder”). This custom component works in applications deployed to the web! :balloon:

Currently, I am working on directly returning high-quality WAV audio data back to Python/Streamlit. However, I ran into multiple issues with buffering Web Audio blobs (slicing/concatenating audio blobs and conversion to base64 data). If any of you are experienced in dealing with these kinds of audio datatypes, please don’t hesitate to text me! :raised_hand_with_fingers_splayed:



As I understood, can’t run st.markdown at recv().


I think you can write code like the while-loop in the object detection sample, shown below.

In this loop, st.table (precisely, the combination of st.empty() and placeholder.table()) is used to show the information retrieved from inside recv() through an instance attribute of the video processor, webrtc_ctx.video_processor.result_queue.

You can use st.markdown and pass the FPS from recv() to it in such a way.
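Applied to the FPS question above, a sketch might look like this. It is untested: the queue-and-placeholder pattern follows the object detection sample, while the FPS bookkeeping and the key name are my assumptions.

```python
import queue
import time


def compute_fps(frame_count, elapsed_seconds):
    """Frames per second, guarding against division by zero."""
    return frame_count / elapsed_seconds if elapsed_seconds > 0 else 0.0


def main():
    # Imports live inside main() so compute_fps stays importable without
    # Streamlit; run with `streamlit run app.py`.
    import streamlit as st
    from streamlit_webrtc import webrtc_streamer

    class FPSVideoProcessor:
        def __init__(self):
            self.result_queue = queue.Queue()
            self._count = 0
            self._start = time.monotonic()

        def recv(self, frame):
            self._count += 1
            # Hand the measurement to the app script; calling st.markdown()
            # here would fail, so only plain data crosses this boundary.
            self.result_queue.put(
                compute_fps(self._count, time.monotonic() - self._start)
            )
            return frame

    ctx = webrtc_streamer(key="fps-demo", video_processor_factory=FPSVideoProcessor)

    kpi1_text = st.empty()  # lives in whichever column you place it in
    while ctx.state.playing:
        try:
            fps = ctx.video_processor.result_queue.get(timeout=1.0)
        except queue.Empty:
            continue
        kpi1_text.markdown(f"**Frame Rate of Camera**: {fps:.1f}")
```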


You can record the video from streamlit-webrtc by using the recorder. Below is the sample.
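A hedged sketch of the recorder hook: I believe the official recording sample uses aiortc's MediaRecorder with the in_recorder_factory / out_recorder_factory arguments, but treat the details, including the output path, as assumptions.

```python
RECORD_PATH = "input.flv"  # hypothetical output path


def main():
    # Imports live inside main() so RECORD_PATH stays importable without
    # Streamlit; run with `streamlit run app.py`.
    import streamlit as st
    from aiortc.contrib.media import MediaRecorder
    from streamlit_webrtc import webrtc_streamer

    def in_recorder_factory() -> MediaRecorder:
        return MediaRecorder(RECORD_PATH)  # records the raw input stream

    webrtc_streamer(
        key="record",  # arbitrary key
        in_recorder_factory=in_recorder_factory,
    )

    # After stopping the stream, the recorded file can be played back:
    # with open(RECORD_PATH, "rb") as f:
    #     st.video(f.read())
```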

After recording, the saved video can then be served, I think.

Or, if you have a different problem, please explain the details (I can’t understand what the “writer” you mentioned is). Pasting the actual code would be best.

@stefanrmmr I think it’s not associated with streamlit-webrtc and should not be posted here.
Why don’t you promote it in a separate thread, as the docs suggest?

please post on the Streamlit ‘Show the Community!’ Forum category with the title similar to “New Component: <your component name> , a new way to do X”.

Publish a Component - Streamlit Docs

Actually, I need to get a media file from the user. And do you have code for detecting social distancing in a video from the user?


Actually I need to get a media file from the user.

So I think that’s not what streamlit-webrtc is for.
You can use st.file_uploader to let users upload their media files. Then you can process the uploaded video files, save the results, and serve the resulting files.
I think there are existing Streamlit apps, samples, or related threads in this forum that do something like that, so please search for them.
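A minimal sketch of that upload-then-process flow (the widget label, accepted types, and temp-file handling are all my choices, not a prescribed pattern):

```python
import tempfile


def save_upload(data: bytes, suffix: str = ".mp4") -> str:
    """Write uploaded bytes to a temp file so libraries that need a file
    path (e.g. OpenCV's VideoCapture) can open it."""
    with tempfile.NamedTemporaryFile(delete=False, suffix=suffix) as f:
        f.write(data)
        return f.name


def main():
    # The import lives inside main() so save_upload stays importable
    # without Streamlit; run with `streamlit run app.py`.
    import streamlit as st

    uploaded = st.file_uploader("Upload a video", type=["mp4", "avi", "mov"])
    if uploaded is not None:
        path = save_upload(uploaded.read())
        # ... process the file at `path`, then serve the result, e.g.:
        with open(path, "rb") as f:
            st.video(f.read())
```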

And do you have code for detecting social distancing for a video from the user

I don’t.

Actually, it’s not showing the video when I use it.
Can you please share example code for this?

Thank you very much.