As you can see, the barcode st.subheader calls get stacked. So I tried to use placeholder = st.empty() to clear out the old prints, but got the following error:
The final loop actually works quite well; there are only a few things left to deal with.
I’m looking for a way to either a) collapse the streaming window following barcode detection, with the option of re-spawning the window after tapping a st.button(“Scan new barcode”), or b) continuously update the last detected barcode with st.subheader() without stacking (see image above). By putting break in the while-loop (see code below), I can successfully grab the detected barcode and use it for various actions, but this method doesn’t allow for automatic updates. However, the st.subheader is correctly updated if I hit a st.button() after scanning a new barcode.
while True:
    time.sleep(0.10)
    if (stream.video_processor.barcode_val != False) and stream.video_processor.barcode_val != old_barcode:
        old_barcode = stream.video_processor.barcode_val
        place_holder.empty()  # THIS FAILS
        c1.subheader(old_barcode)
        stream.desired_playing_state = False  # THIS DOESN'T SEEM TO CLOSE THE STREAM
        break  # THIS BREAK PREVENTS THE STREAMLIT ICON FROM STAYING IN "RUNNING" MODE. THE VIDEO STREAM STILL UPDATES THE LATEST BARCODE CORRECTLY, BUT IT ISN'T PRINTED WITH st.subheader() UNTIL A WIDGET IS PRESSED.
Thank you again so much for the amazing library.
Will share the final project if I can get everything working.
As you wrote in the comment, setting stream.desired_playing_state does not work because this library does not support it. To control the WebRTC running state, we have to pass the desired_playing_state argument to the webrtc_streamer() function.
So, if you want to programmatically stop the stream after the loop ends, you may have to call st.experimental_rerun() to execute the next run, where you will pass desired_playing_state=False to webrtc_streamer().
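As a rough, untested sketch of that pattern (the session-state keys scanning and last_barcode and the BarcodeProcessor class are hypothetical names for this example, not part of streamlit-webrtc):

```python
import time

import streamlit as st
from streamlit_webrtc import webrtc_streamer

# Hypothetical session-state keys used to carry the desired state across reruns.
if "scanning" not in st.session_state:
    st.session_state.scanning = True
if "last_barcode" not in st.session_state:
    st.session_state.last_barcode = None

stream = webrtc_streamer(
    key="barcode",
    desired_playing_state=st.session_state.scanning,
    # video_processor_factory=BarcodeProcessor,  # your own barcode-detecting processor
)

result_area = st.empty()
if st.session_state.last_barcode:
    result_area.subheader(st.session_state.last_barcode)

if stream.state.playing:
    while True:
        time.sleep(0.10)
        if stream.video_processor is None:
            break  # the stream has stopped
        barcode = stream.video_processor.barcode_val
        if barcode and barcode != st.session_state.last_barcode:
            st.session_state.last_barcode = barcode
            st.session_state.scanning = False
            # Trigger the next run; there, desired_playing_state=False stops the stream.
            st.experimental_rerun()

if st.button("Scan new barcode"):
    st.session_state.scanning = True
    st.session_state.last_barcode = None
    st.experimental_rerun()  # re-run so webrtc_streamer picks up the new state
```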
As you wrote in the comment, the break hides the “RUNNING” indicator because it stops the while-loop; but since the loop has stopped, c1.subheader() is no longer called and its content is no longer updated in the current run. So, if you want the content of the subheader to be updated continuously, you should not use break and should keep the loop running.
For now, using a loop like this is the only way to fetch data from the webrtc worker and process it continuously in the app script, and in that case we have to accept that the “RUNNING” indicator stays visible while the loop runs.
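A minimal sketch of that continuous-update variant for the barcode case (attribute names like barcode_val follow the code above and are your own, not part of the library; stream is assumed to be the object returned by webrtc_streamer()):

```python
import time

import streamlit as st

placeholder = st.empty()
old_barcode = None

if stream.state.playing:
    while True:
        time.sleep(0.10)
        if stream.video_processor is None:
            break  # the stream has stopped
        barcode = stream.video_processor.barcode_val
        if barcode and barcode != old_barcode:
            old_barcode = barcode
            # Writing to the same placeholder overwrites the previous value instead of stacking.
            placeholder.subheader(old_barcode)
```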
I’m new to Python and Streamlit. I’m trying to build a pose estimation project where I have to provide the user with both textual and audio feedback as they perform an exercise. I have achieved the textual feedback part with a VideoProcessor, where I take the camera feed frame by frame and, with the help of a trained MediaPipe model, display the name of the performed pose to the user. Now I also have to output audio, and I can’t figure out how to add audio feedback alongside the frame-by-frame video processing.
Is it even possible with this technology to give audio feedback while processing video frames?
I managed to run the webrtc app at home without any issue. I then tried to show my colleagues at work (using a company PC), but it just started freezing/hanging after I hit Start (note: I am able to choose the webcam preview in the device selector).
After hitting Start I noticed that the ICE connection never reaches “ICE connection state is completed”.
In the official examples, there are currently only samples that transform input audio into output audio; however, overriding the audio inside AudioProcessor.recv() also seems possible, though I haven’t tried it and am not sure.
One drawback of this approach is that it requires the input audio even though it is not used. This is a limitation of the current API, which only supports custom filters, not custom sources.
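As an untested sketch of that idea (I haven’t verified this myself; the ToneAudioProcessor class and the 440 Hz tone are just illustrative), an AudioProcessor that ignores the incoming audio and returns generated audio instead might look roughly like this:

```python
import av
import numpy as np
from streamlit_webrtc import AudioProcessorBase, WebRtcMode, webrtc_streamer


class ToneAudioProcessor(AudioProcessorBase):
    """Untested sketch: discard the incoming audio and return a 440 Hz tone instead."""

    def __init__(self) -> None:
        self._sample_offset = 0

    def recv(self, frame: av.AudioFrame) -> av.AudioFrame:
        # The incoming frame is only used for its timing metadata (sample rate, length, pts).
        sample_rate = frame.sample_rate
        n = frame.samples
        t = (np.arange(n) + self._sample_offset) / sample_rate
        self._sample_offset += n
        tone = (0.2 * np.sin(2 * np.pi * 440 * t) * 32767).astype(np.int16)

        new_frame = av.AudioFrame.from_ndarray(tone.reshape(1, -1), format="s16", layout="mono")
        new_frame.sample_rate = sample_rate
        new_frame.pts = frame.pts
        new_frame.time_base = frame.time_base
        return new_frame


webrtc_streamer(
    key="audio-override",
    mode=WebRtcMode.SENDRECV,
    audio_processor_factory=ToneAudioProcessor,
    media_stream_constraints={"audio": True, "video": False},
)
```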
I think, however, that this error message itself is a subsequent symptom of a network problem, and the fundamental issue may reside in the network environment, as described in this comment. For example, some firewalls drop WebRTC packets; this often happens in office or mobile networks.
Hi whitphx,
Is there any way to get the video from the user, process it, and show it on the app with st.video()? The writer is not working… Can you please help me out with this?
Feel free to check out my Streamlit Audio Recorder custom component, which implements functionality to record audio from the user’s microphone via the Web Media API (browser audio API access, built on top of “audio-react-recorder”). This custom component works in applications that are deployed to the web!
Currently, I am working on directly returning high-quality WAV audio data back to Python/Streamlit. However, I ran into multiple issues with buffering Web Audio blobs (slicing/concatenating audio blobs and converting them to base64 data). If any of you are experienced in dealing with these kinds of audio data types, please don’t hesitate to text me!
I think you can write code like the while-loop in the object detection sample, shown below.
In this loop, st.table (more precisely, the combination of st.empty() and placeholder.table()) is used to show the information retrieved from inside recv() through an instance attribute of the video processor, webrtc_ctx.video_processor.result_queue.
You can use st.markdown in the same way and pass the FPS from recv() to it.
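A rough sketch of that pattern (the detection logic and the payload dict are placeholders; the queue-based hand-off from recv() to the loop is the part that matters):

```python
import queue

import av
import streamlit as st
from streamlit_webrtc import VideoProcessorBase, webrtc_streamer


class VideoProcessor(VideoProcessorBase):
    """Sketch of a processor that passes per-frame results to the app script via a queue."""

    def __init__(self) -> None:
        self.result_queue: "queue.Queue" = queue.Queue()

    def recv(self, frame: av.VideoFrame) -> av.VideoFrame:
        # ... run your detection on the frame here ...
        self.result_queue.put([{"label": "example", "score": 0.9}])  # placeholder payload
        return frame


webrtc_ctx = webrtc_streamer(key="object-detection", video_processor_factory=VideoProcessor)

if webrtc_ctx.state.playing:
    labels_placeholder = st.empty()
    # The loop keeps this run alive so the placeholder can be rewritten continuously
    # instead of stacking new elements.
    while True:
        if webrtc_ctx.video_processor:
            try:
                result = webrtc_ctx.video_processor.result_queue.get(timeout=1.0)
            except queue.Empty:
                result = None
            labels_placeholder.table(result)
        else:
            break
```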
You can record the video from streamlit-webrtc by using the recorder; app_record.py below is a sample.
After recording, the saved video can be served through st.video, I think.
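As a rough sketch based on that sample (the file name and the mp4 format are just for illustration), recording the incoming stream with a MediaRecorder and then serving the file could look like this:

```python
from pathlib import Path

import streamlit as st
from aiortc.contrib.media import MediaRecorder
from streamlit_webrtc import WebRtcMode, webrtc_streamer

RECORD_PATH = Path("input.mp4")  # illustrative file name


def in_recorder_factory() -> MediaRecorder:
    # Records the incoming (user's) stream to a file.
    return MediaRecorder(str(RECORD_PATH))


webrtc_streamer(
    key="record",
    mode=WebRtcMode.SENDRECV,
    in_recorder_factory=in_recorder_factory,
)

# The file is finalized only after the stream is stopped.
if RECORD_PATH.exists():
    with RECORD_PATH.open("rb") as f:
        st.video(f.read())
```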
Or, if you have a different problem, please explain the details (I can’t understand what the “writer” you mentioned is). Pasting the actual code would be best.
@stefanrmmr I think it’s not associated with streamlit-webrtc and should not be posted here.
Why don’t you promote it in a separate thread, as the doc suggests?
Actually I need to get a media file from the user.
So I think it’s not what streamlit-webrtc is for.
You can use st.file_uploader for the users to upload their media files. Then you can process the uploaded video files, save the results, and serve the files via st.video (see the sketch below).
I think there are some existing Streamlit apps, samples, or related threads in this forum that do something like that, so please search for them.
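For example, something along these lines (a plain sketch, not specific to streamlit-webrtc; process_video is a hypothetical function you would implement yourself):

```python
import tempfile

import streamlit as st

uploaded_file = st.file_uploader("Upload a video", type=["mp4", "mov", "avi"])

if uploaded_file is not None:
    # Save the upload to a temporary file so frame-by-frame processing code can read it by path.
    with tempfile.NamedTemporaryFile(delete=False, suffix=".mp4") as tmp:
        tmp.write(uploaded_file.read())
        input_path = tmp.name

    output_path = process_video(input_path)  # hypothetical: process the video and return the result path

    with open(output_path, "rb") as f:
        st.video(f.read())
```

Note that st.video plays the file in the browser, so the processed output generally needs a browser-friendly codec such as H.264.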
And do you have code for detecting social distancing in a video uploaded by the user?