Actually I need to get a media file from the user. And do you have code for detecting social distancing in a video from the user?
Actually I need to get a media file from the user.
So I think that's not what streamlit-webrtc is for.
You can use st.file_uploader to let users upload their media files. Then you can process the uploaded video files, save the results, and serve them via st.video.
I think there are some existing Streamlit apps, samples, or related threads in this forum that do something like that, so please search for them.
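To illustrate that flow, here is a minimal sketch. The save_upload helper and the processing step are hypothetical placeholders; only st.file_uploader and st.video are actual Streamlit APIs.

```python
import tempfile

def save_upload(data: bytes, suffix: str = ".mp4") -> str:
    """Persist uploaded bytes to disk so a frame-by-frame pipeline can read them."""
    with tempfile.NamedTemporaryFile(delete=False, suffix=suffix) as tmp:
        tmp.write(data)
        return tmp.name

# In the Streamlit script (requires a running Streamlit session):
#
# uploaded = st.file_uploader("Upload a video", type=["mp4", "mov", "avi"])
# if uploaded is not None:
#     input_path = save_upload(uploaded.read())
#     # ... process input_path, write the result to output_path ...
#     with open(output_path, "rb") as f:
#         st.video(f.read())
```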
And do you have code for detecting social distancing for a video from the user
I don’t.
Actually, it's not showing the video when using st.video().
Could you please show example code for this?
Thank you very much.
import cv2

cap = cv2.VideoCapture(0)  # open the default webcam
ret, img = cap.read()      # grab a single frame
cv2.imshow("", img)
cv2.waitKey(0)
cap.release()
how to use the streamlit-webrtc component to show the captured video by cv2 in streamlit page?
Thank you.
A new version of streamlit-webrtc, v0.40.0, has been released!
This release includes a big new feature: class-less callbacks.
To define callbacks, you no longer need to create processor classes. Instead, just pass a function!
import av
from streamlit_webrtc import webrtc_streamer

def video_frame_callback(frame):
    img = frame.to_ndarray(format="bgr24")
    # ... image processing, or whatever you want ...
    return av.VideoFrame.from_ndarray(img, format="bgr24")

webrtc_streamer(key="example", video_frame_callback=video_frame_callback)
See the samples in the tutorial in the README↓
Beautiful!
Found this solution to capture images from webrtc, but you have to use classes (which is okay :D).
However, I could never make the increased resolution work; changing media_stream_constraints={"video": {"width": 1280}} didn't affect the resolution.
import av
import cv2
import streamlit as st
from streamlit_webrtc import VideoProcessorBase, WebRtcMode, webrtc_streamer

def live_camera(play_state):
    c1, c2 = st.columns(2)

    class BarcodeProcessor(VideoProcessorBase):
        def __init__(self) -> None:
            self.pure_img = None  # latest raw frame, set by recv()

        def recv(self, frame: av.VideoFrame) -> av.VideoFrame:
            image = frame.to_ndarray(format="bgr24")
            self.pure_img = image
            return av.VideoFrame.from_ndarray(image, format="bgr24")

    stream = webrtc_streamer(
        key="barcode-detection",
        mode=WebRtcMode.SENDRECV,
        desired_playing_state=play_state,
        video_processor_factory=BarcodeProcessor,
        media_stream_constraints={"video": True, "audio": False},
        async_processing=True,
    )

    if st.button("Take photo"):
        if stream.video_processor and stream.video_processor.pure_img is not None:
            cv2.imwrite("test.jpg", stream.video_processor.pure_img)

possible_barcode = live_camera(True)
Thanks a lot! I think I managed to solve it. The issue for me was that the dimensions I put inside media_stream_constraints have to exactly match the maximum dimensions of the source.
The reason I prefer using your webrtc component even for photos is that I can't change the source for the native st.camera_input(). So thanks a lot for the module :-)!
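For reference, the resolution request is expressed through media_stream_constraints. A sketch; the width/height values below are examples, and per the post above they must exactly match the source camera's maximum resolution to take effect:

```python
# Request a specific capture resolution from the browser.
# The values are examples only; per the post above, they must
# exactly match the source camera's maximum resolution.
MEDIA_STREAM_CONSTRAINTS = {
    "video": {
        "width": {"ideal": 1280},
        "height": {"ideal": 720},
    },
    "audio": False,
}

# Passed to the component like (requires streamlit-webrtc):
# webrtc_streamer(key="example", media_stream_constraints=MEDIA_STREAM_CONSTRAINTS)
```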
Hello, thank you for your ongoing efforts!
Just wanted to ask if there’s a way to access the play/pause button of the video. I’m making an exercise app where when a user starts the video, mediapipe pose estimation model runs and counts the reps and sets as the user performs the exercise. As I click the pause button, the video stops but the model continues to run, and accordingly the reps and sets update which is not what I aim for. I want it to stop running that code when I click the pause button and resume running it upon clicking the play button. Any suggestions?
Thank you so much!
As I click the pause button,
Does this mean the STOP button on the WebRTC component?
The following text depends on this assumption.
If you are using a class-based callback, defining a class with recv(self, frame), you can achieve it by initializing the model in __init__() and destroying it in the on_ended(self) callback or __del__(self) on the same class.
from streamlit_webrtc import VideoProcessorBase, webrtc_streamer

class VideoProcessor(VideoProcessorBase):
    def __init__(self) -> None:
        print("Initialize your model here")

    def recv(self, frame):
        # Your image processing code
        return frame

    def on_ended(self):
        print("Destroy your model here")

webrtc_streamer(key="sample", video_processor_factory=VideoProcessor)
This sample app, which uses MediaPipe pose estimation, may also help; see its __init__() and __del__ methods.
Note that this app uses multiprocessing to run the MediaPipe model in another process so that multiple users can access it concurrently, which might be noise for you as it is not directly related to this topic.
Another way, which I think can also be used with the class-less callback (New Component: streamlit-webrtc, a new way to deal with real-time media streams - #129 by whitphx), is to use the on_change callback on the webrtc_streamer component.
(I have not tested it with any real-world examples though. Let me know if this does not work.)
import streamlit as st
from streamlit_webrtc import webrtc_streamer

def on_change():
    if "sample" in st.session_state:
        if st.session_state["sample"].state.playing:
            print("Initialize your model")
        else:
            print("Destroy your model if it is already initialized")

webrtc_streamer(key="sample", on_change=on_change)
@whitphx Is there any way to put the webrtc_streamer on the sidebar? Just like Streamlit's st.sidebar.camera_input('My webcam', key='cam')?
@GabrielLCHeng
See the doc: st.sidebar - Streamlit Docs
with st.sidebar:
    webrtc_streamer(key="sample")
This is exactly what I want! Thank you so much!
Does the session state work inside the callbacks?
Hello guys, when I change "My Device" in webrtc in my application (audio-to-text transcription), I get this problem: "". The problem also occurred when testing on other browsers. Please help!
Streamlit, version 1.10.0
from streamlit_webrtc import webrtc_streamer
import av
def video_frame_callback(frame):
    img = frame.to_ndarray(format="bgr24")
    flipped = img[::-1, :, :]
    return av.VideoFrame.from_ndarray(flipped, format="bgr24")
webrtc_streamer(key="example", video_frame_callback=video_frame_callback)
@Hafsah_MR
No, because the callback runs outside the Streamlit session context.
Even if it looks like it's working in some situations, I cannot guarantee it works perfectly in the expected way.
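Since the callback runs on a worker thread outside the session, a common workaround is to share values through a lock-protected plain object instead of st.session_state. A minimal sketch; the webrtc_streamer wiring in the comment is the assumed usage:

```python
import threading

lock = threading.Lock()
shared = {"frames": 0}  # plain dict shared between the callback thread and the script

def video_frame_callback(frame):
    # Runs on the worker thread: do NOT touch st.session_state here.
    with lock:
        shared["frames"] += 1
    return frame

# In the Streamlit script:
# webrtc_streamer(key="example", video_frame_callback=video_frame_callback)
# with lock:
#     st.write(f"Frames processed: {shared['frames']}")
```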
Dear Whitphx, thanks once again for the amazing library.
My PC struggles a lot with streaming, so the video sometimes becomes very compressed to compensate (I guess). Is it possible to lower the frame rate in order to get less compressed images?
All the best
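Frame rate can also be requested via media_stream_constraints. A sketch; the value 10 is an arbitrary example, and the browser treats "ideal" as a hint rather than a guarantee:

```python
# Ask the browser for a lower capture frame rate so the encoder has
# more bits per frame. "ideal" is a hint; the browser may deviate.
LOW_FPS_CONSTRAINTS = {
    "video": {"frameRate": {"ideal": 10}},
    "audio": False,
}

# Usage (requires streamlit-webrtc):
# webrtc_streamer(key="low-fps", media_stream_constraints=LOW_FPS_CONSTRAINTS)
```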