Hi, I have this code. When I run a Streamlit app with YOLOv8 locally on my Windows machine, it opens the webcam and detects my hands, but when I run it on share.streamlit.io it gives an error and I'm not sure what I should do. This is the error:
[ WARN:2@80012.501] global cap_v4l.cpp:982 open VIDEOIO(V4L2:/dev/video0): can't open camera by index
[ERROR:2@80012.501] global obsensor_uvc_stream_channel.cpp:156 getStreamChannelGroup Camera index out of range
This does not work on any hosted environment; it's the wrong approach:
cv2.VideoCapture(source_webcam)
So what would be the right approach?
Hey, thanks for posting… let me solve your problem.
First, understand this…
When you deploy a Streamlit app, the app runs on the server, and when you call cv2.VideoCapture(), it searches for a camera on the server instead of in the user's browser. When you run it locally, your own machine is the server and it has a camera, so everything works perfectly fine.
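You can verify this with a tiny check (a sketch; camera index 0 is assumed):

import cv2

# On Streamlit Cloud this tries to open /dev/video0 on the *server*,
# and the server machine has no camera attached
cap = cv2.VideoCapture(0)
print(cap.isOpened())  # False on the cloud, True on your Windows machine
cap.release()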
So, what is the solution for this?
You need to use the library streamlit-webrtc.
You can check their website, where a simple implementation is shown, or the video tutorial by PyCon JP, where the presenter shows how to implement it in your code.
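For reference, the minimal usage from their docs looks roughly like this (the key value is arbitrary):

from streamlit_webrtc import webrtc_streamer

# The camera is opened in the user's browser and streamed over WebRTC;
# the server never touches /dev/video0
webrtc_streamer(key="example")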
Thank you.
@ilovetensor thank you so much. I've been doing my research, but I'm still having problems opening the webcam with the streamlit-webrtc library. I tried the code below, but it's still not working on Streamlit Cloud. Help, please:
def play_webcam(conf, model):
    """
    Plays a webcam stream. Detects Objects in real-time using the YOLOv8 object detection model.
    Parameters:
        conf: Confidence of YOLOv8 model.
        model: An instance of the `YOLOv8` class containing the YOLOv8 model.
    Returns:
        None
    Raises:
        None
    """
    source_webcam = settings.WEBCAM_PATH
    is_display_tracker, tracker = display_tracker_options()
    if st.sidebar.button('Detect Objects'):
        try:
            # Change webrtc_stream variable name to my_webrtc_stream
            my_webrtc_stream = streamlit_webrtc.WebRTCStream(
                key="stream",
                media_stream_constraints={
                    "video": {"width": 1280, "height": 720},
                },
            )
            st.webrtc_stream(my_webrtc_stream)
            st_frame = st.empty()
            while my_webrtc_stream.running:
                success, image = my_webrtc_stream.get_frame()
                if success:
                    _display_detected_frames(conf,
                                             model,
                                             st_frame,
                                             image,
                                             is_display_tracker,
                                             tracker,
                                             )
                else:
                    my_webrtc_stream.stop()
                    break
        except Exception as e:
            st.sidebar.error("Error loading video: " + str(e))
To deploy the app to the cloud, you have to add the rtc_configuration parameter to webrtc_streamer(), like this…
webrtc_streamer(
    key="something",
    video_frame_callback=callback,
    rtc_configuration={  # Add this line
        "iceServers": [{"urls": ["stun:stun.l.google.com:19302"]}]
    }
)
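The STUN server is what lets this work across networks: on the cloud, the user's browser and your server sit behind different NATs, and the STUN server helps both peers discover their public addresses so the WebRTC connection can actually be established.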
You can also refer to this article for a clear, basic understanding of streamlit_webrtc.
So, to create a def play_webcam(conf, model) in my code that runs my model over the camera input, I need to change a few lines, right? This is my code: I tried to change the line vid_cap = cv2.VideoCapture(source_webcam), but I haven't had any success. Could you help me? It doesn't show the results of my model, and I was trying my best, I really was. When I run the camera, the Streamlit Cloud terminal gives me:
2023-08-14 03:01:12.146 Thread 'async_media_processor_6': missing ScriptRunContext
I really do need help.
def play_webcam(conf, model):
    """
    Plays a webcam stream. Detects Objects in real-time using the YOLO object detection model.
    Returns:
        None
    Raises:
        None
    """
    st.sidebar.title("Webcam Object Detection")

    def callback(frame):
        img = frame.to_ndarray(format="bgr24")
        is_display_tracker, tracker = display_tracker_options()
        st_frame = st.empty()
        image = _display_detected_frames(conf, model, st_frame, img, is_display_tracker, tracker)
        return av.VideoFrame.from_ndarray(image, format="bgr24")

    webrtc_streamer(
        key="example",
        video_frame_callback=callback,
        rtc_configuration={"iceServers": [{"urls": ["stun:stun.l.google.com:19302"]}]},
        media_stream_constraints={"video": True, "audio": False},
    )
Hey, @Eduardo_Padron, don't worry…
let me do this for you.
global model
global conf

model = something
conf = something_else

def _display_detected_frames(image, is_display_tracking=None, tracker=None):
    global conf
    global model
    image = cv2.resize(image, (720, int(720 * (9 / 16))))
    if is_display_tracking:
        res = model.track(image, conf=conf, persist=True, tracker=tracker)
    else:
        # Predict the objects in the image using the YOLOv8 model
        res = model.predict(image, conf=conf)
    # Return the processed frame to the callback
    res_plotted = res[0].plot()
    return res_plotted

def callback(frame):
    img = frame.to_ndarray(format="bgr24")
    is_display_tracker, tracker = display_tracker_options()
    image = _display_detected_frames(img, is_display_tracker, tracker)
    return av.VideoFrame.from_ndarray(image, format="bgr24")

webrtc_streamer(
    key="example",
    video_frame_callback=callback,
    rtc_configuration={"iceServers": [{"urls": ["stun:stun.l.google.com:19302"]}]},
)
A few fixes I made in your code: you cannot pass anything extra into the callback function, so you need to define your model and confidence as global variables, or get them from another function, as you already do with the display_tracker_options function…
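As a side note, if you would rather avoid globals, video_frame_callback accepts any callable, so a closure that captures conf and model works too (a sketch, assuming the same YOLO model object as above):

def make_callback(conf, model):
    # The inner function closes over conf and model, so no globals are needed
    def callback(frame):
        img = frame.to_ndarray(format="bgr24")
        res = model.predict(img, conf=conf)  # same YOLOv8 call as above
        return av.VideoFrame.from_ndarray(res[0].plot(), format="bgr24")
    return callback

webrtc_streamer(
    key="example",
    video_frame_callback=make_callback(conf, model),
    rtc_configuration={"iceServers": [{"urls": ["stun:stun.l.google.com:19302"]}]},
)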
Try this; I hope it works for you.
Hi @ilovetensor, I'm really trying, but it seems impossible; I'm getting this error with your fix:
2023-08-18 20:10:05.800 Thread 'async_media_processor_0': missing ScriptRunContext
Here is my full code. Could you check my file helper.py? I'm not sure what happened, but it's not working. I tried the code below and it gives me the output from the model correctly, but it's not showing the frames in the webcam window:
def play_webcam(conf, model):
    """
    Plays a webcam stream. Detects Objects in real-time using the YOLO object detection model.
    Returns:
        None
    Raises:
        None
    """
    st.sidebar.title("Webcam Object Detection")

    webrtc_streamer(
        key="example",
        video_processor_factory=lambda: MyVideoTransformer(conf, model),
        rtc_configuration={"iceServers": [{"urls": ["stun:stun.l.google.com:19302"]}]},
        media_stream_constraints={"video": True, "audio": False},
    )
class MyVideoTransformer(VideoTransformerBase):
    def __init__(self, conf, model):
        self.conf = conf
        self.model = model

    def recv(self, frame):
        image = frame.to_ndarray(format="bgr24")
        processed_image = self._display_detected_frames(image)
        st.image(processed_image, caption='Detected Video', channels="BGR", use_column_width=True)

    def _display_detected_frames(self, image):
        orig_h, orig_w = image.shape[0:2]
        width = 720  # Set the desired width for processing

        # cv2.resize used in a forked thread may cause memory leaks
        input = np.asarray(Image.fromarray(image).resize((width, int(width * orig_h / orig_w))))

        if self.model is not None:
            # Perform object detection using the YOLO model
            res = self.model.predict(input, conf=self.conf)

            # Plot the detected objects on the video frame
            res_plotted = res[0].plot()
            return res_plotted

        return input