Hi everyone!
I’m trying to implement a video classification application that uses 3 files [labels, trained model and input video]. I’m hardcoding the file paths of the labels.txt file and the trained model, but I’m letting the user select the input video with the file_uploader() component. The problem is that cv2.VideoCapture() requires a filename with an extension to read a local video file, whereas file_uploader() gives me a BytesIO stream (if I’m not wrong) of the selected file.
I read through a couple of threads on this forum about a similar issue and found one helpful approach: read the uploaded file, use tempfile to write it to a temporary file, and pass that file to cv2.VideoCapture().
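The tempfile approach mentioned above can be sketched roughly like this (a minimal sketch; `uploaded_file` is a stand-in name for the object returned by `st.file_uploader()`, and the resulting path is what you would hand to `cv2.VideoCapture()`):

```python
import io
import tempfile

def save_to_tempfile(uploaded_file, suffix=".mp4"):
    """Persist an in-memory upload to disk so it has a real filesystem path."""
    tfile = tempfile.NamedTemporaryFile(suffix=suffix, delete=False)
    tfile.write(uploaded_file.read())  # copy the BytesIO contents to disk
    tfile.close()
    return tfile.name  # pass this path to cv2.VideoCapture(path)
```

Note `delete=False`: the file must outlive the `NamedTemporaryFile` object so OpenCV can open it by name afterwards.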
Now the problem is that I need to extract a certain number of frames from the input video, pre-process them, create a blob of those frames (using cv2.dnn.blobFromImages()) and then pass this blob to the trained model to get a classification. I’m struggling with the part where I extract frames from the video stream, and unfortunately this is the core of my application.
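For context, the blob step itself is mostly a stack-and-transpose: `cv2.dnn.blobFromImages()` takes N preprocessed H×W×C frames and returns one N×C×H×W float32 blob. A NumPy-only sketch of that same layout transformation (frame preprocessing and the model call are out of scope here; `scale` mirrors the `scalefactor` argument):

```python
import numpy as np

def frames_to_blob(frames, scale=1.0 / 255):
    """Stack H x W x C frames into one N x C x H x W float32 blob,
    the same layout cv2.dnn.blobFromImages() produces."""
    arr = np.stack(frames).astype(np.float32) * scale  # (N, H, W, C)
    return arr.transpose(0, 3, 1, 2)                   # (N, C, H, W)
```

In the real app you would call `cv2.dnn.blobFromImages()` directly and pass its result to the network with `net.setInput(blob)`.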
I’m attaching the Python code that I’m trying to convert into a web app here.
Can someone please help me implement this? I’d really appreciate it.
Hi @Deleora, welcome to the Streamlit community!
Here’s an example I put together that:
- Uses `st.file_uploader()`
- Saves the uploaded video to disk so that you can use `cv2.VideoCapture()`
- Extracts a certain number of frames from the input video and processes (displays) them:
```python
import streamlit as st
import cv2
from PIL import Image

uploaded_video = st.file_uploader("Choose video", type=["mp4", "mov"])
frame_skip = 300  # display every 300 frames

if uploaded_video is not None:  # run only when user uploads video
    vid = uploaded_video.name
    with open(vid, mode='wb') as f:
        f.write(uploaded_video.read())  # save video to disk

    st.markdown(f"""
    ### Files
    - {vid}
    """,
    unsafe_allow_html=True)  # display file name

    vidcap = cv2.VideoCapture(vid)  # load video from disk
    cur_frame = 0
    success = True

    while success:
        success, frame = vidcap.read()  # get next frame from video
        # guard on success so we never hand a None frame to PIL
        if success and cur_frame % frame_skip == 0:  # only analyze every n=300 frames
            print('frame: {}'.format(cur_frame))
            pil_img = Image.fromarray(frame)  # convert opencv frame (numpy array) into PIL Image
            st.image(pil_img)
        cur_frame += 1
```
Hope this helps you use OpenCV to extract frames from videos.
Happy Streamlit-ing!
Snehan
Hello @snehankekre and thanks for the welcome!!
And a big thanks for the snippet! I’ve successfully implemented the core of my application.
One more thing: I ran your code and it extracts frames from the video just fine, but the displayed frames come out bluish, as if they have a blue tint on them.
Any idea why?
Forgot to mention earlier that for images coming from libraries like OpenCV, you should set the `channels` parameter in `st.image()` to `'BGR'` instead.
Does changing the line to `st.image(pil_img, channels='BGR')` help fix the image color?
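If you’d rather fix the pixels themselves, the equivalent is to convert BGR to RGB before building the PIL image. Reversing the last axis with NumPy does the same thing as `cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)` for a standard 3-channel frame (a minimal sketch; `frame` is assumed to be an OpenCV BGR array):

```python
import numpy as np

def bgr_to_rgb(frame):
    """Swap the B and R channels of an H x W x 3 OpenCV frame."""
    return frame[..., ::-1]
```

Then `Image.fromarray(bgr_to_rgb(frame))` displays with correct colors and no `channels` argument is needed.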
Best,
Snehan
Yes, it worked!!
Thanks a lot for all the help, I really appreciate it!