App displays an error when deployed on Streamlit Cloud but works fine on localhost

The app only displays this error when it is deployed on Streamlit. I initially tried it on localhost and it works fine.

Below is the error:

Traceback (most recent call last):
  File "/home/appuser/venv/lib/python3.8/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 565, in _run_script
    exec(code, module.__dict__)
  File "/app/facial-expressions-recognition-system/app.py", line 77, in <module>
    num_faces = face_detector.detectMultiScale(gray_frame, scaleFactor=1.3, minNeighbors=5)
cv2.error: OpenCV(4.7.0) /io/opencv/modules/objdetect/src/cascadedetect.cpp:1689: error: (-215:Assertion failed) !empty() in function 'detectMultiScale'

The same problem occurs with real-time recognition, which requires switching on the camera. The camera cannot be switched on when running the deployed app, so the frame comes back empty.

Below is the error:

File "/home/appuser/venv/lib/python3.8/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 565, in _run_script
    exec(code, module.__dict__)
  File "/app/facial-expressions-recognition-system/app.py", line 21, in <module>
    frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
cv2.error: OpenCV(4.7.0) /io/opencv/modules/imgproc/src/color.cpp:182: error: (-215:Assertion failed) !_src.empty() in function 'cvtColor'


Hope it is clear and kindly let me know if more information is needed. Much appreciated.

Please help me solve this.

Can you provide minimal sample code we can try — something that replicates the error?

See also the posting guide.

Below is the code for app.py:

import cv2
import streamlit as st
import numpy as np
from keras.models import model_from_json
import tempfile

st.set_page_config(page_title="FYP1",page_icon="😀")
st.title("Facial Expressions Recognition System")

option = st.sidebar.selectbox(
    "Select an option",
    ("Real-time Recognition", "Upload Video")
)
FRAME_WINDOW = st.image([])

if option == 'Real-time Recognition':
    start = st.checkbox('Start')
    while start:
        camera = cv2.VideoCapture(0)
        _, frame = camera.read()
        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        emotion_dict = {0: "Angry", 1: "Disgusted", 2: "Fearful", 3: "Happy", 4: "Neutral", 5: "Sad", 6: "Surprised"}

        # load json and create model
        json_file = open('./model/model.json', 'r')
        loaded_model_json = json_file.read()
        json_file.close()
        emotion_model = model_from_json(loaded_model_json)

        # load weights into new model
        emotion_model.load_weights("./model/model.h5")

        face_detector = cv2.CascadeClassifier('haarcascades/haarcascade_frontalface_default.xml')
        gray_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

        # detect faces available on camera
        num_faces = face_detector.detectMultiScale(gray_frame, scaleFactor=1.3, minNeighbors=5)

        # take each face available on the camera and Preprocess it
        for (x, y, w, h) in num_faces:
            cv2.rectangle(frame, (x, y-50), (x+w, y+h+10), (0, 255, 0), 4)
            roi_gray_frame = gray_frame[y:y + h, x:x + w]
            cropped_img = np.expand_dims(np.expand_dims(cv2.resize(roi_gray_frame, (48, 48)), -1), 0)

            # predict the emotions
            emotion_prediction = emotion_model.predict(cropped_img)
            maxindex = int(np.argmax(emotion_prediction))
            cv2.putText(frame, emotion_dict[maxindex], (x+5, y-20), cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 0, 0), 2, cv2.LINE_AA)
            FRAME_WINDOW.image(frame)
    else:
        st.write("Stopped")
else:
    uploaded_video = st.file_uploader("Please upload a video.")
    if uploaded_video is not None:
        tfile = tempfile.NamedTemporaryFile(delete=False) 
        tfile.write(uploaded_video.read())
  
        camera = cv2.VideoCapture(tfile.name)
        while camera.isOpened():
            _, frame = camera.read()
            frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            emotion_dict = {0: "Angry", 1: "Disgusted", 2: "Fearful", 3: "Happy", 4: "Neutral", 5: "Sad", 6: "Surprised"}

            # load json and create model
            json_file = open('./model/model.json', 'r')
            loaded_model_json = json_file.read()
            json_file.close()
            emotion_model = model_from_json(loaded_model_json)

            # load weights into new model
            emotion_model.load_weights("./model/model.h5")

            face_detector = cv2.CascadeClassifier('haarcascades/haarcascade_frontalface_default.xml')
            gray_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

            # detect faces available on camera
            num_faces = face_detector.detectMultiScale(gray_frame, scaleFactor=1.3, minNeighbors=5)

            # take each face available on the camera and Preprocess it
            for (x, y, w, h) in num_faces:
                cv2.rectangle(frame, (x, y-50), (x+w, y+h+10), (0, 255, 0), 4)
                roi_gray_frame = gray_frame[y:y + h, x:x + w]
                cropped_img = np.expand_dims(np.expand_dims(cv2.resize(roi_gray_frame, (48, 48)), -1), 0)

                # predict the emotions
                emotion_prediction = emotion_model.predict(cropped_img)
                maxindex = int(np.argmax(emotion_prediction))
                cv2.putText(frame, emotion_dict[maxindex], (x+5, y-20), cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 0, 0), 2, cv2.LINE_AA)
                FRAME_WINDOW.image(frame)
                # stframe.image(frame)
    else:
        st.write(" ")

Two things:

First, can you link your repository and/or share your full environment configuration (e.g. requirements.txt and packages.txt)? CV2 requires some binaries to be installed, which go in packages.txt since they are not Python modules.
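For example (the exact package names depend on your code, but these are the ones commonly needed for OpenCV on Streamlit Community Cloud):

```text
# packages.txt - apt dependencies, one per line
libgl1
libglib2.0-0

# requirements.txt - Python dependencies
streamlit
opencv-python-headless
numpy
keras
tensorflow
```

Note that opencv-python-headless avoids the libGL dependency entirely, so swapping it in for opencv-python in requirements.txt is an alternative to adding the apt packages.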

Second (and more fundamentally), I don’t think cv2.VideoCapture will work on Streamlit Community Cloud, as it looks for a local camera — i.e. one attached to the Streamlit Community Cloud server containers — and there isn’t one.

(Last linked thread has alternatives discussed.)


Here is the link for the repository:
https://github.com/hellojr01/facial-expressions-recognition-system

May I know what the alternatives are for enabling access to the video camera?

The alternatives are listed in the thread I linked. If you follow the link to the “Unable to import cv2” thread, it’s just a couple posts down from where I linked.

The github link isn’t working for me. Perhaps it’s private?

Can you try again now?

Thanks. I see there isn’t a packages.txt file, so go ahead and follow the first link I provided to see about adding the right binary there so the underlying (non-python) part will work. You will still have issues with VideoCapture as I mentioned, but it should at least make the package accessible.

Thanks for the suggestion. Will the same solution also work for the uploaded video?

Hey @hellojr01, please don’t tag moderators in your posts. I’ve edited your post to remove the tag. Thanks!
