Persistent Image Processing Error in Streamlit App Using OpenCV and dlib

Hi everyone,

I’m working on a Streamlit app for detecting facial landmarks using OpenCV and dlib. Despite numerous troubleshooting attempts, I keep encountering the following error:

Unsupported image type, must be 8bit gray or RGB image.

Debugging Information

  1. Are you running your app locally or is it deployed?
     • The app is deployed.
  2. If your app is deployed, is it deployed on Community Cloud or another hosting platform?
     • It is deployed on Streamlit Community Cloud.
  3. Share the link to your app’s public GitHub repository (including a requirements file).
  4. Share the full text of the error message (not a screenshot).
     • Unsupported image type, must be 8bit gray or RGB image.
  5. Share the Streamlit and Python versions.
     • Streamlit version: 1.36.0
     • Python version: 3.11

Project Overview

My app processes video files to extract facial landmarks. Here’s a brief overview of the image processing pipeline:

  1. Upload video file
  2. Extract frames from video
  3. Resize and convert frames to the appropriate format
  4. Use dlib to detect faces and extract facial landmarks (a minimal sketch of this step is shown below)
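
For context, here is a minimal, self-contained sketch of step 4 in isolation (the frame path is a placeholder; the predictor file is the standard 68-point model). The detector call is the one that raises the error in my app:

import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

frame = cv2.imread("sample_frame.jpg")          # placeholder: one BGR uint8 frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # dlib accepts 8-bit gray or RGB
faces = detector(gray)                          # the call that raises the error
if len(faces) > 0:
    shape = predictor(gray, faces[0])
    points = np.array([(shape.part(n).x, shape.part(n).y) for n in range(68)])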

Code Snippets

LandmarkExtractor Class in SVMneeds.py

import dlib
import numpy as np
import cv2
import streamlit as st
import matplotlib.pyplot as plt

class LandmarkExtractor:
    def __init__(self):
        self.detector = dlib.get_frontal_face_detector()
        self.predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
    
    def extract_landmarks(self, image):
        try:
            # Convert the BGR frame to 8-bit grayscale for dlib's detector
            gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
            st.write(f"Grayscale image shape: {gray.shape}, dtype: {gray.dtype}")
            if gray.dtype != 'uint8':
                gray = gray.astype('uint8')
                st.write("Converted grayscale image dtype to uint8")
            faces = self.detector(gray)
            st.write(f"Number of faces detected: {len(faces)}")
            if len(faces) == 0:
                return None
            # Only the first detected face is used
            for face in faces:
                landmarks = self.predictor(image=gray, box=face)
                return np.array([(landmarks.part(n).x, landmarks.part(n).y) for n in range(68)])
        except Exception as e:
            st.error(f"Error in extract_landmarks: {e}")
        return None
    
    def visualize_landmarks(self, image, landmarks):
        if landmarks is None:
            print("No landmarks to visualize.")
            return
        plt.figure(figsize=(8, 8))
        plt.imshow(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
        plt.scatter(landmarks[:, 0], landmarks[:, 1], s=20, marker='.', c='c')
        plt.show()

Main Processing Code in pages/2_📃_Test.py

import streamlit as st
import cv2
import tempfile
import os
import pandas as pd
from pathlib import Path
from your_module import DataPersistence, ImagePreprocessor, LandmarkExtractor, ImageSlopeCorrector, FeatureCalculator  # Replace with actual imports

def verify_image(image, stage):
    try:
        st.write(f"{stage} image shape: {image.shape}, dtype: {image.dtype}")
        if len(image.shape) == 2 or image.shape[2] == 1:
            image = cv2.cvtColor(image, cv2.COLOR_GRAY2RGB)
        elif image.shape[2] == 4:
            image = cv2.cvtColor(image, cv2.COLOR_RGBA2RGB)
        elif image.shape[2] != 3:
            raise ValueError(f"Unsupported image shape at {stage}: {image.shape}")
        if image.dtype != 'uint8':
            image = image.astype('uint8')
        return image
    except Exception as e:
        st.error(f"Error verifying image at {stage}: {e}")
        return None

with tab3:  # tab3 is created earlier on this page (not shown in this snippet)
    st.title("Video Feature Extraction")
    video_file = st.file_uploader("Upload a video", type=["mp4", "avi"])
    sheet_name = st.text_input("Enter the sheet name for the Excel output", "VideoAnalysis")
    output_file = str(Path('3. output calculation.xlsx'))
    fps_value = st.number_input("Enter the FPS value for processing", min_value=1, value=1)

    if video_file is not None:
        if st.button("Process Video"):
            with st.spinner('Processing...'):
                try:
                    with tempfile.NamedTemporaryFile(delete=False) as tfile:
                        tfile.write(video_file.read())
                        temp_filename = tfile.name

                    cap = cv2.VideoCapture(temp_filename)
                    data_persistence = DataPersistence(output_file)
                    all_features = []

                    original_fps = cap.get(cv2.CAP_PROP_FPS)
                    # Guard against a zero divisor when fps_value is >= the source FPS
                    frames_to_skip = max(1, int(original_fps / fps_value))  # Adjusting to the user-defined FPS

                    frame_count = 0
                    while cap.isOpened():
                        ret, frame = cap.read()
                        if not ret:
                            break

                        if frame is None:
                            st.error(f"Frame {frame_count} is None.")
                            continue

                        frame = verify_image(frame, "initial")

                        if frame_count % frames_to_skip == 0:
                            preprocessor = ImagePreprocessor(frame)
                            preprocessor.read_and_resize()
                            resize_image = verify_image(preprocessor.resized_image, "resized")

                            extractor = LandmarkExtractor()
                            landmarks = extractor.extract_landmarks(resize_image)

                            if landmarks is None:
                                st.warning(f"No landmarks detected in frame {frame_count}.")
                                continue

                            corrected_image = ImageSlopeCorrector.rotate_image_based_on_landmarks(resize_image, landmarks)
                            corrected_image = verify_image(corrected_image, "corrected")

                            corrected_landmarks = extractor.extract_landmarks(corrected_image)
                            st.write(f"Correcting frame {frame_count}, shape: {corrected_image.shape}, dtype: {corrected_image.dtype}")

                            if corrected_landmarks is None:
                                st.warning(f"No landmarks detected after correction in frame {frame_count}.")
                                continue

                            kalkulasi_fitur = FeatureCalculator()
                            features = kalkulasi_fitur.rumus29(corrected_landmarks)
                            all_features.append(features)

                        frame_count += 1

                    cap.release()

                except Exception as e:
                    st.error(f"An error occurred: {e}")
                finally:
                    if os.path.isfile(temp_filename):
                        os.unlink(temp_filename)

                    new_column_names = {
                        '0': 'F0.1', '1': 'F1.1', '2': 'F2.1', '3': 'F3.1', '4': 'F4.1',
                        '5': 'F5.1', '6': 'F6.1', '7': 'F7.1', '8': 'F10.1', '9': 'F3.2',
                        '10': 'F8.2', '11': 'F9.2', '12': 'F11.2', '13': 'F15.2', '14': 'F27.2',
                        '15': 'F1.3', '16': 'F3.3', '17': 'F8.3', '18': 'F9.3', '19': 'F10.3',
                        '20': 'F15.3', '21': 'F18.3', '22': 'F21.3', '23': 'F23.3', '24': 'F28.3'
                    }

                    if not all_features:  # No features were extracted
                        st.error("No features extracted.")
                    else:
                        try:
                            features_df = pd.DataFrame(all_features)
                            features_df.rename(columns=new_column_names, inplace=True)
                            if not features_df.empty:
                                data_persistence.save_to_excel(features_df, sheet_name=sheet_name)
                                st.success("Features extracted and saved to Excel.")
                            else:
                                st.error("No features extracted.")
                        except Exception as e:
                            st.error(f"An error occurred: {e}")

                st.success("Done!")
                st.write("Features extracted and saved to Excel.")

Troubleshooting Steps Taken

  • Ensured all frames are in RGB format.
  • Verified that images are of type uint8.
  • Added extensive logging to trace the error (a consolidated version of these checks is sketched below).
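
For reference, those checks consolidated into one helper (a sketch, not code from the app; the ascontiguousarray call is included because dlib can also reject arrays that are not C-contiguous):

import cv2
import numpy as np

def to_dlib_ready(image):
    """Return an 8-bit, C-contiguous grayscale copy that dlib's detector accepts."""
    if image.ndim == 3:
        # Drop the colour channels; frames from cv2.VideoCapture are BGR
        image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    if image.dtype != np.uint8:
        image = image.astype(np.uint8)
    return np.ascontiguousarray(image)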

Questions

  1. Has anyone encountered similar issues with image processing in Streamlit?
  2. Are there any known compatibility issues between OpenCV, dlib, and Streamlit?
  3. Any suggestions for alternative approaches or best practices for handling image processing in Streamlit apps?

Thank you for your help!

Unfortunately your exception handling is hiding the actual source of the error, making it harder to debug.
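
For example, extract_landmarks could keep its on-page message but re-raise, so the full traceback still reaches the logs (a minimal sketch against the class shown above):

        except Exception as e:
            st.error(f"Error in extract_landmarks: {e}")
            raise  # re-raise so the original traceback is not swallowed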

Thanks for your suggestion. I’ve removed the exception handling to reveal the full error message. Here is the detailed traceback:

RuntimeError: This app has encountered an error. The original error message is redacted to prevent data leaks. Full error details have been recorded in the logs (if you're on Streamlit Cloud, click on 'Manage app' in the lower right of your app).

Traceback:
File "/home/adminuser/venv/lib/python3.11/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 589, in _run_script
    exec(code, module.__dict__)
File "/mount/src/autism-detection-system/pages/2_📃_Test.py", line 259, in <module>
    landmarks = extractor.extract_landmarks(resize_image)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mount/src/autism-detection-system/SVMneeds.py", line 64, in extract_landmarks
    faces = self.detector(gray)
            ^^^^^^^^^^^^^^^^^^^

Do you have any other insights or suggestions? Thanks!

Try numpy<2. You may need to reboot or redeploy the app.
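
(For context: dlib builds compiled against NumPy 1.x can reject NumPy 2.x arrays with exactly this "Unsupported image type" error, so pinning numpy<2 in requirements.txt and redeploying usually clears it.) If it helps, a quick way to confirm which versions the deployed app is actually importing is to log them from the page:

import dlib
import numpy as np
import streamlit as st

st.write(f"numpy {np.__version__}, dlib {dlib.__version__}")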

Actually, I had already tried that and it didn’t work at the time, but after you suggested it again I gave it another try and now it works. Thanks!

I’m using numpy==1.26.4.

This topic was automatically closed 2 days after the last reply. New replies are no longer allowed.