Memory leak when doing ML inference

import cv2
import supervision as sv

from collections import defaultdict, deque

from inference import InferencePipeline
from inference.core.interfaces.camera.entities import VideoFrame
 
import streamlit as st
from PIL import Image

frame_display = st.empty()

def my_custom_sink(predictions: dict, video_frame: VideoFrame):
    # (only the parts of the code related to the problem are shown;
    # annotated_frame is a BGR frame produced by elided annotation code above)
    annotated_frame_rgb = cv2.cvtColor(annotated_frame, cv2.COLOR_BGR2RGB)
    image_pil = Image.fromarray(annotated_frame_rgb)
    frame_display.image(image_pil, caption="Annotated Frame", use_column_width=True)

pipeline1 = InferencePipeline.init(
    model_id="yolov8n-640",
    video_reference=SOURCE_VIDEO_PATH,  # path to the source video, defined elsewhere in the script
    on_prediction=my_custom_sink,
)

pipeline1.start()
pipeline1.join()

My RAM usage kept rising while the script ran, and it only leveled off once I commented out the frame_display.image() call. Are there any suggestions for avoiding this problem? Thanks in advance.

Hey @Ardhana_Putra! Thanks for posting, and welcome to the community!

Looks like we're already discussing this over in the GitHub issue, so it seems best to keep things there. Here it is for anyone following along.
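
In the meantime, for anyone hitting the same symptom, here is a minimal sketch of a common mitigation, not the confirmed fix from the issue: skip the per-frame PIL conversion (st.image accepts a raw BGR numpy array via channels="BGR") and rate-limit how often the placeholder is redrawn. It assumes VideoFrame exposes the decoded frame as video_frame.image, and RENDER_INTERVAL_S is a hypothetical knob, not a library setting:

import time

import streamlit as st
from inference.core.interfaces.camera.entities import VideoFrame

frame_display = st.empty()
_last_render = 0.0
RENDER_INTERVAL_S = 0.2  # hypothetical cap: redraw at most ~5 times per second

def throttled_sink(predictions: dict, video_frame: VideoFrame):
    # Rate-limit redraws so Streamlit is not asked to re-render on every frame.
    global _last_render
    now = time.monotonic()
    if now - _last_render < RENDER_INTERVAL_S:
        return  # skip this redraw; the pipeline keeps consuming frames
    _last_render = now
    # Pass the BGR array straight to Streamlit: channels="BGR" avoids both
    # the cv2.cvtColor copy and the intermediate PIL Image allocation.
    frame_display.image(video_frame.image, channels="BGR", caption="Annotated Frame")

The same sink shape works with an annotated copy of the frame; the point is to avoid allocating new intermediate images on every prediction and to cap the redraw rate.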
