App always crashes when it comes to analysing pictures with EasyOCR

Hello everyone,
I need your help. I am currently going through some pictures that I fetch from URLs and want to analyze them with EasyOCR, but every time the model starts analyzing, the app crashes. This is what I get:


Can someone please help me?

Unfortunately this is not enough information to help.
I suspect that the app is crashing because it is hitting memory limits,
but I could be wrong.

import os
import shutil
import tempfile

import cv2
import easyocr
import requests
import streamlit as st
from PIL import Image

@st.cache
def load_model():
    return easyocr.Reader(['de'], gpu=False)

def get_context_R(url_pages):
    reader = load_model()
    texts_detected = {}

    display = st.empty()
    for url in url_pages:
        page = str(url).split('_page_')[-1].split('.jpg')[0]
        text = "Analysing context of page: {page} / {number_pictures}".format(
            page=page, number_pictures=str(len(url_pages)))
        display.metric("Currently", text)

        # Download the image into a temporary file
        response = requests.get(url, stream=True)
        response.raw.decode_content = True
        fp = tempfile.NamedTemporaryFile(delete=False, suffix='.jpg')
        shutil.copyfileobj(response.raw, fp)
        fp.seek(0)

        # Crop to the top two thirds and overwrite the temp file
        image_full = Image.open(fp)
        width, height = image_full.size
        cropped = image_full.crop((0, 0, width, height * 2 // 3))
        fp.seek(0)
        fp.truncate()
        cropped.save(fp, 'JPEG', quality=95)
        fp.close()

        # Run OCR once and reuse the result
        img = cv2.imread(fp.name)
        list_text = reader.readtext(img, detail=0)
        if 'KNALLER' in list_text or 'AKTION' in list_text:
            texts_detected[page] = 'KNALLER AKTION'
        elif int(page) == 0:
            texts_detected[page] = ('Deine Auswahl - Auch beim Preis '
                                    + list_text[2] + ' ' + list_text[3])
        else:
            texts_detected[page] = ' '.join(list_text[0:4])
        os.unlink(fp.name)
    load_model.clear()
    return texts_detected

Maybe this part of my script, which takes a list of URLs, helps.
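If it helps, one way to cut down on temp-file churn is to keep the download in memory and hand EasyOCR a NumPy array directly (readtext accepts arrays as well as file paths). A minimal sketch with the same top-two-thirds crop; fetch_cropped and crop_top_two_thirds are illustrative names, not from the original script:

```python
import io

import numpy as np
import requests
from PIL import Image

def crop_top_two_thirds(img):
    # Keep only the top two thirds of the page, as in the original crop
    w, h = img.size
    return img.crop((0, 0, w, h * 2 // 3))

def fetch_cropped(url):
    # Download fully into memory instead of writing a temp file
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    img = Image.open(io.BytesIO(resp.content))
    # EasyOCR's readtext() also accepts a numpy array
    return np.array(crop_top_two_thirds(img))
```

`reader.readtext(fetch_cropped(url), detail=0)` would then skip the cv2.imread and os.unlink steps entirely.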

Which Streamlit version do you use? It seems not to be the latest. With the latest version I would use this:

@st.cache_resource
def load_model():
    return easyocr.Reader(['de'], gpu=False)

streamlit==1.21.0

Hi @loeerc

As @Franky1 pointed out, I also think that the app may be reaching the memory limit. Have you tried running it locally to see the memory usage?

Also, have you tried running with fewer URLs as input to the app?

Best regards,
Chanin
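To try the smaller-input idea, the URL list could be fed to the app in small batches; `chunked` below is a hypothetical helper, not part of the original script:

```python
def chunked(items, size):
    # Yield successive slices of at most `size` items each
    for i in range(0, len(items), size):
        yield items[i:i + size]

# e.g. run get_context_R on 5 URLs at a time instead of all at once
```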

Thanks for your answers @dataprofessor & @Franky1 ,

I will check whether it runs fine locally and how much memory it consumes. Thanks for the suggestion.
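For the local memory check, the standard-library resource module can report the peak resident set size without any extra dependencies (Unix only; note that ru_maxrss is KiB on Linux but bytes on macOS). A small sketch, not from the thread:

```python
import resource
import sys

def peak_rss_mb():
    # Peak resident set size of this process so far
    rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    if sys.platform == 'darwin':
        rss /= 1024  # macOS reports bytes, Linux reports KiB
    return rss / 1024

print(f"peak RSS: {peak_rss_mb():.1f} MB")
```

Calling it once before and once after `get_context_R` would show how much the OCR pass adds.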


I think EasyOCR has some very heavy dependencies, e.g. torch.
I quickly ran a docker container locally with:

pip install easyocr

and this:

import easyocr
model = easyocr.Reader(['de'], gpu=False)

These are the resources consumed by the container:

  • 350MB RAM
  • 7.5GB Disk

Yes, I think that's the point. Even if I just take one URL, the app still crashes.

This topic was automatically closed 180 days after the last reply. New replies are no longer allowed.