FileNotFoundError when deploying a Streamlit application on Streamlit Cloud

Summary:
This project runs smoothly on my local computer, but when I tried to deploy the Streamlit app on the Streamlit Cloud platform, it failed with the following error message:

File "/home/appuser/venv/lib/python3.8/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 565, in _run_script
    exec(code, module.__dict__)
File "/app/lipreading/app/streamlitapp.py", line 32, in <module>
    file_path = os.path.join(data_dir, selected_video)
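The root cause of this kind of FileNotFoundError is usually that the current working directory on Streamlit Cloud differs from the one used locally, so a relative path like '../data/s1' resolves somewhere unexpected. A minimal diagnostic sketch (not part of the original app) to see where relative paths actually point:

```python
import os
import pathlib

# The current working directory -- this is what relative paths like
# '../data/s1' are resolved against, and it differs between local runs
# and Streamlit Cloud.
print("cwd:", os.getcwd())

# The directory the script itself lives in -- a stable anchor for paths.
script_dir = pathlib.Path(__file__).parent.resolve()
print("script dir:", script_dir)

# The same relative path resolves against the cwd, not the script location.
relative = (pathlib.Path("..") / "data" / "s1").resolve()
print("'../data/s1' resolves to:", relative)
```

Comparing the printed cwd against the script directory quickly shows whether relative paths will land where you expect.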

Solved Code:

import streamlit as st
import os 
import pathlib
from moviepy.editor import VideoFileClip
import imageio
import tensorflow as tf 
from utils import load_data, num_to_char
from modelutil import load_model

# Set the layout of the Streamlit app as wide 
st.set_page_config(layout='wide')

# Setup the sidebar
with st.sidebar: 
    st.image('https://www.onepointltd.com/wp-content/uploads/2020/03/inno2.png')
    st.markdown("<h1 style='text-align: center; color: white;'>Abstract</h1>", unsafe_allow_html=True) 
    st.info('This project, developed by Amith A G as his MCA final project at KVVS Institute Of Technology, focuses on implementing the LipNet deep learning model for lip-reading and speech recognition. The project aims to demonstrate the capabilities of the LipNet model through a Streamlit application.')

st.markdown("<h1 style='text-align: center; color: white;'>LipNet</h1>", unsafe_allow_html=True) 

# Generating a list of options or videos 
code_dir = pathlib.Path(__file__).parent.resolve()
files_location = code_dir / ".." / "data" / "s1"  
files_location = files_location.resolve()  

# Convert the files_location to a list of files
options = os.listdir(files_location)

selected_video = st.selectbox('Choose video', options)

# Generate two columns 
col1, col2 = st.columns(2)

if options: 

    # Rendering the video 
    with col1: 
        st.info('The video below displays the converted video in mp4 format')
        file_path = str(files_location / selected_video)
        output_path = str(code_dir / 'test_video.mp4')
    
        # Convert the video using moviepy
        video_clip = VideoFileClip(file_path)
        video_clip.write_videofile(output_path, codec='libx264')
    
        # Display the video in the app (use a context manager so the
        # file handle is closed after reading)
        with open(output_path, 'rb') as video:
            video_bytes = video.read()
        st.video(video_bytes)


    with col2: 
        st.info('This is all the machine learning model sees when making a prediction')
        video, annotations = load_data(tf.convert_to_tensor(file_path))
        imageio.mimsave('animation.gif', video, fps=10)
        st.image('animation.gif', width=400) 

        st.info('This is the output of the machine learning model as tokens')
        model = load_model()
        yhat = model.predict(tf.expand_dims(video, axis=0))
        decoder = tf.keras.backend.ctc_decode(yhat, [75], greedy=True)[0][0].numpy()
        st.text(decoder)

        # Convert prediction to text
        st.info('Decode the raw tokens into words')
        converted_prediction = tf.strings.reduce_join(num_to_char(decoder)).numpy().decode('utf-8')
        st.text(converted_prediction)

Result:

Problem Code:

import streamlit as st
import os 
from moviepy.editor import VideoFileClip
import imageio
import tensorflow as tf 
from utils import load_data, num_to_char
from modelutil import load_model

# Set the layout to the streamlit app as wide 
st.set_page_config(layout='wide')

# Setup the sidebar
with st.sidebar: 
    st.image('https://www.onepointltd.com/wp-content/uploads/2020/03/inno2.png')
    st.markdown("<h1 style='text-align: center; color: white;'>Abstract</h1>", unsafe_allow_html=True) 
    st.info('This project, developed by Amith A G as his MCA final project at KVVS Institute Of Technology, focuses on implementing the LipNet deep learning model for lip-reading and speech recognition. The project aims to demonstrate the capabilities of the LipNet model through a Streamlit application.')

st.markdown("<h1 style='text-align: center; color: white;'>LipNet</h1>", unsafe_allow_html=True) 
# Generating a list of options or videos 
options = os.listdir(os.path.join('..', 'data', 's1'))
selected_video = st.selectbox('Choose video', options)

# Generate two columns 
col1, col2 = st.columns(2)

if options: 

    # Rendering the video 
    with col1: 
        st.info('The video below displays the converted video in mp4 format')
        file_path = os.path.join('..', 'data', 's1', selected_video)
        output_path = os.path.join('test_video.mp4')
    
        # Convert the video using moviepy
        video_clip = VideoFileClip(file_path)
        video_clip.write_videofile(output_path, codec='libx264')
    
        # Display the video in the app
        video = open(output_path, 'rb')
        video_bytes = video.read()
        st.video(video_bytes)


    with col2: 
        st.info('This is all the machine learning model sees when making a prediction')
        video, annotations = load_data(tf.convert_to_tensor(file_path))
        imageio.mimsave('animation.gif', video, fps=10)
        st.image('animation.gif', width=400) 

        st.info('This is the output of the machine learning model as tokens')
        model = load_model()
        yhat = model.predict(tf.expand_dims(video, axis=0))
        decoder = tf.keras.backend.ctc_decode(yhat, [75], greedy=True)[0][0].numpy()
        st.text(decoder)

        # Convert prediction to text
        st.info('Decode the raw tokens into words')
        converted_prediction = tf.strings.reduce_join(num_to_char(decoder)).numpy().decode('utf-8')
        st.text(converted_prediction)

Explanation and Expected Result:

The provided code is a Streamlit application that implements the LipNet deep learning model for lip-reading and speech recognition. When executed, the application launches with a wide layout and displays a sidebar containing an image and an introductory paragraph about the project. The main section of the application showcases the LipNet model with a heading and allows users to choose a video from a list of options. Upon selecting a video, the application renders it in the first column as an mp4 video and presents frames and annotations in the second column. The frames are processed by the LipNet model, which predicts output tokens and displays them, along with the converted text prediction. The raw tokens are further decoded into words.
Overall, the application provides a user-friendly interface to explore the lip-reading and speech recognition capabilities of LipNet, offering visual representations and insights into the model’s predictions.

os.walk output:

Current Directory: D:\LipReading
Number of subdirectories: 3
Subdirectories: app, data, models
Number of files: 3
Files: .gitattributes, oswalk.py, requirements.txt

Current Directory: D:\LipReading\app
Number of subdirectories: 0
Subdirectories:
Number of files: 5
Files: animation.gif, modelutil.py, streamlitapp.py, test_video.mp4, utils.py

Current Directory: D:\LipReading\data
Number of subdirectories: 2
Subdirectories: alignments, s1
Number of files: 0
Files:

Current Directory: D:\LipReading\data\alignments
Number of subdirectories: 1
Subdirectories: s1
Number of files: 0
Files:

Current Directory: D:\LipReading\data\alignments\s1
Number of subdirectories: 0
Subdirectories:
Number of files: 1000
Files: bbaf2n.align, bbaf3s.align, bbaf4p.align, bbaf5a.align, bbal6n.align, bbal7s.align, bbal8p.align, bbal9a.align, bbas1s.align, bbas2p.align, bbas3a.align…

Current Directory: D:\LipReading\data\s1
Number of subdirectories: 0
Subdirectories:
Number of files: 1001
Files: bbaf2n.mpg, bbaf3s.mpg, bbaf4p.mpg, bbaf5a.mpg, bbal6n.mpg, bbal7s.mpg…

Current Directory: D:\LipReading\models
Number of subdirectories: 1
Subdirectories: __MACOSX
Number of files: 3
Files: checkpoint, checkpoint.data-00000-of-00001, checkpoint.index

Current Directory: D:\LipReading\models\__MACOSX
Number of subdirectories: 0
Subdirectories:
Number of files: 3
Files: ._checkpoint, ._checkpoint.data-00000-of-00001, ._checkpoint.index

GitHubRepository:https://github.com/Amith-AG/LipReading.git
requirements.txt:
imageio==2.9.0
keras>=2.10.0,<2.13.0
matplotlib==3.7.1
numpy>=1.21.0
moviepy
opencv-python-headless
streamlit==1.22.0
tensorflow==2.12.0
tensorboard==2.12.0

I kindly request assistance from the community, as this project is very important to me. Your support and expertise would be highly valuable in addressing this matter.

File paths must be relative to the root folder of the GitHub repo if you want to run the app on Streamlit Cloud.
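To make that concrete: Streamlit Cloud launches the app from the repository root, so given the layout shown in the os.walk output (app/streamlitapp.py alongside data/s1), two path styles would work. This is a hedged sketch, not the original code:

```python
import pathlib

# Option 1: a path relative to the repo root. This works on Streamlit Cloud,
# but only works locally if you also launch the app from the repo root
# (streamlit run app/streamlitapp.py).
data_dir = pathlib.Path("data") / "s1"

# Option 2 (more robust): anchor on the script's own location, so the path
# is correct no matter which directory the app was launched from.
script_dir = pathlib.Path(__file__).parent.resolve()
data_dir = (script_dir / ".." / "data" / "s1").resolve()
```

Option 2 is what the solved code above uses.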


Thank you very much for the prompt response. I have included the output of os.walk of the local repository in the updated post. I'm new to the Streamlit Cloud platform, so guidance and support would be greatly appreciated.

Always run/test/debug your local Streamlit app from the root folder:

streamlit run app/streamlitapp.py

You will face the same file path issues locally; fix them, and it should then also work on Streamlit Cloud.

Thank you @Franky1 for your assistance. It has truly been helpful and given me some hope. Could you please take a look at the updated code? Unfortunately, I am now encountering a new error. I believe the file path error has been resolved, but I am still unclear about the current error.
While debugging in the local environment, to gain a better understanding, I printed the paths of code_dir, files_location, and output in order to analyze them:

file_path=D:\LipReading\data\s1\bbaf2n.mpg
code_dir=D:\LipReading\app
output=D:\LipReading\app\test_video.mp4
files_location=D:\LipReading\app

Which means the file path is now relative to the root, right?

Solution:
I changed the relative path to an absolute path in order to fix it. The original line, options = os.listdir(os.path.join('..', 'data', 's1')) (which assigns the list of files and directories in the relative path '../data/s1' to the variable options), was replaced with:

code_dir = pathlib.Path(__file__).parent.resolve()
files_location = code_dir / ".." / "data" / "s1"  
files_location = files_location.resolve()

Overall, this code determines the directory of the current script file and then builds an absolute path by appending the relative directory names. The resulting absolute path is stored in the files_location variable.
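The way resolve() collapses the '..' component can be illustrated with a small self-contained sketch that recreates a miniature version of the project layout in a temporary directory (hypothetical paths, not the real project):

```python
import pathlib
import tempfile

# Recreate a miniature version of the project layout in a temp directory
root = pathlib.Path(tempfile.mkdtemp())
(root / "app").mkdir()
(root / "data" / "s1").mkdir(parents=True)

# Pretend the script lives in <root>/app, as streamlitapp.py does
code_dir = (root / "app").resolve()

# Appending '..' and then resolving collapses it into the parent directory,
# yielding an absolute path with no '..' component left
files_location = (code_dir / ".." / "data" / "s1").resolve()

assert files_location == (root / "data" / "s1").resolve()
assert files_location.is_absolute()
```

Because the result no longer depends on the current working directory, the same code behaves identically locally and on Streamlit Cloud.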

Then we need to convert the type to string, because when you use the pathlib module to create or resolve a path, it returns a Path object. Therefore I used the code given below:

file_path = str(files_location / selected_video)
output_path = str(code_dir / 'test_video.mp4')
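For context: newer Python APIs accept os.PathLike objects directly, but some library functions (older moviepy versions, for example) may expect a plain string, so an explicit str() is the safest choice when a library's support is unclear. A tiny sketch of the conversion (filename taken from the data/s1 listing above):

```python
import pathlib

# Build the data directory path relative to a hypothetical app/ folder
files_location = (pathlib.Path("app") / ".." / "data" / "s1").resolve()
selected_video = "bbaf2n.mpg"  # an example filename from data/s1

# The '/' operator joins path components; str() converts the Path
# object to the plain string that string-only APIs expect
file_path = str(files_location / selected_video)

assert isinstance(file_path, str)
assert file_path.endswith("bbaf2n.mpg")
```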

Then I changed the relative paths written in the other Python files to absolute paths as well.
Check out the GitHub link I provided in the post, or the updated code given in the post, for reference.
