Import errors in project

Hi, I am facing issues importing libraries. I have listed all the required modules in my requirements.txt file, but I still keep running into import errors.
I can't share a link to my source code due to privacy reasons, but this is my requirements file:
streamlit
#pymongo
#boto3
matplotlib
#pyyaml

# Base ----------------------------------------

matplotlib
numpy
opencv-python
Pillow
PyYAML
requests
scipy
torch
torchvision
tqdm
protobuf

# Logging -------------------------------------

tensorboard

# Plotting ------------------------------------

pandas
seaborn

# Additional Packages -------------------------

streamlit
pymongo
boto3
matplotlib
pyyaml

and I keep getting this error:
(error screenshot)
I even checked "Manage app", and it shows the dependencies were installed successfully.


I am also able to run the same code locally without any import issues; this is specifically showing up in the deployed app. Can someone help me solve this issue?

Is this issue occurring in your local environment or in the hosted environment?

Hi @Anannya, welcome to the community :wave:

It looks like the command you’re running to invoke a subprocess uses a different Python interpreter/executable than the one running your app.

Instead of doing:

command = f"python yolov7/detect1.py --weights \"{weights_path}\" --conf {confidence_threshold} --source \"{image_path}\""

Do this:

import sys

command = f"{sys.executable} yolov7/detect1.py --weights \"{weights_path}\" --conf {confidence_threshold} --source \"{image_path}\""
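If it helps, here's a fuller sketch of that call. The weights path, confidence threshold, and image path are placeholder values; building the command as an argument list also sidesteps the manual quoting in the f-string version:

```python
import subprocess
import sys

def build_detect_command(weights_path, confidence_threshold, image_path):
    """Build the detect1.py invocation as an argument list.

    Passing a list (rather than one f-string) avoids shell-quoting issues,
    and sys.executable guarantees the subprocess runs under the same
    interpreter -- and therefore sees the same installed packages -- as
    the Streamlit app itself.
    """
    return [
        sys.executable, "yolov7/detect1.py",
        "--weights", weights_path,
        "--conf", str(confidence_threshold),
        "--source", image_path,
    ]

# Hypothetical values for illustration -- substitute your own.
command = build_detect_command("yolov7/best.pt", 0.25, "uploads/input.jpg")
# result = subprocess.run(command, capture_output=True, text=True)
# print(result.stdout)
```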

Once you do that, also make sure to create a packages.txt file in the root of your repo containing an entry for libgl1 – otherwise you'll run into an ImportError: libGL.so.1:
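For reference, that packages.txt file would contain just the one line:

```text
libgl1
```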

After those two changes, calling the object detection script from a subprocess should work.

Happy Streamlit-ing! :balloon:


Thank you! Do you know of any way I can run a training job for yolov7 via Streamlit? I'm new to MLOps and would like to know if there are ways to automate this.

Good question :bulb: I wouldn’t recommend running a training job on Community Cloud due to its resource limitations in terms of CPU, RAM, and no current GPU support:

Your app would run out of memory and crash while loading the weights into memory. Community Cloud is best suited to showcase a trained model via inference (ideally via an API call to an external inference endpoint unless it’s a tiny, distilled model optimized for CPU and low memory usage).

You could connect to an external cloud training provider like Snowflake, AWS, or GCP; train your model there; set up an inference endpoint; make API calls from your Streamlit app on Community Cloud; and display the returned predictions in the app.
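A minimal sketch of that pattern, assuming a hypothetical inference endpoint and response shape (the URL, the octet-stream upload, and the `{"detections": [...]}` JSON layout are all placeholders – your provider's API will differ):

```python
import json
from urllib import request

# Hypothetical endpoint -- substitute the inference URL your training
# provider (SageMaker, Vertex AI, etc.) exposes after deployment.
ENDPOINT_URL = "https://example.com/v1/detect"

def format_detections(payload):
    """Turn an assumed {"detections": [{"label", "confidence"}]} response
    into display strings for st.write()."""
    return [f"{d['label']}: {d['confidence']:.2f}"
            for d in payload.get("detections", [])]

def run_inference(image_bytes):
    """POST the image bytes to the external endpoint and parse the JSON reply."""
    req = request.Request(
        ENDPOINT_URL,
        data=image_bytes,
        headers={"Content-Type": "application/octet-stream"},
    )
    with request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read())

# In the Streamlit app you would then do, roughly:
#   uploaded = st.file_uploader("Image", type=["jpg", "png"])
#   if uploaded:
#       for line in format_detections(run_inference(uploaded.getvalue())):
#           st.write(line)
```

This keeps the heavy lifting (training and GPU inference) off Community Cloud; the app only uploads an image and renders the returned predictions.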

Yes! I was looking into those options. Is there a way to trigger the training via Streamlit, though? I don't want to train using Streamlit because of the resource limitations; I wanted to trigger the training using API calls. Do you know how this is possible? As far as I have researched, free GPU resources like Kaggle and Google Colab don't support this type of API call.

This topic was automatically closed 2 days after the last reply. New replies are no longer allowed.