LookupError on NLTK stopwords while deploying to cloud

Hi, I am facing a LookupError while deploying my app. It happens when I use stopwords from NLTK. I have tried all the solutions posted in previous discussions but am still unable to solve it. Please help me with this.

public app url : https://toxiccommentsclassifier.streamlit.app/
github repo : https://github.com/Kushaagra-exe/ToxicCommentsMultiLabelClassification (Toxic Multi Label Classification for Text using NLP and Logistic Regression)

LookupError: This app has encountered an error. The original error message is redacted to prevent data leaks. Full error details have been recorded in the logs (if you're on Streamlit Cloud, click on 'Manage app' in the lower right of your app).
Traceback:
File "/home/adminuser/venv/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/exec_code.py", line 85, in exec_func_with_error_handling
    result = func()
File "/home/adminuser/venv/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 576, in code_to_exec
    exec(code, module.__dict__)
File "/mount/src/toxiccommentsmultilabelclassification/Webapp.py", line 32, in <module>
    stopwords = set(stopwords.words('english'))
File "/home/adminuser/venv/lib/python3.10/site-packages/nltk/corpus/util.py", line 121, in __getattr__
    self.__load()
File "/home/adminuser/venv/lib/python3.10/site-packages/nltk/corpus/util.py", line 86, in __load
    raise e
File "/home/adminuser/venv/lib/python3.10/site-packages/nltk/corpus/util.py", line 81, in __load
    root = nltk.data.find(f"{self.subdir}/{self.__name}")
File "/home/adminuser/venv/lib/python3.10/site-packages/nltk/data.py", line 583, in find
    raise LookupError(resource_not_found)

Logs:

During handling of the above exception, another exception occurred:

────────────────────── Traceback (most recent call last) ───────────────────────

/home/adminuser/venv/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/exec_code.py:85 in exec_func_with_error_handling

/home/adminuser/venv/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py:576 in code_to_exec

/mount/src/toxiccommentsmultilabelclassification/Webapp.py:32 in <module>

   29
   30 LR_pipeline = load_model()
   31 stemmer = SnowballStemmer('english')
❱  32 stopwords = set(stopwords.words('english'))
   33
   34 @st.cache_data
   35 def remove_stopwords(text):

/home/adminuser/venv/lib/python3.10/site-packages/nltk/corpus/util.py:121 in __getattr__

  118         if attr == "__bases__":
  119             raise AttributeError("LazyCorpusLoader object has no attri
  120
❱ 121         self.__load()
  122         # This looks circular, but its not, since __load() changes our
  123         # __class__ to something new:
  124         return getattr(self, attr)

/home/adminuser/venv/lib/python3.10/site-packages/nltk/corpus/util.py:86 in __load

   83                     try:
   84                         root = nltk.data.find(f"{self.subdir}/{zip_name}")
   85                     except LookupError:
❱  86                         raise e
   87
   88         # Load the corpus.
   89         corpus = self.__reader_cls(root, *self.__args, **self.__kwargs

/home/adminuser/venv/lib/python3.10/site-packages/nltk/corpus/util.py:81 in __load

   78                         raise e
   79         else:
   80             try:
❱  81                 root = nltk.data.find(f"{self.subdir}/{self.__name}")
   82             except LookupError as e:
   83                 try:
   84                     root = nltk.data.find(f"{self.subdir}/{zip_name}")

/home/adminuser/venv/lib/python3.10/site-packages/nltk/data.py:583 in find

  580     msg += "\n  Searched in:" + "".join("\n    - %r" % d for d in pat
  581     sep = "*" * 70
  582     resource_not_found = f"\n{sep}\n{msg}\n{sep}\n"
❱ 583     raise LookupError(resource_not_found)
  584
  585
  586 def retrieve(resource_url, filename=None, verbose=True):

────────────────────────────────────────────────────────────────────────────────

LookupError:

**********************************************************************
  Resource stopwords not found.
  Please use the NLTK Downloader to obtain the resource:

  >>> import nltk
  >>> nltk.download('stopwords')

  For more information see: https://www.nltk.org/data.html

  Attempted to load corpora/stopwords

  Searched in:
    - './resources/nltk_data_dir/'
    - '/home/appuser/nltk_data'
    - '/home/adminuser/venv/nltk_data'
    - '/home/adminuser/venv/share/nltk_data'
    - '/home/adminuser/venv/lib/nltk_data'
    - '/usr/share/nltk_data'
    - '/usr/local/share/nltk_data'
    - '/usr/lib/nltk_data'
    - '/usr/local/lib/nltk_data'
**********************************************************************
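From the message, the corpus just isn't present on the Cloud machine, so following the downloader hint, I was thinking of something along these lines at the top of Webapp.py (untested sketch; the helper names `nltk_data_dir` and `ensure_stopwords` are mine, not from the repo) — download the corpus into a local directory once and register that directory on NLTK's search path:

```python
import os


def nltk_data_dir(base="."):
    # Local directory to hold NLTK data; created if it doesn't exist.
    # "nltk_data" is just an example name for illustration.
    path = os.path.join(base, "nltk_data")
    os.makedirs(path, exist_ok=True)
    return path


def ensure_stopwords():
    # Download the stopwords corpus only if NLTK can't already find it,
    # so reruns of the script don't re-download.
    import nltk

    path = nltk_data_dir()
    if path not in nltk.data.path:
        nltk.data.path.append(path)
    try:
        nltk.data.find("corpora/stopwords")
    except LookupError:
        nltk.download("stopwords", download_dir=path)
```

Then `ensure_stopwords()` would be called before the line that fails, `stopwords = set(stopwords.words('english'))`. (Separately, that line rebinds the name `stopwords` from the imported module to a set, which I suspect could cause confusing errors if the line ever runs twice — renaming the variable, e.g. to `stopword_set`, might be safer.) Would this approach work on Streamlit Cloud?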