I’m facing an issue when trying to use my models to predict whether a piece of text is real or fake based on the way it’s written. I believe the problem is that when I’m vectorizing my text, I use a custom tokenizer I defined (my_lemmatization_tokenizer), and this seems to cause an error when I run the Streamlit app. I’d really appreciate any insight into how to fix this.
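From what I’ve read, the issue may be that pickle stores custom functions by module and name rather than by value, so the script that loads the model has to have the same function defined. Here is a minimal stdlib-only sketch of that behavior (the function body here is just a placeholder standing in for my real my_lemmatization_tokenizer; the actual vectorizer is omitted):

```python
import pickle

# Placeholder for my custom tokenizer; in my real script this is passed to
# the vectorizer (e.g. TfidfVectorizer(tokenizer=my_lemmatization_tokenizer))
# before the fitted model is pickled.
def my_lemmatization_tokenizer(text):
    return text.lower().split()

# Pickling an object that references the function stores it by
# module + qualified name, not by value.
blob = pickle.dumps({"tokenizer": my_lemmatization_tokenizer})

# Loading works here only because my_lemmatization_tokenizer exists at
# top level of this same module. In a separate Streamlit script, the load
# raises AttributeError unless the function is defined (or imported)
# under the same name before calling pickle.load.
restored = pickle.loads(blob)
print(restored["tokenizer"]("Fake News Example"))
```

If that is the cause, defining (or importing) my_lemmatization_tokenizer at the top of the Streamlit script before unpickling is, as I understand it, the usual fix.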
I’m new to the field of data science and have had some trouble debugging this. The error I’m facing is below:
The code I have in my script is as follows:
Any help would be appreciated.
Thanks so much