Error with GPT-2 model

Hi everyone,
I get an error when I try to deploy an app that uses GPT-2, an NLP model downloaded from Hugging Face. I am not sure of the reason. I guess the model can't be downloaded when the app runs in the cloud?
Here is the message:

```
File "/home/appuser/venv/lib/python3.7/site-packages/streamlit/script_runner.py", line 354, in _run_script
    exec(code, module.__dict__)
File "/app/robot/app.py", line 25, in <module>
    tokenizer = load_tokenizer()
File "/home/appuser/venv/lib/python3.7/site-packages/streamlit/legacy_caching/caching.py", line 574, in wrapped_func
    return get_or_create_cached_value()
File "/home/appuser/venv/lib/python3.7/site-packages/streamlit/legacy_caching/caching.py", line 558, in get_or_create_cached_value
    return_value = func(*args, **kwargs)
File "/app/robot/app.py", line 17, in load_tokenizer
    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
File "/home/appuser/venv/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 1732, in from_pretrained
    user_agent=user_agent,
File "/home/appuser/venv/lib/python3.7/site-packages/transformers/file_utils.py", line 1929, in cached_path
    local_files_only=local_files_only,
File "/home/appuser/venv/lib/python3.7/site-packages/transformers/file_utils.py", line 2178, in get_from_cache
    "Connection error, and we cannot find the requested files in the cached path."
```

Hi @Maxime_tut,

It was likely a network error connecting to the HuggingFace servers. Could you try re-deploying your app? I was able to successfully deploy a fork of your repo:
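Since the failure is transient, one optional safeguard is to wrap the download in a small retry loop so a brief network hiccup doesn't crash the deploy. A minimal sketch — the `retry` helper and its parameters are illustrative, not part of Streamlit or transformers, and the exact exception type raised on a connection failure can vary by transformers version:

```python
import time

def retry(fn, attempts=3, delay=1.0, exceptions=(OSError, ValueError)):
    """Call fn(), retrying up to `attempts` times on the given exceptions.

    Sketch of a generic retry helper for transient network errors, e.g.
    when a Hugging Face download fails to reach the servers. The exception
    tuple to catch depends on your transformers version.
    """
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except exceptions:
            if attempt == attempts:
                raise  # out of attempts, re-raise the last error
            time.sleep(delay * attempt)  # simple linear backoff

# Hypothetical usage inside the app (assumes transformers is installed):
# tokenizer = retry(lambda: GPT2Tokenizer.from_pretrained("gpt2"))
```

This only papers over short outages; if the Hugging Face servers are unreachable for longer, re-deploying later (as above) is still the fix.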

Best, :balloon:
Snehan

Oh yes, you’re right, it works now. Thanks a lot :slight_smile:

1 Like