Restriction error using Hugging Face AutoTokenizer for Llama model

I am using AutoTokenizer from Hugging Face for a Llama model. It works perfectly fine locally, but during deployment I get a restriction error.

Code:

from transformers import AutoTokenizer

class LLAMA_EVAL:
    def __init__(self) -> None:
        # Load the Llama-2 tokenizer from the Hugging Face Hub
        self.tokenizer = AutoTokenizer.from_pretrained("hf-internal-testing/Llama-2-7b-hf")

I also tried the 'meta-llama/Llama-2-7b-hf' model, but I still get the restriction error during deployment.
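In case it matters, 'meta-llama/Llama-2-7b-hf' is a gated repository, so my understanding is that the deployed environment also needs a Hugging Face access token when loading the tokenizer. This is only a sketch of how I would pass one; the environment variable name and token are placeholders, not my actual setup:

import os
from transformers import AutoTokenizer

# Placeholder: read a Hugging Face access token from the environment
# (HF_TOKEN here is just an example variable name).
hf_token = os.environ.get("HF_TOKEN")

# Pass the token when loading the gated checkpoint; `token` is the auth
# argument in recent transformers versions (older ones use `use_auth_token`).
tokenizer = AutoTokenizer.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    token=hf_token,
)

Is something like this the right way to handle the restriction during deployment, or is there another step I am missing?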
