Error running Streamlit-LLM

Hello,

I tried to follow the instructions here to implement LlamaIndex for a chatbot, but I got the error below.
I looked for which part might be failing when loading the data folder but found nothing.
Has anyone run into the same thing? Thanks!

```
RetryError: RetryError[<Future at 0x123cc8ee0 state=finished raised RateLimitError>]

Traceback:
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 552, in _run_script
    exec(code, module.__dict__)
  File "/Users/is/Desktop/Desktop/Project - LLMStreamlit/DropletLlama/Llamadroplet.py", line 26, in <module>
    index = load_data()
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/streamlit/runtime/caching/cache_utils.py", line 211, in wrapper
    return cached_func(*args, **kwargs)
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/streamlit/runtime/caching/cache_utils.py", line 242, in __call__
    return self._get_or_create_cached_value(args, kwargs)
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/streamlit/runtime/caching/cache_utils.py", line 266, in _get_or_create_cached_value
    return self._handle_cache_miss(cache, value_key, func_args, func_kwargs)
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/streamlit/runtime/caching/cache_utils.py", line 320, in _handle_cache_miss
    computed_value = self._info.func(*func_args, **func_kwargs)
  File "/Users/is/Desktop/Desktop/Project - LLMStreamlit/DropletLlama/Llamadroplet.py", line 23, in load_data
    index = VectorStoreIndex.from_documents(docs, service_context=service_context)
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/llama_index/indices/base.py", line 102, in from_documents
    return cls(
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/llama_index/indices/vector_store/base.py", line 46, in __init__
    super().__init__(
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/llama_index/indices/base.py", line 71, in __init__
    index_struct = self.build_index_from_nodes(nodes)
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/llama_index/indices/vector_store/base.py", line 265, in build_index_from_nodes
    return self._build_index_from_nodes(nodes)
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/llama_index/indices/vector_store/base.py", line 253, in _build_index_from_nodes
    self._add_nodes_to_index(
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/llama_index/indices/vector_store/base.py", line 213, in _add_nodes_to_index
    nodes = self._get_node_with_embedding(nodes, show_progress)
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/llama_index/indices/vector_store/base.py", line 111, in _get_node_with_embedding
    ) = self._service_context.embed_model.get_queued_text_embeddings(show_progress)
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/llama_index/embeddings/base.py", line 223, in get_queued_text_embeddings
    embeddings = self._get_text_embeddings(cur_batch_texts)
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/llama_index/embeddings/openai.py", line 314, in _get_text_embeddings
    return get_embeddings(
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/tenacity/__init__.py", line 289, in wrapped_f
    return self(f, *args, **kw)
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/tenacity/__init__.py", line 379, in __call__
    do = self.iter(retry_state=retry_state)
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/tenacity/__init__.py", line 326, in iter
    raise retry_exc from fut.exception()
```

Hi @xiao, and welcome to our community! :raised_hands:

It appears that you’ve encountered a RateLimitError while using the llama_index package in your Streamlit app.

If you are using one of OpenAI’s APIs, you might be hitting the rate limit for the number of requests you can make within a certain time period.

Could you please check if this is the case?
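In the meantime, one thing you could try is shrinking the embedding batch size so each request sends fewer tokens. Here's a rough sketch (not the tutorial's exact code) using the legacy `ServiceContext` API that your traceback shows:

```python
from llama_index import ServiceContext, VectorStoreIndex
from llama_index.embeddings import OpenAIEmbedding

# Smaller batches send fewer tokens per embedding request, which
# makes it easier to stay under OpenAI's per-minute rate limits.
# (embed_batch_size defaults to 10.)
embed_model = OpenAIEmbedding(embed_batch_size=1)
service_context = ServiceContext.from_defaults(embed_model=embed_model)

# `docs` is whatever your load_data() reads from the data folder.
index = VectorStoreIndex.from_documents(docs, service_context=service_context)
```

Indexing will be slower, but each request is small enough that a tight rate limit is less likely to trip.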

Thanks,
Charly

Hi @Charly_Wargnier ,

Thank you for the prompt reply.
The thing is, I just copied the code from here and downloaded the data folder.
I didn’t change any rate limit or anything from OpenAI.
I followed the procedure: I created the token, put it in the secrets.toml file, and ran the app. :smiley:
Or is there a setting I need to configure to avoid the RateLimitError?
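For reference, my setup is roughly this (key name as in the tutorial; the actual key is redacted):

```python
# .streamlit/secrets.toml contains a single line like:
#   openai_key = "sk-..."
import streamlit as st
import openai

openai.api_key = st.secrets.openai_key
```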

To give some context: after I run the code, it gets stuck here before throwing the error I mentioned above:

Looking forward to your reply!

Best,
Xiao

Can you check your rate limit in your OpenAI dashboard?
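If the dashboard looks fine, you could also isolate the problem by calling the embeddings endpoint directly, outside Streamlit and llama_index. A quick sketch, assuming the pre-1.0 openai client that this llama_index version uses:

```python
import openai

openai.api_key = "sk-..."  # paste the same key your app uses

try:
    # One tiny request: if even this is rate limited,
    # the issue is the account's quota, not the app code.
    openai.Embedding.create(
        model="text-embedding-ada-002",
        input=["hello world"],
    )
    print("Embedding call succeeded; key and quota look fine.")
except openai.error.RateLimitError as err:
    print("Rate limited:", err)
```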