- My application is still in testing, running in a local environment
- My application has not been deployed yet
- I do not have a GitHub repository (including a requirements file)
- I do not have an error; I just want to know how, in code, to read my data from a Google Cloud Storage bucket.
- Streamlit version 1.30; Python version 3.9
Description of my post: the application is a RAG API built with LangChain, using gpt-4. I have 11,255 one-page texts; the text has been split and embedded, and the vectorstore is serialized to Google Cloud Storage as Python pickle files. The vectorstore consists of two files, index.faiss and index.pkl (pickle). I followed [Connect Streamlit to Google Cloud Storage - Streamlit Docs], except for the following steps:
- I don’t know where the .streamlit/secrets.toml file is.
- "Copy your app secrets to the cloud": I run Streamlit in a conda virtual environment for evaluation and do not deploy to Streamlit Community Cloud
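For reference on the two points above: `.streamlit/secrets.toml` is not created for you. You make a `.streamlit` folder in the same directory as your app script (or use `~/.streamlit/secrets.toml` for a machine-wide file) and create `secrets.toml` inside it. A minimal sketch with placeholder values copied from a Google service-account JSON key file (the section name follows the Streamlit GCS tutorial; older versions of that tutorial used a `[gcp_service_account]` section instead, so match whichever version of the docs you are following):

```toml
# .streamlit/secrets.toml — lives next to your Streamlit app script
[connections.gcs]
type = "service_account"
project_id = "your-project-id"                # placeholder
private_key_id = "your-key-id"                # placeholder
private_key = "-----BEGIN PRIVATE KEY-----\n...placeholder...\n-----END PRIVATE KEY-----\n"
client_email = "your-sa@your-project-id.iam.gserviceaccount.com"  # placeholder
token_uri = "https://oauth2.googleapis.com/token"
```

Since you only run locally, the "copy your app secrets to the cloud" step can be skipped entirely; it only applies when deploying to Streamlit Community Cloud.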
From my Streamlit .py file I would like to load the vectorstore using pickle as follows: `vectorstore = FAISS.load_local(vstore_name, embeddings)`, where `vstore_name` should point to the URL of the bucket on Google Cloud.
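As far as I know, `FAISS.load_local()` only accepts a local directory path, not a `gs://` URL, so one workaround is to download `index.faiss` and `index.pkl` to a temporary directory first and point `load_local` at that. A sketch using the `google-cloud-storage` client, where the bucket name and prefix are placeholder assumptions you would replace with your own:

```python
import os
import tempfile


def download_faiss_index(bucket_name, prefix="", dest_dir=None):
    """Download index.faiss and index.pkl from a GCS bucket to a local
    directory, since FAISS.load_local() needs a local path, not a gs:// URL.

    bucket_name and prefix are placeholders; adjust to your bucket layout.
    """
    # Lazy import so this module can be inspected without the library installed.
    from google.cloud import storage

    dest_dir = dest_dir or tempfile.mkdtemp()
    # Client() picks up credentials from GOOGLE_APPLICATION_CREDENTIALS;
    # you can instead build it from Streamlit secrets with
    # storage.Client.from_service_account_info(dict(st.secrets["connections"]["gcs"]))
    client = storage.Client()
    bucket = client.bucket(bucket_name)
    for fname in ("index.faiss", "index.pkl"):
        blob_name = f"{prefix}/{fname}" if prefix else fname
        bucket.blob(blob_name).download_to_filename(os.path.join(dest_dir, fname))
    return dest_dir


# Usage sketch (bucket name and prefix are assumptions):
# local_dir = download_faiss_index("my-bucket", "vectorstore")
# vectorstore = FAISS.load_local(local_dir, embeddings)
```

Note that depending on your LangChain version, `FAISS.load_local` may also require `allow_dangerous_deserialization=True`, because `index.pkl` is loaded with pickle; only enable that for files you created yourself.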
I have spent 3-4 hours trying to get this working. Please help.