Reading a vectorstore on Google Cloud into Streamlit running in a Conda virtual environment

  1. My application is still in testing, running in a local environment
  2. My application has not been deployed yet
  3. I do not have a GitHub repository (including a requirements file)
  4. I do not have an error; I just want to know how to read my data from a Google bucket in code.
  5. Streamlit version 1.30; Python version 3.9

Description of my post: the application is a RAG API, via LangChain, using gpt-4. I have 11,255 one-page texts; the text has been split and embedded, and the vectorstore is serialized to Google Cloud as a Python pickle file. The vectorstore consists of two files, index.faiss and index.pkl (pickle). I followed [Connect Streamlit to Google Cloud Storage - Streamlit Docs](https://connect streamlit to google cloud), except for the following steps:

  1. I don't know where the .streamlit/secrets.toml file is.
  2. Copy your app secrets to the cloud: I run Streamlit in a conda virtual environment for evaluation and don't have the Streamlit Community version.

From my Streamlit .py file I would like to load the vectorstore as follows: vectorstore = FAISS.load_local(vstore_name, embeddings),
where vstore_name should point to the URL of the bucket on Google Cloud.
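One caveat worth noting: FAISS.load_local expects a local folder path, not a bucket URL, so a common pattern is to download the two index files from the bucket to a temporary directory first and load from there. Below is a minimal sketch of that idea; the bucket name and prefix are placeholders for your own setup, and it assumes the google-cloud-storage package and credentials are available in the environment:

```python
import os
import tempfile

# Assumptions: replace with your actual bucket and folder names.
BUCKET_NAME = "my-vectorstore-bucket"
PREFIX = "vectorstore"

# The two files FAISS.load_local expects to find in the folder.
VSTORE_FILES = ("index.faiss", "index.pkl")

def gcs_paths(prefix, files=VSTORE_FILES):
    """Blob names of the vectorstore files under a bucket prefix."""
    return [f"{prefix}/{name}" for name in files]

def download_vectorstore(bucket_name=BUCKET_NAME, prefix=PREFIX):
    """Download index.faiss and index.pkl to a temp dir; return its path.

    Imports are kept inside the function so the sketch can be read
    without google-cloud-storage installed.
    """
    from google.cloud import storage  # pip install google-cloud-storage

    client = storage.Client()  # picks up credentials from the environment
    bucket = client.bucket(bucket_name)
    local_dir = tempfile.mkdtemp()
    for blob_name, fname in zip(gcs_paths(prefix), VSTORE_FILES):
        bucket.blob(blob_name).download_to_filename(
            os.path.join(local_dir, fname)
        )
    return local_dir

# Then load exactly as in your snippet:
# local_dir = download_vectorstore()
# vectorstore = FAISS.load_local(local_dir, embeddings)
# (Newer LangChain versions may also require
#  allow_dangerous_deserialization=True, since index.pkl is a pickle.)
```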

I spent 3-4 hours on trying to get this working. Please help.

Hey @My_Coyne,

For your secrets file, you can create a folder named .streamlit in the same directory as your app's .py file, and then create a file called secrets.toml inside it. Follow the formatting example here to add your Google Cloud Storage credentials to your secrets file.
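For reference, the secrets file is plain TOML; a sketch along the lines of the docs example looks like this, with each field copied from your service-account JSON key (all values here are placeholders):

```toml
# .streamlit/secrets.toml
[gcp_service_account]
type = "service_account"
project_id = "your-project-id"
private_key_id = "your-private-key-id"
private_key = "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n"
client_email = "your-service-account@your-project-id.iam.gserviceaccount.com"
client_id = "your-client-id"
auth_uri = "https://accounts.google.com/o/oauth2/auth"
token_uri = "https://oauth2.googleapis.com/token"
```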

For the second question, you don't have to deploy your app to Streamlit Community Cloud; you can just run it locally if that works for you. If you do want to deploy your app to Streamlit Community Cloud, you would just copy the contents of your secrets.toml file into the secrets section for your deployed app (check out the doc here for full instructions).
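To tie the two together: in the app you would read the `[gcp_service_account]` table from st.secrets and build a storage client from it. A hedged sketch, assuming the secrets file above and the google-auth/google-cloud-storage packages (the helper names here are my own, not a Streamlit API):

```python
# Fields that a service-account info dict is generally expected to carry.
REQUIRED_KEYS = ("type", "project_id", "private_key", "client_email", "token_uri")

def missing_keys(secrets):
    """Return the service-account fields absent from a secrets mapping.

    Handy for a quick sanity check before handing the mapping to
    google-auth, which raises a less friendly error on missing fields.
    """
    return [k for k in REQUIRED_KEYS if k not in secrets]

def gcs_client_from_secrets(secrets):
    """Build a GCS client from st.secrets["gcp_service_account"].

    `secrets` is any mapping with the service-account fields; in the app
    you would pass st.secrets["gcp_service_account"]. Imports are inside
    the function so the sketch reads without the packages installed.
    """
    from google.oauth2 import service_account
    from google.cloud import storage

    creds = service_account.Credentials.from_service_account_info(dict(secrets))
    return storage.Client(credentials=creds)

# In your Streamlit app:
# import streamlit as st
# client = gcs_client_from_secrets(st.secrets["gcp_service_account"])
# bucket = client.bucket("my-vectorstore-bucket")  # placeholder name
```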