Streamlit Chatbot trained on own dataset no longer working

Hi all,

Apologies in advance if there is something obvious here as I am relatively new to coding. I had created an AI chatbot using the Streamlit documentation and I trained it on some Markdown files. It was all working really well until suddenly it crashed and I haven’t been able to get it working again.

To clarify, I haven't deployed the app yet, and I'm currently on Python 3.11.5 and Streamlit 1.28.2.

Anything anyone can identify to help would be massively appreciated! Thank you :)

Please see my code below:

import streamlit as st
from openai import OpenAI
from PIL import Image
from llama_index import VectorStoreIndex, ServiceContext, Document
from llama_index.llms import OpenAI
from llama_index import SimpleDirectoryReader
import os

# Set OpenAI API key
os.environ["OPENAI_API_KEY"] = st.secrets["OPENAI_API_KEY"]
client = OpenAI(api_key="sk-...")

# Load data and create index
@st.cache_resource(show_spinner=False)
def load_data():
    with st.spinner(text="Loading and indexing the ConstGuide knowledge base - hang tight!"):
        reader = SimpleDirectoryReader(input_dir="/Users/joeposnett/Desktop/streamlit_data", recursive=True)
        docs = reader.load_data()
        service_context = ServiceContext.from_defaults(llm=OpenAI(model="gpt-3.5-turbo", embed_model="local", temperature=0.5, system_prompt="You are an expert on the Streamlit Python library and your job is to answer technical questions. Assume that all questions are related to the Streamlit Python library. Do not hallucinate features."))
        index = VectorStoreIndex.from_documents(docs, service_context=service_context)
        return index

index = load_data()

# Setting up a session state (chat history) variable and selecting the GPT model to use
if "openai_model" not in st.session_state:
    GPT_MODEL = "gpt-3.5-turbo"
    st.session_state["openai_model"] = GPT_MODEL

with st.chat_message(name="assistant", avatar="\U0001F477"):
    st.write("My name is Site-based Sam and I will be your AI construction helper")

# Initialising the chat history
if "messages" not in st.session_state:
    st.session_state.messages = []

# Displaying chat messages from history on app re-run
for message in st.session_state.messages:
    role = message["role"]
    avatar = "\U0001F477" if role == "assistant" else None
    with st.chat_message(role, avatar=avatar):
        st.markdown(message["content"])

# Reacting to user input
if prompt := st.chat_input("How can I help?"):
    # Displaying user message in chat message container
    with st.chat_message("user"):
        st.markdown(prompt)
    # Adding user message to chat history (session state variable)
    st.session_state.messages.append({"role": "user", "content": prompt})

    # Creating a chat message container for the assistant
    with st.chat_message("assistant", avatar="\U0001F477"):
        message_placeholder = st.empty()
        full_response = ""
        # Call the OpenAI API and pass the model and conversation history
        for response in client.chat.completions.create(
            model=st.session_state["openai_model"],
            messages=[
                {"role": m["role"], "content": m["content"]}
                for m in st.session_state.messages
            ],
            stream=True,
        ):
            # Append each streamed piece of text to the running response
            response_text = response.choices[0].message.content
            full_response += response_text
            message_placeholder.markdown(full_response + "| ")

        message_placeholder.markdown(full_response)
        # Add the assistant response to the messages in the session state
        st.session_state.messages.append({"role": "assistant", "content": full_response})

Hi @Joe.Posnett,

Thanks for sharing your question! Please update your post to include a link to your GitHub repo and to format your code (check out #3 in this post for instructions).

Hi @Caroline

Thank you so much for getting back to me and apologies for the formatting issue.

I hadn't actually used GitHub before, but I think I worked out how to set it up and include the link: GitHub - Joe-Posnett/construct_guide: Construction Education Chatbot

Hope that works!

Joe

It looks like your code hasn't been added to that repo, unfortunately.

Oh dear…

I have just checked the following link and it should be OK now.

Thank you!

Thanks! I see it now. Can you share the error message that you’re seeing?

Great!

  1. The problem shown in the terminal is "client" not defined [Ln 57, Col 21]. I looked at the API documentation and couldn't work out what changes to make.

  2. The error which I get when I try to run the server is:

FileNotFoundError: No secrets files found. Valid paths for a secrets.toml file are: /Users/joeposnett/.streamlit/secrets.toml, /Users/joeposnett/.streamlit/secrets.toml

Traceback:

File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 534, in _run_script
    exec(code, module.__dict__)File "/Users/joeposnett/construction_chatbot.py", line 10, in <module>
    os.environ["OPENAI_API_KEY"] = st.secrets["OPENAI_API_KEY"]
                                   ~~~~~~~~~~^^^^^^^^^^^^^^^^^^File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/streamlit/runtime/secrets.py", line 305, in __getitem__
    value = self._parse(True)[key]
            ^^^^^^^^^^^^^^^^^File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/streamlit/runtime/secrets.py", line 214, in _parse
    raise FileNotFoundError(err_msg)

FYI the secrets.toml file is in a .streamlit folder which is in the same folder as the app I am creating. I stored my key as os.environ["OPENAI_API_KEY"] = "".
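
In case it helps to narrow this down, a quick (purely hypothetical) check is to run something like the following from the folder you launch the app from, and confirm that a file named exactly secrets.toml shows up:

    from pathlib import Path

    # List whatever sits in the .streamlit folder next to the app.
    # The secrets file must be named exactly "secrets.toml" for Streamlit to pick it up.
    secrets_dir = Path(".streamlit")
    if secrets_dir.is_dir():
        for p in secrets_dir.iterdir():
            print(p.name)
    else:
        print("No .streamlit folder found in", Path.cwd())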

Hey @Joe.Posnett,

Sorry for the delayed response. For the first error, I think you’re missing the following in your app:

client = OpenAI(api_key=st.secrets.OPENAI_API_KEY)

(Check out this more complete example here)

For the second error, have you set up your secrets file following the format outlined here?
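
For reference, the format in question is just a plain TOML key/value pair. Assuming the key name used in the app above, the secrets file would contain a single line and the app reads it back via st.secrets - a minimal sketch:

    # .streamlit/secrets.toml (next to the app script) would contain a line such as:
    #
    #     OPENAI_API_KEY = "sk-..."
    #
    # which the app can then read back with:
    import streamlit as st

    api_key = st.secrets["OPENAI_API_KEY"]  # raises FileNotFoundError if no secrets.toml is found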

Hi Caroline - my turn to apologise for the delayed response. I was away for a while, but I am back and coding again 🙂

I introduced the line highlighted above into the app and double-checked my secrets file setup against the documentation, but for some reason it is still coming up with an error. I also double-checked my key to make sure it was correct, which it was.

I have attached a few screenshots so that you can see exactly how I have structured everything, in case there is something obvious I am missing.

Thanks again - you have no idea how excited I am to have it working again!

Joe



Hey @Joe.Posnett,

It looks like the name of the secrets file is misspelled – it should be secrets.toml

Oh dear, sorry about that - well spotted.

I have solved the next couple of errors that came up after that, but I am stuck on this line, which used to work.

AttributeError: 'function' object has no attribute 'completions'

Traceback:

File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 534, in _run_script
    exec(code, module.__dict__)File "/Users/joeposnett/So confused/Construction Education/Construct_GPT.py", line 57, in <module>
    for response in client.chat.completions.create(
                    ^^^^^^^^^^^^^^^^^^^^^^^

Also - I appreciate you have already helped me a lot with this, so please don't feel like you have to coach me every step of the way; I am conscious that you have invested a lot of time in this already!

Hi Caroline,

Apologies for asking again, but I wondered if you might have any ideas on the above?

I seem to get the following "AttributeError: 'function' object has no attribute 'completions'" in relation to the API call on line 57.

I have checked it against the latest documentation and have ensured that I am using a suitable version of Python (3.11.5) and that the latest OpenAI version is installed (1.6.1), but I haven't worked it out.

If you have any ideas on what the issue may be, it would honestly make my day, as it is the only feature of my main app that I haven't managed to unlock yet.

Thanks!
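
One possible reading of that AttributeError - an assumption on my part, not something confirmed in the thread - is that the script imports OpenAI twice under the same name (once from openai and once from llama_index.llms), so the second import shadows the first and client ends up being the llama_index LLM wrapper, whose chat is a method rather than the openai client's chat.completions namespace. Aliasing the two imports keeps them distinct, roughly like this:

    import streamlit as st

    # Keep the two OpenAI names separate so the client used for chat.completions
    # is the one from the openai package, not the llama_index LLM wrapper
    from openai import OpenAI as OpenAIClient
    from llama_index.llms import OpenAI as LlamaOpenAI

    client = OpenAIClient(api_key=st.secrets["OPENAI_API_KEY"])  # used for client.chat.completions.create(...)
    llm = LlamaOpenAI(model="gpt-3.5-turbo", temperature=0.5)    # used inside ServiceContext.from_defaults(llm=...)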

This topic was automatically closed 180 days after the last reply. New replies are no longer allowed.