I’m working on a chatbot with LlamaIndex and am using Streamlit to build the app. I’m starting from the basic template in the Streamlit repo (llamaindex-chat-with-streamlit-docs/streamlit_app.py at main · streamlit/llamaindex-chat-with-streamlit-docs · GitHub), and the code that displays the LLM response is shown below:
```python
# If last message is not from assistant, generate a new response
if st.session_state.messages[-1]["role"] != "assistant":
    with st.chat_message("assistant"):
        response_stream = st.session_state.chat_engine.stream_chat(prompt)
        st.write_stream(response_stream.response_gen)
        message = {"role": "assistant", "content": response_stream.response}
        # Add response to message history
        st.session_state.messages.append(message)
```
This might not be the most efficient approach, but I am trying to verify some of the text in the LLM response before displaying it to the user. With this approach I assume I cannot use the `st.write_stream(response_stream.response_gen)` line, since the LLM has to finish its entire response before I can verify the text. So, what I want to know is:
- I can access the response string via `response_stream.response`, but it comes back empty, presumably because the answer is still being generated. How do I get the response once it has finished generating? (A sketch of what I’m imagining is below this list.)
- Is there a way to display a loading spinner while the response is being generated?
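To make the question concrete, here is a minimal sketch of what I’m imagining. `verify_text()` is a placeholder for my own validation step, I’m assuming that exhausting `response_gen` assembles the complete response text, and `st.spinner` is what I had in mind for the loading indicator:

```python
import streamlit as st

# If last message is not from assistant, generate a new response
if st.session_state.messages[-1]["role"] != "assistant":
    with st.chat_message("assistant"):
        # st.spinner shows a loading indicator while this block runs
        with st.spinner("Generating response..."):
            response_stream = st.session_state.chat_engine.stream_chat(prompt)
            # Drain the token generator so the full response text is assembled
            full_text = "".join(response_stream.response_gen)
        checked_text = verify_text(full_text)  # placeholder for my verification logic
        st.write(checked_text)
        # Add the verified response to message history
        st.session_state.messages.append({"role": "assistant", "content": checked_text})
```

If the chat engine’s blocking `chat()` method is the better fit here (since I no longer need token-by-token streaming), I’d be happy to use `st.session_state.chat_engine.chat(prompt).response` instead.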
Thanks