Hi,
I’m trying to build a LangChain LLM Streamlit app. Here’s the result:
Here’s the code:
```python
# Generate a new response if the last message is not from the assistant
if st.session_state.messages[-1]["role"] != "assistant":
    with st.chat_message("assistant"):
        with st.spinner("Thinking..."):
            response = ask_question(qa, prompt)
            print(response.content)
            st.markdown(response.content)
    message = {"role": "assistant", "content": response.content}
    st.session_state.messages.append(message)
```
Is there a way to remove the think part and keep just the final response? I mean keeping only:
> I don’t know the answer to that question as the provided context does not contain information about J.K. Rowling.
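One approach I’m considering, assuming the model wraps its reasoning in `<think>...</think>` tags (as reasoning models such as DeepSeek-R1 do), is to strip that block with a regex before rendering. This is just a sketch; `strip_think` is a hypothetical helper, not part of LangChain or Streamlit:

```python
import re

def strip_think(text: str) -> str:
    """Remove any <think>...</think> block and return only the final answer."""
    # re.DOTALL lets '.' match newlines, so multi-line reasoning is removed too;
    # the non-greedy '.*?' stops at the first closing tag
    cleaned = re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL)
    return cleaned.strip()
```

In the app that would become `st.markdown(strip_think(response.content))`, and the cleaned text could also be what gets appended to `st.session_state.messages`. But is there a built-in way to do this?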
Thanks