Disable st.chat_input during conversation

Hello,

I am using st.chat_input to simulate a conversation between a user and an LLM. Things are working fine, but the user can spam the chat input. I couldn’t find a way to use the disabled argument together with session_state to lock the chat input while the response is being generated.
Any idea how to do this?

Here is what I have:

import time
import streamlit as st

def ask(question):
    # logic
    time.sleep(10)
    return "Hello this is a response", ["doc1", "doc2"], [0.81, 0.45]

def write_and_save_assistant_response(response, matching_docs, scores) -> None:
    with st.chat_message("assistant"):
        st.markdown(response)
    st.session_state.messages.append(
        {
            "role": "assistant",
            "content": response,
            "sources": matching_docs,
            "scores": scores,
        }
    )

if "messages" not in st.session_state:
    st.session_state.messages = []

# Display chat messages from history on app rerun
for message in st.session_state.messages:
    with st.chat_message(message["role"]):
        st.markdown(message["content"])

# React to user input
question = st.chat_input("Ask a question")
if question:
    # Display user message in chat message container
    st.chat_message("user").markdown(question)
    # Add user message to chat history
    st.session_state.messages.append({"role": "user", "content": question})

    response, matching_docs, scores = ask(question)

    # Display assistant response in chat message container
    write_and_save_assistant_response(response, matching_docs, scores)

I think I found a hacky way of doing this. I find it kind of ugly but it seems to do what I want.

Since all the messages are stored in session state, we can rerun the whole page after getting the response. During the lapse of time between the user sending an input and the end of the computation, I create a new chat_input with the same label but a different key and disabled set to True, so it’s drawn over the original one but is not clickable.

import time
import streamlit as st

def ask(question):
    # logic
    time.sleep(10)
    return "Hello this is a response", ["doc1", "doc2"], [0.81, 0.45]

def save_assistant_response(response, matching_docs, scores) -> None:
    st.session_state.messages.append(
        {
            "role": "assistant",
            "content": response,
            "sources": matching_docs,
            "scores": scores,
        }
    )

if "messages" not in st.session_state:
    st.session_state.messages = []

# Display chat messages from history on app rerun
for message in st.session_state.messages:
    with st.chat_message(message["role"]):
        st.markdown(message["content"])

# React to user input
question = st.chat_input("Ask a question", key="real_chat_input")
if question:
    st.chat_input("Ask a question", key="disabled_chat_input", disabled=True)
    # Display user message in chat message container
    st.chat_message("user").markdown(question)
    # Add user message to chat history
    st.session_state.messages.append({"role": "user", "content": question})

    response, matching_docs, scores = ask(question)

    # Display assistant response in chat message container
    save_assistant_response(response, matching_docs, scores)
    st.experimental_rerun()

I’ve changed write_and_save_assistant_response to save_assistant_response: since the page is rerun, there is no need to display the response, because it’s saved in messages and will be displayed just after.

HUGE WARNING:
If you have a sidebar with, for example, a slider and the user changes it, it will break everything. Wrapping the sidebar in a form helps a bit, but if the form is submitted while the response is being computed, things break again.
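
For reference, the wrapping I mean is roughly this (a minimal sketch; the slider and the form key are just examples, adapt them to whatever your sidebar holds):

import streamlit as st

with st.sidebar:
    # Group the sidebar widgets in a form so changing a widget does not
    # trigger a rerun until the submit button is pressed
    with st.form("sidebar_settings"):
        top_k = st.slider("Number of documents", min_value=1, max_value=10, value=3)
        st.form_submit_button("Apply")

The form only reruns the page when “Apply” is pressed, so dragging the slider mid-conversation doesn’t interrupt anything; but as said above, submitting it while a response is being computed still causes trouble.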

Thanks for the tip about re-running the page! It works fine; just the syntax has changed a bit:

st.rerun()

I agree it’s a bit of a hacky approach; I wonder if anyone has found a better solution/workaround? I’ve tried to replace the normal chat input with the disabled dummy one in the callback. That way it should work without the need to rerun. It almost works :sweat_smile: But for some reason, only once.
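
Roughly, the direction was to flip a flag from the chat input’s on_submit callback and feed it back into disabled, something like the sketch below (simplified and reconstructed, not my actual code; the locked flag and lock_input name are made up here):

import time
import streamlit as st

def ask(question):
    time.sleep(10)  # stand-in for the real LLM call
    return "Hello this is a response"

def lock_input():
    # Runs before the rerun triggered by the submission, so the input
    # is already drawn as disabled on the next run
    st.session_state.locked = True

if "messages" not in st.session_state:
    st.session_state.messages = []
if "locked" not in st.session_state:
    st.session_state.locked = False

for message in st.session_state.messages:
    with st.chat_message(message["role"]):
        st.markdown(message["content"])

question = st.chat_input(
    "Ask a question",
    key="real_chat_input",
    disabled=st.session_state.locked,
    on_submit=lock_input,
)

if question:
    st.chat_message("user").markdown(question)
    st.session_state.messages.append({"role": "user", "content": question})
    response = ask(question)
    st.chat_message("assistant").markdown(response)
    st.session_state.messages.append({"role": "assistant", "content": response})
    st.session_state.locked = False
    # The input was already drawn disabled on this run, so without a rerun
    # here it stays greyed out -- which might be the "works only once" part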

I’d appreciate any working alternatives!

Care to share what you tried? @ton77v

Maybe I can find a fix so it runs multiple times. Because if it only worked once in a callback, I think that’s “normal”, and you would need to set up an endless loop of callbacks (maybe I’m wrong).

But I would be curious to play around with what you managed to create.

I’ll try to find that code and will update if I find anything. But since that was the first time I tried the feature, and Streamlit in general, there was a lot of code I just tried once and threw away =)

st.rerun()

It seems something changed in version 1.28.

st.rerun() works fine for this use case with 1.27.1, while with 1.28 it causes an endless loop, sending the same message over and over again.

Did you try streamlit==1.28.1? It fixes a similar issue, if not the same one.

I am running into the same issue. I think this is clearly something that Streamlit should offer out of the box, especially considering that many people are starting to build LLM chat interfaces on top of Streamlit, and for the vast majority of use cases you want to disable the chat input while the assistant is generating.

Especially with the new Assistants API, which throws runtime errors when you try to add messages to a thread while it is running.
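
For now, the closest I’ve come is driving disabled from a flag in session_state and rerunning around the slow call, roughly like this (just a sketch, not heavily tested; the generating flag name is arbitrary):

import time
import streamlit as st

def ask(question):
    time.sleep(10)  # stand-in for the real assistant call
    return "Hello this is a response"

if "messages" not in st.session_state:
    st.session_state.messages = []
if "generating" not in st.session_state:
    st.session_state.generating = False

for message in st.session_state.messages:
    with st.chat_message(message["role"]):
        st.markdown(message["content"])

# The input is greyed out for as long as a response is being generated
question = st.chat_input("Ask a question", disabled=st.session_state.generating)

if question:
    st.session_state.messages.append({"role": "user", "content": question})
    st.session_state.generating = True
    st.rerun()  # redraw immediately so the input shows as disabled during the slow call

if st.session_state.generating:
    last_question = st.session_state.messages[-1]["content"]
    response = ask(last_question)
    st.session_state.messages.append({"role": "assistant", "content": response})
    st.session_state.generating = False
    st.rerun()  # redraw with the answer added and the input enabled again

It still feels like something the widget should handle by itself, though.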