Ghost double text bug

There seems to be a bug that, in a specific situation, causes the same text to be rendered twice at the same time: once normally and once as a gradually fading "ghost" copy. This happened in my chat app DocDocGo, and it took me a while to narrow down the issue.

Here is a simple demo app I have created to illustrate the issue:

The code is available here

I haven’t dug deep, but my guess is that the issue has something to do with how React keys are used to keep track of the identity of elements.

I have figured out a hacky fix for the issue, which is included in the code as a comment.


For clarity, here’s what this ghost double text looks like:

My code (link to full repo in OP) only calls st.write for each message once. Inserting st.empty() before the st.write calls fixes the issue. Obviously it’s a hacky fix, but I’m hoping it can give the Streamlit devs a clue about the source of the issue.

The st.empty() trick doesn’t seem to work in all cases.

I use st.write_stream() and the ghost double text stays visible as long as the write_stream lasts.


I use st.write_stream(), and placing an st.empty() after the st.write_stream() call worked for me.

# Get the assistant's response
with st.chat_message("assistant"):
    msg = st.write_stream(get_response(prompt, current_chat_hist))
    st.empty()  # fixes the ghosting bug...

Actually I think it has to do with the spinner. If I remove the spinner, then I don’t get any ghosting. As soon as I add the spinner, I get ghosting.

with st.spinner('Summarizing chat...'):
    agent.summarize_history()

If I leave the spinner in and add an empty after every message, there is no ghosting:

for msg in chat_history:
    role = msg["role"]
    with st.chat_message(role):
        st.markdown(msg["content"])
        st.empty()

I recently wrote another reply to this type of thing with an explanation of what that ghost is exactly. There’s also another way to use st.empty() to make sure things clear out as intended: Streamlit Spinner and Chat Message odd interaction - #2 by mathcatsand

for i, msg in enumerate(st.session_state.messages):
    if msg["role"] == "assistant":
        with col1:
            with st.chat_message("assistant", avatar="image/bot.png"):
                st.markdown(msg["content"])
                st.empty()

I used st.empty() but it didn’t work (Streamlit version 1.38.0).

@sum Did you try to use st.empty() in the other way I suggested? Depending on your case, you might need more than one st.empty() if you’re writing it outside/after your message. When you follow with st.empty(), you’ll need as many empties as you had stale elements, basically. The alternate way I proposed is a little more robust.

Yup, I tried this way too, but it failed for my scenario…

for i, msg in enumerate(st.session_state.messages):
    if msg["role"] == "assistant":
        with col1:
            with st.chat_message("assistant", avatar="image/bot.png"), st.empty():
                st.markdown(msg["content"])

Can you share more information about what exactly is happening in your case? You might want to create a new thread, share a link to your code, and include a screenshot or video link showing how stale elements are appearing for you.

Thanks @mathcatsand, it’s working now. I made some changes to my code by adding `placeholder = st.chat_message("assistant", avatar="image/bot.png")` followed by `message_placeholder = placeholder.empty()`.

I have a similar problem. I just switched from version 1.32 to 1.39 and the ghosting started appearing. Please, Streamlit team, fix this as soon as possible; this wasn’t an issue before.

My code is something like this:

with st.chat_message('user'):
    st.markdown(prompt)

with st.chat_message('assistant'):
    with st.spinner('Asking GenAI...'):
        ...  # generating response from the LLM

    with st.spinner('Processing response...'):
        st.code(code_from_llm)
        ...  # producing plots/tables based on the code generated by the LLM,
             # using st.plotly_chart, st.dataframe, st.markdown, etc.

Would anyone please suggest how to make a temporary fix before Streamlit team fixes this issue?

Have you tried the solution linked above?

Using st.empty() inside the spinners doesn’t really make any difference.
But as you suggested using it like this:

with st.chat_message('user'):
    st.markdown(prompt)

with st.chat_message('assistant'), st.empty():
    with st.spinner('Asking GenAI...'):
        ...  # generating response from the LLM

    with st.spinner('Processing response...'):
        st.code(code_from_llm)
        ...  # producing plots/tables based on the code generated by the LLM,
             # using st.plotly_chart, st.dataframe, st.markdown, etc.

the ghosting stops. However, every time I send a message, I don’t see the assistant response after the spinners stop spinning. I guess it’s because I am generating those UI elements (such as st.code, st.dataframe, st.plotly_chart) inside the second spinner, `with st.spinner('Processing response...'):`?

Hi, has anyone from the Streamlit development team read this thread? Please let me know if this is a known bug and whether it’s being worked on. Also, what is the actual reason this suddenly started occurring in recent versions of Streamlit? Thank you for the support!