Ghost double text bug

There seems to be a bug that, in certain situations, causes the same text to be rendered twice at the same time: once normally and once as gradually fading “ghost” text. This happened in my chat app DocDocGo, and it took me a while to narrow down the issue.

Here is a simple demo app I have created to illustrate the issue:

The code is available here

I haven’t dug deep, but my guess is that the issue may have something to do with how React keys are used to keep track of element identity.

I have figured out a hacky fix for the issue, which is included in the code as a comment.


For clarity, here’s what this ghost double text looks like:

My code (link to the full repo in the OP) only st.writes each message once. Inserting st.empty() before the st.write calls fixes the issue. Obviously it’s a hacky fix, but I’m hoping it can give the Streamlit devs a clue about the source of the problem.

The st.empty() trick doesn’t seem to work in all cases.

I use st.write_stream() and the ghost double text stays visible as long as the write_stream lasts.


I use st.write_stream(), and placing an st.empty() after the st.write_stream() call worked for me.

# Get the assistant's response
with st.chat_message("assistant"):
    msg = st.write_stream(get_response(prompt, current_chat_hist))
    st.empty()  # fixes the ghosting bug...

Actually I think it has to do with the spinner. If I remove the spinner, then I don’t get any ghosting. As soon as I add the spinner, I get ghosting.

with st.spinner('Summarizing chat...'):
    agent.summarize_history()

If I leave the spinner in and add an empty after every message, there is no ghosting:

for msg in chat_history:
    role = msg["role"]
    with st.chat_message(role):
        st.markdown(msg["content"])
        st.empty()

I recently wrote another reply to this type of thing with an explanation of what that ghost is exactly. There’s also another way to use st.empty() to make sure things clear out as intended: Streamlit Spinner and Chat Message odd interaction - #2 by mathcatsand

for i, msg in enumerate(st.session_state.messages):
    if msg["role"] == "assistant":
        with col1:
            with st.chat_message("assistant", avatar="image/bot.png"):
                st.markdown(msg["content"])
                st.empty()

I used st.empty() but it didn’t work (Streamlit version 1.38.0).

@sum Did you try to use st.empty() in the other way I suggested? Depending on your case, you might need more than one st.empty() if you’re writing it outside/after your message. When you follow with st.empty(), you’ll need as many empties as you had stale elements, basically. The alternate way I proposed is a little more robust.

Yup, I tried this way too, but it failed for my scenario…

for i, msg in enumerate(st.session_state.messages):
    if msg["role"] == "assistant":
        with col1:
            with st.chat_message("assistant", avatar="image/bot.png"), st.empty():
                st.markdown(msg["content"])

Can you share more information about what exactly is happening in your case? You might want to create a new thread, share a link to your code, and include a screenshot or video link showing how stale elements are appearing for you.

Thanks @mathcatsand, now it’s working. I made some changes to my code, creating the placeholder first:

placeholder = st.chat_message("assistant", avatar="image/bot.png")
message_placeholder = placeholder.empty()

I have a similar problem. I just switched from version 1.32 to 1.39 and the ghosting started. Please, Streamlit team, fix this as soon as possible; this wasn’t an issue before.

My code is something like this:

with st.chat_message('user'):
    st.markdown(prompt)

with st.chat_message('assistant'):
    with st.spinner('Asking GenAI...'):
        ...  # generate the response from the LLM

    with st.spinner('Processing response...'):
        st.code(code_from_llm)
        ...  # produce plots/tables based on the code generated by the LLM,
             # using st.plotly_chart, st.dataframe, st.markdown, etc.

Would anyone please suggest how to make a temporary fix before Streamlit team fixes this issue?

Have you tried the solution linked above?

Using st.empty() inside the spinners doesn’t really make any difference.
But as you suggested using it like this:

with st.chat_message('user'):
    st.markdown(prompt)

with st.chat_message('assistant'), st.empty():
    with st.spinner('Asking GenAI...'):
        ...  # generate the response from the LLM

    with st.spinner('Processing response...'):
        st.code(code_from_llm)
        ...  # produce plots/tables based on the code generated by the LLM,
             # using st.plotly_chart, st.dataframe, st.markdown, etc.

the ghosting stops. However, every time I send a message, I don’t see the assistant response after the spinners stop spinning. I guess it’s because I am generating those UI elements (such as st.code, st.dataframe, st.plotly_chart) inside the second spinner block, with st.spinner('Processing response...')?

Hi, has anyone from the Streamlit development team read this thread? Please let me know if this is a known bug and whether it’s being worked on. Also, what is the actual reason this suddenly started occurring in recent versions of Streamlit? Thank you for the support!

Hi,
has anyone been able to resolve this issue? I updated to Streamlit 1.41.1 and the ghosting is still happening. If any of the Streamlit developers read this, please let me know if there is a plan to eliminate the ghosting in an upcoming release.

Thank you for the support!


@Ond_ej The stale (ghost) elements in and of themselves are a part of the rerun design and not considered a bug.

Can you share an executable script to show how you get different stale elements between versions (with dummy functions to “generate” the necessary multiline response)?


Hi @mathcatsand, thank you for the answer.

As mentioned before, in the previous version of Streamlit (1.32) the chat interface worked as expected, without any ghosting of the messages. Without changing my code, the ghosting started appearing after updating to version 1.39 (I believe the issue appears from 1.33 onwards). Hence, something in the Streamlit implementation must have changed to cause this.

I have read similar reports of “ghosting” across different threads - all started appearing at a similar time around March/April/May 2024 when the versions 1.33/1.34 were released:

Screen ghosting with chat_input inside container plus multiple columns · Issue #8480 · streamlit/streamlit

Ghosting of container in streamlit - Using Streamlit - Streamlit

Bug making double image-text generation - Using Streamlit - Streamlit

Chatbot message appears twice - LLMs and AI - Streamlit

Also, the posts in this thread suggest that the ghosting of st.chat_message() elements appears when a spinner is used, and that this is not the expected or correct behavior, as it used to work in version 1.32.

Could you please verify whether this is a known issue and whether there are plans to restore the previous behavior, in which the chat interface didn’t produce any ghosting?

Here is the generic chat interface code that shows the ghosting issue (using Streamlit 1.41):

import openai
import dataiku

import pandas as pd
import streamlit as st
import plotly.express as px

from time import sleep, time


if 'df' not in st.session_state:
    data = {"TIME": ["1/1/2024","2/1/2024","3/1/2024","4/1/2024","5/1/2024","6/1/2024","7/1/2024","8/1/2024","9/1/2024","10/1/2024","11/1/2024","12/1/2024"],
            "Product_ID":[101,102,103,101,102,104,101,103,104,102,103,104],
            "Product_Name":["Apple","Banana","Orange","Apple","Banana","Grapes","Apple","Orange","Grapes","Banana","Orange","Grapes"],
            "Quantity_Sold":[50,30,40,60,25,20,55,35,15,20,50,25],
            "Price_per_Unit":[1,0.5,0.75,1,0.5,2,1,0.75,2,0.5,0.75,2],
            "Total_Sales":[50,15,30,60,12.5,40,55,26.25,30,10,37.5,50]}
    
    df = pd.DataFrame(data)

    # Group by Product_Name to get total sales for each product
    total_sales_by_product = df.groupby('Product_Name')['Total_Sales'].sum().reset_index()

    # Create a bar chart
    fig = px.bar(total_sales_by_product, x='Product_Name', y='Total_Sales', 
                 title='Test Plot', 
                 labels={'Total_Sales': 'Total Sales', 'Product_Name': 'Product Name'})

    st.session_state['df'] = df
    st.session_state['fig'] = fig

    # Update layout for better visualization
    fig.update_layout(xaxis_title='Product Name', yaxis_title='Total Sales')
 
st.title("Chat Bot Tester")

if "messages" not in st.session_state:
    st.session_state.messages = []

for message in st.session_state.messages:
    with st.chat_message(message["role"]):
        
        if message['role'] == 'assistant':
            with st.expander('Python Code:'):
                st.code(message['content']['code'])
            # st.plotly_chart(message['content']['fig'], use_container_width=True, key=message['content']['key'])
            st.dataframe(message['content']['df'], use_container_width=True, key=message['content']['key'])

            st.markdown(message["content"]['text'])
        else:
            st.markdown(message["content"]['text'])
        
if prompt := st.chat_input("What is up?"):
    st.session_state.messages.append({"role": "user", "content": {'text': prompt}})
    
    with st.chat_message("user"):
        st.markdown(prompt)

    with st.chat_message("assistant"):
        with st.spinner('Asking GenAI...'):
            sleep(2)
            response = {'code': 'This is a sample code',
                        'fig': st.session_state['fig'],
                        'df': st.session_state['df'],
                        'text': 'This is a sample text',
                        'key': time()}

        with st.expander('Python Code:'):
            st.code(response['code'], language='python', line_numbers=True)
        
        with st.spinner('Processing Answer...'):
            sleep(2)
            # st.plotly_chart(response['fig'], use_container_width=True, key=response['key'])
            st.dataframe(response['df'], use_container_width=True, key=response['key'])
            
        st.markdown(response['text'])
        st.session_state.messages.append({"role": "assistant", "content": response})

If you run this code, you can see that the dataframe and text elements both ghost, while the plotly_chart for some reason doesn’t. To show the plot instead of the dataframe, comment out the dataframe lines and uncomment the plot lines. Regardless, the same code run in Streamlit 1.32 doesn’t have any ghosting issues.

I found a way to temporarily fix this in 1.41 by adding st.container() to the with statement. However, it’s a hack that I believe shouldn’t be necessary. Even with st.container(), st.dataframe still flickers when a new question is asked (I don’t think it flickered in 1.32). Here is the revised last part of the code with st.container():

with st.chat_message("assistant"), st.container():
    with st.spinner('Asking GenAI...'):
        sleep(2)
        response = {'code': 'This is a sample code',
                    'fig': st.session_state['fig'],
                    'df': st.session_state['df'],
                    'text': 'This is a sample text',
                    'key': time()}

    with st.expander('Python Code:'):
        st.code(response['code'], language='python', line_numbers=True)

    with st.spinner('Processing Answer...'):
        sleep(2)
        # st.plotly_chart(response['fig'], use_container_width=True, key=response['key'])
        st.dataframe(response['df'], use_container_width=True, key=response['key'])

    st.markdown(response['text'])
    st.session_state.messages.append({"role": "assistant", "content": response})

Thank you for the specific example. It doesn’t run in 1.32.0 but I was able to remove the keys from the dataframes and get it to run in both 1.32.0 and 1.41.0 to compare. (When I had tried a simple example before, I didn’t see a difference in the stale elements between the versions, so I’m looking at this now.)