How to build a multi-expander UI for streaming data in a multi-state (workflow) chatbot system

Hi everyone, I am new to Streamlit and am currently trying to build a multi-state (workflow) chatbot system with it.

In my design, the LLM response is streamed from the backend to the Streamlit UI in JSON format.
In Streamlit, after the user inputs a question, I want the assistant to organize the answer into multiple expanders. Each expander represents one state, with the thinking content inside it.

My idea of the code structure is something like:

with st.chat_message('user'):
    st.markdown(user_input)
with st.chat_message('assistant'):
    # a generator that yields one JSON response at a time
    output_generator = request_handler(user_input)
    for output_iter in output_generator:
        if state == '0':
            if first_response_in_this_state:
                st.expander('state1 starts')
            else:
                if is_thinking:
                    st.expander('state1 executing...')
                    # update the thinking under the expander
                    st.markdown(thinking_contents)
                else:
                    st.expander('state1 finish...')
                    # update the result after the expander
        elif state == '1':
            ...

The way I call st.expander in the code above would probably create multiple expanders; it is just meant to explain the workflow. In my actual code, I tried to use a single expander object, but I don't know how to call or modify that object properly.

I can get the UI working without expanders. What I am not sure about is how to change an expander's label and its contents dynamically within this code structure (or maybe my structure is not a good choice). Any help and suggestions are really appreciated.

Thanks in advance.

Would you possibly prefer st.status instead of st.expander? It includes an .update() method for this kind of case.

Thank you for the reply.

st.status is a better choice.

One more question: how can I use st.status to stream the LLM thinking content? I assume I need some kind of placeholder inside the st.status? Here is a similar code structure:

with st.chat_message('assistant'):
    status_container = st.empty()
    # a generator that yields one JSON response at a time
    output_generator = request_handler(user_input)
    for output_iter in output_generator:
        if state == '0':
            if first_response_in_this_state:
                status_container = st.status(label='state1 starts')
            else:
                if is_thinking:
                    status_container.update(label='state1 executing...')
                    # but how do I update the thinking under the status in stream mode?
                    # this outputs the stream chunks one by one but does not flush the old ones:
                    status_container.markdown(thinking_contents)
                else:
                    status_container.update(label='state1 finish...')
                    status_container.markdown(thinking_contents)
        elif state == '1':
            ...

Look at st.write_stream. Use it to stream the content, then save the result to your history at the end of the stream (so on a rerun you display it directly from history).