st.status: Visualize your app’s processes

Rich context for users and more control for developers

Posted in Product, September 7, 2023

TL;DR: Replace long app wait times and shed light on the “black box” of data processing with st.status. Play with our demo app to see how it works.

Long-running apps like LLM agents rarely show you their inner workings out of the box. On top of that, if a response takes too long to generate, users get impatient and leave. Not ideal!

Introducing: st.status

If you're not always confident in your model's output, how do you inspect the intermediate steps and chain of thought to verify results? A few months ago, we provided a targeted solution by integrating with LangChain, using their callback system.

Now you can add st.status to any interactive or API-powered app to:

  • Animate its "under-the-hood" processes such as API calls or data retrieval.
  • See step-by-step logic to understand what went wrong (or validate what went right).
  • Allow users to engage with your app, rather than experiencing a blank page.

See how it works in our demo app! Choose any of the 8 different animations below to pair with your app operations. Check out the docs for more detail.

Let's look at two examples to see it in action.

Step-by-step transparency, in real time

With st.status, every process step is defined, broken out, and animated. The app viewer can expand the status to check the details or leave it collapsed to focus on the final output.

Unlike st.spinner, the intermediate steps remain available to inspect even after the process completes.

https://release126.streamlit.app/st.status_demo

Flexible interaction to validate results

This can be particularly helpful to validate results from LLM-based apps.

LLMs aren't perfect. Their intelligence relies on the data sets they are trained on, which could be incomplete or contain misinformation. The LLM attempts to generate a plausible response to a user's prompt, but if it reaches the boundaries of its knowledge base, it can take liberties. This phenomenon causes an LLM to "hallucinate." If the user can't quite tell if the model is correct, or is embellishing a result, they quickly lose trust.

With st.status, the context and intermediate steps are available so users can validate the output logic:

https://release126.streamlit.app/LangChain_demo

What's next

This flexible framework gives you a higher degree of control to up-level your app’s user experience and easily integrate custom components. It’s worth the extra lines of code!

Help us raise the bar with new (and refined) UI improvements. What additional transparency features would you like to see next?

Let us know in the comments below or on Discord.

Happy Streamlit-ing! 🎈


This is a companion discussion topic for the original entry at https://blog.streamlit.io/st-status-visualize-your-apps-processes

Great stuff!
Is there a way we can see the code of your demo app?
Thanks

Absolutely! You can see the code here:


Thanks!


Hi there, what’s the best way to empty the container of st.status similar to what you can do with st.progress? Any insight will be greatly appreciated. This does NOT seem to do the trick…

status_container = st.status("Hello...", expanded=True)
with status_container as status:
    st.write("📚 Asking my ...")
    time.sleep(2)
    agent_result = long_running_task()
    status.update(label="✅ Producing final touches. Almost ready!", state="complete", expanded=False)
status_container.empty()

Nest the status element inside an empty element.

1 Like