Summary
Is there any way to create an HTML prompt with Streamlit? I have an initial textbox with a question, but sometimes I need more info from the user. A JavaScript prompt would be best, I think.
I think just another textbox would be better UX-wise and also easier to code.
Great, I could try another textbox. But, how can I make the UI wait for user input?
I don't think I understand your question. Once the UI is drawn, nothing happens until the user interacts with a widget. So the UI is always waiting for user input.
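To illustrate: the whole script reruns from top to bottom on every interaction, so even a sketch as small as this one is always "waiting" for input (the greeting is just an example):
import streamlit as st

# The script reruns on every widget interaction; the widget's current value
# is simply available again on the next run.
name = st.text_input("What's your name?")
if name:
    st.write(f"Hello, {name}!")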
I need to get additional information from the user after the first interaction. But, only sometimes.
For example:
case 1:
Example pizza company, what's your order:
… (text input)
[optional input text] Great, you want a pepperoni pizza, what's your address?
[customer adds address]
case 2:
Example pizza company, what's your order:
… (text input); address included
Great, sending 1 pepperoni pizza to 123 Fake st., Mobile, AL 12345
In case 1, I have an interaction that needs some additional info (e.g. an address), so I need to present a modal dialog for user input.
In case 2, I have an interaction that already has the address included, so no new user input dialog should be presented.
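Roughly, I'm imagining something like the sketch below, if that's even possible (looks_like_address is just a made-up stand-in for whatever would parse the order):
import re
import streamlit as st

def looks_like_address(text):
    # Made-up heuristic: a street number followed by a word.
    return re.search(r"\d+\s+[A-Za-z]", text) is not None

order = st.text_input("Example pizza company, what's your order?")
if order:
    if looks_like_address(order):
        # Case 2: the address was already included, no extra input needed.
        st.write(f"Great, sending your order: {order}")
    else:
        # Case 1: ask the follow-up question with a second input.
        address = st.text_input("Great, what's your address?")
        if address:
            st.write(f"Sending your order to {address}")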
I don't get it. You can have just a text input for the address; the user only has to type the address once, even if they are ordering several pizzas. What is wrong with that? There's no way out of typing the address once.
OK, I'm not doing a good job of explaining it…
Imagine that this is a "pizza company conversational chat bot". In most cases, it will be sufficient to just have a single text box as input to drive the conversation. However, in some cases we will need to get additional information back from the user (e.g. "what is your address?", "I noticed that you ordered two mediums, would you like to make it a meal for $5 more?", "we're currently out of black olives", etc.).
So I have a form with the original prompt, but in some cases I need to gather more information from the user. I'm not sure how I would go about that.
That's just part of the conversation. The chatbot asks questions (what's your order? what's your address?) or provides information (we're out of black olives), and the user submits an answer or a reaction in a text input. Why would some questions require a different UI?
How is this a conversation you couldn't have in WhatsApp? There are no popups in WhatsApp.
Yep, a popup maybe isn't necessary. I guess I'd need to reuse the existing textbox then? Or create a new one?
How would you implement such an interface in Streamlit? Is it possible? I mean, just the basic input text box.
It seems that once you create a text input you can't get that element again? One would need to get this element, get its text, clear it, etc. Is this possible?
Well, I'm guessing from the lack of an updated response that what I'm asking for is not possible with this library. I've been beating my head against the wall with this library for 2 days now, trying to get even the simplest interactions to work well.
I did find the chatbot component, but even in the most basic apps the UI is pretty much unusable for anything other than the simplest use cases.
I finally got that the lib revolves around state. Fine. I tried to roll my own chat functionality using state instead, and I get "bad format" errors.
This code seems to just do nothing at all:
import streamlit as st
from streamlit_chat import message

st.title("ChatGPT-like Web App")

top = st.empty()
bottom = st.empty()

def text_changed():
    input = st.session_state["input"]
    st.session_state["input"] = ""
    bottom.write(input)
    bottom.info(input)

user_input = top.text_input("You:", key='input', on_change=text_changed)
And if you change the "empty" calls to "container" you get the "bad format" bug.
Ugh, I've wasted enough time with this lib, moving on to something else.
Hey @javamonkey,
sorry for the frustration. You can find some examples for how to build chatbots with Streamlit on our new page about generative AI: https://streamlit.io/generative-ai
We're also launching some dedicated chat features very soon, which will make this a lot simpler. Feel free to follow roadmap.streamlit.io and our social accounts for updates. Once this is launched, we plan to add a more official tutorial on how to build chatbots/LLM apps to our docs.
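As a rough sketch of the kind of chat UI these elements enable (a sketch only, using st.chat_message and st.chat_input; the echoed reply is just a placeholder for a real bot):
import streamlit as st

# Keep and redraw the conversation on every rerun.
if "messages" not in st.session_state:
    st.session_state["messages"] = []
for msg in st.session_state["messages"]:
    with st.chat_message(msg["role"]):
        st.write(msg["content"])

# st.chat_input returns the submitted text once, or None.
if prompt := st.chat_input("What's your order?"):
    st.session_state["messages"].append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.write(prompt)
    reply = f"You said: {prompt}"  # placeholder for a real bot response
    st.session_state["messages"].append({"role": "assistant", "content": reply})
    with st.chat_message("assistant"):
        st.write(reply)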
Again, sorry this doesn't feel as smooth as it should yet. Obviously, LLM apps are a pretty new field for all of us. We're working hard to make Streamlit better for them, but we also wanted to take some time to properly think through what we want to implement, so that we don't have to rip everything out again in 6 months because the field moved on so quickly.
I think this is what you are trying to do:
import streamlit as st

def text_changed():
    st.session_state["last"] = st.session_state["input"]
    st.session_state["input"] = ""

st.session_state.setdefault("last", "")

st.text_input("You:", key='input', on_change=text_changed)
st.write(st.session_state["last"])
st.info(st.session_state["last"])
That's a start, but far from enough. At a minimum you want to preserve and display the whole conversation.
import streamlit as st

def text_changed():
    st.session_state["history"].append(st.session_state["input"])
    st.session_state["input"] = ""

st.session_state.setdefault("history", [])

st.text_input("You:", key='input', on_change=text_changed)

for msg in st.session_state["history"]:
    st.info(msg)
A conversation with yourself won't get you a pizza, so let's add a "pizza company conversational chat bot".
import requests
import streamlit as st

def generate_output(history):
    url = "http://hipsum.co/api/?type=hipster-centric&sentences=1"
    r = requests.get(url)
    return r.json()[0]

def display_history(history):
    for msg in history:
        st.info(f"**Bot**: {msg['out']}")
        st.info(f"**You**: {msg['in']}")

def text_changed(output):
    st.session_state["history"].append({"out": output, "in": st.session_state["input"]})
    st.session_state["input"] = ""

st.session_state.setdefault("history", [])

with st.spinner(text="Thinking..."):
    output = generate_output(history=st.session_state["history"])
st.write(output)

st.text_input("You:", key='input', on_change=text_changed, kwargs={"output": output})

display_history(history=st.session_state["history"])
Now that you have the basic functionality for a conversation, you need to make the bot smarter: find a way to decide when to stop and what to do next.
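For example, a rule-based stand-in for generate_output could make that decision from the history (the address check below is a deliberately crude placeholder):
import re

def generate_output(history):
    # Rule-based stand-in for the hipster-ipsum call above: look at what the
    # user has said so far and decide what to ask next.
    if not history:
        return "Example pizza company, what's your order?"
    said = " ".join(msg["in"] for msg in history)
    has_order = "pizza" in said.lower()
    # Deliberately crude heuristic: a street number followed by a word.
    has_address = re.search(r"\d+\s+[A-Za-z]", said) is not None
    if has_order and has_address:
        return "Great, your pizza is on its way!"  # enough info; stop asking
    if has_order:
        return "Great, what's your address?"  # case 1: ask the follow-up
    return "Sorry, I didn't catch that. What would you like to order?"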