I like your UI style. It keeps the chatbot from bloating, and it gives other elements room to add details and then keep them.
My workload is very similar, but I'm confused about how to collect multiple inputs in the validation and correction phase.
In your earlier response, the main validate_input_query() is where the bulk of the work happens: get_user_input(), a wrapper around chat_input(), is called wherever we need the user to correct a field or add some details. As you can see, that happens in 4-5 places in that function.
My app kicks off with a run_chatbot() function in the main script, shown below:
import streamlit as st
from src.platform_intelligence.language.process_input_query import process_chat_response, tools
from src.utils.display_utility import stream_message

def run_chatbot():
    """Run the Streamlit chatbot interface."""
    language = st.sidebar.radio("Choose Response Language", ('English', 'German', 'French'), horizontal=True)
    if "messages" not in st.session_state:
        st.session_state.messages = [{"role": "assistant", "content": "Hello👋 , I'm Eliza, your Segmentation Assistant. I can currently help you create segments."}]
    # Replay the conversation so far
    for message in st.session_state.messages:
        if message["role"] == "assistant":
            with st.chat_message(message["role"], avatar="💁"):
                st.markdown(f'<div style="color: black;">{message["content"]}</div>', unsafe_allow_html=True)
        else:
            with st.chat_message(message["role"]):
                st.markdown(f':black[{message["content"]}]')
    if user_input := st.chat_input("Ask something", key="user_input_main"):
        with st.chat_message("user", avatar=None):
            st.markdown(f':black[{user_input}]')
        st.session_state.messages.append({"role": "user", "content": user_input})
        assistant_response = process_chat_response(user_input, tools=tools, language=language).strip()
        # Stream the assistant's response in real time
        stream_message(assistant_response)
And here is process_chat_response():
def process_chat_response(
    user_query: str,
    system_prompt: str = SYSTEM_PROMPT,
    tools: Any = None,
    schema_model: BaseModel = InputQuery,
    language: str = "English"
) -> Union[BaseModel, str]:
    """
    Processes the chat completion response and extracts function details and arguments.

    Sends the messages to a chat completion request, extracts any function
    calls and their arguments, merges the arguments, and initializes an
    `InputQuery` object from the merged arguments.

    Args:
        user_query (str): Query entered by the user.
        system_prompt (str): System prompt template, if the user has a specific one.
        tools (Any): The tools to be used with the chat completion request.
        schema_model (BaseModel): The Pydantic model class used for validating the user query.
        language (str): Language in which the response should be generated.

    Returns:
        response_output (Union[BaseModel, str]): The response for the query.
    """
    # Ensure chat_history is initialized in session state
    if "chat_history" not in st.session_state:
        st.session_state.chat_history = []
    # Convert chat history into a formatted string
    chat_history_str = "\n".join(f"{msg['role'].capitalize()}: {msg['content']}" for msg in st.session_state.chat_history)
    # Format the system prompt with the chat history
    formatted_system_prompt = system_prompt.format(chat_history=chat_history_str, language=language)
    messages = [{"role": "system", "content": formatted_system_prompt}]
    messages.append({"role": "user", "content": user_query})
    print(f'Full Prompt: {messages}')
    print(f'Tools: {tools}')
    response = chat_completion_request(messages, tools=tools, response_format={"type": "text"})
    print(f'\n{response}')

    merged_arguments = defaultdict(lambda: None)
    if response.choices[0].finish_reason == "tool_calls":
        for tool_call in response.choices[0].message.tool_calls:
            function_arguments = json.loads(tool_call.function.arguments)
            merged_arguments.update(function_arguments)
        merged_arguments = dict(merged_arguments)
        print(f'function call arguments: {function_arguments}')
        print(f"Merged Arguments: {merged_arguments}")
        # Convert merged_arguments to a JSON-like string and escape curly braces
        merged_arguments_str = str(merged_arguments).replace("{", "{{").replace("}", "}}")
        # Append the user's query and the assistant's response to the chat history
        st.session_state.chat_history.append({"role": "user", "content": user_query})
        st.session_state.chat_history.append({"role": "assistant", "content": merged_arguments_str})
        # Verifying the output with the verifier LLM agent
        # verifier_response = verifier_agent_response(user_query, merged_arguments, tools)
        # print(f"Verifier LLM Agent Response: {verifier_response}")
        # Validate the InputQuery object, re-prompting if necessary
        final_response = validate_input_query(merged_arguments, schema_model)
        print(f"Process Chat Final Response: {final_response}")
    elif response.choices[0].finish_reason == 'stop' and response.choices[0].message.content is not None:
        final_response = response.choices[0].message.content.strip()
        # Verifying the output with the verifier LLM agent
        # verifier_response = verifier_agent_response(user_query, final_response, tools)
        # print(f"Verifier LLM Agent Response: {verifier_response}")
        # Append the user's query and the assistant's response to the chat history
        st.session_state.chat_history.append({"role": "user", "content": user_query})
        st.session_state.chat_history.append({"role": "assistant", "content": final_response})
    print(f"chat history: {st.session_state.chat_history}")
    return final_response
Flow:
The user first enters a query, which happens in the kickoff code above. As you can see, the user input is passed to process_chat_response(); this function lives in the same script as validate_input_query().
In that script the input is sent to the LLM. If the input is irrelevant to what the prompt is designed for, the LLM returns a text output, which is returned and shown to the user via chat_message.
If the query is relevant to our task, it is passed to validate_input_query(), which is where the validation begins. The validation uses a Pydantic schema to make sure all entities extracted from the user query meet the schema's constraints.
If any entity is missing or fails validation, we end up asking the user several times to enter corrections. This is the correction phase.
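For context, the schema check itself looks roughly like this: a minimal sketch with a hypothetical two-field schema and helper (not my actual InputQuery), just to show how one failed validation can yield several fields that each need a fresh user input:

```python
from pydantic import BaseModel, Field, ValidationError

# Hypothetical stand-in for my real InputQuery schema
class InputQuery(BaseModel):
    segment_name: str
    min_age: int = Field(ge=18)

def collect_invalid_fields(args: dict) -> list[str]:
    """Return the names of fields that are missing or fail the schema."""
    try:
        InputQuery(**args)
        return []
    except ValidationError as exc:
        return [str(err["loc"][0]) for err in exc.errors()]

# A missing segment_name plus an out-of-range min_age means
# two separate corrections have to be collected from the user.
print(collect_invalid_fields({"min_age": 12}))
```

Each entry in that list corresponds to one question the correction phase has to ask.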
It's in this part that I need to add multiple chat_input calls. In your case, I see that the validation/ask phase only had one chat_input.
But in my case, multiple user inputs are needed (depending on how many errors there are) within the same phase.
The query could be any random, irrelevant query.
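To make the multi-input need concrete, here is the rough shape of what I'm trying to build (helper names are hypothetical, and the Streamlit glue is reduced to a plain dict so the logic is visible): a queue of pending correction questions kept in session state, so that the single chat_input at the bottom of run_chatbot() can answer them one at a time across reruns instead of calling chat_input several times inside validate_input_query().

```python
def start_correction(state: dict, invalid_fields: list[str]) -> str:
    """Queue one question per invalid field and return the first prompt."""
    state["pending"] = list(invalid_fields)
    state["collected"] = {}
    return f"Please provide a value for '{state['pending'][0]}'."

def handle_user_message(state: dict, text: str) -> str:
    """Consume the next pending question, or report that we're done."""
    if state.get("pending"):
        field = state["pending"].pop(0)
        state["collected"][field] = text
        if state["pending"]:
            return f"Please provide a value for '{state['pending'][0]}'."
        return f"All corrections received: {state['collected']}"
    return "No corrections pending."

# In Streamlit, `state` would be st.session_state and each answer would
# arrive from the one st.chat_input in run_chatbot(), one rerun per answer.
state = {}
print(start_correction(state, ["segment_name", "min_age"]))
print(handle_user_message(state, "High spenders"))
print(handle_user_message(state, "25"))
```

Is this the right way to fold the 4-5 get_user_input() call sites into that single chat_input, or is there a cleaner pattern?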