How to build an LLM-powered ChatBot with Streamlit

A step-by-step guide using the unofficial HuggingChat API (no API keys required)

Posted in LLMs, May 10 2023

Hey, Streamlit-ers! 👋

My name is Chanin Nantasenamat, PhD. I’m working as a Senior Developer Advocate creating educational content on building Streamlit data apps. In my spare time, I love to create coding and data science tutorials on my YouTube channel, Data Professor.

Are you looking to build an AI-powered chatbot using LLM models but without the heavy API cost? If you answered yes, then keep reading!

You'll build a chatbot that can generate responses to user-provided prompts (i.e., questions) using OpenAssistant/oasst-sft-6-llama-30b-xor, an open-source, no-cost LLM available through the unofficial HuggingChat API known as HugChat. You'll deploy the chatbot as a Streamlit app that can be shared with the world!

In this post, you’ll learn how to:

  • Set up the app on the Streamlit Community Cloud
  • Build the chatbot

What the HugChat app can do

Before we proceed with the tutorial, let's quickly grasp the app's functionality. Head over to the app and get familiar with its layout—(1) the sidebar provides app info, and (2) the main panel displays conversational messages:

Interact with it by (1) entering your prompt into the text input box and (2) reading the human/bot messages.

Clone the app-starter-kit repo to use as the template for creating the chatbot app. Then click on "Use this template":

Give the repo a name (such as mychatbot). Next, click "Create repository from the template." A copy of the repo will be placed in your account:

Next, follow this blog post to get the newly cloned repo deployed on the Streamlit Community Cloud. When done, you should be able to see the deployed app:

Edit the requirements.txt file by adding the following prerequisite Python libraries:

streamlit
hugchat
streamlit-chat
streamlit-extras

This will spin up a server with these prerequisites pre-installed.

Let's take a look at the contents of streamlit_app.py:

import streamlit as st
st.title('🎈 App Name')
st.write('Hello world!')

In subsequent sections, you will modify the contents of this file with code snippets about the chatbot.

Finally, before proceeding with app building, let's take a look at how the user will interact with it:

  • Front-end: The user submits an input prompt (a string of text entered into the text box via st.text_input()), and the app generates a response.
  • Back-end: The input prompt is sent to hugchat (the unofficial port to the HuggingChat API) to generate a response.
  • Front-end: Generated responses are displayed in the app via streamlit-chat's message() command.
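
Condensed into a minimal sketch, that round trip looks something like this (a simplification; the sections below build it out properly, with session state and a chat-style message history):

import streamlit as st
from streamlit_chat import message
from hugchat import hugchat

# Front-end: collect the user's prompt
prompt = st.text_input("You: ", "", key="input")

if prompt:
    # Back-end: send the prompt to HuggingChat through hugchat
    chatbot = hugchat.ChatBot()
    response = chatbot.chat(prompt)

    # Front-end: render the exchange as chat bubbles
    message(prompt, is_user=True)
    message(response)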

Build the chatbot

Fire up the streamlit_app.py file and replace the original content with code snippets mentioned below.

1. Required libraries

Import prerequisite Python libraries:

import streamlit as st
from streamlit_chat import message
from streamlit_extras.colored_header import colored_header
from streamlit_extras.add_vertical_space import add_vertical_space
from hugchat import hugchat

2. Page config

Name the app using the page_title input argument in the st.set_page_config method (it'll be used as the app title and as the title in the preview when sharing on social media):

st.set_page_config(page_title="HugChat - An LLM-powered Streamlit app")

3. Sidebar

Create a sidebar with some information about your chatbot:

with st.sidebar:
    st.title('🤗💬 HugChat App')
    st.markdown('''
    ## About
    This app is an LLM-powered chatbot built using:
    - [Streamlit](https://streamlit.io/)
    - [HugChat](https://github.com/Soulter/hugging-chat-api)
    - [OpenAssistant/oasst-sft-6-llama-30b-xor](https://huggingface.co/OpenAssistant/oasst-sft-6-llama-30b-xor) LLM model
    
    💡 Note: No API key required!
    ''')
    add_vertical_space(5)
    st.write('Made with ❤️ by [Data Professor](https://youtube.com/dataprofessor)')

Use the with statement to confine the constituent contents to the sidebar. They include:

  • The app title is specified via st.title()
  • A short description of the app via st.markdown()
  • Vertical space added via the add_vertical_space() function from streamlit-extras
  • A short credit message via st.write()

4. Session state

Initialize the chatbot by giving it a starter message at the first app run:

if 'generated' not in st.session_state:
    st.session_state['generated'] = ["I'm HugChat, How may I help you?"]
if 'past' not in st.session_state:
    st.session_state['past'] = ['Hi!']

Here, past denotes the human user's input and generated indicates the bot's response.
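
To make the pairing concrete, here is what the two lists might hold after one exchange (illustrative values):

# Index i of 'past' pairs with index i of 'generated':
st.session_state['past']       # ['Hi!', 'What is Streamlit?']
st.session_state['generated']  # ["I'm HugChat, How may I help you?", '<bot answer>']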

5. App layout

Give the app a general layout. The main panel will display the chat query and responses:

input_container = st.container()
colored_header(label='', description='', color_name='blue-30')
response_container = st.container()

Use st.container() as a placeholder where the input_container and response_container variables correspond to the human user and chatbot, respectively.

6. Human user input

Create the get_text() custom function that will take prompts provided by the human user as input using st.text_input(). This custom function displays a text box in the input_container:

# User input
## Function for taking user provided prompt as input
def get_text():
    input_text = st.text_input("You: ", "", key="input")
    return input_text
## Applying the user input box
with input_container:
    user_input = get_text()

7. Bot response output

Create the generate_response(prompt) custom function, which takes the user's input prompt as an argument and generates an AI response through the HuggingChat API via the hugchat.ChatBot() method (the underlying LLM model can be swapped for any other one):

# Response output
## Function for taking user prompt as input followed by producing AI generated responses
def generate_response(prompt):
    chatbot = hugchat.ChatBot()
    response = chatbot.chat(prompt)
    return response

Populate the response_container with the AI-generated response using the two underlying if statements:

  1. If the user has entered their input query, the if user_input statement will become True and the underlying statements will run.
  2. The user-provided prompt (user_input) will serve as an input argument to generate_response() to make the AI-generated response.
  3. Subsequently, the generated output will be assigned to the response variable.
  4. Both values for user_input and response will be saved to the session state via the append() method.
  5. When there are bot-generated messages, the if st.session_state['generated'] statement returns True and the underlying statements will run.
  6. A for loop iterates through the list of generated messages in st.session_state['generated']
  7. The human (st.session_state['past']) and the bot (st.session_state['generated']) messages are displayed via the message() command from the streamlit-chat component:

## Conditional display of AI generated responses as a function of user provided prompts
with response_container:
    if user_input:
        response = generate_response(user_input)
        st.session_state.past.append(user_input)
        st.session_state.generated.append(response)
        
    if st.session_state['generated']:
        for i in range(len(st.session_state['generated'])):
            message(st.session_state['past'][i], is_user=True, key=str(i) + '_user')
            message(st.session_state['generated'][i], key=str(i))

Wrapping up

In this post, I've shown you how to create a chatbot app using an open-source LLM from the unofficial HuggingChat API and Streamlit. You can create your own AI-powered chatbot in only a few lines of code without needing API keys.

I hope this tutorial encourages you to explore the endless possibilities of chatbot development using different models and techniques. The sky is the limit!

If you have any questions, please leave them in the comments below or contact me on Twitter at @thedataprof or on LinkedIn. Share your app creations on social media and tag me or the Streamlit account, and I'll be happy to provide feedback or help retweet!

Happy Streamlit-ing! 🎈


This is a companion discussion topic for the original entry at https://blog.streamlit.io/how-to-build-an-llm-powered-chatbot-with-streamlit/

Great info - thank you. I’m looking to build an A.I. chatbot like Replika, for people to communicate with as a companion. What are the best tools to use for this? I have looked at a few options but they are very expensive. Thank you for any and all help you can give me.
J.C.

Just as an FYI, following this guide “as is” will return an error:
Exception: Authentication is required now, but no cookies provided

To fix this, follow the steps outlined in the HuggingChat repo:
Soulter/hugging-chat-api: HuggingChat Python API (https://github.com/Soulter/hugging-chat-api)


Same issue as @Shike mentioned. 🙁

To fix authentication:

  1. Sign up at HuggingChat (huggingface.co)

  2. Add your email/password to Streamlit secrets: Secrets management - Streamlit Docs

  3. Add the code to log in with the above secrets

    from hugchat.login import Login

    sign = Login(st.secrets["email"], st.secrets["password"])
    cookies = sign.login()
    sign.saveCookies()

  4. Update the chat code to add the cookies parameter

    chatbot = hugchat.ChatBot(cookies=cookies.get_dict())
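
Putting those steps together, a minimal sketch of an authenticated generate_response() for the tutorial app could look like this (a sketch, not the exact updated code; the Login import path follows the hugchat README):

import streamlit as st
from hugchat import hugchat
from hugchat.login import Login

def generate_response(prompt):
    # Log in with credentials stored in Streamlit secrets
    sign = Login(st.secrets["email"], st.secrets["password"])
    cookies = sign.login()
    # Pass the session cookies through to the chatbot
    chatbot = hugchat.ChatBot(cookies=cookies.get_dict())
    return chatbot.chat(prompt)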


I'm curious whether it's possible to chat with our own data, like a CSV file or something else. Can the chatbot generate answers based on our data? I'm new to something like this, but I'm looking forward to it. @streamlitbot

I have created an LLM-powered chatbot using HugChat. Check this:

https://utilityservices.streamlit.app/


Hi! Thanks for this fix. I have a question maybe you (or someone else) knows the answer to. Is there any way to include another model from HuggingFace (e.g. GPT2) in this code and what would be the best approach to do this?

Hi All,

Owing to changes in the HugChat API, it is now required to log in using Hugging Face credentials. Thus, the streamlit_app.py file has been updated and is now working.

In particular, the sidebar now contains two text boxes for entering your Hugging Face email and password.
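
For a rough idea of the shape, the credential inputs might look something like this (a sketch, not the exact updated code):

with st.sidebar:
    st.title('🤗💬 HugChat App')
    hf_email = st.text_input('Enter E-mail:')
    hf_pass = st.text_input('Enter password:', type='password')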

The updated code can be found at:

Best regards,
Chanin

Note: the hugchat login method changed from saveCookies() to saveCookiesToDir(). I tried writing out the cookies value (it does have a value), but adding the cookies parameter still gives an issue:
"1 validation error for ConversationChain llm value is not a valid dict (type=type_error.dict)"
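
For anyone hitting the same rename, the updated call presumably looks something like this (a guess based on the method name; check the hugchat README for the exact signature):

sign = Login(email, password)
cookies = sign.login()
sign.saveCookiesToDir('./cookies')  # assumed signature: replaces the old sign.saveCookies()
chatbot = hugchat.ChatBot(cookies=cookies.get_dict())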

Hi @jagadeesha_Gowda, this is a great example. Is it possible to share the code for this app? I am trying to build something similar but am struggling with arranging the UI aspect of it. I would like something similar to what you have built.

If you can publish the code on GitHub, it will be helpful for a lot of people like me who are starting with Streamlit.

You can find the code in my GitHub repo.


@jagadeesha_Gowda thanks a lot for sharing the link. This is really helpful.
But you should remove your email and password from the code.

Done…

I've got the langchain portion of this working for GPT2, but the streamlit part is failing with an exception: streamlit.components.v1.components.MarshallComponentException: ('Could not convert component args to JSON', TypeError('Object of type HumanMessage is not JSON serializable'))
message(st.session_state["past"][i], is_user=True, key=str(i) + "_user")

Here's the code (apologies, it's a bit of a work in progress, so there are a few redundant lines and print statements to track it through):

"""Python file to serve as the frontend"""

import os
from pathlib import Path

import streamlit as st
import torch
import tiktoken
from streamlit_chat import message
from transformers import AutoTokenizer, pipeline
from transformers import AutoModelForCausalLM
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.text_splitter import CharacterTextSplitter
#from langchain.llms import OpenAI
from langchain.llms import HuggingFacePipeline
from langchain.chains import ConversationalRetrievalChain
from langchain.document_loaders import TextLoader
from langchain.memory import ConversationBufferMemory

def load_chain():
    # clear the cuda cache to prevent out of memory issues
    torch.cuda.empty_cache()

    os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:1024"
    print(f"{os.environ['PYTORCH_CUDA_ALLOC_CONF']}")

    # Logic for loading the chain you want to use should go here
    model_dir = "C:/Users/benre/oobabooga_windows/oobabooga_windows/text-generation-webui/models/gpt2-medium"
    model_name = "gpt2-medium"

    # set up tokenizer
    path_to_model = Path(f'{model_dir}/{model_name}')

    print("Loading tokenizer")
    tokenizer = AutoTokenizer.from_pretrained(model_name)

    # bring back in once you have loaded the model
    print("loading vectorstore")
    source_folder = "C:/Users/benre/venv/Scripts/SourceR"
    loader = TextLoader("C:/Users/benre/venv/Scripts/Repos2.txt")
    documents = loader.load()

    text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
    documents = text_splitter.split_documents(documents)

    embeddings = OpenAIEmbeddings()
    vectorstore = Chroma.from_documents(documents, embeddings)

    # clear the cuda cache to prevent out of memory issues
    torch.cuda.empty_cache()

    model = AutoModelForCausalLM.from_pretrained(model_name)

    # Load the Hugging Face pipeline as GPT2 is not directly supported
    pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, max_length=256, temperature=0.7, top_p=0.95, repetition_penalty=1.15)

    local_llm = HuggingFacePipeline(pipeline=pipe)

    # create memory for chain
    memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

    # ConversationalRetrievalChain is for keeping memory history.
    chain = ConversationalRetrievalChain.from_llm(llm=local_llm, retriever=vectorstore.as_retriever(search_kwargs={"k": 1}), memory=memory, chain_type="stuff")

    return chain

# utility function to convert text to token count
# use cl100k_base tokenizer for gpt-3.5-turbo and gpt-4
tokenizer = tiktoken.get_encoding('cl100k_base')

def tiktoken_len(text):
    tokens = tokenizer.encode(text, disallowed_special=())
    return len(tokens)

# From here down is all the Streamlit UI.
st.set_page_config(page_title="LangChain Demo", page_icon=":robot:")
st.header("LangChain Demo")

if "generated" not in st.session_state:
    st.session_state["generated"] = ["Hi Matt"]

if "past" not in st.session_state:
    st.session_state["past"] = ["Hi"]

def get_text():
    input_text = st.text_input("You: ", "Hello, how are you?", key="input")
    return input_text

user_input = get_text()

if user_input:
    chain = load_chain()
    output = chain({"question": user_input})

    print("this is the response:")
    print(output)

    st.session_state.past.append(user_input)
    st.session_state.generated.append(output)

if st.session_state["generated"]:
    for i in range(len(st.session_state["generated"]) - 1, -1, -1):
        print("this is the generated session_state")
        print("length: ")
        print(len(st.session_state["generated"]))
        print("char: " + str(i))
        message(st.session_state["generated"][i], key=str(i))
        message(st.session_state["past"][i], is_user=True, key=str(i) + "_user")

If anyone has any ideas about how to fix the exception, I'd love to know.

Hey guys, I'm the main maintainer of the hugchat package. If you are using hugchat as a part of your project and you are running into problems, maybe you are not using the latest version of hugchat (v0.2.1 now). You can execute pip3 install hugchat --upgrade and read the readme: GitHub - Soulter/hugging-chat-api: HuggingChat Python API 🤗. Thank you for supporting hugchat 🙂 and if you have some problem, feel free to open an issue <3


Hello there, I run the code and get the error in the traceback below.
Take note that I can log in to HuggingChat via Hugging Face from a GUI perspective and have tested it working in the code. My hugchat version is 0.3.8 and streamlit is 1.28.1.

Traceback:

File "C:\Users\gsoonh\Documents\test\myenv\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 534, in _run_script
    exec(code, module.__dict__)
File "C:\Users\gsoonh\gshdemo.py", line 54, in <module>
    response = generate_response(prompt, hf_email, hf_pass)File "C:\Users\gsoonh\gshdemo.py", line 38, in generate_response
    cookies = sign.login()
File "C:\Users\gsoonh\Documents\test\myenv\lib\site-packages\hugchat\login.py", line 127, in login
    if self.grantAuth(location):File "C:\Users\gsoonh\Documents\test\myenv\lib\site-packages\hugchat\login.py", line 105, in grantAuth
    raise Exception("grant auth fatal!")

Hello! I get the following error when I type a question to the chatbot:

requests.exceptions.MissingSchema: This app has encountered an error. The original error message is redacted to prevent data leaks. Full error details have been recorded in the logs (if you're on Streamlit Cloud, click on 'Manage app' in the lower right of your app).
Traceback:
File "/home/adminuser/venv/lib/python3.9/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 535, in _run_script
    exec(code, module.__dict__)
File "/mount/src/chatbot/streamlit_app.py", line 51, in <module>
    response = generate_response(prompt, hf_email, hf_pass)
File "/mount/src/chatbot/streamlit_app.py", line 36, in generate_response
    cookies = sign.login()
File "/home/adminuser/venv/lib/python3.9/site-packages/hugchat/login.py", line 127, in login
    if self.grantAuth(location):
File "/home/adminuser/venv/lib/python3.9/site-packages/hugchat/login.py", line 100, in grantAuth
    res = self.requestsGet(location, allow_redirects=False)
File "/home/adminuser/venv/lib/python3.9/site-packages/hugchat/login.py", line 28, in requestsGet
    res = requests.get(
File "/home/adminuser/venv/lib/python3.9/site-packages/requests/api.py", line 73, in get
    return request("get", url, params=params, **kwargs)
File "/home/adminuser/venv/lib/python3.9/site-packages/requests/api.py", line 59, in request
    return session.request(method=method, url=url, **kwargs)
File "/home/adminuser/venv/lib/python3.9/site-packages/requests/sessions.py", line 575, in request
    prep = self.prepare_request(req)
File "/home/adminuser/venv/lib/python3.9/site-packages/requests/sessions.py", line 486, in prepare_request
    p.prepare(
File "/home/adminuser/venv/lib/python3.9/site-packages/requests/models.py", line 368, in prepare
    self.prepare_url(url, params)
File "/home/adminuser/venv/lib/python3.9/site-packages/requests/models.py", line 439, in prepare_url
    raise MissingSchema(

Could you please help me understand what is happening?

Thank you! I face this error when running in a Jupyter notebook:


FileNotFoundError                         Traceback (most recent call last)
<ipython-input-10-381c8402f1e7> in <cell line: 9>()
      9 with st.sidebar:
     10     st.title('🤗💬 HugChat')
---> 11     if ('EMAIL' in st.secrets) and ('PASS' in st.secrets):
     12         st.success('HuggingFace Login credentials already provided!', icon='✅')
     13         hf_email = st.secrets['EMAIL']

1 frames
/usr/local/lib/python3.10/dist-packages/streamlit/runtime/secrets.py in _parse(self, print_exceptions)
    212                 if print_exceptions:
    213                     st.error(err_msg)
--> 214                 raise FileNotFoundError(err_msg)
    215 
    216             if len([p for p in self._file_paths if os.path.exists(p)]) > 1:

FileNotFoundError: No secrets files found. Valid paths for a secrets.toml file are: /root/.streamlit/secrets.toml, /content/.streamlit/secrets.toml

This topic was automatically closed 180 days after the last reply. New replies are no longer allowed.