- Are you running your app locally or is it deployed?
Running the app locally.
- Share the link to your app’s public GitHub repository (including a requirements file).
The app has a private repository.
- Share the full text of the error message (not a screenshot).
There are no errors per se.
- Share the Streamlit and Python versions.
Python version = 3.11.6
Streamlit version = 1.31.0
Let me start with a huge thanks to the community, and especially @andfanilo, for the wonderful insights and videos, which helped us learn a lot!
We are developing a fairly straightforward chatbot app for a client, based on a RAG + Mistral 7B approach.
So far, we have managed to successfully “inject” custom CSS and HTML, set up the chat history display, modify the look and feel of the frontend, connect it to Mistral, and get outputs for questions.
As I cannot paste the full code here (I know that’s not super helpful), here are the libraries we are using:
```python
import torch
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import Chroma
from langchain.llms.huggingface_pipeline import HuggingFacePipeline
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline, BitsAndBytesConfig, GenerationConfig
from langchain.chains.question_answering import load_qa_chain
from langchain.prompts import PromptTemplate
import streamlit as st
```
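To give at least a rough idea of the chat display mentioned above, here is a minimal sketch of the pattern we follow (illustrative only, not our actual code; `run_rag_chain` is a placeholder standing in for our load_qa_chain / HuggingFacePipeline call):

```python
import streamlit as st


def run_rag_chain(question: str) -> str:
    # Placeholder for the real RAG + Mistral call (load_qa_chain etc.)
    return f"(answer to: {question})"


if "messages" not in st.session_state:
    st.session_state.messages = []

# Replay the stored conversation on every rerun
for message in st.session_state.messages:
    with st.chat_message(message["role"]):
        st.markdown(message["content"])

if prompt := st.chat_input("Ask a question"):
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.markdown(prompt)

    answer = run_rag_chain(prompt)
    with st.chat_message("assistant"):
        st.markdown(answer)
    st.session_state.messages.append({"role": "assistant", "content": answer})
```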
As of now, the GUI displays each answer only once it has been fully generated, which can take up to 15-20 seconds on our 3090 when running locally. We would therefore like to achieve a streaming “typing” effect as the answer is generated.
I’ve personally been looking into chunking Mistral’s answers and rendering the chunks as they come in, but unfortunately this made a (very chunky) mess. We do have a nice “loading the answer” animation, but I believe it would look a lot better if we could stream the answers instead.
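To make concrete what I mean by the streaming effect: ideally the answer would reach the UI piece by piece, e.g. through a generator handed to `st.write_stream` (available since Streamlit 1.31.0). A toy sketch of the effect, with canned text standing in for the model output:

```python
import time
import streamlit as st


def fake_token_stream():
    # Canned text standing in for tokens arriving from the model
    for word in "The answer appears word by word like this.".split():
        yield word + " "
        time.sleep(0.05)


with st.chat_message("assistant"):
    # Renders each piece as it is yielded and returns the full string
    full_answer = st.write_stream(fake_token_stream())
```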
Would anyone know a good approach to solving this? I can see it can be achieved when using OpenAI’s API, but we would like to use models of our choice instead of OpenAI.
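For reference, the OpenAI version I’ve seen works roughly like this (a sketch based on the openai client’s streaming mode; we would like the equivalent for a local HuggingFace pipeline):

```python
import streamlit as st
from openai import OpenAI

client = OpenAI()  # requires an OpenAI API key, which is what we want to avoid

stream = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
    stream=True,  # chunks arrive as they are generated
)

with st.chat_message("assistant"):
    # st.write_stream accepts the stream and renders it incrementally
    answer = st.write_stream(stream)
```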
Thank you again for any input, and looking forward to seeing where Streamlit goes in the future!