Chat input widget that supports both text and audio

:star2: Introducing the streamlit_chat_widget!

:rocket: We’re thrilled to announce streamlit_chat_widget, a custom-built chat input component for all Streamlit enthusiasts! Designed with versatility in mind, this widget brings both text and audio input capabilities, perfect for conversational AI, voice assistants, and any chat-based applications you dream up.

Created by Mohammed Bahageel, AI Developer, streamlit_chat_widget offers a seamless and intuitive user experience in your Streamlit app:

:sparkles: Key Features:

  • Text Input: Type and send messages effortlessly.
  • Audio Recording: Built-in mic functionality for voice messages.
  • Fixed Position: Just like st.chat_input, it stays anchored at the bottom for ease of access.
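The two input modes above can be sketched as a small handler. Based on the usage example later in this post, the widget returns None until the user submits, then a dict with either a "text" key or an "audioFile" key (raw audio bytes); treat the exact schema as an assumption:

```python
def handle_widget_result(result):
    """Dispatch on the dict returned by chat_input_widget.

    Returns ("text", message) for typed input, ("audio", audio_bytes)
    for a recording, or None if nothing was submitted yet.
    """
    if not result:
        return None
    if "text" in result:
        return ("text", result["text"])
    if "audioFile" in result:
        return ("audio", bytes(result["audioFile"]))
    return None
```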

:inbox_tray: Installation
Get started with one line of code:

pip install streamlit-chat-widget 

Future Releases
The package will be updated regularly to meet the needs of our beloved Streamlit community, and it is available here!
:computer: Usage
Integrating streamlit_chat_widget into your app is easy! Here’s a quick start:

import streamlit as st
from streamlit_extras.bottom_container import bottom  # positions the widget at the bottom
from streamlit_chat_widget import chat_input_widget

def main():
    st.title("My Custom Chat Application")
    
    if "chat_history" not in st.session_state:
        st.session_state.chat_history = []

    for message in st.session_state.chat_history:
        st.write(message)
    with bottom():
        user_input = chat_input_widget()

    if user_input:
        if "text" in user_input:
            st.session_state.chat_history.append(f"You: {user_input['text']}")
        elif "audioFile" in user_input:
            st.audio(bytes(user_input["audioFile"]))

if __name__ == "__main__":
    main()

:control_knobs: Additional Customization
Use from streamlit_extras.bottom_container import bottom to position the widget in a floating container for an even more refined look.

import streamlit as st
from streamlit_extras.bottom_container import bottom
from streamlit_chat_widget import chat_input_widget

# this keeps the widget anchored at the bottom of the screen in a fixed position
with bottom():
    user_input = chat_input_widget()

:busts_in_silhouette: We’d love to see how you bring streamlit_chat_widget into your projects! Share your creations and join the conversation in the Streamlit community today.
Contributions
To contribute to the project, please visit my GitHub repository, star the repo, and feel free to submit your contributions and enhancements. Thank you in advance!
Happy Development and Coding!



I really like your widget. I am trying to pass the audio to OpenAI Whisper. As per their API, it expects a file object. I was hoping you might know a way to bypass the save-then-load steps and convert the audio output (user_input['audioFile']) to such a file object.
If I do f = open('audio.mp3', 'rb'), then type(f) = <class '_io.BufferedReader'>.
Any help?

Of course, it is intended to help with transcription. Here is a code example of how you can go about it:

import os

import streamlit as st
from dotenv import load_dotenv
from openai import OpenAI
from streamlit_extras.bottom_container import bottom
from streamlit_chat_widget import chat_input_widget

load_dotenv()
client = OpenAI()

# Transcribe audio to text
def transcribe_audio(client, audio_path):
    with open(audio_path, "rb") as audio_file:
        transcript = client.audio.transcriptions.create(
            model="whisper-1", file=audio_file
        )
        return transcript.text

with bottom():
    response = chat_input_widget()

user_query = None

if response:
    if "text" in response:
        user_query = response["text"]
    elif "audioFile" in response:
        with st.spinner("Transcribing audio..."):
            audio_file_bytes = response["audioFile"]
            temp_audio_path = "temp_audio.wav"
            # write the raw bytes to a temporary file for Whisper
            with open(temp_audio_path, "wb") as f:
                f.write(bytes(audio_file_bytes))
            user_query = transcribe_audio(client, temp_audio_path)
            os.remove(temp_audio_path)
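If you would rather skip the temp-file round trip entirely, one option is to wrap the raw bytes in an in-memory io.BytesIO buffer. The OpenAI SDK accepts file-like objects, and Whisper infers the audio format from the file name, so we attach one to the buffer. This is a sketch; audio_bytes_to_file is a hypothetical helper name, not part of either library:

```python
import io

def audio_bytes_to_file(audio_data, filename="audio.wav"):
    """Wrap raw audio bytes in an in-memory file object.

    Setting .name lets the transcription endpoint infer the audio
    format without ever touching the filesystem.
    """
    buffer = io.BytesIO(bytes(audio_data))
    buffer.name = filename  # the extension matters for format detection
    return buffer

# Usage (sketch): pass the buffer straight to the transcription call
# transcript = client.audio.transcriptions.create(
#     model="whisper-1", file=audio_bytes_to_file(response["audioFile"])
# )
```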