Docker Remote Deployment with Sound

Have been stuck on this for 3 days.

I am deploying my Streamlit app to Azure App Service from inside a Docker container. The app uses PyAudio to access the user's microphone through their browser, but it does not recognize any sound device on the host. I can't configure the hosting service to run Docker with additional flags like docker run --device /dev/snd/. I don't like streamlit-webrtc because it forces widget styling and a programming approach that I don't want to hack into my code, and I don't want to fiddle with learning TypeScript just for styling.

Has anyone reliably solved this problem? I've scoured the internet, and to the best of my abilities in 3 days' time nothing has worked.

Assuming this is simply a dead end with no solution, what's the best way for me to deploy a Streamlit app without a Docker container? I know Streamlit Cloud does this, but it doesn't let me deploy from a private repository. I'm a data scientist and I want to keep the deployment side simple and agile; I'm not sure I'm ready to commit to learning the ins and outs of nginx just to post cool stuff to the web with Streamlit.

This is a typical misunderstanding of how Streamlit works.
I am pretty sure that pyaudio does NOT use the browser to access audio devices.

Think about how that works technically for a moment:
When PyAudio wants to record or play back audio data, it accesses the audio hardware of the machine running the Streamlit application. But in the Azure container (or in any other hosted system) there is no audio driver, no audio hardware, and no microphone. And even if there were, all you'd hear is the hiss of fans in a server rack or the scolding of a Microsoft technician. 😜

If you test the app locally, it happens to work because the client and the server are the same machine, but once the Streamlit app is hosted, this approach is doomed to fail. Any user interaction in a Streamlit application has to go through the browser, i.e. through browser APIs. That means you have to use a Streamlit component for audio recording/playback (and for any other hardware access on the client's side).
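
To make that concrete, here is a minimal sketch of what PyAudio actually does. Run it locally and then inside the container and you'll see the difference; the exact error message depends on the ALSA/PortAudio setup in your image:

```python
import pyaudio

# On your laptop this finds your microphone; inside a headless Azure
# container PortAudio usually reports no devices at all.
pa = pyaudio.PyAudio()
print("Audio devices visible to the server:", pa.get_device_count())

try:
    # Opening an input stream fails on the server because there is no
    # audio hardware behind the container, regardless of Docker flags.
    stream = pa.open(format=pyaudio.paInt16, channels=1,
                     rate=16000, input=True, frames_per_buffer=1024)
    stream.close()
except OSError as err:
    print("No usable input device on this host:", err)
finally:
    pa.terminate()
```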

Below are some custom Streamlit components on GitHub you might be able to use for this (there is a short usage sketch after the list). I haven't used them myself, and which one fits depends on your exact use case.

  • streamlit-webrtc
  • streamlit-audiorecorder
  • audio-recorder-streamlit
  • streamlit_audio_recorder
  • streamlit-bokeh-events

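For instance, with the audio-recorder-streamlit package the recording happens in the visitor's browser and the bytes arrive in your Python code on the server. This is a rough sketch that I haven't tested, so check the exact function name against the package's README:

```python
import streamlit as st
from audio_recorder_streamlit import audio_recorder  # pip install audio-recorder-streamlit

st.title("Browser-side recording demo")

# The component renders a record button in the user's browser; the recorded
# clip comes back to the server as WAV bytes (None until something is recorded).
audio_bytes = audio_recorder()

if audio_bytes:
    # Play it back and/or hand it to whatever processing you already have.
    st.audio(audio_bytes, format="audio/wav")
```
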
You could also use st.file_uploader to let the user upload an audio file, or use requests to download an audio file from another source on the internet.
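
A minimal sketch of that fallback, using only the standard Streamlit API plus requests (the URL input is just a placeholder for wherever your audio lives):

```python
import requests
import streamlit as st

# Option 1: the visitor uploads a file from their own machine.
uploaded = st.file_uploader("Upload an audio file", type=["wav", "mp3", "ogg"])
if uploaded is not None:
    st.audio(uploaded)

# Option 2: fetch an audio file from somewhere else on the internet.
url = st.text_input("...or paste a URL to an audio file")
if url:
    st.audio(requests.get(url, timeout=10).content)
```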

Maybe Streamlit will also get a built-in audio-recording component in the future, but that is just speculation.

Dude, thank you for explaining this. That is immensely helpful. You're right that I don't entirely understand the different pieces of the stack. Everywhere else I asked, people just said things like "get your Dockerfile to install and host a PulseAudio server, then use a PulseAudio client", etc.

What I really need is for the browser to access the local client's hardware. I'll check out the Streamlit components you mentioned. Do you know of any native Python components that do this? I heard something about pysoundio that I haven't checked out yet.

Thanks again, though.

No. Streamlit is not part of the Python standard library and never will be. Anything that bypasses Streamlit and directly accesses audio hardware is doomed to fail as soon as you host the Streamlit app.

It is the same dead end as with PyAudio.
