If I do it the other way around, by generating an mp3 file and playing it with playsound, I get "No module named 'gi'".
from gtts import gTTS
from playsound import playsound

# The text that you want to convert to audio
mytext = 'Welcome Bob, how are you?'
# Language in which you want to convert
language = 'en'

myobj = gTTS(text=mytext, lang=language, slow=False)
myobj.save("welcome.mp3")
playsound("welcome.mp3")
Traceback:
  File "/home/appuser/.local/lib/python3.7/site-packages/streamlit/script_runner.py", line 332, in _run_script
    exec(code, module.__dict__)
  File "/app/streamlit_churn_analysis/streamlit_churn.py", line 138, in <module>
    main()
  File "/app/streamlit_churn_analysis/streamlit_churn.py", line 128, in main
    run_churn_plots()
  File "/app/streamlit_churn_analysis/plot_learning.py", line 62, in run_churn_plots
    playsound("welcome.mp3")
  File "/home/appuser/.local/lib/python3.7/site-packages/playsound.py", line 91, in _playsoundNix
    import gi
ModuleNotFoundError: No module named 'gi'
I tried deploying your app from my account and don’t see any errors. So it’s hard to say what the fix might be without more information.
One side comment: it appears that your requirements.txt file has way more Python libraries than are actually required to run your app. If you can trim that down to just the libraries you need, you'll have much better launch times for your app.
Hi, the app is working now because I needed to show it to someone, so I have put it back to the default. Yes, I understand there are a lot of packages; those are my default environment's packages and I will trim them. I will also share the GitHub link separately with the code I am having a problem with. The problem is that the app uses text-to-speech functions like playsound to narrate or play audio. It can't do this on share.streamlit.io; it gives an OSError (libespeak). However, it works perfectly fine locally on my laptop. Is this a restriction of the server's hardware resources? It should be able to play an mp3 file by generating the mp3 file and using playsound to play it, but it can't.
I think there is quite a technical misunderstanding here.
It's clear that this works locally, but how would it work on Streamlit Sharing? playsound plays the audio file via the local audio driver of the machine on which the Python script is running. In the case of Streamlit Sharing, this is probably some Docker container in a cloud data center. It doesn't even have an audio driver.
If you want this to work, you would have to use a custom component that uses an audio-capable browser API, e.g. the Web Audio API or WebRTC.
Or you can use the built-in Streamlit component st.audio().
However this component has no autoplay capability yet.
I tried pyttsx3 (‘dummy’, ‘sapi5’, …), gTTS, playsound, etc, but they do not work. I found this:
speech = gTTS(text, lang='en', slow=False)
speech.save('sound.mp3')
audio_file = open('sound.mp3', 'rb')
audio_bytes = audio_file.read()
st.audio(audio_bytes, format='audio/ogg', start_time=0)
It works, but we have to click to play the sound (I don't like it, but I have to use it for now; it does not work on a cell phone!).
However, if I import pywhatkit, the app does not work.
As I wrote earlier, these approaches all cannot work with Streamlit Sharing because there is a fundamental technical misunderstanding here. The Streamlit Sharing code runs on a container somewhere in a data centre and it has no audio output.
So far, this has been the disadvantage of the Streamlit audio component.
But you could change this component if necessary, or write your own component that can do autoplay.
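As a rough sketch of the autoplay idea (this is just a workaround, not an official Streamlit feature): you can base64-encode the mp3 and embed it in an HTML `<audio autoplay>` tag rendered with st.markdown and unsafe_allow_html=True. The helper name below is hypothetical, and note that many browsers block autoplaying unmuted audio until the user has interacted with the page, so this may still not play automatically everywhere:

```python
import base64

def autoplay_audio_html(mp3_bytes: bytes) -> str:
    # Base64-encode the mp3 so it can be embedded inline as a data URL
    b64 = base64.b64encode(mp3_bytes).decode()
    return (
        '<audio autoplay>'
        f'<source src="data:audio/mp3;base64,{b64}" type="audio/mp3">'
        '</audio>'
    )

# Assumed usage inside a Streamlit app:
#   import streamlit as st
#   with open("sound.mp3", "rb") as f:
#       st.markdown(autoplay_audio_html(f.read()), unsafe_allow_html=True)
```

Because the audio is decoded in the visitor's browser rather than on the server, this sidesteps the missing audio driver in the Streamlit Sharing container entirely.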