Feedback on building a DDSP Streamlit demo

Hello everyone,

I wanted to see how long it would take to convert a Jupyter notebook to Streamlit, so I spent 2 days rebuilding the Timbre Transfer demo in Streamlit. It’s not particularly polished, and I don’t have much TensorFlow background, so there are probably optimizations to be made. You can check it out in this repo. I don’t think I’ll take this POC much further, so feel free to play with it and add features. It’s only been tested on Windows 7, CPU only; I haven’t tested on a GPU, so hopefully it works without any code changes.

Here is a small recap of what came to mind in the process, in order of importance:

  • The cache system is amazing. I can cache the result of a long computation on the audio, then edit some button label, reload the page, reselect the audio, and I don’t have to go through the long computation again. Rerunning the script is not instant; I guess it needs to parse and compare hashes for all objects to detect changes (hopefully cache comparison will get faster, and I’m waiting for the Advanced caching paragraph in the docs), but going from a 3-minute execution to 5~10 seconds on previously computed results is awesome.
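For anyone curious about the "compare hashes for all objects" part: the idea can be illustrated with a tiny hash-keyed memoizer. This is just an illustration of the mechanism, not Streamlit’s actual code, and `slow_square` is a hypothetical stand-in for the minutes-long audio computation:

```python
import functools
import hashlib
import pickle

_cache = {}

def simple_cache(fn):
    """Memoize fn by hashing its pickled arguments: roughly the idea
    behind st.cache (an illustration, not Streamlit's implementation)."""
    @functools.wraps(fn)
    def wrapper(*args):
        key = hashlib.sha256(pickle.dumps((fn.__name__, args))).hexdigest()
        if key not in _cache:
            _cache[key] = fn(*args)   # only computed on a cache miss
        return _cache[key]
    return wrapper

call_log = []

@simple_cache
def slow_square(x):
    call_log.append(x)  # stands in for the long TensorFlow computation
    return x * x
```

Calling `slow_square(3)` twice only runs the body once; the second call is served from the cache, which is exactly the reload behavior described above.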

  • I think I saw these somewhere on the forums or on GitHub, but from a dev perspective it would be interesting to be able to see what’s in the cache, possibly the size of each object, and also to delete a specific cached element from the hamburger menu.

  • I have to admit, when I saw I could not record audio, I wanted to inject my own component without rebuilding the project, especially because I work on Windows 7, so building the project in a VM is cumbersome. I know you’re thinking about it, but this could be game-changing if you pull it off.

  • Something I wanted when coding the UI: a Bootstrap collapse (or any CSS framework’s collapse) to hide lots of plots inside. I guess I could put the plots in a placeholder on demand, but it felt clunky, so I let it go.

  • There is no way of recording audio with something like st.record. This could be beneficial if more people from the Deep Learning realm want to use Streamlit to deploy their DL models with voice as input. OK, there’s already an issue for an audio recorder.

  • I couldn’t use the resulting NumPy data output from TensorFlow in st.audio, but the docs say it’s possible… so does it actually read NumPy yet, or am I missing something? In the source file there’s a TODO: Provide API to convert raw NumPy arrays to audio file (with proper headers, etc). Never mind, there’s an issue on that.
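In the meantime, a workaround I can imagine (a sketch, assuming mono float samples in [-1, 1]) is converting the array to WAV bytes with the stdlib `wave` module and handing those bytes to st.audio:

```python
import io
import struct
import wave

def float_to_wav_bytes(samples, sample_rate=16000):
    """Convert float samples in [-1, 1] to 16-bit PCM mono WAV bytes."""
    buf = io.BytesIO()
    with wave.open(buf, "wb") as wav:
        wav.setnchannels(1)           # mono
        wav.setsampwidth(2)           # 16-bit samples
        wav.setframerate(sample_rate)
        pcm = struct.pack(
            "<%dh" % len(samples),
            *(int(max(-1.0, min(1.0, s)) * 32767) for s in samples),
        )
        wav.writeframes(pcm)
    return buf.getvalue()

# Hypothetical usage in the Streamlit script, with a NumPy output audio_np:
# st.audio(float_to_wav_bytes(audio_np.tolist()), format="audio/wav")
```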

  • Since I could not read the output NumPy array in st.audio and download it from the audio player, I wanted an st.file_downloader that builds the file and lets me select where to download it, the opposite of st.file_uploader. I’d also be happy with serving the audio via HTTP.
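A stopgap I’ve seen suggested (a sketch; `make_download_link` is my own hypothetical helper, not a Streamlit API) is building a base64 data-URI link and rendering it with `st.markdown(..., unsafe_allow_html=True)`:

```python
import base64

def make_download_link(data_bytes, filename, mime="audio/wav"):
    """Build an HTML data-URI download link for raw bytes. Rendering it
    with st.markdown(link, unsafe_allow_html=True) gives a stopgap
    downloader until something like st.file_downloader exists."""
    b64 = base64.b64encode(data_bytes).decode()
    return (
        f'<a href="data:{mime};base64,{b64}" '
        f'download="{filename}">Download {filename}</a>'
    )

# Hypothetical usage in the Streamlit script:
# st.markdown(make_download_link(wav_bytes, "output.wav"),
#             unsafe_allow_html=True)
```

This embeds the whole file in the page, so it only really suits small files; proper HTTP serving would still be nicer.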

  • In st.file_uploader, when working locally, could we have an option to access the file path instead of the raw data? For example, if we want to traverse files in the same folder as the selected file, or to be able to select folders. Just a random thought; I haven’t thought a lot about it.

  • Is the following code supposed to work? The API docs say I can pass a str to st.audio, but I get a TypeError: string argument without an encoding on Windows 7.

    import streamlit as st
    url = "C:\\sample.wav"
    st.audio(url, format="audio/wav")
    
    TypeError: string argument without an encoding
    Traceback
    ...
    File ".../streamlit/media_proto.py", line 78, in _marshall_binary
        b64encodable = bytes(data)
    
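A workaround that sidesteps the error, assuming the problem is the path string itself: read the local file and pass the raw bytes, which st.audio accepts.

```python
def read_audio_bytes(path):
    """Read a local audio file as raw bytes; st.audio accepts bytes."""
    with open(path, "rb") as f:
        return f.read()

# Hypothetical usage instead of passing the path string directly:
# st.audio(read_audio_bytes("C:\\sample.wav"), format="audio/wav")
```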

Okay, done, thanks for reading! And thanks for your hard work on such a great library.


Hey @andfanilo :wave:,

Thanks for sharing the app and repo :pray:, they look awesome. The feedback is also much appreciated and helpful; I’ll share it all internally with the team. We don’t have an exact date yet for the advanced caching docs, but they’re coming soon! We’ll update everyone once they land.

Thanks again for the great app :heart:
