I am building a product that helps clients automate internal meeting summaries, song transcription for budding singers, and real-time product video review analysis, all at an affordable cost.
I wanted to share an MVP I built in the early stages as a demonstration, which the community can build upon to target similar use cases in their own areas of interest.
Features:
Grab any video from YouTube and generate captions (which can be saved as an SRT or VTT file) side by side with the video, plus the extracted audio
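
For anyone who wants to build on this, the core of that feature looks roughly like the sketch below. It is a minimal version assuming the yt-dlp and openai-whisper packages (with ffmpeg available); the URL, file names and model size are just placeholders, not the exact setup in the app.

```python
# Minimal sketch: download a YouTube video's audio, transcribe it with Whisper,
# and write the segments out as an SRT file.
import yt_dlp
import whisper


def format_timestamp(seconds: float) -> str:
    """Convert seconds to the HH:MM:SS,mmm format used by SRT."""
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"


def youtube_to_srt(url: str, srt_path: str = "captions.srt") -> None:
    # Download audio only and convert it to mp3 via ffmpeg.
    ydl_opts = {
        "format": "bestaudio/best",
        "outtmpl": "audio.%(ext)s",
        "postprocessors": [{"key": "FFmpegExtractAudio", "preferredcodec": "mp3"}],
    }
    with yt_dlp.YoutubeDL(ydl_opts) as ydl:
        ydl.download([url])

    # Transcribe; each segment carries start/end times and text.
    model = whisper.load_model("medium")
    result = model.transcribe("audio.mp3")

    with open(srt_path, "w", encoding="utf-8") as f:
        for i, seg in enumerate(result["segments"], start=1):
            f.write(f"{i}\n")
            f.write(f"{format_timestamp(seg['start'])} --> {format_timestamp(seg['end'])}\n")
            f.write(seg["text"].strip() + "\n\n")


youtube_to_srt("https://www.youtube.com/watch?v=VIDEO_ID")
```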
I look forward to suggestions from the community. Any pointers towards Streamlit-specific resources for hosting such GPU-resource-heavy applications on platforms such as Vultr (I have taken care of dockerization and tested with Heroku already) would be a great help!
Interesting one. Probably more future-proof than downloading captions directly from YouTube using youtube_transcript_api. It would be a nice feature to see how the generated transcript differs from one downloaded directly.
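
Something like this is what I had in mind for the comparison (a rough sketch assuming the youtube_transcript_api package and its classic get_transcript call; the video ID is a placeholder):

```python
# Sketch: pull YouTube's own captions so they can be diffed against
# the Whisper-generated transcript. Video ID is a placeholder.
from youtube_transcript_api import YouTubeTranscriptApi

video_id = "VIDEO_ID"
entries = YouTubeTranscriptApi.get_transcript(video_id)

# Each entry has 'text', 'start' and 'duration'; join them into plain text.
youtube_text = " ".join(entry["text"] for entry in entries)
print(youtube_text[:500])
```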
Since I've also been playing around with YouTube video summarization, it would be nice to share experiences.
Are you planning to use pre-trained summarization models from the Hugging Face Transformers library, or the OpenAI API?
How do you approach splitting text into chunks that models can handle?
The Whisper model generates better transcripts than YouTube's auto-caption feature, largely because OpenAI appears to have trained it (especially the large-v2 and medium models) on data covering a wide range of accents, along with a comparable amount of multilingual audio.
However, it completely ignores filler words like "ah" and "umm". Compared with transcripts/subtitles uploaded by YouTube users themselves, the results are quite similar for English, French, German and Japanese.
For Indian languages like Tamil or Hindi, the generated transcripts are not as satisfactory.
As far as songs are concerned, especially rap and rock songs, it works incredibly well in most languages.
Yes, my actual product involves video summarization + sentiment analysis, so I am using pre-trained models from Hugging Face for those. I will build individual APIs for them and integrate with Streamlit.
The Whisper model can easily handle video/audio of up to about 1.5 hours, so generating subtitles/transcripts is not an issue. As for summarizing the generated transcript, the T5-based models on Hugging Face can handle even entire books, so I have not yet run into a need to split the text.
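
For reference, the summarization + sentiment part on my side looks roughly like the sketch below. It assumes the Hugging Face transformers library; the checkpoints shown are just the illustrative defaults, not the exact models in the product.

```python
# Rough sketch: summarize a generated transcript and run sentiment analysis
# on the summary, using Hugging Face pipelines with placeholder models.
from transformers import pipeline

summarizer = pipeline("summarization", model="t5-small")
sentiment = pipeline("sentiment-analysis")

transcript = open("transcript.txt", encoding="utf-8").read()

# Summarize the transcript (truncating inputs longer than the model's limit).
summary = summarizer(transcript, max_length=150, min_length=30, truncation=True)
print(summary[0]["summary_text"])

# Sentiment of the summary (returns a label and a confidence score).
print(sentiment(summary[0]["summary_text"]))
```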