Introducing Jina NOW, the first-ever no-code solution that lets you create and deploy multimodal neural search in a matter of minutes, using a Streamlit-powered frontend interface!
Jina NOW lets you build end-to-end cross-modal search without needing any text annotations for your images. It is powered by OpenAI's CLIP (Contrastive Language-Image Pre-training) model.
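To give a feel for what cross-modal search means under the hood: CLIP maps both text and images into a shared embedding space, and search then reduces to ranking image vectors by their similarity to a text-query vector. Below is a minimal, illustrative sketch of that retrieval step, using small mock embeddings in place of real CLIP encoder outputs (the filenames, vectors, and `search` helper are all hypothetical, not Jina NOW's API):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Mock image embeddings indexed by filename. In a real CLIP-backed
# system these would come from the image encoder.
image_index = {
    "cat.jpg": [0.9, 0.1, 0.0],
    "dog.jpg": [0.1, 0.9, 0.0],
    "car.jpg": [0.0, 0.1, 0.9],
}

def search(query_embedding, top_k=2):
    """Rank indexed images by similarity to a text-query embedding."""
    scored = [(name, cosine_similarity(query_embedding, emb))
              for name, emb in image_index.items()]
    return sorted(scored, key=lambda t: t[1], reverse=True)[:top_k]

# A query like "a photo of a cat" would land near the cat image's
# vector in the shared space, so that image ranks first.
print(search([0.8, 0.2, 0.1]))
```

Because text and images live in the same space, no per-image captions or labels are needed at index time, which is what makes the annotation-free setup possible.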
Check out the Twitter thread to see how Jina NOW revamps the emoji-search experience on Emojipedia: Tweet!
Check out the blog post for a detailed, step-by-step walkthrough!