Jina NOW 👉 The one-liner Neural Search

:rocket: Introducing Jina NOW, the first-ever no-code solution that lets you create and deploy multimodal neural search in a matter of minutes through a Streamlit-powered frontend!

:joystick: Jina NOW lets you build end-to-end cross-modal search without needing any text annotations for your images. It is powered by OpenAI's CLIP (Contrastive Language-Image Pre-training) model.
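Under the hood, CLIP encodes both text and images into a shared embedding space, so a text query can be matched against images purely by vector similarity, with no labels involved. Here is a toy sketch of that ranking step (the vectors and filenames are made up for illustration; real CLIP embeddings are high-dimensional and come from its text and image encoders):

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy "embeddings" -- in practice these come from CLIP's encoders.
query_embedding = [0.9, 0.1, 0.0]          # e.g. the encoded text "a red apple"
image_embeddings = {
    "apple.jpg":  [0.8, 0.2, 0.1],
    "banana.jpg": [0.1, 0.9, 0.2],
    "car.jpg":    [0.0, 0.1, 0.95],
}

# Rank images by similarity to the text query -- the core of cross-modal
# search: no annotations, just geometry in the shared embedding space.
ranked = sorted(image_embeddings.items(),
                key=lambda kv: cosine(query_embedding, kv[1]),
                reverse=True)
print([name for name, _ in ranked])  # → ['apple.jpg', 'banana.jpg', 'car.jpg']
```

Jina NOW wraps this idea (plus indexing, serving, and the frontend) so you never write this code yourself.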

:bird: Check out the Twitter thread to see how Jina NOW works its magic, revamping the emoji search experience on Emojipedia - Tweet!

:closed_book: Check out the blog post for a detailed explanation and a step-by-step walkthrough!
