Server error when running Llama models

My app uses the Ollama API to run models from my local server. This is my first Streamlit app and I'm still learning coding and development. How do I run Llama models in my app? A minimal sketch of the kind of setup I mean is below.
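For reference, here is a minimal sketch of what I'm aiming for, assuming the `streamlit` and `ollama` Python packages are installed, the Ollama server is running on its default port (http://localhost:11434), and a model has already been pulled (e.g. `ollama pull llama3` — the model name here is just a placeholder):

```python
# Minimal sketch: a Streamlit chat page backed by a local Ollama server.
# Assumes `pip install streamlit ollama`, Ollama running on its default
# port, and a model already pulled. Run with `streamlit run app.py`.
import streamlit as st
import ollama

st.title("Local Llama chat")

prompt = st.chat_input("Ask the model something")
if prompt:
    st.chat_message("user").write(prompt)

    # Stream tokens from Ollama as they are generated.
    def token_stream():
        for chunk in ollama.chat(
            model="llama3",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
            stream=True,
        ):
            yield chunk["message"]["content"]

    with st.chat_message("assistant"):
        st.write_stream(token_stream())
```

I went with streaming via `st.write_stream` so the reply appears token by token instead of waiting for the whole response.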


Welcome to the community, @SynTia-OI! :hugs:

Could you please share a bit more about the error you’re encountering or the specific code snippet where you’re facing issues?

This will help us understand your issue better and provide the most accurate assistance.

Best,
Charly

This topic was automatically closed 180 days after the last reply. New replies are no longer allowed.