Hi all,
since all the hype is about chain-of-thought reasoning in LLMs, I needed a quick system for testing these models locally.
Feel free to give it a shot (it works nicely with the QwQ and Llama 3.3 models, but should work with any reasonably capable model). Make sure you have the latest Python Ollama bindings, since it uses structured outputs.
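For anyone curious what the structured-output side looks like, here is a minimal stdlib-only sketch of the g1-style reasoning step. The field names (title, content, next_action) follow the JSON the g1 prompt asks the model to emit, and the `ask` helper is my own illustration of where the Ollama `chat` call (with its `format` parameter) would plug in — not the actual code of this tool:

```python
import json
from dataclasses import dataclass


@dataclass
class Step:
    """One reasoning step; field names assumed from the g1-style JSON schema."""
    title: str
    content: str
    next_action: str  # "continue" or "final_answer"


def parse_step(raw: str) -> Step:
    """Validate one JSON reasoning step returned by the model."""
    data = json.loads(raw)
    return Step(title=data["title"],
                content=data["content"],
                next_action=data["next_action"])


def ask(chat_fn, model: str, messages: list) -> Step:
    """One round of the reasoning loop. `chat_fn` would be e.g. ollama.chat;
    recent bindings accept a JSON schema via the `format` parameter to
    constrain the output (this wrapper is a hypothetical sketch)."""
    resp = chat_fn(model=model, messages=messages, format="json")
    return parse_step(resp["message"]["content"])
```

The loop then keeps feeding steps back as messages until a step comes back with `next_action` set to `"final_answer"`.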
Source is:
Credits go to GitHub - bklieger-groq/g1 (g1: Using Llama-3.1 70b on Groq to create o1-like reasoning chains for its prompts), the project whose initial code I derived this from.