Hi! I am building a chat app with Streamlit and LangChain. It works fine for a simple agent like this:
agent = (
    {
        "input": lambda x: x["input"],
        "agent_scratchpad": lambda x: format_to_openai_tool_messages(
            x["intermediate_steps"]
        ),
        "chat_history": lambda x: format_chat_history(x["chat_history"]),
    }
    | prompt
    | llm.bind(tools=oai_tools)
    | OpenAIToolsAgentOutputParser()
)
agent_executor = AgentExecutor(
    agent=agent,
    tools=lc_tools,
    verbose=True,
    callbacks=[st_cb],
)
But if I add an additional chain inside the agent, like this:
_coreference = (coref_prompt | llm | StrOutputParser()).with_config(
    run_name="Co-reference Resolution"
)
agent = (
    {
        "input": lambda x: x["input"],
        "agent_scratchpad": lambda x: format_to_openai_tool_messages(
            x["intermediate_steps"]
        ),
        "chat_history": lambda x: format_chat_history(x["chat_history"]),
    }
    | RunnablePassthrough.assign(referenced_input=_coreference)
    | prompt
    | llm.bind(tools=oai_tools)
    | OpenAIToolsAgentOutputParser()
)
agent_executor = AgentExecutor(
    agent=agent,
    tools=lc_tools,
    verbose=True,
    callbacks=[st_cb],
)
The agent still works, but I am getting errors from the StreamlitCallbackHandler about LLMThought:
2024-01-16 18:00:19.753 Thread 'ThreadPoolExecutor-5_0': missing ScriptRunContext
Error in StreamlitCallbackHandler.on_llm_start callback: NoSessionContext()
Error in StreamlitCallbackHandler.on_llm_new_token callback: RuntimeError('Current LLMThought is unexpectedly None!')
(the on_llm_new_token error repeats once per streamed token)
Error in StreamlitCallbackHandler.on_llm_end callback: RuntimeError('Current LLMThought is unexpectedly None!')
In the UI, none of the thoughts show up.
I looked into it a bit; the error comes from this container setup in the LLMThought class initialisation:
self._container = parent_container.status(
    labeler.get_initial_label(), expanded=expanded
)
I’m not sure why this is happening. The documentation for the LLMThought class says:
"""Encapsulates the Streamlit UI for a single LLM 'thought' during a LangChain Agent
run. Each tool usage gets its own thought; and runs also generally having a
concluding thought where the Agent determines that it has an answer to the prompt.
Each thought gets its own expander UI.
"""
Is it because I chained multiple LLM calls in one agent? How can I solve this?
Thanks in advance!