{NoSessionContext} NoSessionContext() Error in StreamlitCallbackHandler while using custom agent

Hi! I am building a chat app with Streamlit and LangChain. It works fine for a simple agent like this:

    agent = (
        {
            "input": lambda x: x["input"],
            "agent_scratchpad": lambda x: format_to_openai_tool_messages(
                x["intermediate_steps"]
            ),
            "chat_history": lambda x: format_chat_history(x["chat_history"]),
        }
        | prompt
        | llm.bind(tools=oai_tools)
        | OpenAIToolsAgentOutputParser()
    )

    agent_executor = AgentExecutor(
        agent=agent,
        tools=lc_tools,
        verbose=True,
        callbacks=[st_cb]
    )

But if I add an additional chain to the agent like this:

    _coreference = (coref_prompt | llm | StrOutputParser()).with_config(
        name="Co-reference Resolution"
    )
    agent = (
        {
            "input": lambda x: x["input"],
            "agent_scratchpad": lambda x: format_to_openai_tool_messages(
                x["intermediate_steps"]
            ),
            "chat_history": lambda x: format_chat_history(x["chat_history"]),
        }
        | RunnablePassthrough.assign(referenced_input=_coreference)
        | prompt
        | llm.bind(tools=oai_tools)
        | OpenAIToolsAgentOutputParser()
    )

    agent_executor = AgentExecutor(
        agent=agent,
        tools=lc_tools,
        verbose=True,
        callbacks=[st_cb]
    )

The agent still works, but I am getting errors on the LLMThoughts:

2024-01-16 18:00:19.753 Thread 'ThreadPoolExecutor-5_0': missing ScriptRunContext
Error in StreamlitCallbackHandler.on_llm_start callback: NoSessionContext()
Error in StreamlitCallbackHandler.on_llm_new_token callback: RuntimeError('Current LLMThought is unexpectedly None!')
(the on_llm_new_token error repeats once per streamed token)
Error in StreamlitCallbackHandler.on_llm_end callback: RuntimeError('Current LLMThought is unexpectedly None!')

In the UI it shows:

[screenshot not included]

and none of the thoughts show up.

I looked into it a bit; the error comes from this status() call in the LLMThought class initialisation:

        self._container = parent_container.status(
            labeler.get_initial_label(), expanded=expanded
        )

I am not sure why this is happening. The documentation for the LLMThought class says:

    """Encapsulates the Streamlit UI for a single LLM 'thought' during a LangChain Agent
    run. Each tool usage gets its own thought; and runs also generally having a
    concluding thought where the Agent determines that it has an answer to the prompt.

    Each thought gets its own expander UI.
    """

Is it because I chained multiple LLMs in one agent? How can I solve this?
Thanks in advance! :blush:

I have the same problem with the missing context as soon as I use multiple chains (i.e. every case except some demo cases) with Streamlit.

IMHO this happens because the agent is executed via a thread pool: the callbacks are passed into those worker threads, and when they try to update Streamlit's state from a thread other than the script thread, Streamlit raises an exception (see Improve "missing ReportContext" threading error · Issue #1326 · streamlit/streamlit · GitHub).
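The failure mode can be demonstrated with a plain-Python analogue (this is not Streamlit's actual internals, which use thread-local state; contextvars just makes the same effect easy to show). A ThreadPoolExecutor worker starts with a fresh context, so per-thread state set in the script thread is invisible there unless it is explicitly carried over:

```python
import contextvars
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for Streamlit's per-thread ScriptRunContext.
session_ctx = contextvars.ContextVar("session_ctx", default=None)

def update_ui():
    # Mimics a callback handler that needs the script thread's context.
    ctx = session_ctx.get()
    if ctx is None:
        raise RuntimeError("missing ScriptRunContext")
    return f"updated {ctx}"

session_ctx.set("session-42")  # set in the "script" (main) thread

with ThreadPoolExecutor() as pool:
    # Worker threads start with an empty context, so the lookup fails,
    # just like the callback handler failing inside the agent's thread.
    try:
        pool.submit(update_ui).result()
        lost = None
    except RuntimeError as exc:
        lost = str(exc)

    # Explicitly carrying the caller's context into the worker fixes it.
    ok = pool.submit(contextvars.copy_context().run, update_ui).result()
```

The commonly suggested Streamlit-side equivalent is attaching the script thread's context to the worker thread (streamlit.runtime.scriptrunner.add_script_run_ctx in the versions I have seen), but as noted above, with nested chains it does not always help.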

But so far none of the listed solutions has really worked.

IMHO the callback management with LCEL is not really explicit. How can I attach different callbacks to different chains, given that not every chain should stream its tokens to the GUI? (But this is a LangChain issue.)
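To make concrete what per-chain callback scoping could look like, here is a minimal plain-Python sketch. Stage, run, and ui_cb are made-up names for illustration, not LangChain's API; the idea is that each stage carries its own callback list, so only the stages you opt in emit events to the UI:

```python
class Stage:
    """Toy pipeline stage with its own, optional callback list."""

    def __init__(self, fn, callbacks=None):
        self.fn = fn
        self.callbacks = callbacks or []

    def run(self, x):
        for cb in self.callbacks:
            cb("start", x)          # only this stage's callbacks fire
        out = self.fn(x)
        for cb in self.callbacks:
            cb("end", out)
        return out

events = []
ui_cb = lambda kind, val: events.append((kind, val))

# Only the final stage reports to the "UI"; the coreference stage is silent.
coref = Stage(str.title)                      # no callbacks attached
answer = Stage(lambda s: s + "!", callbacks=[ui_cb])

result = answer.run(coref.run("hello world"))
```

In LangChain the closest knob I know of is passing callbacks in a runnable's config (e.g. via with_config) instead of on the AgentExecutor, so they scope to one chain; whether that interacts cleanly with the threaded StreamlitCallbackHandler is exactly the open question in this thread.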