In my experience, LangChain is a very complex, high-level abstraction. If you follow their examples exactly, it’s easy to get good results, but as soon as you try to modify something yourself, it often produces very complicated bugs, because the framework hides too much of what is going on.
Just by looking at this part of your code, I have no idea what is happening. Also, I haven’t obtained the Azure OpenAI API key yet, so I cannot test AzureChatOpenAI either.
If I were to debug it, I think I would first check whether the response is output correctly when streaming is set to False.
If everything mentioned above is working fine, I noticed that the error message states: “Object of type StreamHandler is not JSON serializable.” It’s possible that the information returned by the AI is in JSON format. In that case, you might need to extract a specific part of the JSON, such as the text or token, and then pass it to the StreamHandler for processing. You can refer to the “output parser” reference for guidance: https://python.langchain.com/en/latest/modules/prompts/output_parsers/getting_started.html
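As an illustration of what that extraction step might look like, here is a minimal sketch. The response shape below is a made-up assumption for demonstration, not what AzureChatOpenAI actually returns; the real structure depends on your chain and model.

```python
import json

# Hypothetical raw response; the actual structure depends on your chain/model.
raw_response = '{"choices": [{"text": "Hello from the model"}]}'

def extract_text(raw: str) -> str:
    """Parse a JSON response string and pull out just the text field."""
    data = json.loads(raw)
    return data["choices"][0]["text"]

print(extract_text(raw_response))  # → Hello from the model
```

The idea is the same whatever the real schema is: parse once, pull out only the string you want to display, and pass that string (not the whole object) to your handler.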
Or, if your entire program’s code is not very long, you could copy all the code along with the error messages into GPT-4 or Claude 100k and let it do the debugging.
In fact, I wrote this StreamHandler with the help of GPT-4: I gave it the callback documentation page and let it come up with the handler. These models are pretty good at this.
Thanks for your suggestion. I tried LangChain’s built-in StreamingStdOutCallbackHandler to check whether the streaming output worked correctly, and I was able to stream the response in the terminal. But, as mentioned earlier, I was looking for a way to stream the output in Streamlit. I was able to do this by adopting a custom stream_handler (StreamlitCallbackHandler(BaseCallbackHandler)). Then I attached a callback manager to the LLM before running the SequentialChain().
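For reference, the core of such a handler might look like the sketch below. In the real app it would subclass LangChain’s BaseCallbackHandler and the container would be the object returned by st.empty(); here it is a plain class with a fake container (both names are illustrative) so the token-accumulation logic can be seen, and run, in isolation:

```python
class StreamHandler:
    """Accumulates streamed tokens and writes the running text to a container.

    In the real app this subclasses LangChain's BaseCallbackHandler, and the
    container is the object returned by st.empty(); here the container only
    needs a `markdown` method.
    """

    def __init__(self, container, initial_text: str = ""):
        self.container = container
        self.text = initial_text

    def on_llm_new_token(self, token: str, **kwargs) -> None:
        # Called once per streamed token; re-render the accumulated text.
        self.text += token
        self.container.markdown(self.text)


class FakeContainer:
    """Stand-in for st.empty() so the sketch runs without Streamlit."""

    def __init__(self):
        self.last = None

    def markdown(self, text: str) -> None:
        self.last = text


box = FakeContainer()
handler = StreamHandler(box)
for tok in ["Hel", "lo", " world"]:
    handler.on_llm_new_token(tok)
print(box.last)  # → Hello world
```

The key design point is that the handler keeps its own running string and rewrites the whole placeholder on every token, which is what makes the text appear to “type itself” in the Streamlit UI.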
With the latest (1.24) version of Streamlit, streaming is possible, but ONLY for some special cases such as OpenAI’s chat completion API. I am working on a Streamlit app that uses LangChain’s RetrievalQAWithSourcesChain to answer questions from text documents.
Is there any way to add streaming with Streamlit + LangChain’s RetrievalQAWithSourcesChain?
```python
def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:
    """Collect token usage."""
    if response.llm_output is None:
```
But after streaming, response.llm_output is None.
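One way to keep the handler from crashing in that case is to guard against a missing llm_output before reading usage from it. This is only a sketch of that guard; LLMResultStub and UsageHandler below are stand-ins I made up so the example runs on its own, not LangChain classes:

```python
from dataclasses import dataclass
from typing import Any, Optional


@dataclass
class LLMResultStub:
    """Stand-in for LangChain's LLMResult, holding only the field we inspect."""
    llm_output: Optional[dict] = None


class UsageHandler:
    """Collects token usage, tolerating the None llm_output seen after streaming."""

    def __init__(self):
        self.total_tokens = 0

    def on_llm_end(self, response: LLMResultStub, **kwargs: Any) -> None:
        if response.llm_output is None:
            return  # streamed responses may carry no usage info at all
        usage = response.llm_output.get("token_usage", {})
        self.total_tokens += usage.get("total_tokens", 0)


h = UsageHandler()
h.on_llm_end(LLMResultStub(llm_output=None))  # ignored, no crash
h.on_llm_end(LLMResultStub(llm_output={"token_usage": {"total_tokens": 42}}))
print(h.total_tokens)  # → 42
```

This only avoids the error; if the streamed response genuinely carries no usage data, the count will simply stay at whatever the non-streamed calls reported.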
I also tried the following, but it did not work:

```python
with get_openai_callback() as cb:
    st_cb = StreamHandler(st.empty())
    response = chain.run(input=user_query, callbacks=[st_cb])
```
Could you please help?
Thanks in advance