Memory handling in multi-user LLM app

Hi,
I am analyzing the app by @Charly_Wargnier that demos the interplay of Streamlit, LangChain, Trubrics, and LangSmith. I am running this app locally, and I am seeing cross-talk between sessions in the chat history.

In this app, chat history is handled by LangChain's ConversationBufferMemory backed by a StreamlitChatMessageHistory. I suspect the problem is the following: the memory is created per session, but it is handed over to the function that sets up the LLM chain (in essential_chain.py). Since this function is cached, all subsequent sessions also use the memory object created by the first session.
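For illustration, here is a minimal sketch of the pattern I suspect. The function name get_llm_chain is hypothetical (standing in for the function in essential_chain.py), I am assuming st.cache_resource is the decorator used, and the import paths assume the 2023-era langchain package:

```python
import streamlit as st
from langchain.chains import LLMChain
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from langchain.memory.chat_message_histories import StreamlitChatMessageHistory
from langchain.prompts import PromptTemplate

@st.cache_resource  # cached once, globally, for ALL sessions
def get_llm_chain(_memory):
    # The underscore prefix tells Streamlit not to hash _memory, so whatever
    # memory the first caller passes in is baked into the cached chain.
    prompt = PromptTemplate.from_template("{history}\nUser: {input}")
    return LLMChain(llm=ChatOpenAI(), prompt=prompt, memory=_memory)

# Every session builds its own memory backed by its own chat history...
msgs = StreamlitChatMessageHistory(key="chat_history")
memory = ConversationBufferMemory(chat_memory=msgs)

# ...but only the first session's memory ends up inside the cached chain,
# so later sessions read and write the first user's history.
chain = get_llm_chain(memory)
```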

Of course, caching the setup of the LLM chain is important. So, is there a way to have this function cached per session instead of globally?

Thanks, and best regards,
René

Hi @Rene_Steiner

There are similar posts with a lot of discussion on caching per-session data, as opposed to the global caching that Streamlit performs by default, which shares cached data across all users.
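In short, the contrast looks like this (a minimal sketch with hypothetical names; shared_list and per_session_list are just illustrations):

```python
import streamlit as st

@st.cache_resource
def shared_list():
    # One object for the whole app: every user's session gets this same list.
    return []

def per_session_list():
    # One object per browser session, kept across reruns of the script.
    if "my_list" not in st.session_state:
        st.session_state["my_list"] = []
    return st.session_state["my_list"]
```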

Hope these help!

Thank you for your reply; that was helpful indeed. Adding a session id as a parameter to the cached function that provides the LLM chain solved the problem. Maybe @Charly_Wargnier could add that to his demo code, too?
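For reference, here is a minimal sketch of the fix, again using the hypothetical get_llm_chain; the actual function in essential_chain.py may be structured differently:

```python
import uuid
import streamlit as st
from langchain.chains import LLMChain
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from langchain.memory.chat_message_histories import StreamlitChatMessageHistory
from langchain.prompts import PromptTemplate

# One unique id per browser session, kept across reruns in session_state.
if "session_id" not in st.session_state:
    st.session_state["session_id"] = str(uuid.uuid4())

@st.cache_resource
def get_llm_chain(session_id, _memory):
    # session_id is hashed into the cache key, so each session builds its own
    # chain, while reruns within the same session still hit the cache.
    prompt = PromptTemplate.from_template("{history}\nUser: {input}")
    return LLMChain(llm=ChatOpenAI(), prompt=prompt, memory=_memory)

msgs = StreamlitChatMessageHistory(key="chat_history")
memory = ConversationBufferMemory(chat_memory=msgs)
chain = get_llm_chain(st.session_state["session_id"], memory)
```

The underscore prefix on _memory still keeps Streamlit from hashing the memory object itself; only the session id distinguishes the cache entries.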

Thanks again and best regards,
René


Glad to hear that it works now and thanks for the feedback!
