Streamlit app works locally but gives token size error when deployed

Hello, I have an app that runs fine locally and gives me what I want using an OpenAI LLM.

When I deploy it to Google Cloud, I see an OpenAI token error saying:

“InvalidRequestError: This model’s maximum context length is 4097 tokens, however you requested 519913 tokens (519657 in your prompt; 256 for the completion). Please reduce your prompt; or completion length.”

My code only returns a list of 10 numbers.

I am not sure where this is coming from. Could this be because Streamlit couldn't end a user session, or something like that?


Hello there,
This issue is unrelated to Streamlit or user sessions.
The maximum context length for this GPT-3.5 model is 4097 tokens, and the OpenAI API counts both the prompt and the completion tokens against that limit.
Your deployed app is sending a far larger prompt than you expect (519,657 tokens, according to the error), so check how the input is being assembled before the API call.
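One way to guard against this is to estimate the prompt's token count and trim it before calling the API. The sketch below uses a rough 4-characters-per-token heuristic (use `tiktoken` for exact counts); `truncate_prompt` and the limits are illustrative, with the numbers taken from the error message above:

```python
# Sketch: guard the prompt size before calling the OpenAI API.
# The 4-chars-per-token ratio is only a rough English-text heuristic;
# tiktoken gives exact counts for a specific model.

MAX_CONTEXT_TOKENS = 4097   # limit reported in the error message
COMPLETION_TOKENS = 256     # completion length requested in the error
CHARS_PER_TOKEN = 4         # rough approximation

def estimate_tokens(text: str) -> int:
    """Very rough token estimate; use tiktoken for exact counts."""
    return len(text) // CHARS_PER_TOKEN

def truncate_prompt(prompt: str) -> str:
    """Trim the prompt so prompt + completion fits the context window."""
    budget = MAX_CONTEXT_TOKENS - COMPLETION_TOKENS
    if estimate_tokens(prompt) > budget:
        prompt = prompt[: budget * CHARS_PER_TOKEN]
    return prompt

# Simulate an oversized prompt like the one in the error message.
oversized = "word " * 500_000
trimmed = truncate_prompt(oversized)
print(estimate_tokens(trimmed))
```

Truncation is a blunt fix; the real solution is to find where the deployed app is concatenating far more text than intended.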

Yes, but this only happens on my deployed Streamlit application. The app works fine on my local machine.
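Since the behaviour differs between environments, one way to pinpoint the divergence is to log the prompt's size and a fingerprint in both places, right before the API call, and compare. A minimal sketch (the `debug_prompt` helper and the sample prompt are hypothetical stand-ins for whatever the app actually sends):

```python
# Sketch: log the prompt's length and a short hash immediately before
# the OpenAI call, so local and deployed runs can be compared.
import hashlib
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("prompt-debug")

def debug_prompt(prompt: str) -> str:
    """Log size and fingerprint of the prompt, then pass it through."""
    log.info("prompt chars=%d sha1=%s", len(prompt),
             hashlib.sha1(prompt.encode()).hexdigest()[:8])
    return prompt

# Hypothetical stand-in for the app's real prompt.
prompt = debug_prompt("Return a list of 10 numbers.")
```

If the logged length is small locally but huge on Google Cloud, the bug is in how the input is built in that environment (for example, reading an entire uploaded file or accumulating state across reruns), not in Streamlit itself.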