Issues with Streamlit and Openai API V2 - The requested model 'gpt-4o-mini' cannot be used with the Assistants API in v1

Hello everyone!

I have been using Streamlit for a while now, mostly with LLMs and OpenAI's Assistants API. While trying out some use cases, I came across a known error:
BadRequestError: Error code: 400 - {'error': {'message': "The requested model 'gpt-4o-mini' cannot be used with the Assistants API in v1. Follow the migration guide to upgrade to v2: https://platform.openai.com/docs/assistants/migration.", 'type': 'invalid_request_error', 'param': 'model', 'code': 'unsupported_model'}}

But the thing is, I am using up-to-date versions of both Streamlit and OpenAI, and I have been using the same code in other projects, which work fine.
The weird thing is that I do not get this error when running the exact same code outside of a Streamlit app (i.e., in an environment where Streamlit is not installed). This is my code:

from openai import OpenAI

client = OpenAI(
  api_key="key",  # or omit this and set the OPENAI_API_KEY environment variable
)
assistant_id = "id"

# Create a new conversation thread
thread = client.beta.threads.create()

# Add the user's question to the thread
message = client.beta.threads.messages.create(
  thread_id=thread.id,
  role="user",
  content="I need to solve the equation `3x + 11 = 14`. Can you help me?"
)

# Run the assistant on the thread and wait until the run finishes
run = client.beta.threads.runs.create_and_poll(
  thread_id=thread.id,
  assistant_id=assistant_id,
  instructions="Please address the user as Jane Doe. The user has a premium account."
)

if run.status == 'completed': 
  messages = client.beta.threads.messages.list(
    thread_id=thread.id
  )
  print(messages)
else:
  print(run.status)
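As a side note, printing the whole messages object is quite noisy. Here is a small helper that pulls out just the newest message's text; the attribute path (data[0].content[0].text.value) matches the v1.x SDK's message objects as far as I know. The stand-in object below only mimics that shape so the helper can be tried without a live API call:

from types import SimpleNamespace

def latest_reply_text(messages) -> str:
    """Return the text of the newest message (messages.list is newest-first)."""
    newest = messages.data[0]
    # A message can contain several content parts; keep only the text ones.
    parts = [c.text.value for c in newest.content if hasattr(c, "text")]
    return "\n".join(parts)

# Stand-in object mirroring the SDK's message shape, for a dry run:
fake = SimpleNamespace(data=[SimpleNamespace(content=[
    SimpleNamespace(text=SimpleNamespace(value="x = 1")),
])])
print(latest_reply_text(fake))  # -> x = 1

With a real run you would call it as latest_reply_text(client.beta.threads.messages.list(thread_id=thread.id)).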

As you can see, nothing complex yet; I basically just wanted to check the response.
Has anyone encountered this lately? Is this due to some issue with the new Streamlit version? I have not found a similar issue in the docs, this forum, or elsewhere.
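One thing I would rule out first: since the error only appears in the Streamlit environment, that environment may simply have an older openai package pinned (e.g. a separate virtualenv or requirements file). This sketch checks the installed SDK version against the first release I believe added Assistants v2 support (1.21.0 is my assumption; please verify against the SDK changelog):

from importlib import metadata

MIN_V2 = (1, 21, 0)  # assumed first openai release with Assistants v2 support

def parse(version: str) -> tuple:
    # Keep only the numeric dotted prefix ("1.30.1" -> (1, 30, 1)).
    return tuple(int(p) for p in version.split(".")[:3] if p.isdigit())

try:
    installed = metadata.version("openai")
    if parse(installed) < MIN_V2:
        print(f"openai {installed} is likely too old for Assistants v2")
    else:
        print(f"openai {installed} should support Assistants v2")
except metadata.PackageNotFoundError:
    print("openai is not installed in this environment")

Dropping this at the top of the Streamlit app (or running it with the same interpreter Streamlit uses) would show whether the two environments really have the same SDK version.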

Thanks!
