Anyone using the new features from openai?

Excited about the new token limits and things like Assistants. I've been working on an app that acts as a German tutor, with a second column showing translations of what the teacher is saying. Those translations required a second call to openai.

Now with the assistants and persistent threads it looks like the API can keep track of a lot of what I was managing in Python.

Working with files looks interesting too… generate a visualization and then send it for openai to describe.

Still trying to wrap my head around function calling and the code interpreter!

A couple days later…

Ugh, a few config things to change, but I finally got streaming working again with the new GPT-4 Turbo model and the updated openai library. Next step is to integrate an assistant instead of chat completions so that I can access more of the new features.

Here are the changes I had to make for the updated openai library (1.2.2 and later; I'm now on 1.6.1).
After “import openai” add

from openai import OpenAI
openai.api_key = st.secrets['openai']["OPENAI_API_KEY"] >> OpenAI.api_key = st.secrets['openai']["OPENAI_API_KEY"]

I mean, read the key from wherever you want… an environment variable, Streamlit secrets… but change the variable to “OpenAI.api_key”.

client = OpenAI(api_key=OpenAI.api_key)

To make the request
for response in openai.ChatCompletion.create(model="gpt-4", ... >> for response in client.chat.completions.create(model="gpt-4-1106-preview", ...

For streaming
full_response += response.choices[0].delta.get("content", "") >> full_response += response.choices[0].delta.content

In fact, lately I have been getting TypeError: can only concatenate str (not "NoneType") to str (the last streamed chunk's delta content is None), so to get around it I use the following
if response.choices[0].delta.content: full_response += response.choices[0].delta.content
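To see why the guard matters without calling the API, here's a self-contained simulation of a stream (SimpleNamespace objects stand in for the real chunk objects; the final chunk's delta.content is None, just like the real final chunk):

```python
from types import SimpleNamespace

def _chunk(content):
    """Build a fake streaming chunk shaped like the v1 response objects."""
    return SimpleNamespace(
        choices=[SimpleNamespace(delta=SimpleNamespace(content=content))]
    )

# Simulated stream: the last chunk carries None, which is what blows up
# a bare `full_response += response.choices[0].delta.content`.
chunks = [_chunk("Hallo"), _chunk(" Welt"), _chunk(None)]

full_response = ""
for response in chunks:
    if response.choices[0].delta.content:
        full_response += response.choices[0].delta.content

print(full_response)  # Hallo Welt
```

Swap the fake chunks for the iterator returned by client.chat.completions.create(..., stream=True) and the loop body stays the same.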

For a single (non-streaming) response
I think it is the same syntax as before

And if you are checking for errors
except openai.error.RateLimitError as error: >> except openai.RateLimitError as error:
