Summary
I am trying to get a response from ChatGPT (Azure OpenAI).
I have tested it locally using streamlit run entry_point.py (my file) and it works.
I can call OpenAI from Streamlit and I'm pretty happy about it.
Then I deployed it using share.streamlit.com;
however, the deployed app gives me an HTTP timeout error every time I try to make that call.
Can you help me?
Steps to reproduce
Code snippet:
openai.api_type = "azure"
openai.api_base = "https://xxx.openai.azure.com/"
openai.api_version = "2023-03-15-preview"
openai.api_key = st.secrets["OPENAI_API_KEY_AZURE"]
response = openai.ChatCompletion.create({"role": "system", "content": ......) # here is the problem
Azure OpenAI has a custom endpoint that I don't include here. Do I need it?
Debug info
- Streamlit version: 1.25.0
- Python version: 3.9.12
Hi @stepkurniawan,
Thanks for posting!
From the Azure docs, it does seem you will need to add the endpoint as well to access the model:
import streamlit as st
import openai
openai.api_type = "azure"
openai.api_base = st.secrets["AZURE_OPENAI_ENDPOINT"]
openai.api_version = "2023-05-15"
openai.api_key = st.secrets["AZURE_OPENAI_KEY"]
response = openai.ChatCompletion.create(
engine="gpt-35-turbo", # engine = "deployment_name".
messages=[
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Does Azure OpenAI support customer managed keys?"},
{"role": "assistant", "content": "Yes, customer managed keys are supported by Azure OpenAI."},
{"role": "user", "content": "Do other Azure AI services support this too?"}
]
)
st.write(response)
st.write(response['choices'][0]['message']['content'])
You will need to add the keys to the secrets.toml file in the .streamlit directory:
AZURE_OPENAI_ENDPOINT = "your-azure-openai-endpoint"
AZURE_OPENAI_KEY = "your-azure-openai-key"
Let me know if this helps.
Wow, thank you for your prompt reply, I will test it today!
Hi @tonykip, I tried the solution you provided; however, it didn't fix the problem.
I am still only able to call the API from my local computer, and not from the Streamlit deployment server.
Perhaps there's a firewall somewhere?
@stepkurniawan, what is the specific error you are getting?
Are you adding the secrets to the deployment as outlined in this guide?
Good question…
I will change how I read the content of my output:
# output = response.choices[0].message["content"] # old version
output = response['choices'][0]['message']['content']
and test again.
If it works, I think it's because of the different Python versions that we have…
Edit:
New error message:
raise error.Timeout("Request timed out: {}".format(e)) from e
openai.error.Timeout: Request timed out: HTTPSConnectionPool(host='abcgenaidemo.openai.azure.com', port=443): Read timed out. (read timeout=600)
Still Timeout
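For reference, the call that times out looks roughly like this (the deployment name and prompt are placeholders). As far as I understand, the openai 0.x client accepts a request_timeout argument, so this sketch should at least fail faster than the default 600-second read timeout while I debug:
import openai
import streamlit as st

openai.api_type = "azure"
openai.api_base = st.secrets["AZURE_OPENAI_ENDPOINT"]
openai.api_version = "2023-05-15"
openai.api_key = st.secrets["AZURE_OPENAI_KEY"]

try:
    response = openai.ChatCompletion.create(
        engine="gpt-35-turbo",  # placeholder for my deployment name
        messages=[{"role": "user", "content": "ping"}],  # placeholder prompt
        request_timeout=30,  # fail after 30 s instead of the default 600 s
    )
    st.write(response)
except openai.error.Timeout as e:
    st.error(f"Azure OpenAI request timed out: {e}")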
And sometimes:
KeyError: 'content'
Traceback:
File "/home/adminuser/venv/lib/python3.9/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 552, in _run_script
exec(code, module.__dict__)
..........................................
File "/mount/src/grii_europe_slide_maker/chatGPT.py", line 60, in get_content_of_bible_from_chatGPT
output = response['choices'][0]['message']['content']
The second error occurs when there is no usable content in the response returned by
response = openai.ChatCompletion.create(
engine="xxx",
messages=[xxx],
)
If the secrets are not implemented correctly, then I will get another error saying that I’m not authenticated, right?
By the way, do you know how to debug in the Streamlit server environment?
Like somehow printing to the log?
Edit:
I found the reason now… I had to use st.write() all over the place to debug. Here is a good response:
{
  "id": "chatcmpl-xxx",
  "object": "chat.completion",
  "created": 1690922619,
  "model": "gpt-35-turbo",
  "choices": [
    {
      "index": 0,
      "finish_reason": "stop",
      "message": {
        "role": "assistant",
        "content": "blabla."
      }
    }
  ],
  "usage": {
    "completion_tokens": 35,
    "prompt_tokens": 201,
    "total_tokens": 236
  }
}
And here is a bad response:
{
  "id": "chatcmpl-xxxx",
  "object": "chat.completion",
  "created": 1690923474,
  "model": "gpt-35-turbo",
  "choices": [
    {
      "index": 0,
      "finish_reason": "content_filter",
      "message": {
        "role": "assistant"
      }
    }
  ],
  "usage": {
    "completion_tokens": 55,
    "prompt_tokens": 197,
    "total_tokens": 252
  }
}
As you can see, there is no “content” field.
Apparently ChatGPT generated something and then its own content filter blocked it (the finish_reason is “content_filter”).
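To avoid the KeyError in the meantime, I'll guard the access to the message content. A minimal sketch (the fallback text is just a placeholder of my own):
choice = response["choices"][0]
if choice.get("finish_reason") == "content_filter":
    # the content filter removed the assistant message, so "content" is missing
    output = "(response was blocked by the Azure OpenAI content filter)"
else:
    output = choice["message"].get("content", "")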
If you’re running the app locally, you can use the terminal to see the logs and use st.write, print statements, or Python logging. For deployed apps, you can use the menu in the lower-right corner of your deployed app to see the logs.
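For example, something like this minimal sketch (using Python's standard logging module; the logger name is arbitrary) should show up in those app logs:
import logging

logging.basicConfig(level=logging.INFO)  # log output ends up in the app logs
logger = logging.getLogger("my_app")  # arbitrary logger name

logger.info("About to call Azure OpenAI")
# ... call openai.ChatCompletion.create(...) here ...
logger.info("Call finished")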
For this issue, I think the best place to find help is the Microsoft Q&A platform for Azure OpenAI, where many of the questions are directly related to it.
Is there an st.write_log("log") kind of function to write to the log?
What do you mean by the menu on the lower right corner?
Usually I only see that it deployed successfully there.