Help build a basic LLM chat app

Hi,

I am trying to follow the tutorial below to build a basic LLM chat app that uses AzureOpenAI.

Build a basic LLM chat app

Here's the code I use, where api_version, azure_endpoint, and st.session_state["openai_model"] have appropriate values (redacted below as "...").

from openai import AzureOpenAI
import streamlit as st

st.title("ChatGPT-like clone")

client = AzureOpenAI(
    api_version="...",
    azure_endpoint="...",
    # api_key is not passed explicitly; the client falls back to the
    # AZURE_OPENAI_API_KEY environment variable.
)

if "openai_model" not in st.session_state:
    st.session_state["openai_model"] = "..."

if "messages" not in st.session_state:
    st.session_state.messages = []

for message in st.session_state.messages:
    with st.chat_message(message["role"]):
        st.markdown(message["content"])

if prompt := st.chat_input("What is up?"):
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.markdown(prompt)

    with st.chat_message("assistant"):
        stream = client.chat.completions.create(
            model=st.session_state["openai_model"],
            messages=[
                {"role": m["role"], "content": m["content"]}
                for m in st.session_state.messages
            ],
            stream=True,
        )
        response = st.write_stream(stream)
        
    st.session_state.messages.append({"role": "assistant", "content": response})

When I enter "20+30=" in the prompt, I get the following response.

ChatCompletionChunk(id='', choices=[], created=0, model='', object='',
    system_fingerprint=None, prompt_filter_results=[{'prompt_index': 0,
    'content_filter_results': {
        'hate': {'filtered': False, 'severity': 'safe'},
        'self_harm': {'filtered': False, 'severity': 'safe'},
        'sexual': {'filtered': False, 'severity': 'safe'},
        'violence': {'filtered': False, 'severity': 'safe'}}}])

50
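
Presumably st.write_stream printed the repr of that first chunk because it did not recognize it. To see exactly what st.write_stream receives, the raw chunks can be inspected directly; a minimal sketch reusing the client above, with the prompt hard-coded for illustration:

stream = client.chat.completions.create(
    model=st.session_state["openai_model"],
    messages=[{"role": "user", "content": "20+30="}],
    stream=True,
)
for chunk in stream:
    # Azure prepends a chunk whose choices list is empty and that only
    # carries prompt_filter_results; the answer text arrives in later chunks.
    print(chunk.choices)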

When I follow up and enter "No, it is 40.", I get the following error.

BadRequestError: Error code: 400 - {'error': {'message': "'$.messages[1].content' is invalid. Please check the API reference: https://platform.openai.com/docs/api-reference.", 'type': 'invalid_request_error', 'param': None, 'code': None}}

Traceback:

File "C:\Python\Python311\Lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 535, in _run_script
    exec(code, module.__dict__)
File "C:\myapp\week 28-31\OpenAI\streamlit_app.py", line 27, in <module>
    stream = client.chat.completions.create(
File "C:\Python\Python311\Lib\site-packages\openai\_utils\_utils.py", line 271, in wrapper
    return func(*args, **kwargs)
File "C:\Python\Python311\Lib\site-packages\openai\resources\chat\completions.py", line 659, in create
    return self._post(
File "C:\Python\Python311\Lib\site-packages\openai\_base_client.py", line 1200, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
File "C:\Python\Python311\Lib\site-packages\openai\_base_client.py", line 889, in request
    return self._request(
File "C:\Python\Python311\Lib\site-packages\openai\_base_client.py", line 980, in _request
    raise self._make_status_error_from_response(err.response) from None

Please advise.
Thanks

In case it matters, I am using the gpt-4 model, while the tutorial uses the gpt-3.5-turbo model. The issue seems to be caused by stream=True.
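
For comparison, here is a non-streaming sketch of the assistant block (a drop-in replacement for that block only; everything else unchanged), which sidesteps st.write_stream entirely:

    with st.chat_message("assistant"):
        completion = client.chat.completions.create(
            model=st.session_state["openai_model"],
            messages=st.session_state.messages,
            # no stream=True: the full reply arrives in a single response
        )
        response = completion.choices[0].message.content
        st.markdown(response)

    st.session_state.messages.append({"role": "assistant", "content": response})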

Hi @txt,

Sorry for the late reply, but in case you didn't see it, and for posterity: support for OpenAI stream objects in st.write_stream was added in Streamlit version 1.32.0, so upgrading Streamlit should fix this. 🙂
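
For anyone stuck on an older Streamlit: when st.write_stream writes anything other than text, it returns a list rather than a string, so here the stored assistant message content was not a valid string and the follow-up request failed with the 400 above. A workaround sketch (assuming the script from the original post) is to hand st.write_stream plain text deltas instead of raw chunks:

def text_deltas(stream):
    # Skip Azure's leading chunk (empty choices, prompt_filter_results only)
    # and yield just the incremental text pieces.
    for chunk in stream:
        if chunk.choices and chunk.choices[0].delta.content is not None:
            yield chunk.choices[0].delta.content

response = st.write_stream(text_deltas(stream))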