I’m having an issue where my local application and my deployed application return different responses from ChatGPT. Both are able to get a response, but the one deployed on the Streamlit server produces output that doesn’t make sense, while the one on my local machine gives the expected output.
I have confirmed that the code is identical in both environments.
I was wondering if anyone else is facing, or has faced, a similar issue.
Thank you.
This is where I’m calling the ChatGPT API in the dashboard:
import requests
import streamlit as st

st.header('Smart Analysis 🤖')

# Build the prompt prefix from the dataframe (ttdata is the pandas DataFrame I'm trying to analyze)
text = f"Analyzing this dataframe {ttdata}"

# Add a text box for the user's request
text_input = st.text_input("Enter your request here")

# Add a button
if st.button("Submit"):
    # st.write(f"You entered: {text} {text_input}")
    st.write("Analyzing...")

    # requests sets "Content-Type: application/json" automatically when json= is used
    request_headers = {
        "Authorization": f"Bearer {openai_api_key}"  # openai_api_key is loaded elsewhere in the script
    }
    request_data = {
        "model": "text-davinci-003",
        "prompt": f"{text} {text_input} using the dataframe",
        "max_tokens": 500,
        "temperature": 0.5
    }

    # api_endpoint (the Completions endpoint URL) is also defined elsewhere in the script
    response = requests.post(api_endpoint, headers=request_headers, json=request_data)
    if response.status_code == 200:
        st.text(response.json()['choices'][0]['text'])
    else:
        st.text(f"Request failed with status code: {response.status_code}")