Saving feedback on LLM answers in Postgres. One generation always ahead

I’m trying to collect human feedback on answers provided by LLMs.

The workflow is the following:

  1. The user provides input (query, context, parameters …)
  2. The API calls the model and generates the answer, which is displayed to the user
  3. The user rates the answer with a boolean widget (checkbox)
  4. The answer is saved in Postgres together with the feedback

However, looking at the Postgres database, the script saves the right feedback but pairs it with a newly generated answer (I think because the script runs again from top to bottom when the feedback widget is activated).
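A simple counter in st.session_state can confirm the rerun (a minimal sketch; the "rerun_count" key name is just for this demo):

import streamlit as st

# The counter increases on every interaction with any widget, showing that the
# whole script (including the generation call further down) runs again.
if "rerun_count" not in st.session_state:
    st.session_state.rerun_count = 0
st.session_state.rerun_count += 1

st.write(f"Script has run {st.session_state.rerun_count} times")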

# OpenAI API call function

import openai

def ask_function(model, context, prompt, temperature, top_p, max_tokens):
    completion = openai.ChatCompletion.create(
        model=model,
        temperature=temperature,
        top_p=top_p,
        max_tokens=max_tokens,
        messages=[
            {"role": "system", "content": context},
            {"role": "user", "content": prompt},
        ],
    )
    return completion
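(Side note: this uses the pre-1.0 openai SDK. On openai >= 1.0, ChatCompletion.create no longer exists; the equivalent call looks roughly like this sketch:)

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

completion = client.chat.completions.create(
    model=model,
    temperature=temperature,
    top_p=top_p,
    max_tokens=max_tokens,
    messages=[
        {"role": "system", "content": context},
        {"role": "user", "content": prompt},
    ],
)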

# 1) Collection of input (not displayed for simplicity) 
# 2) Function calling with user input

gpt35 = ask_function("gpt-3.5-turbo", context, prompt, temperature, top_p, max_tokens)
answer_gpt35 = gpt35.choices[0].message
tokens_gpt35 = gpt35["usage"]["completion_tokens"]

st.info(answer_gpt35['content'], icon=None)

# 3) + 4) Feedback collection and writing in Postgres

st.checkbox(key="gpt35love", label="I prefer GPT-3.5-turbo answer", on_change=write_postgres(answer_gpt35['content'], st.session_state.gpt35love))

# where write_postgres is a function saving the answer and the feedback in a Postgres database.
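For reference, write_postgres does roughly the following (a hypothetical sketch assuming psycopg2 and a feedback table with answer/liked columns; the DSN and schema are placeholders):

import psycopg2

def write_postgres(answer, liked):
    # Assumed connection string and table layout; adapt to your setup
    conn = psycopg2.connect("dbname=llm_feedback user=postgres")
    try:
        with conn, conn.cursor() as cur:
            cur.execute(
                "INSERT INTO feedback (answer, liked) VALUES (%s, %s)",
                (answer, liked),
            )
    finally:
        conn.close()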


Can someone help me? 🤕

Hello @johnsavephd,

To resolve this issue, you can use Streamlit’s st.session_state to store the answer when it’s first generated and only update this stored answer when necessary (e.g., when the user submits a new query).

import streamlit as st

def handle_feedback():
    # The checkbox value is already available under its key ("gpt35love")
    # when the on_change callback runs
    if 'answer_content' in st.session_state:
        write_postgres(st.session_state['answer_content'], st.session_state['gpt35love'])

# Place to collect user input for context, prompt, etc.

if 'last_prompt' not in st.session_state or st.session_state['last_prompt'] != prompt:
    gpt35 = ask_function("gpt-3.5-turbo", context, prompt, temperature, top_p, max_tokens)
    answer_content = gpt35.choices[0].message['content']
    st.session_state['answer_content'] = answer_content
    st.session_state['last_prompt'] = prompt

st.info(st.session_state['answer_content'])

st.checkbox("I prefer GPT-3.5-turbo answer", key="gpt35love", on_change=handle_feedback)
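Because handle_feedback reads the stored answer from st.session_state and the checkbox value through its key, the record written to Postgres always pairs the feedback with the answer that was actually shown, rather than a freshly generated one. on_change callbacks run before the script reruns, so st.session_state['gpt35love'] already holds the new value at that point.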

Hope this helps!

Kind Regards,
Sahir Maharaj
Data Scientist | AI Engineer


