GPT-Clone with subscription logic

I want to develop a simple subscription logic: you get 3 free messages with ChatGPT.
I couldn't figure out from the docs how the message history is carried in the UI.

  • accept the prompt from chat_input and show the response
  • carry the message history in the UI and in st.session_state["messages"]


import os
import uuid

import openai
import streamlit as st



messages_ls = [
    {"role": "system", "content": "You are a helpful, pattern-following assistant."},
    {"role": "user", "content": "Help me translate the following corporate jargon into plain English."},
    {"role": "assistant", "content": "Sure, I'd be happy to!"},
    {"role": "user", "content": "New synergies will help drive top-line growth."},
    {"role": "assistant", "content": "Things working well together will increase revenue."},
    {"role": "user", "content": "Let's circle back when we have more bandwidth to touch base on opportunities for increased leverage."},
    {"role": "assistant", "content": "Let's talk later when we're less busy about how to do better."},
    {"role": "user", "content": "This late pivot means we don't have time to boil the ocean for the client deliverable."},
]


openai.api_key = os.environ["OPENAI_API_KEY"]  # read the key from the environment instead of hard-coding it


st.title("ChatGPT-like clone")


if "openai_model" not in st.session_state:
    st.session_state["openai_model"] = "gpt-3.5-turbo"
# st.session_state behaves like a dict that persists across reruns
if "messages" not in st.session_state:
    st.session_state["messages"] = messages_ls

# give each session a stable user id (uuid.uuid4() would regenerate
# on every rerun unless it is stored in session_state)
if "user_id" not in st.session_state:
    st.session_state["user_id"] = uuid.uuid4().hex
# once the chat has started, provide up to 3 responses

if prompt := st.chat_input(" "):
    # record the user turn before calling the API so it is part of the context
    st.session_state["messages"].append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.markdown(prompt)
    with st.chat_message("assistant"):
        response = openai.ChatCompletion.create(
            model=st.session_state["openai_model"],
            messages=st.session_state["messages"],
            temperature=0.7,
        )
        reply = response["choices"][0]["message"]["content"]
        st.markdown(reply)
    st.session_state["messages"].append({"role": "assistant", "content": reply})

Hi @x31

The chat history is saved to the messages Session State:

    if "messages" not in st.session_state:
        st.session_state["messages"] = messages_ls

while new messages are added to the chat history via the append method once the LLM response has been generated:

    st.session_state["messages"].append({"role": "assistant", "content": response["choices"][0]["message"]["content"]})

Perhaps you can implement a counter that counts the number of assistant-generated responses. Once a certain threshold is reached, you can perform the specific task that you'd like, such as informing the user that their trial period has ended.
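A minimal plain-Python sketch of that counter idea (here `session_state` is an ordinary dict standing in for `st.session_state`, and `FREE_MESSAGE_LIMIT` / `response_count` are made-up names for the trial threshold and counter):

```python
# Model of the counter logic; in the real app, replace `session_state`
# with st.session_state and return the actual LLM reply.
FREE_MESSAGE_LIMIT = 3

session_state = {}

def handle_prompt(prompt):
    """Return an assistant reply, or a notice once the free trial is used up."""
    count = session_state.setdefault("response_count", 0)
    if count >= FREE_MESSAGE_LIMIT:
        return "Your free trial of 3 messages has ended."
    session_state["response_count"] = count + 1
    return f"(assistant reply to: {prompt})"

replies = [handle_prompt(p) for p in ["a", "b", "c", "d"]]
```

In the Streamlit app you would run this check inside the `st.chat_input` block, and show the trial-over notice with something like `st.warning` instead of returning a string.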

Hi @dataprofessor

Thank you for the response. I would really appreciate it if you could explain how, once I have the first prompt from the user, I can apply a loop to get the next 3 inputs from the user, given that st.chat_input() can be used only once.


# for loop somewhere here?
if prompt := st.chat_input(" "):
    st.session_state["messages"].append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.markdown(prompt)
    with st.chat_message("assistant"):
        response = openai.ChatCompletion.create(
            model=st.session_state["openai_model"],
            messages=st.session_state["messages"],
            temperature=0.7,
        )
        reply = response["choices"][0]["message"]["content"]
        st.markdown(reply)
    st.session_state["messages"].append({"role": "assistant", "content": reply})
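For reference, no explicit loop is needed here: Streamlit reruns the entire script every time st.chat_input receives a message, so each run handles exactly one prompt, and only Session State survives between runs. A plain-Python model of that flow (the dict stands in for `st.session_state`, and `run_script` plays the role of one script rerun):

```python
# Model of Streamlit's execution: the whole script reruns per submitted
# message; only session_state (a dict here) persists between runs.
session_state = {"messages": []}

def run_script(prompt):
    """One 'rerun' of the app script; `prompt` stands in for st.chat_input."""
    if prompt:
        session_state["messages"].append({"role": "user", "content": prompt})
        reply = f"echo: {prompt}"  # stand-in for the openai.ChatCompletion call
        session_state["messages"].append({"role": "assistant", "content": reply})

for p in ["first", "second", "third"]:  # three submissions = three reruns
    run_script(p)
```

After three submissions the history holds six messages, which is exactly why the counter from the earlier reply can cap the trial at 3 responses without any loop inside the script.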

This topic was automatically closed 180 days after the last reply. New replies are no longer allowed.