My app has a text area where the user enters an input. Based on that input, the app makes chained calls to the OpenAI APIs, which take somewhere between 15 and 20 minutes to produce the output.
However, when a new input is provided, I get back the same results as the previous run. Since the default behavior is caching, I used the st.cache_data decorator, with ttl = 10 seconds, on the functions involved in making these calls. However, with a new input I am still getting the cached results.
I also tried clearing the cache, both globally with st.cache_data.clear() and per function with func.clear().
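For reference, these are the two clearing mechanisms I used; as far as I understand, st.cache_data.clear() clears the data cache for every decorated function, while each decorated function also exposes its own .clear() method:

```python
# Clear the data caches of all functions decorated with @st.cache_data.
st.cache_data.clear()

# Clear the cache of one specific decorated function only.
get_response.clear()
```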
My current code looks something like this:
import streamlit as st

@st.cache_data(ttl=10)
def make_api_call_with_retries(chat, max_tokens, retries=3):
    # ... retry handling ...
    response = get_response(chat, max_tokens=max_tokens)
    # ... process the response ...

@st.cache_data(ttl=10)
def get_response(chat_history, max_tokens=0):
    # OpenAI API call made here
    ...

@st.cache_data(ttl=10)
def process_save_document(chat, prompt_type, input_heading, output_heading, text, initial_outline, max_tokens=0):
    # ...
    response = make_api_call_with_retries(chat, max_tokens=max_tokens, retries=3)
    # ...

@st.cache_data(ttl=10)
def run_script(outline):
    # Makes chained calls to process_save_document, passing the output of one
    # call into the next, six times in total (simplified sketch below).
    ...

input_text = st.text_area("Enter text:")

if st.button("Run Script", on_click=change_run_status):
    run_script.clear()
    process_save_document.clear()
    make_api_call_with_retries.clear()
    get_response.clear()
    results = run_script(input_text)
    st.session_state.result = results[0]
    chat_outputs = results[1:]
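In case it helps, here is roughly how run_script chains the calls. This is a simplified sketch, not the real code: the prompt-related arguments (prompt_type, headings, the chat history) are placeholders, and the actual return value is more involved, but the shape matches what the button handler expects (results[0] is the final result, results[1:] are the intermediate outputs):

```python
@st.cache_data(ttl=10)
def run_script(outline):
    # Simplified sketch of the chaining; the real prompt arguments differ.
    chat_outputs = []
    current = outline
    for step in range(6):
        current = process_save_document(
            chat=[],                     # placeholder chat history
            prompt_type=f"step_{step}",  # placeholder prompt type
            input_heading="",            # placeholder
            output_heading="",           # placeholder
            text=current,                # output of one call feeds the next
            initial_outline=outline,
            max_tokens=0,
        )
        chat_outputs.append(current)
    # First element is the final result, the rest are the per-step outputs.
    return [current] + chat_outputs
```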
Any ideas on how I can stop Streamlit from caching the return values, or how to clear them properly? Or is there something I am doing wrong in the code? I just want new outputs to be returned for each new input in the text area.