Hi,
I am trying to run an LLM on Streamlit using LangChain. I have a prompt template with a lot of variables. This is my code:
```python
import streamlit as st
from langchain.chains.question_answering import load_qa_chain
from langchain.prompts import PromptTemplate

prompt_template = """You are meeting {selected_doc}, a company that manages {count} people. This is banner information: {banner}. Always mention the score for the company, which is {score}. Create an email."""

QA_Chain = PromptTemplate(template=prompt_template, input_variables=["context", "selected_doc", "count", "score", "banner"])
chain = load_qa_chain(llm, chain_type="stuff", prompt=QA_Chain, verbose=False)
st.session_state.output = chain.run(input_documents=[], context=context,
                                    selected_doc=selected_doc, count=count,
                                    score=score, banner=banner)
```
How can I display the fully formatted prompt in the Streamlit app so that my business users can validate that it makes sense? I'd rather not call the .format() method by hand, since I have a lot of variables.
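For clarity, this is the sort of manual workaround I'm hoping to avoid. The inputs dict, the expander label, and the use of st.code are just illustrative, not from my real app:

```python
# Workaround I'd like to avoid: gather the chain inputs once and splat
# them into both the template and the chain call.
inputs = {
    "context": context,
    "selected_doc": selected_doc,
    "count": count,
    "score": score,
    "banner": banner,
}

# Plain str.format ignores the unused "context" key, since my template
# doesn't reference {context} directly.
with st.expander("Prompt sent to the model"):
    st.code(prompt_template.format(**inputs))

st.session_state.output = chain.run(input_documents=[], **inputs)
```

Is there a cleaner, built-in way in LangChain or Streamlit to render the prompt the chain actually sends?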
Thank you!