Pandora: Streamlit + GPT-4 interactive Python console

Hello!

Proud to announce the app I've been working (a lot!) on: Pandora.

Pandora is an AI-powered Python interactive console embedded in a Streamlit app.

It's both a fully working Python console in which you can run Python scripts AND a GPT-4 assistant that you can interact with in natural language, capable of generating and running code.

It supports running Streamlit commands live in the console to render any widget in real time in the chat. The AI can do it too, which enriches its output capabilities with the full spectrum of Streamlit widgets.

The app exists both as a web app and a local app. You can even switch between the two, as your user account is managed via Firebase to synchronize your preferences and work folder across your devices.

There is much more to say, but I hope this will spark your curiosity :sunglasses:.

You can check out the web app here, and you will find all the necessary information in the repo.

It's still at the development stage, can be improved in many ways, and is likely not bug-free. I'm looking for a few people brave enough to test it, give feedback, and/or contribute.

Hope you'll like it!

Cheers!

Baptiste


Tried it. Not really sure what to do with this and what the actual value is. Perhaps a simple demo of you using it to build an app or something would help.

Hi @asehmi,

It's not specifically meant for building Streamlit apps (though it can surely help quick-test some aspects of your app). It's a regular Python interactive console in which you may run any Python script you like. It's just been upgraded with Streamlit as a means for in-chat interactive outputs (plots, dataframes, and other interactive or multimedia elements), plus AI assistance with GPT-4.

Think of it as ChatGPT with the code interpreter tool, except that it lets you run commands in the interpreter instead of keeping it for its own use. This way, you share the Python session with the AI assistant and work together.

For instance, you could do interactive data analysis in the Python session, use Streamlit tools to display data and plots interactively (somewhat like a Jupyter notebook), and ask the assistant to help you at any time.

The core value of this project is that the AI assistant doesn't just use the Python interpreter as a tool (it doesn't even use the regular OpenAI tool-call API); the interpreter is actually its central place of operation. Even messaging the user is done by generating and running Python scripts.

The AI assistant has the whole Python session in context (up to the context limit) and is declared as a Python object in its own namespace. This lets you change its configuration, call its methods programmatically, and plug in any new module, object, function, or API as a new tool for the AI to play with.
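To make this a bit more concrete, here is a minimal sketch of registering one of your own functions as a tool for the assistant. The pandora object and its add_tool method are mentioned later in this thread; the exact signature of add_tool and the circle_area helper are illustrative assumptions, not the definitive API.

import math

def circle_area(r):
    # A plain function we want to hand to the assistant as a tool
    return math.pi * r**2

# The assistant lives in the same namespace as your code, so you can
# configure it programmatically. The exact signature of add_tool is assumed here.
pandora.add_tool(circle_area)

Once registered, a natural-language prompt such as "What's the area of a circle of radius 3?" could let the assistant call the helper directly from the shared session.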

From there, the possibilities are quite vast!


I see. I uploaded a simple Streamlit gist of mine. How would I execute it to see it produce output?

Depends on the complexity of the script.

One thing to keep in mind when using Streamlit commands in the Pandora console is that your scripts run only once, so you can't rely on the rerun logic of conventional Streamlit scripts, where the whole script loops on itself: all interactivity must be implemented using callbacks.

The second thing to be aware of is that the console uses a special object (st_stacker), pre-declared as st, to mimic the behavior of the streamlit module. You can use it with the same commands and syntax as streamlit, with one minor change: the widgets you declare won't return their state value directly, but an st_output placeholder object instead. This placeholder has a .value property that is updated in real time as soon as your widget is rendered and has a non-empty state.

For instance, try to run the following script in the input cell (Ctrl+Enter to submit):

txt = st.text_input("Enter text here:")  # txt is not a string here, but an st_output object

def callback(txt):
    if txt.value:
        # Access the actualized value of the widget via the .value property
        st.write(f"You entered: {txt.value}")

st.button("Click me", on_click=callback, args=(txt,))

It's a slightly different frame of mind than using the standard streamlit module, but once you get how it works, it's just as flexible and powerful.

Here is another example you can run in the console, implementing a dynamic plot.

import numpy as np
import matplotlib.pyplot as plt

# Create a figure and axis object
fig, ax = plt.subplots()

# Initial plot
x = np.linspace(0, 1, 100)
ax.plot(x, x)

# Display the initial plot
st.pyplot(fig)

# Function to replot the graph
def replot():
    # Clear the current plot
    ax.clear()
    # Generate new x values
    x = np.linspace(0, 1, 100)
    # Get the current value of the slider
    n = slider.value
    # Plot the new data
    ax.plot(x, x**n)

# Create a slider widget
slider = st.slider('Select exponent', min_value=0.0, max_value=2.0, value=1.0, step=0.1, on_change=replot)

A few explanations and tips:

  • Security: Your account is managed via Firebase Authentication. Even as the admin, I don't have access to your password. Proper security rules are set in the Firebase project to prevent anyone but you from accessing your data.
  • API keys: All API keys needed to enable the AI features or other tools are transmitted and stored encrypted in Firebase. All efforts are made to ensure safe transmission and storage of sensitive data.
  • Web search: To enable the web-search tool of the AI agent, you may provide your Google Custom Search API key and CX in the Settings page.
  • Storage: All files you create during a session are dumped to Firebase Storage when you log out. You should therefore log out gracefully to avoid losing your work. To avoid any unwanted data loss, feel free to run the dump_workfolder() command in the console at any time. When using the web app, your user folder is wiped from the app's file system once you log out. Your files are uploaded back from Firebase Storage when you sign in.
  • Stdin redirection: When using a Python command that reads from stdin, such as input, the script pauses and a special input widget renders to let you enter a string. This string is available immediately when your script resumes execution (without requiring a rerun), so you can use input in your scripts seamlessly (see the short sketch after this list).
  • Shortcuts: Running the exit() or quit() command in the console will log you out gracefully. Running clear() will clear the chat (this won't affect the Python session or the context memory of the AI agent).
  • Restart session: You can restart the Python session by clicking the 'Restart Session' button in the sidebar menu. This will also reinitialize the AI agent to its startup state. You can achieve a similar result by running restart() in the console.
  • Editor: A text editor is provided within the app for opening and editing files. You may open it via the 'Open editor' button in the sidebar. Alternatively, you may use the edit function directly from the console: edit(file=your_file, text=your_text) will open your file in the editor, prefilled with an optional string of text.
  • Pandora as a Python object: Pandora (the AI assistant) is declared as a Python object in the console under the name pandora. You may thus interact with it programmatically.
  • Observation: Pandora is equipped with a special observe tool enabling it to look at the content of any folder, file, data structure, image… When applied to a module, class, or function, this will inspect the object and access its documentation. You may thus ask Pandora to observe almost anything to get information about it, including the pandora object itself!
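To illustrate the stdin redirection and storage tips above, here is a small script you could paste in the input cell. The input() behavior and the dump_workfolder() command are the ones described in the list; everything else is ordinary usage of the pre-declared st object.

# The script pauses here: an input widget renders in the chat,
# and execution resumes with the string you typed.
name = input("What's your name? ")
st.write(f"Hello, {name}!")

# Save a copy of your work folder to Firebase Storage right away,
# without waiting for a graceful logout.
dump_workfolder()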

This is amazing. Thank you for sharing. Wow.


Hello @JayBo,

Thank you for your nice comment. Really happy that you like it!

Eager to hear your feedback!

Cheers!

Baptiste

Just fixed the assistant's webpage-reading feature. There was a bug when it was used from the web app. Pandora should now be able to read webpages smoothly.

For those using the local app, you should know that Pandora uses a headless Firefox Selenium webdriver to scrape the web. You should therefore have Firefox installed on your computer to enable the web-scraping feature (for now). I will enable other popular browsers at some point.

Don't forget to pip install --upgrade pandora-app once in a while, as I make frequent modifications to the code base.

Cheers!

A few more tips:

  • Hybrid scripting: Pandora supports mixing natural language and Python code in the same input script. A parser determines which is which based on the Python syntactic correctness of each line. The natural-language parts are sent as prompts to the AI assistant and rendered as chat messages; the Python code parts are executed directly in the interpreter without triggering an assistant response. Beware that single-word inputs like "Hello" will be interpreted by the parser as an attempt to access a variable and will most likely result in a "not defined" exception. To avoid this, use punctuation to help the parser understand that your word is meant as a message to the assistant: "Hello!" (see the sketch after this list).
  • Web scraping: I just added a Selenium webdriver to Pandora's toolkit. It is useful for making it navigate through webpages, take screenshots, interact with the DOM, scrape data… You can use this webdriver yourself by running driver = get_webdriver() (see the example after this list).
  • LaTeX and PDF: The web app comes equipped with a minimal LaTeX distribution enabling Pandora to use pdflatex to generate PDF documents from .tex files. A dedicated tool is declared in the console as tex_to_pdf(tex_file, pdf_file). To use it via the local app, you should have a LaTeX distribution and pdflatex installed on your computer. A custom Streamlit widget enables displaying PDF files in the chat; you can use it via the show_pdf(file_or_url) shortcut.
  • Memory: Pandora uses a memory.json file associated with your user profile to store and recall any kind of information across sessions. The assistant has permanent contextual visibility of the memory content. You can use it to guide the assistant towards the desired behavior, or to save user information, preferences, or memos. Just ask Pandora to remember something and it will do so durably.
  • Startup: A startup.py file specific to your user profile is executed whenever a new session starts. You may use it to pre-declare your favorite functions/tools so you don't have to declare them manually in every new session. You may also use it to declare custom tools the agent will be able to use via the pandora.add_tool method. You will find the memory.json and startup.py files in the config folder of your workfolder. Feel free to edit them with the built-in editor.
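As a sketch of the hybrid scripting described above, the following input mixes prose and code. The lines that don't parse as valid Python (the greeting and the final question) should be routed to the assistant as prompts, while the assignments and the print call run directly in the interpreter; the exact routing is up to the parser, so treat this as illustrative.

Hi Pandora! Let's look at a few numbers together.
values = [2, 3, 5, 7, 11]
total = sum(values)
print(total)
Could you comment on what this little script just did?

And here is a minimal example of using the webdriver from the web-scraping tip. get_webdriver() is the shortcut named above; the calls on the returned object assume it behaves like a regular Selenium webdriver.

driver = get_webdriver()               # headless Firefox driver provided by Pandora
driver.get("https://example.com")      # navigate to a page
print(driver.title)                    # read the page title
driver.save_screenshot("example.png")  # take a screenshot of the current page
driver.quit()                          # release the browser when done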

Hi, gratz B4PT0R, at the very minimum it is a great concept to be explored. I will definitely give it a look. I actually had a postponed idea for this type of use case regarding writing trading strategies, specializing in a backtesting module to get more focused/in-boundaries generation. All best! Rui


Hi @Rui_Lacerda,

Thanks for the comment. Yes, giving the LLM the ability to operate directly from Python as a base language is indeed worth exploring, as it allows the AI to handle any Python module or object as a tool (thus completely bypassing the inherent limit of the tool-call API, which expects only functions).

Happy testing! Feel free to report any bugs or ideas for improvement.

Cheers!

Hello!

I made some significant improvements to the user experience:

  • Full streaming support
  • Significantly reduced TTS and STT latency

Check it out!

Hey folks.

For those who've tried the app (if any :sweat_smile:).

  • Would you say that using it via the cloud on a mobile phone is a feature you would use frequently?
  • Would you say that cloud storage and sync of your user folder is a useful feature?
  • Would you say that having to send and store your OpenAI API key on a remote server (even securely encrypted) is a no-go for you?
  • Would you prefer a totally local version of the app, with no user login and no cloud sync, providing your API key via an environment variable?
  • Would you prefer to provide your own API key, pay a monthly subscription, or be charged by me for your API usage?

Just trying to figure out the proper way of doing it so that people engage with it a bit more.

Cheers!