Filter for st.data_editor doesn't work on Snowflake

Hello,
I have developed a Streamlit app on Snowflake, which allows me to modify values in a table. However, I’ve noticed that I’m unable to filter the data in the dataframe using ‘edited_data = st.data_editor(filtered_data, num_rows=“dynamic”)’ on Snowflake. Can you please assist me?"

Best
Zied

Can you explain in more detail what you’ve tried and what unexpected result you are getting? Can you provide a simplified code snippet?

Hi Mathcatsand,

Thank you so much for your response. I would really appreciate your help in resolving this issue.

The application allows the user to choose a table via a dropdown menu and then gives them the ability to modify the table (edit, insert, delete rows).

With st.data_editor, it’s not possible to filter the data. Given that the tables are large, I would like to have a filter. I have coded a filter with dropdown menus that include the column names and the unique values in each column.

The idea is to filter the data saved in a DataFrame called filtered_data and then edit it with st.data_editor.

The problem is that every time I change the filter, the previous modifications are lost.

Thank you in advance for your help.

Below is my script:




import streamlit as st
from snowflake.snowpark.context import get_active_session
import pandas as pd

# Set page layout to wide
st.set_page_config(layout="wide")

# Get the active session
session = get_active_session()

# List of tables
tables = ["DATA_SOURCE", "DATA_FIELDS", "DATA_TEST"]

# Function to create a backup table if it doesn't exist
@st.cache_data
def create_backup_table(table_name):
    backup_table = f"{table_name}_BACKUP"
    # Check if backup table exists
    check_query = f"SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA = 'BOX' AND TABLE_NAME = '{backup_table}'"
    result = session.sql(check_query).collect()
    if not result:
        # Retrieve data from the original table
        query_main_table = f"SELECT * FROM DB.BOX.{table_name}"
        data_main_table = session.sql(query_main_table).to_pandas()
        
        # Create backup table using write_pandas
        session.write_pandas(data_main_table, table_name=backup_table, database="DB", schema="BOX", overwrite=True, auto_create_table=True)
        st.info(f"Backup table {backup_table} created.", icon="ℹ️")


# Create backups for all tables if they don't exist
for table in tables:
    create_backup_table(table)

# Dropdown menu to select a table
selected_table = st.selectbox("Select a table", tables)

# Print Title 
st.subheader("Edit Data :")

# Retrieve data from the selected table
query_main_table = f"SELECT * FROM DB.BOX.{selected_table}"
data_main_table = session.sql(query_main_table).to_pandas()

# Get the list of columns
columns = data_main_table.columns.tolist()

# Allow user to select columns for filtering
selected_columns = st.multiselect("Select columns for filtering", columns)

# Filter data based on selected columns and values
filtered_data = data_main_table.copy()
for column in selected_columns:
    unique_values = data_main_table[column].unique()
    selected_values = st.multiselect(f"Select values for {column}", unique_values)
    if selected_values:
        filtered_data = filtered_data[filtered_data[column].isin(selected_values)]

# Display the data in the data editor
edited_data = st.data_editor(filtered_data, num_rows="dynamic", key="editor")

# Button to submit the changes to the main table
if st.button('Submit'):
    try:
        # Mark which rows of the original table appear in the filtered data
        merged_data = data_main_table.merge(filtered_data, how="left", indicator=True)
        
        # Update rows that have been modified
        for index, row in edited_data.iterrows():
            if merged_data.loc[index, '_merge'] == 'both':
                merged_data.loc[index, merged_data.columns] = row
        
        # Remove the merge indicator column
        merged_data.drop(columns='_merge', inplace=True)
        
        # Convert merged data to DataFrame
        df = pd.DataFrame(merged_data)
        
        # Insert the new data in the main table
        session.write_pandas(df, table_name=selected_table, database="DB", schema="BOX", overwrite=True, auto_create_table=False)
        
        st.success(f"Changes have been added to the table {selected_table}.", icon="✅")
    except Exception as e:
        st.error(f"An error occurred: {e}")

# Button to rollback the changes
if st.button('Rollback'):
    try:
        # Delete data from the main table
        query_delete_main_table = f"DELETE FROM DB.BOX.{selected_table}"
        session.sql(query_delete_main_table).collect()
        
        # Retrieve data from the backup table and insert into the main table
        query_rollback = f"INSERT INTO DB.BOX.{selected_table} SELECT * FROM DB.BOX.{selected_table}_BACKUP"
        session.sql(query_rollback).collect()
        
        st.success("Rollback performed successfully.", icon="✅")
    except Exception as e:
        st.error(f"An error occurred during rollback: {e}")

# Display the current table
data_main_table_updated = session.sql(query_main_table).to_pandas()
st.write(f"Current Table: {selected_table}")
st.dataframe(data_main_table_updated, use_container_width=True)

Best

I see. Yes, like all widgets, if you change anything that defines the widget, it will lose state and be created anew. If you want to allow re-filtering between edits, you’ll need to collect the existing edits before applying the new filter.
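
For illustration, here is a minimal sketch of that approach (not your full app; load_table() is a hypothetical placeholder for your Snowflake query). The idea is to keep one master copy of the table in st.session_state, filter that copy, and write the editor's output back into it on every rerun, so cell edits survive a filter change. Rows added or deleted with num_rows="dynamic" would need extra bookkeeping on top of this.

import streamlit as st
import pandas as pd

def load_table():
    # Placeholder for: session.sql(f"SELECT * FROM DB.BOX.{table}").to_pandas()
    return pd.DataFrame({"ID": [1, 2, 3], "VALUE": ["a", "b", "c"]})

# Keep one master copy of the table across reruns
if "master_df" not in st.session_state:
    st.session_state.master_df = load_table()

master_df = st.session_state.master_df

# Build the filter against the master copy, not a fresh query result
column = st.selectbox("Filter column", master_df.columns)
values = st.multiselect(f"Values for {column}", master_df[column].unique())
filtered = master_df[master_df[column].isin(values)] if values else master_df

# Key the editor on the current filter so changing the filter recreates the
# widget cleanly instead of re-applying stale edits to a different slice
editor_key = f"editor_{column}_{sorted(map(str, values))}"
edited = st.data_editor(filtered, key=editor_key)

# Write the edited cells back into the master copy; the DataFrame index ties
# the filtered rows back to their originals
st.session_state.master_df.loc[edited.index, edited.columns] = edited

st.dataframe(st.session_state.master_df, use_container_width=True)

Because the edits are written back to the session-state copy before the next rerun, re-filtering only changes which slice is shown, not the underlying edited data, and you can submit the master copy to Snowflake whenever you are ready.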
