Data_editor, dataframe ... general understanding

I have a general question to help me understand
what the data flow or workflow looks like when working with a database
and using Streamlit (e.g., st.data_editor) to maintain the data.

I need to handle row-level security,
filtering of the data, validation, pagination…
standard CRUD operations.
We have small tables as well as big ones with over 300 million rows.
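To make the requirements concrete, here is a minimal sketch of server-side pagination combined with row-level filtering. It is an assumption on my part that a plain SQL backend is in play (sqlite3 is used here purely for illustration); the table name `orders`, the `region` column, and the `fetch_page` helper are all hypothetical. In a Streamlit app, the returned page would be passed to `st.data_editor`, and `page_num` would come from a widget such as `st.number_input`.

```python
# Sketch: server-side pagination + row-level filtering for a large table.
# With hundreds of millions of rows you never load the full table into the
# app; only the permitted page leaves the database.
import sqlite3

PAGE_SIZE = 50  # rows per page; tune to what data_editor can comfortably show

def fetch_page(conn, user_region, page_num):
    """Return one page of rows the given user is allowed to see.

    Row-level security is enforced in the query itself (WHERE region = ?),
    so filtering happens in the database, not in Python.
    """
    cur = conn.execute(
        "SELECT id, region, amount FROM orders "
        "WHERE region = ? "
        "ORDER BY id "
        "LIMIT ? OFFSET ?",
        (user_region, PAGE_SIZE, page_num * PAGE_SIZE),
    )
    return cur.fetchall()

# --- demo with an in-memory database ---
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, region TEXT, amount REAL)"
)
conn.executemany(
    "INSERT INTO orders (region, amount) VALUES (?, ?)",
    [("EU", 10.0), ("US", 20.0), ("EU", 30.0)],
)
page = fetch_page(conn, "EU", 0)
print(page)  # only the EU rows: [(1, 'EU', 10.0), (3, 'EU', 30.0)]
```

In a real deployment the WHERE clause would be derived from the authenticated user's session, and a production database would use keyset pagination rather than OFFSET for very deep pages.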

Is there an example that shows how to handle this in a best-practice way?

I come from Java development with the JMIX framework, where all of these
concerns are handled automatically.
Now I am searching for information on how to implement them using Streamlit.
Recommendations for good third-party books that could help here
would also be welcome.


Hi @rwalde

For enterprise data, Streamlit in Snowflake may be the route to go; it was recently launched in public preview. For more info, check out this link:

However, if you’d like to go the community route, you might want to check out these amazing CRUD examples from @gerardrbentley
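For the write-back half of CRUD, a minimal sketch of applying edits from a data editor to the database might look like the following. Everything here is a hypothetical illustration, not the pattern from the linked examples: in a real app, `original` would be the rows that filled the editor and `edited` would be the DataFrame `st.data_editor` returns (converted to records); plain lists of dicts are used so the diff logic stands on its own.

```python
# Sketch: diff original vs. edited rows and write only the changes back.
# Assumes each row has a stable primary key so edits can be matched up.
import sqlite3

def apply_edits(conn, table, original, edited, key="id"):
    """Issue an UPDATE for each row whose values changed, matched by `key`.

    A simple validation hook rejects negative amounts before touching the
    database -- a stand-in for whatever business rules the app needs.
    """
    before = {row[key]: row for row in original}
    for row in edited:
        if row.get("amount", 0) < 0:
            raise ValueError(f"amount must be non-negative (row {row[key]})")
        old = before.get(row[key])
        if old is not None and old != row:
            conn.execute(
                f"UPDATE {table} SET amount = ? WHERE {key} = ?",
                (row["amount"], row[key]),
            )
    conn.commit()

# --- demo ---
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 10.0), (2, 20.0)])
original = [{"id": 1, "amount": 10.0}, {"id": 2, "amount": 20.0}]
edited = [{"id": 1, "amount": 15.0}, {"id": 2, "amount": 20.0}]
apply_edits(conn, "orders", original, edited)
print(conn.execute("SELECT amount FROM orders WHERE id = 1").fetchone()[0])
# only row 1 changed, so only row 1 is updated
```

Inserts and deletes would be handled the same way, by comparing the key sets of `original` and `edited`.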

Hope this helps!

Hi dataprofessor…

that is what I was looking for, thanks.

And regarding Streamlit in Snowflake… it is not available for Azure Snowflake at the moment,
only for AWS…

If you know a way to use it on Azure Snowflake, it would be great to get that information…


Glad to hear that it is helpful.

As for support on Azure Snowflake, it’s currently in preview, so I’d recommend checking back here to see when there’s an update: About Streamlit in Snowflake | Snowflake Documentation

This topic was automatically closed 180 days after the last reply. New replies are no longer allowed.