I have an app that will be initialized in an environment with wifi, but users will then move around the building while using the app to update inventory. Think of it like an inventory tracking system. Some areas of the building are wifi weak spots, and I’d like to optimize the app so it still works in those areas.
Right now, I use session_state and cache_data quite a bit to reduce (re)load time. I haven’t been able to find much information on how these are implemented, and thus how they fare under poor network conditions. I believe I need client-side caching, and I’m not sure whether either of those is implemented that way.
I can’t use cookies because the data is quite relational and users will need to interact with it, modifying it and using its relationships for querying. I believe IndexedDB might be a fine solution, but I’m not sure whether Streamlit uses it under the hood anywhere.
In a perfect world, there’d be an easy way to set up replication between a master node in the cloud and each client’s IndexedDB while online, while still allowing querying and modification of the IndexedDB data if the master node is unreachable. With this solution, it would also be necessary to have a sort of version control, so that modifications made offline can be identified as downstream changes and synchronized back to the master node. I don’t really need any kind of smart conflict resolution, as my app won’t have users modifying the same dataset concurrently.
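To make the sync model I have in mind concrete, here’s a minimal sketch in plain Python (all class and field names are hypothetical; the dict stands in for IndexedDB on the client and for the master store in the cloud):

```python
from dataclasses import dataclass, field


@dataclass
class Change:
    seq: int        # position in the local change log
    table: str
    row_id: str
    payload: dict


@dataclass
class ClientStore:
    """Stands in for the client's IndexedDB plus a local change log."""
    rows: dict = field(default_factory=dict)
    log: list = field(default_factory=list)
    synced_upto: int = 0  # everything before this seq is already on the master

    def write(self, table: str, row_id: str, payload: dict) -> None:
        # Every write (online or offline) lands in both the local store
        # and the log, so it can be replayed upstream later.
        self.rows[(table, row_id)] = payload
        self.log.append(Change(len(self.log), table, row_id, payload))

    def pending(self) -> list:
        # Changes the master hasn't seen yet -- the "downstream" edits.
        return self.log[self.synced_upto:]

    def push(self, master: dict) -> None:
        # No conflict resolution needed: datasets aren't edited
        # concurrently, so a simple last-writer-wins replay is enough.
        for ch in self.pending():
            master[(ch.table, ch.row_id)] = ch.payload
        self.synced_upto = len(self.log)


store = ClientStore()
store.write("inventory", "A-100", {"qty": 2})  # edit made offline
master = {}
store.push(master)  # replayed once the master is reachable again
```

That’s the whole ask, really: local reads and writes always succeed, and a change log gets replayed upstream when connectivity returns.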
The world isn’t exactly perfect though, so what’s the closest I can reasonably get to this desired result while using Streamlit?