Hello, I am new to Streamlit and need to display data in tabular format based on the specific MongoDB collection(s) selected from a select/multiselect box. The plain Python (PyMongo) code below works:
from pymongo import MongoClient
mongo_server_url = 'mongodb://localhost:27017'
mongo_db = 'testDB'
client = MongoClient(mongo_server_url)
def load_mongo_data(mongo_coll):
    db = client[mongo_db][mongo_coll]
    data = db.find()
    for document in data:
        print(document)

load_mongo_data('testCollection')
This displays all the documents when a collection name is passed.
However, when I try to achieve the same thing using Streamlit, no data is returned:
import streamlit as st
from pymongo import MongoClient
@st.cache(hash_funcs={MongoClient: id})
def get_client():
    return MongoClient("mongodb://localhost:27017")

client = get_client()
db = client.testDB

st.sidebar.subheader("MongoDB:")
coll_name = st.sidebar.selectbox("Select collection: ", db.list_collection_names())

@st.cache
def load_mongo_data(coll_name):
    data = db.coll_name.find()
    for documents in data:
        print(documents)

st.write("Showing data for: ", coll_name)
st.write(load_mongo_data(coll_name))
#st.write(db.coll_name.find())
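(Editorial note: two things in the snippet above likely explain the empty output. `load_mongo_data` prints its documents and returns `None`, so `st.write` has nothing to render; and `db.coll_name` looks up a collection literally named "coll_name", because PyMongo's attribute access uses the attribute name verbatim rather than the variable's value. A stdlib-only stand-in mimicking that lookup behaviour, with a hypothetical `FakeDatabase` class that is not part of PyMongo:)

```python
class FakeDatabase:
    """Stand-in for pymongo.database.Database's lookup semantics."""

    def __getattr__(self, name):
        # pymongo would return Collection(self, name) -- name taken verbatim
        return name

    def __getitem__(self, name):
        # bracket access uses the *value* of the expression
        return name

db = FakeDatabase()
coll_name = "testCollection"

print(db.coll_name)   # "coll_name"      -- not the selected collection
print(db[coll_name])  # "testCollection" -- what the app intended
```

Using `db[coll_name].find()` and returning the documents as a list should make `st.write` display them.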
I think you are living dangerously there, as the function might return a lot of records. So I would recommend doing a paginated query; something like the following would be less heavy on your DB and server.
(Taking the idea from this answer: Streamlit app disconnects and stops before displaying data.)
import streamlit as st
from math import ceil

page_size = 1000

@st.cache
def load_mongo_data(coll_name, page):
    data = list(db[coll_name].find().skip((page - 1) * page_size).limit(page_size))
    return data

coll_name = st.sidebar.selectbox("Select collection: ", db.list_collection_names())
document_count = db[coll_name].count_documents({})
page_number = st.number_input(
    label="Page Number",
    min_value=1,
    max_value=ceil(document_count / page_size),
    step=1,
)

st.write(load_mongo_data(coll_name, page_number))
It's untested, but it's just intended to communicate the idea.
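The skip/limit arithmetic above can be checked in isolation. Here is a stdlib-only sketch of the same paging math; the 2,500-document count is made up and simply stands in for what `count_documents({})` would return:

```python
from math import ceil

page_size = 1000
document_count = 2500  # hypothetical total for illustration

# Number of pages the number_input widget should allow
total_pages = ceil(document_count / page_size)

def page_window(page):
    """Return the (skip, limit) pair that .find().skip(...).limit(...) would use."""
    return (page - 1) * page_size, page_size

print(total_pages)     # 3
print(page_window(1))  # (0, 1000)
print(page_window(3))  # (2000, 1000) -- only 500 documents actually remain
```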
UnhashableTypeError: Cannot hash object of type pymongo.write_concern.WriteConcern, found in the body of load_mongo_data().
While caching the body of load_mongo_data(), Streamlit encountered an object of type pymongo.write_concern.WriteConcern, which it does not know how to hash.
To address this, please try helping Streamlit understand how to hash that type by passing the hash_funcs argument into @st.cache. For example:
If you don't know where the object of type pymongo.write_concern.WriteConcern is coming from, try looking at the hash chain below for an object that you do recognize, then pass that to hash_funcs instead:
Object of type pymongo.write_concern.WriteConcern: WriteConcern()
Object of type builtins.tuple: ('_BaseObject__write_concern', WriteConcern())
Object of type builtins.dict: {'_BaseObject__codec_options': CodecOptions(document_class=dict, tz_aware=False, uuid_representation=UuidRepresentation.PYTHON_LEGACY, unicode_decode_error_handler='strict', tzinfo=None, type_registry=TypeRegistry(type_codecs=[], fallback_encoder=None)), '_BaseObject__read_preference': Primary(), '_BaseObject__write_concern': WriteConcern(), '_BaseObject__read_concern': ReadConcern(), '_Database__name': 'testDB', '_Database__client': MongoClient(host=['localhost:27017'], document_class=dict, tz_aware=False, connect=True), '_Database__incoming_manipulators': [], '_Database__incoming_copying_manipulators': [], '_Database__outgoing_manipulators': [], '_Database__outgoing_copying_manipulators': []}
Object of type pymongo.database.Database: Database(MongoClient(host=['localhost:27017'], document_class=dict, tz_aware=False, connect=True), 'testDB')
Object of type builtins.function: <function load_mongo_data at 0x000002364309D048>
Object of type builtins.tuple: ('__main__', 'load_mongo_data', <function load_mongo_data at 0x000002364309D048>)
This was without the pagination logic, since right now I'm just testing with a very small collection, but handling pagination would definitely have been my next follow-up question, and I'm glad you've proactively answered it.
I think you should be able to get rid of this error if you create the connection, and close it, inside the function before the return. Something like this:
import pymongo

CONN_URI = "mongodb://localhost:27017"

@st.cache
def load_mongo_data(db_name, coll_name, page):
    conn = pymongo.MongoClient(CONN_URI)
    data = list(conn[db_name][coll_name].find().skip((page - 1) * page_size).limit(page_size))
    conn.close()
    return data
It will work, but not using persistent connections can be bad if a lot of concurrent users are requesting unseen data. It works as a temporary fix, though.
To save yourself some redundant code with this approach, you could also define a decorator and do this:
from functools import wraps

def provide_db_connection(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        conn = pymongo.MongoClient(CONN_URI)
        result = func(conn, *args, **kwargs)
        conn.close()
        return result
    return wrapper

@st.cache
@provide_db_connection
def load_mongo_data(conn, db_name, coll_name, page):
    data = list(conn[db_name][coll_name].find().skip((page - 1) * page_size).limit(page_size))
    return data
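The connection-providing decorator is independent of PyMongo. Here is the same pattern sketched with the stdlib's sqlite3 standing in for MongoClient, so the shape can be run and tested anywhere; the function names are illustrative only:

```python
import sqlite3
from functools import wraps

def provide_db_connection(func):
    """Open a connection, pass it as the first argument, close it afterwards."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        conn = sqlite3.connect(":memory:")  # stands in for pymongo.MongoClient(CONN_URI)
        try:
            return func(conn, *args, **kwargs)
        finally:
            conn.close()  # the connection never leaks, even if func raises

    return wrapper

@provide_db_connection
def load_rows(conn, n):
    # The caller never sees or manages the connection
    conn.execute("CREATE TABLE t (x INTEGER)")
    conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(n)])
    return [row[0] for row in conn.execute("SELECT x FROM t ORDER BY x")]

print(load_rows(3))  # [0, 1, 2]
```

Wrapping the body in try/finally (rather than closing after the call, as in the MongoDB snippet above) ensures the connection is also released on error.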
OK, so if I understand it correctly, we need to persist the DB connection's state, or else make a fresh DB call every time for each operation?
For now, I'm using it without the additional decorator (@provide_db_connection).
Secondly, the pagination logic doesn't seem to be working in my case. It displays just 1000 records, but on clicking the + sign or manually entering page 2 or 3, nothing happens and the next set of records isn't shown.
So maybe for pagination as well, per this article, Preserving state across sidebar pages, we need to persist the session state for it to work properly?
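(Editorial note: @st.cache keys the cached result on the function's arguments, much like functools.lru_cache, so a changed page number should be a cache miss that triggers a fresh query, without needing session state. A stdlib sketch of that keying behaviour; the data and page size are made up:)

```python
from functools import lru_cache

page_size = 3
docs = list(range(10))  # stands in for the collection's documents

calls = []  # record which pages actually hit the "database"

@lru_cache(maxsize=None)  # st.cache behaves similarly: args form the cache key
def load_page(page):
    calls.append(page)
    start = (page - 1) * page_size
    return docs[start:start + page_size]

print(load_page(1))  # [0, 1, 2] -- queried
print(load_page(2))  # [3, 4, 5] -- new argument => cache miss, queried again
print(load_page(1))  # [0, 1, 2] -- served from cache, no new query
print(calls)         # [1, 2]
```

If the page widget's value does change but the display does not, the problem is more likely that the cached function closes over an unhashable object (as in the error above) or that the widget's value never reaches the query.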