Next, I'm trying to cache an asyncio function (the library forces me to use async), but I'm getting this error: Object of type builtins.coroutine: <coroutine object get_channel_details at 0x1a9be3560>
As far as I can tell, builtins does not have a coroutine type; it lives inside asyncio.
How can I cache it? Passing hash_funcs={asyncio.coroutine: lambda x: x} does not help (probably because asyncio.coroutine does not correspond to the builtins.coroutine named in the error message).
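A note on the confusing type name: calling an async function does not run it; it returns a coroutine object, and that object's type really is registered under builtins, not asyncio, which is why the error message says builtins.coroutine. The cache is trying to hash that coroutine object instead of the function's result. A quick self-contained check (with a dummy function standing in for the real one):

```python
import asyncio

async def get_channel_details(channel):
    # dummy stand-in for the real Telethon coroutine
    return {'title': channel}

# Calling an async function does not execute its body; it returns a
# coroutine object, which is what the cache tries (and fails) to hash.
coro = get_channel_details('some_channel')
print(type(coro).__name__)      # coroutine
print(type(coro).__module__)    # builtins

# The coroutine only produces a result once an event loop drives it.
result = asyncio.run(coro)
print(result)                   # {'title': 'some_channel'}
```

So the cache decorator is receiving a coroutine, never the dict the coroutine would eventually produce.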
Here is the function itself:
from telethon import TelegramClient, functions, tl

async def get_channel_details(channel):
    tg_client = TelegramClient(SESSION_FILE, tg_env['TELEGRAM_APP_ID'], tg_env['TELEGRAM_APP_HASH'])
    channel_details = {'title': "", 'description': "", 'recent_posts': []}
    async with tg_client:
        ent = await tg_client.get_entity(channel)
        assert isinstance(ent, tl.types.Channel)
        meta = await tg_client(functions.channels.GetFullChannelRequest(channel))
        posts = await tg_client.get_messages(channel, limit=10)
        channel_details.update({
            'title': meta.chats[0].title,
            'description': meta.full_chat.about
        })
        if len(posts) > 0:
            channel_details.update({'recent_posts': [p.message for p in posts if p.message is not None]})
    return channel_details
To be honest, I'm not that familiar with our caching setup, but we likely didn't design it with asyncio in mind. This is probably because Streamlit focuses on a plain script to define the interface (with fewer async functions on the side).
I basically searched for how to cache async functions, and found this.
They describe a caching solution they created, but I realized they used a loop.run_until_complete method, which would basically make an async call synchronous.
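That pattern — driving the coroutine to completion with loop.run_until_complete inside a synchronous wrapper, then caching the wrapper — can be sketched as below. This is only a sketch: functools.lru_cache stands in for st.cache, and the dummy async function stands in for the real Telethon one, which needs credentials.

```python
import asyncio
import functools

async def get_channel_details(channel):
    # dummy stand-in for the real Telethon coroutine
    await asyncio.sleep(0)
    return {'title': channel, 'description': '', 'recent_posts': []}

@functools.lru_cache(maxsize=32)  # stand-in for st.cache in this sketch
def get_channel_details_sync(channel):
    # run_until_complete blocks until the coroutine finishes, so the
    # cache stores the final dict rather than a coroutine object.
    loop = asyncio.new_event_loop()
    try:
        return loop.run_until_complete(get_channel_details(channel))
    finally:
        loop.close()

details = get_channel_details_sync('some_channel')
```

The cost is exactly what's noted above: the wrapper is synchronous, so each uncached call blocks the script while the event loop runs.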
@Rusteam Hey dude. I was browsing to see if Streamlit had implemented async caching yet, and it seems they haven't. As an alternative you can use aiocache. Here is a quick example with your function name:
from aiocache import Cache
from aiocache import cached
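The snippet above is truncated, but the idea behind aiocache's cached decorator is that it awaits the wrapped coroutine once and memoizes the result, so the cache holds the dict rather than a coroutine object. A rough stdlib sketch of that behavior (async_cached is a hypothetical name, not part of aiocache or any library) looks like this:

```python
import asyncio
import functools

def async_cached(func):
    # Hypothetical minimal async memoizer, roughly what an in-memory
    # @cached does: await the coroutine once, then reuse the stored result.
    results = {}

    @functools.wraps(func)
    async def wrapper(*args):
        if args not in results:
            results[args] = await func(*args)
        return results[args]

    return wrapper

@async_cached
async def get_channel_details(channel):
    await asyncio.sleep(0)      # stands in for the Telegram calls
    return {'title': channel}

async def main():
    first = await get_channel_details('some_channel')
    second = await get_channel_details('some_channel')
    return first is second      # cached: the very same dict comes back

print(asyncio.run(main()))      # True
```

Note this sketch is naive (no TTL, no lock, so two concurrent first calls would both do the work); aiocache handles those details for you.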
Hey Streamlit team. I'd very highly recommend y'all implement async caching. The reason is there has been a seismic shift in Python API development since the release of FastAPI (30k+ stars on GitHub in 2 years). I realize that many data scientists are building without the "need" to use async/await, but it will become more necessary as more ML code needs to be integrated with an API framework like FastAPI, which is gaining API market share vs Django/Flask.