Next, I'm trying to cache an asyncio function (the library forces me to use async), but I'm getting this error: `Object of type builtins.coroutine: <coroutine object get_channel_details at 0x1a9be3560>`

`builtins` does not have `coroutine`; it lives inside `asyncio`.

How can I cache it? This does not help: `hash_funcs={asyncio.coroutine: lambda x: x}` (probably because `asyncio.coroutine` does not correspond to the `builtins.coroutine` named in the error message).
Here is the function itself:
```python
# assumed imports (not shown in the original post);
# SESSION_FILE and tg_env are defined elsewhere in the script
from telethon import TelegramClient, functions, tl

async def get_channel_details(channel):
    tg_client = TelegramClient(SESSION_FILE, tg_env['TELEGRAM_APP_ID'], tg_env['TELEGRAM_APP_HASH'])
    channel_details = {'title': "", 'description': "", 'recent_posts': []}
    async with tg_client:
        ent = await tg_client.get_entity(channel)
        assert isinstance(ent, tl.types.Channel)
        meta = await tg_client(functions.channels.GetFullChannelRequest(channel))
        posts = await tg_client.get_messages(channel, limit=10)
        channel_details.update({
            'title': meta.chats[0].title,
            'description': meta.full_chat.about,
        })
        if len(posts) > 0:
            channel_details.update({'recent_posts': [p.message for p in posts if p.message is not None]})
    return channel_details
```
To be honest, I'm not that familiar with our caching setup, but we likely didn't design it with asyncio in mind. This is probably because Streamlit centers on a single script that defines the interface, so async functions come up less often.
I basically searched for how to cache async functions and found this. They describe a caching solution they created, but I realized they used the `loop.run_until_complete` method, which basically makes an async call synchronous.
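For reference, that idea can be sketched roughly like this: a hypothetical wrapper (the name `sync_cached` and the toy `fetch` function are my own, not from the linked solution) that drives the coroutine to completion with `asyncio.run` (the modern equivalent of `loop.run_until_complete` on a fresh loop) and memoizes the result, so callers only ever see an ordinary blocking function:

```python
import asyncio
import functools

def sync_cached(async_fn):
    """Hypothetical helper: run a coroutine function synchronously and memoize it.

    asyncio.run() drives the coroutine to completion, so the function that
    callers (or a sync-only cache decorator) see is plain and blocking.
    """
    cache = {}

    @functools.wraps(async_fn)
    def wrapper(*args):
        if args not in cache:  # only do the real (async) work once per argument tuple
            cache[args] = asyncio.run(async_fn(*args))
        return cache[args]

    return wrapper

calls = {"n": 0}  # counts how often the body actually runs

@sync_cached
async def fetch(x):
    calls["n"] += 1
    await asyncio.sleep(0)  # stand-in for real network I/O
    return x * 2

print(fetch(21))  # runs the coroutine
print(fetch(21))  # served from the cache; the coroutine body does not run again
```

The trade-off is exactly what the post above notes: the call blocks the script while the event loop runs, so you lose the concurrency benefits of async inside that function.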
@Rusteam Hey dude. I was browsing to see if Streamlit had implemented async caching yet, and it seems they haven't. As an alternative you can use aiocache. Here is a quick example with your function name:
```python
from aiocache import Cache, cached

@cached(ttl=600, cache=Cache.MEMORY)  # keep results in memory for 10 minutes
async def get_channel_details(channel):
    ...
```
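If adding the aiocache dependency isn't an option, the core idea (a memo decorator that awaits the coroutine and caches its *result*, not the coroutine object) can be hand-rolled in a few lines. This is my own minimal sketch, not aiocache's implementation, and `get_details` is a toy stand-in for `get_channel_details`:

```python
import asyncio
import functools

def async_cached(fn):
    # Minimal in-memory memoization for coroutine functions. Unlike applying
    # functools.lru_cache to an async def (which would cache a one-shot
    # coroutine object), this caches the awaited result. Note it is not
    # guarded against two concurrent calls with the same arguments.
    cache = {}

    @functools.wraps(fn)
    async def wrapper(*args):
        if args not in cache:
            cache[args] = await fn(*args)
        return cache[args]

    return wrapper

calls = []  # records each time the body actually executes

@async_cached
async def get_details(channel):
    calls.append(channel)
    await asyncio.sleep(0)  # stand-in for the real Telegram I/O
    return {"title": f"Channel {channel}"}

result = asyncio.run(get_details("durov"))
asyncio.run(get_details("durov"))  # second call is served from the cache
```

This keeps everything awaitable, so it composes with the rest of an async codebase instead of forcing a blocking `run_until_complete` call.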
Hey Streamlit team. I'd very strongly recommend y'all implement async caching. The reason is that there has been a seismic shift in Python API development since the release of FastAPI (30k+ stars on GitHub in 2 years). I realize many data scientists are building without the "need" to use async/await, but it will become more necessary as more ML code gets integrated with an API framework like FastAPI, which is gaining API market share vs Django/Flask.