Reading Parquet files from S3 using Dask DataFrame

Hey guys, since my dataset is too large to fit in memory, I have been trying to use Dask to manipulate the data and produce a new dataframe that can fit in memory. However, I am getting an error that I could not resolve…
I am using this function to read the data:

And I am getting the error below when I use the data from above inside the functions below:

My repo