I am trying to deploy my Streamlit app on Community Cloud, but I got an error: AttributeError: Can't get attribute 'EuclideanDistance' on <module 'sklearn.metrics._dist_metrics' from '/home/adminuser/venv/lib/python3.9/site-packages/sklearn/metrics/_dist_

i got an error AttributeError: Can’t get attribute ‘EuclideanDistance’ on <module ‘sklearn.metrics._dist_metrics’ from ‘/home/adminuser/venv/lib/python3.9/site-packages/sklearn/metrics/_dist_metrics.cpython-39-x86_64-linux-gnu.so’>

GitHub repo: Mayankpathak07/st-heart-disease-prediction

Could this be a version issue? I see in your requirements.txt that you're running sklearn 1.2.2; upgrading to 1.3.0 might be a solution, according to a related Stack Overflow question: https://stackoverflow.com/questions/76631305/attributeerror-cant-get-attribute-euclideandistance-on-module-sklearn-metr
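Errors like this typically mean the pickled model was saved under one scikit-learn version and loaded under another, so the fix is usually to pin the deployed version to whatever version produced the pickle. A minimal requirements.txt sketch (the exact pin is an assumption — it should match the version you trained with):

```text
# requirements.txt — pin scikit-learn to the version the model was pickled with
scikit-learn==1.2.2
streamlit
```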

Edit: on my local machine I'm running sklearn 1.2.2 and your code worked, so maybe it's a different issue. But also, by the way, your app reports a Positive or Negative confidence of 100% for every single prediction, regardless of the settings. Is that intended?
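One way to narrow down the always-100% behavior is to call predict_proba directly and see whether the model itself emits 1.0 for every class, or whether the display code is at fault. A minimal sketch — the classifier and data here are hypothetical stand-ins, since I don't know which model your pickle contains:

```python
# Sketch: inspect predict_proba output directly, independent of the Streamlit UI.
# KNeighborsClassifier and make_classification are stand-ins for the app's
# actual model and input data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=200, n_features=8, random_state=0)
model = KNeighborsClassifier(n_neighbors=5).fit(X, y)

proba = model.predict_proba(X[:5])
print(proba)                                  # each row sums to 1.0
print(np.allclose(proba.sum(axis=1), 1.0))    # but individual entries need not be 1.0
```

If every row of `proba` really is `[0., 1.]` or `[1., 0.]`, the problem is upstream of the f-string (e.g. the model or the inputs), not the formatting.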

Second edit: it looks like this exact question may have been answered in this Streamlit community discussion from two years ago: ModuleNotFoundError: No module named 'sklearn.neighbors._dist_metrics'


After using sklearn 1.2.2 it worked for me. Can you please tell me why the app always gives Positive with 100% confidence for every configuration? Please check the code and provide a solution.

I think it's because in your printout statement you use a flooring operation (floor division, //) in the f-string: {((confidence*10000)//1)/100}. You could use something like int(confidence * 100) or np.round(confidence * 100) instead, but I didn't test this. Are you getting confidence values less than 100% when you run the program?
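For comparison, here is how the three formatting approaches mentioned above behave on a sample value (the value 0.875 is just an illustrative stand-in for whatever predict_proba returns):

```python
# Compare the flooring expression from the app's f-string with the
# two alternatives suggested above. `confidence` is a hypothetical
# probability; 0.875 is chosen to be exactly representable in binary.
import numpy as np

confidence = 0.875

floored = ((confidence * 10000) // 1) / 100   # truncates to 2 decimal places
as_int = int(confidence * 100)                # truncates to a whole percent
rounded = np.round(confidence * 100, 2)       # rounds to 2 decimal places

print(floored)  # 87.5
print(as_int)   # 87
print(rounded)  # 87.5
```

Note that all three only change how the number is displayed; none of them can turn a probability below 1.0 into 100%, which suggests the 100% output is coming from the model itself.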

Edit: also, I'm not sure "confidence" is the right word here, since it suggests a "confidence interval", which is a very specific concept in statistics. I think "probability" is the better word to use, since that's what sklearn calls it (predict_proba).