Hi,
I guess this is a very basic issue, but I do not understand the st.number_input rounding convention when using the format option. It seems to follow neither the usual round-half-to-even strategy nor a round-half-up convention. Here is a minimal example:
import streamlit as st

t = st.number_input(
    r'$t$',
    value=st.session_state.get('t', 0.1),
    min_value=0.0,
    max_value=0.3,
    format="%0.2f",
    step=0.01,
    help='Thrust deduction factor, resolution=0.01',
    placeholder="Type a number..."
)
st.write('Value entered: ', t)
st.write('Rounded value entered: ', round(t, 2))
t = round_up_halfway(t, 2)  # my own helper that rounds halves up
st.write('Value retained: ', t)
In the first case, 0.125 is entered and the displayed value is 0.13 (0.12 with the round function).
In the second case, 0.245 is entered and the displayed value is 0.24 (0.24 with the round function).
Any clue on this?
Regards,
Jonas
The display format does not impact the underlying Python value. If the widget holds the value .124 and is restricted to display two digits, it will appear on the front end as .12 (but still return .124). If the widget holds the value .125 and is restricted to display two digits, it will appear on the front end as .13 (but still return .125).
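In other words, a minimal sketch (assuming the default step and the same format string as above):

import streamlit as st

# The format string only affects what the front end shows;
# the value returned to the script keeps its full precision.
x = st.number_input("x", value=0.125, format="%0.2f")
st.write(x)  # writes 0.125, even though the widget box shows 0.13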
Thank you mathcatsand for your reply.
I understand the format option is just about the display, but my issue is about how the rounding is performed (please consider the example provided):
0.125 is displayed as 0.13, but
0.245 is displayed as 0.24.
Can you explain which rounding convention is used?
Regards,
Jonas
My guess is binary conversion: .245 is actually .244999... under the hood.
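You can see the exact stored values with the standard decimal module (plain Python, no Streamlit needed), for example:

from decimal import Decimal

# Decimal(float) shows the exact binary value the literal is stored as.
print(Decimal(0.245))  # slightly below 0.245, roughly 0.2449999999999999956
print(Decimal(0.125))  # exactly 0.125, since 1/8 is a power of two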
Ok, thank you.
Do you think this could be fixed, so that 0.245 is considered exactly?
And, please, can you explain what you mean by “under the hood”?
By “under the hood” I mean what the computers are actually doing. We, as humans, represent numbers in base 10. Computers work in binary. We input something in base 10, the computer converts it to binary to store it and work with it, then it gets converted back to base 10 to show us. If we aren’t working with integers, that conversion to binary and back can introduce rounding errors.
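The classic demonstration in plain Python:

print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False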
I am doubtful that it would be prioritized, since it’s a limitation of working with floats in programming. You could file a feature request on GitHub if you want, but if it’s really important to you, your best bet is probably inserting your own logic to work with integers instead of floats, or somehow introducing your own margin of error to force the values to meet whatever threshold you want.
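One possible sketch of that approach, going through the decimal module so ties are decided on the decimal value you typed rather than on the slightly smaller binary float (round_half_up here is just an illustrative name, mirroring your round_up_halfway helper):

from decimal import Decimal, ROUND_HALF_UP

def round_half_up(x, ndigits=2):
    # str(x) gives the shortest decimal form (e.g. '0.245'), so the tie is
    # decided on the typed decimal, not on the slightly smaller stored float.
    q = Decimal(10) ** -ndigits
    return float(Decimal(str(x)).quantize(q, rounding=ROUND_HALF_UP))

print(round_half_up(0.245))  # 0.25
print(round_half_up(0.125))  # 0.13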
Thank you, it’s now clear for me.