Oh. vak meant "data representation in computer memory" (floating-point double vs. decimal). I thought he meant the representation/conversion of data to a decimal string.
In any case, the primary way to deal with floating-point imprecision is not to increase precision, but to replace exact equality comparisons with [epsilon] range comparisons.
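For example, a minimal sketch in C# (the `NearlyEqual` helper and the tolerance value are illustrative, not a standard API):

```csharp
using System;

class EpsilonDemo
{
    // Illustrative helper: treat two doubles as equal when they differ
    // by less than a chosen tolerance, instead of comparing with ==.
    // The epsilon value here is an assumption; pick one that suits your data.
    static bool NearlyEqual(double a, double b, double epsilon = 1e-9)
    {
        return Math.Abs(a - b) < epsilon;
    }

    static void Main()
    {
        double sum = 0.1 + 0.2;

        Console.WriteLine(sum == 0.3);            // False: binary doubles can't represent 0.1 or 0.2 exactly
        Console.WriteLine(NearlyEqual(sum, 0.3)); // True: the difference is within the epsilon range
    }
}
```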