Float and float (and Double and double) have well defined limits to both their precision and their accuracy. If you are OK with these limits, you should probably be using one of these throughout your application. Since you're using BigDecimal, you are presumably NOT OK with these limits. Transforming your BigDecimal to a float/double could well cause you to lose data irretrievably. So, you need to stick in the BigDecimal realm.
Or... use fixed point. For example, if you are working in UK pounds, or US dollars, you know you only need 2dp, so you could use an int or a long that represents pence/cents. This gives you the best of both worlds - no objects/garbage, total precision (subject to an upper limit on the number you can represent) and a small network/memory footprint.
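The pence/cents approach can be sketched in a few lines. Assuming US dollars and two decimal places (the class and method names below are illustrative, not from any library):

```java
// Fixed-point money: store cents in a long instead of using BigDecimal.
public class FixedPointMoney {

    // Combine whole dollars and cents into a single scaled value.
    static long toCents(long dollars, long cents) {
        return dollars * 100 + cents;
    }

    // Format the scaled value back into a display string.
    static String formatDollars(long totalCents) {
        return String.format("%d.%02d", totalCents / 100, Math.abs(totalCents % 100));
    }

    public static void main(String[] args) {
        long price = toCents(19, 99); // $19.99
        long tax   = toCents(1, 60);  // $1.60
        System.out.println(formatDollars(price + tax)); // prints 21.59
    }
}
```

All the arithmetic stays in plain long math, so nothing is allocated on the hot path; the only caveat is overflow, where Long.MAX_VALUE sets the ceiling.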
Thanks for your response, Danny --
I forgot to mention my precision/accuracy expectations. The BigDecimal usage comes with this being an inherited application, which I'm now reviewing for performance optimizations (particularly on the data transport layer, where no computations are needed).
These values should be accurate out to 6 decimal places, while the integer portion will likely never go beyond 5 digits the other way (so 99999.999999 is the maximum business value needed). In researching float, I cannot find a definitive reference for its maximum value and precision/accuracy, leaving me in a limbo state :-).
Sorry for not posting that piece of information earlier, that would be important!
In my own testing, I found what I was looking for.
float turns out to be accurate to roughly 7 significant digits in total -- e.g. 5 before/2 after, or 2 before/5 after -- the digits shift around the decimal point, which is why there is no definitive 'decimal places' answer.
double is accurate to roughly 15-16 significant digits.
If someone posts a more definitive reference that shows this kind of information, that would be great!
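For a reference point: IEEE 754 single precision carries a 24-bit significand, which works out to roughly 7 significant decimal digits in total (the split before/after the point floats), and double precision carries 53 bits, roughly 15-16 digits. A quick check against the 99999.999999 maximum mentioned above:

```java
public class PrecisionLimits {
    public static void main(String[] args) {
        // 99999.999999 needs 11 significant decimal digits.
        float  f = 99999.999999f; // beyond float's ~7 digits: collapses
        double d = 99999.999999;  // within double's ~15-16 digits: survives

        System.out.println(f); // prints 100000.0
        System.out.println(d); // prints 99999.999999
    }
}
```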
There's a lot more to it than that. You will find that there are certain numbers which simply can't be represented exactly by a float/double -- for example 0.1, which has no finite binary expansion (much as 1/3 has no finite decimal expansion).
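A quick demonstration of the representation problem, runnable as-is:

```java
public class BinaryFractionDemo {
    public static void main(String[] args) {
        // 0.1 and 0.2 are stored as the nearest binary fractions,
        // so the error shows up in the very first addition.
        System.out.println(0.1 + 0.2);        // prints 0.30000000000000004
        System.out.println(0.1 + 0.2 == 0.3); // prints false
    }
}
```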
If you know you need 5 digits before the decimal place, and 6 after, I would suggest you just use a fixed-point representation as a long. i.e. when reading numbers into the system from a user, multiply them by 1000000 and store them in a long, and when displaying them, divide them back again. If there's a database, I would just store the long. If there are files, I would store the long, unless they are for interchange with another system which is expecting the decimal value.
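A minimal sketch of that suggestion, assuming the 5-before/6-after shape discussed above (the parse/format helper names are hypothetical; BigDecimal appears only at the edges, for exact decimal conversion):

```java
import java.math.BigDecimal;

public class FixedPoint6dp {

    // Parse user input such as "99999.999999" into micro-units (value * 1,000,000).
    static long parse(String s) {
        return new BigDecimal(s).movePointRight(6).longValueExact();
    }

    // Format micro-units back into a decimal string for display.
    static String format(long microUnits) {
        return BigDecimal.valueOf(microUnits).movePointLeft(6).toPlainString();
    }

    public static void main(String[] args) {
        long a = parse("99999.999999");
        long b = parse("0.000001");
        System.out.println(format(a + b)); // prints 100000.000000
    }
}
```

A nice side effect of longValueExact is that input with more than six decimal places throws ArithmeticException instead of being silently truncated.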
The long takes up 8 bytes (the same as a double) but gives you absolute precision and accuracy for the domain you care about, which a double won't. Plus, you won't generate any garbage the way you would with BigDecimal.
Forget all about this micro-optimization immediately. The original designer(s) did well to use BigDecimal for money amounts: I would bet the last thing they would want would be introduction of floating-point into the system. Floating-point just isn't an appropriate way of representing money, and the conversion BigDecimal->float->BigDecimal, or BigDecimal->double->BigDecimal isn't lossless. Nobody will thank you for literally losing money while saving a few bytes here and there.
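The loss is easy to demonstrate. The new BigDecimal(double) constructor captures the exact binary value of the double, so it shows precisely what a round trip through floating-point does:

```java
import java.math.BigDecimal;

public class LossyRoundTrip {
    public static void main(String[] args) {
        BigDecimal original = new BigDecimal("0.1");

        // BigDecimal -> double -> BigDecimal
        double asDouble = original.doubleValue();
        BigDecimal roundTripped = new BigDecimal(asDouble);

        System.out.println(roundTripped);
        // prints 0.1000000000000000055511151231257827021181583404541015625
        System.out.println(original.equals(roundTripped)); // prints false
    }
}
```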