If this is not the correct forum for this subject, please direct me to the correct forum.
I have many lists (2k–50k elements each) of DTO objects that carry monetary information. Each DTO currently has 10 fields that are BigDecimal, holding financial/monetary values.
The problem is that these BigDecimal fields seem to double the memory footprint compared to an equivalent Float (object) approach, and triple it compared to a float (primitive) approach.
Is there a definitive reference that discusses the precision of **transporting** these values from the database all the way to the client, where there is **no** computation happening? I feel like I'm paying a lot in memory and serialization overhead for computational accuracy, when all I need is to transport the data without losing precision.
Please let me know ASAP if someone has input, thanks!
Float and float (and Double and double) have well-defined limits on both their precision and their range. If you are OK with those limits, you should probably be using one of them throughout your application. Since you're using BigDecimal, you are presumably NOT OK with these limits. Converting your BigDecimal to a float/double could well cause you to lose data irretrievably. So, you need to stick in the BigDecimal realm.
Or... use fixed point. For example, if you are working in UK pounds, or US dollars, you know you only need 2dp, so you could use an int or a long that represents pence/cents. This gives you the best of both worlds - no objects/garbage, total precision (subject to an upper limit on the number you can represent) and a small network/memory footprint.
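As a minimal sketch of that fixed-point idea (the class and method names here are just illustrative, not from any library):

```java
import java.math.BigDecimal;

// Sketch: represent US dollars / UK pounds as integer cents/pence in a long.
public final class MoneyCents {
    private MoneyCents() {}

    // Parse a decimal string such as "12.34" into cents.
    public static long toCents(String amount) {
        return new BigDecimal(amount)
                .movePointRight(2)
                .longValueExact(); // throws ArithmeticException on sub-cent input
    }

    // Format cents back into a decimal string for display.
    public static String fromCents(long cents) {
        return BigDecimal.valueOf(cents).movePointLeft(2).toPlainString();
    }
}
```

So `toCents("12.34")` yields `1234`, and `fromCents(1234)` yields `"12.34"` again, with no floating-point anywhere in between.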
I forgot to mention my precision/accuracy expectations. The BigDecimal usage comes from this being an inherited application, which I am now reviewing for performance optimizations (particularly on the data transport layer, where computations are not needed).
These values need to be accurate to 6 decimal places, and the integer portion will likely never exceed 5 digits (so 99999.999999 is the maximum business value needed). In researching float, I cannot find a definitive reference on its maximum value and its precision/accuracy, leaving me in a limbo state :-).
Sorry for not posting that piece of information earlier, it's obviously important!
There's a lot more to it than that. You will find that there are certain numbers which just can't be represented by a float/double. For example 1/3.
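To make that gap concrete, here is a quick illustration (a throwaway class, purely for demonstration) that even the decimal value 0.1 has no exact binary floating-point form:

```java
import java.math.BigDecimal;

// Passing a double to the BigDecimal constructor exposes the double's
// true binary value, rather than the decimal literal you typed.
public class RepresentationGap {
    public static void main(String[] args) {
        // The double literal 0.1 is really a nearby binary fraction:
        System.out.println(new BigDecimal(0.1));
        // prints 0.1000000000000000055511151231257827021181583404541015625

        // Constructing from the String "0.1" keeps the exact decimal value:
        System.out.println(new BigDecimal("0.1")); // prints 0.1
    }
}
```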
If you know you need 5 digits before the decimal place, and 6 after, I would suggest you just use a fixed-point representation as a long. i.e. when reading numbers into the system from a user, multiply them by 1000000 and store them in a long, and when displaying them, divide them back again. If there's a database, I would just store the long. If there are files, I would store the long, unless they are for interchange with another system which is expecting the decimal value.
The long takes up 8 bytes (same as a double) but will give you absolute precision and accuracy for the domain you care about, which a double won't. Plus, you won't generate any garbage like you would with BigDecimal.
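Under the 5-digits-before / 6-digits-after assumption above, that conversion might look something like this (a hypothetical helper, not an existing API):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

// Sketch: fixed-point "micros" with exactly 6 decimal places stored in a long.
public final class FixedPoint6 {
    private FixedPoint6() {}

    // Convert a BigDecimal (e.g. fresh from the database) into micros.
    public static long toMicros(BigDecimal value) {
        // RoundingMode.UNNECESSARY makes any value with more than 6 decimal
        // places fail fast instead of being silently rounded.
        return value.setScale(6, RoundingMode.UNNECESSARY)
                    .movePointRight(6)
                    .longValueExact();
    }

    // Convert micros back into a BigDecimal for display/interchange.
    public static BigDecimal fromMicros(long micros) {
        return BigDecimal.valueOf(micros, 6);
    }
}
```

With the stated maximum of 99999.999999, the largest stored value is 99,999,999,999 micros, far inside the range of a long (about 9.2 × 10^18).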
Forget all about this micro-optimization immediately. The original designer(s) did well to use BigDecimal for money amounts: I would bet the last thing they would want would be introduction of floating-point into the system. Floating-point just isn't an appropriate way of representing money, and the conversion BigDecimal->float->BigDecimal, or BigDecimal->double->BigDecimal isn't lossless. Nobody will thank you for literally losing money while saving a few bytes here and there.
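To see the lossiness directly, here is a small demo (just illustrative code) using the poster's own maximum business value:

```java
import java.math.BigDecimal;

// Demonstrates the lossy BigDecimal -> float -> BigDecimal round trip.
public class LossyRoundTrip {
    public static void main(String[] args) {
        BigDecimal original = new BigDecimal("99999.999999");
        // float only carries about 7 significant decimal digits, and near
        // 100000 adjacent floats are 0.0078125 apart, so the nearest float
        // to 99999.999999 is exactly 100000.0.
        float asFloat = original.floatValue();
        BigDecimal back = new BigDecimal(Float.toString(asFloat));
        System.out.println(original + " -> " + back); // 99999.999999 -> 100000.0
    }
}
```

The six decimal digits are gone: real money lost for the sake of four saved bytes per field.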