There are many use cases, but the big reason I personally prefer classes over primitives is to detect programming errors. My rule of thumb is to always use classes instead of primitives for instance variables. For example, say you have a class with an int variable called "length." While you're debugging, you notice that the value of "length" is zero. The problem is: you can't tell whether the length really IS zero, or your code just forgot to initialize it. If length were an Integer instead of an int, there would be no doubt about whether it was initialized. If it's null, it wasn't initialized.
An even worse scenario is that you DON'T notice that "length" is zero, and your program runs without crashing. Because the int length was never initialized, the program silently produces incorrect output and wrong behavior. But you don't KNOW it's wrong, because it SEEMS to run smoothly. If the length variable had been declared as an Integer instead of an int, your program would probably have crashed the first time the variable was used, because it was null (it would most likely throw a NullPointerException).
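Here's a minimal sketch of both scenarios. The `MeasurementInt` and `MeasurementBoxed` classes and their `length` field are hypothetical names, just for illustration:

```java
// Hypothetical classes for illustration; the field name "length"
// matches the example above and is not from any real API.
class MeasurementInt {
    int length;      // primitive: silently defaults to 0
}

class MeasurementBoxed {
    Integer length;  // boxed: defaults to null, so a forgotten init is detectable
}

public class Demo {
    public static void main(String[] args) {
        MeasurementInt a = new MeasurementInt();
        // We "forgot" to set a.length — the program keeps going with a bogus 0.
        System.out.println(a.length * 2);     // prints 0: silently wrong

        MeasurementBoxed b = new MeasurementBoxed();
        System.out.println(b.length == null); // prints true: the bug is visible
        try {
            int doubled = b.length * 2;       // auto-unboxing null fails fast
            System.out.println(doubled);
        } catch (NullPointerException e) {
            System.out.println("caught NPE: length was never initialized");
        }
    }
}
```

The primitive version computes garbage without complaint; the boxed version either shows you the null in the debugger or blows up at the first use, which is exactly what you want during development.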
For probably well over 95% of the apps you'll ever see, there is no significant performance hit from using a class instead of a primitive. In a corporate environment, for example, the bottlenecks are usually database and network latency, not CPU time.
So back to my rule of thumb: Prefer classes over primitives whenever possible.