Jrm wrote:this is a dead end. this is exactly DRM. the idea behind DRM is that you want to give someone a locked box as well as the keys to open the box, but you want to restrict how and when they open the box. you are trying to run a program (the key to the locked box) on a user's computer that works with sensitive data (the locked box), but you only want your code to be able to open the box (DRM).
Thanks for the insight.
I am not trying to implement DRM, but my application does have to be installed on end users' PCs. Let me explain my issue a little more:
i) I have API interfaces and abstract classes. At runtime, I dynamically select a corresponding implementation (for example via a fully qualified class name read from a system property) and use reflection to instantiate the implementing class.
ii) Now, some of these APIs manipulate sensitive information. I would not want someone to substitute their own implementation in order to steal that sensitive information.
From what you say, does it mean that even if I restrict access to reflection (and to system properties), users can still bypass any security mechanism and perform reflection anyway? Or is there a way to prevent this, for example by using CodeSource and SecureClassLoader objects? Should I spend time investigating and learning about this, or is it a dead end?
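For reference, point i) can be sketched like this (the `Codec` interface, the `Base64Codec` class, and the `app.codec.impl` property key are made-up names for illustration, not my real code):

```java
// Sketch of selecting an implementation by fully qualified name from a
// system property and instantiating it reflectively. All names here are
// hypothetical.
public class PluginLoader {
    public interface Codec {
        String encode(String plain);
    }

    public static class Base64Codec implements Codec {
        @Override
        public String encode(String plain) {
            return java.util.Base64.getEncoder()
                    .encodeToString(plain.getBytes(java.nio.charset.StandardCharsets.UTF_8));
        }
    }

    // Reads the fully qualified class name from a system property and
    // instantiates whatever class it names, as described in point i).
    public static Codec load() throws Exception {
        String fqn = System.getProperty("app.codec.impl",
                Base64Codec.class.getName());
        Class<?> cls = Class.forName(fqn);
        return (Codec) cls.getDeclaredConstructor().newInstance();
    }

    public static void main(String[] args) throws Exception {
        Codec c = load();
        System.out.println(c.encode("secret")); // prints: c2VjcmV0
    }
}
```

Anyone who controls the JVM launch can pass `-Dapp.codec.impl=some.Evil` and get their own class instantiated instead, which is exactly the substitution I am worried about.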
Jrm wrote:First of all, the SecurityManager is provided by the local computer, not the applet. But, the most important point is that the SecurityManager used when running third-party applet code is not trying to protect the third-party code, it is trying to protect the local computer from unknown third-party code. the user is perfectly able to disable the SecurityManager and/or give the third-party code whatever permissions it desires if they decide to trust the code. you are trying to protect your code (which is the third-party code with respect to the user) from the user. that is the opposite situation, and does not work.
Ok, fair enough regarding storing data on end user PC.
But I see a contradiction here (or I misread you). I understand that SecurityManagers are used with applets to restrict some of their actions. If people are able to bypass SecurityManagers, what is the point of having them? If a .jar application is started with a SecurityManager, can an end user strip it out and replace it with their own security manager (from their own code, for example)?
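To make the question concrete, here is a minimal sketch (my own illustration, not from any framework) showing that the SecurityManager is just an object managed by the JVM that was launched on the user's machine:

```java
public class ManagerCheck {
    // Describes whatever SecurityManager the running JVM has installed.
    // By default there is none; whoever launches the JVM decides.
    public static String describe() {
        SecurityManager sm = System.getSecurityManager();
        return sm == null
                ? "no SecurityManager installed"
                : "SecurityManager: " + sm.getClass().getName();
    }

    public static void main(String[] args) {
        System.out.println(describe());
        // On JDKs that still permit it, code with sufficient permissions
        // can call System.setSecurityManager(...) to swap in its own
        // manager -- the application author has no final say.
    }
}
```

So the manager restricts what untrusted code can do inside a JVM, but the person who controls the JVM controls whether and which manager runs at all. (Note also that the SecurityManager API is deprecated in recent Java versions.)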
I would be happy if I could deliver a .jar application with my customized and 'unremovable' SecurityManager. Is that possible, or can one always fiddle with the .jar to remove it?

Jrm wrote:As i said in my previous post, there is no way to stop this. as a software developer, i'm sure you are aware that you can find "cracked" versions of any commercial software that you are interested in (if you know where to look). what makes you think that your java program is any more "secure" than those other programs?
Because if people can always remove it, that is a permanent open door for man-in-the-middle attacks whenever code is delivered to end users, correct? Is there any way to protect a .jar from tampering?
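For what it's worth, jar signing detects tampering by comparing per-entry digests against the signed manifest. A minimal sketch of the digest idea (illustrative only; the real manifest format is handled by `jarsigner` and the runtime):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class DigestCheck {
    // Computes the SHA-256 digest of the given bytes as a hex string.
    // In a signed jar, a digest like this is recorded per entry; at load
    // time the runtime recomputes and compares. But note: if the check
    // runs on the user's machine, a determined user can patch out the
    // check itself.
    public static String sha256Hex(byte[] data) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest(data)) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        byte[] original = "class bytes".getBytes(StandardCharsets.UTF_8);
        byte[] tampered = "class bytez".getBytes(StandardCharsets.UTF_8);
        // Any change to the bytes changes the digest:
        System.out.println(sha256Hex(original).equals(sha256Hex(tampered))); // prints: false
    }
}
```

So signing lets a *verifier* detect modified entries; it does not stop someone from stripping the signature and the verification together, which is Jrm's point.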
Jrm wrote:sorry, i think my comment sounded harsher than i meant it. what i meant to say, was that if you think about how hard it is in general to stop (dedicated) people from manipulating programs to remove "unwanted bits", it should be fairly obvious that java programs will fare no better.
I am exploring this issue; I am not assuming that my code would necessarily be any more 'secure'. Just trying to learn something.
The conclusion I make is:
- The SecurityManager and jar signing will reduce the use of undesirable code on anyone's PC, but won't prevent people with bad intentions from tampering with the code.

Jrm wrote:exactly.

- That's as much as I can hope and expect, and I should live with that.

Jrm wrote:pretty much.