When make is running, there isn't enough swap space to let us reserve 2GB for the Java object heap. We try to mmap the space and fail, so we print that error message. When make isn't running, we are able to mmap the 2GB and we are happy. I'm sort of guessing here, based on a lot of experience. You could use something like vmstat to watch your available swap space.
If you are running that close to the edge of running out of swap space (well, 500MB isn't that close to the edge), you will have to be careful about what else you run, even if you are able to get your Java application started. You don't say how much physical memory (and swap space) you have, but if you don't have enough physical memory to run your Java application, you won't like the performance you get (paging on each reference to the heap is painful).
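If you'd rather check from Java than from vmstat, the same swap counters are visible in /proc/meminfo. A minimal sketch, assuming a Linux /proc filesystem; the class name SwapCheck and the helper isInteresting are my own:

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class SwapCheck {
    // Lines in /proc/meminfo relevant to the "not enough swap to mmap the heap" diagnosis.
    static boolean isInteresting(String line) {
        return line.startsWith("SwapTotal") || line.startsWith("SwapFree")
                || line.startsWith("MemFree");
    }

    public static void main(String[] args) throws IOException {
        // vmstat reads these same kernel counters; we just print the raw lines.
        BufferedReader r = new BufferedReader(new FileReader("/proc/meminfo"));
        try {
            String line;
            while ((line = r.readLine()) != null) {
                if (isInteresting(line)) {
                    System.out.println(line);
                }
            }
        } finally {
            r.close();
        }
    }
}
```

Running this before and after starting make should show SwapFree (and MemFree) dropping, which would confirm the theory above.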
Thank you for the input.
I have 2.5GB of RAM + another 768MB of swap space.
Right now, when trying to run the make command, system has 2.1GB of RAM free and 740MB of swap free.
vmstat gives the following output:

procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd    free   buff  cache   si   so   bi   bo   in   cs  us sy id wa st
 0  0  25188 2225148   4924  81980    0    1   27   53 1160  807  12  2 85  1  0

I made some more experiments. With make running, I can start the program with -Xmx1840M (-Xmx1850M fails); without make, I can run it with -Xmx2620M.
The difference seems to be the whole swap space... I tried the experiment without swap mounted and the results are the same.
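One way to double-check these experiments is to print what heap ceiling the JVM actually obtained, rather than inferring it from whether startup fails. A minimal sketch; the class name HeapReport is my own:

```java
public class HeapReport {
    public static void main(String[] args) {
        // maxMemory() reports the largest heap this JVM will attempt to use,
        // i.e. roughly the effective -Xmx value for this process.
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.println("Max heap (MB): " + maxBytes / (1024 * 1024));
    }
}
```

For example, `java -Xmx1840M HeapReport` should report a value close to 1840 when the reservation succeeds.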
Message was edited by:
added results when swap not mounted
I have the same problem too.
I'm using Fedora Linux with jdk1.5.0_09 on a Dell with two Intel(R) Xeon(R) CPU 5130 @ 2.00GHz processors and 4151240 kB of RAM.
java -Xmx only gets up to 2630m, preventing me from using more than this.
Any suggestions to break the wall?
The basic solution is to move to a 64-bit system.
While you may try to fight for a hundred MB here and there, the general problem is that under Linux (and most other 32-bit operating systems) there is a limit on the maximum memory one process can address. I think (but I'm not an expert here) that on Linux you get about 3GB for everything (code, libraries, heap, stack, etc.), and there are limits on how that space can be split between these parts.
I recall seeing a kernel patch which allowed pushing this limit a bit, but generally, if you are this close to the border it is time to think about either changing the approach (more JVMs, a cluster, etc.) or moving to big heaps (64-bit systems or an Azul appliance).
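Before fighting for those extra megabytes, it's worth confirming whether the JVM itself is running as a 32-bit or 64-bit process. A minimal sketch; note that sun.arch.data.model is a Sun JVM property and may be missing on other vendors' JVMs, which is why a default is supplied:

```java
public class ArchCheck {
    public static void main(String[] args) {
        // os.arch describes the platform the JVM was built for (e.g. i386, amd64).
        System.out.println("os.arch        = " + System.getProperty("os.arch"));
        // sun.arch.data.model reports the JVM's pointer width (32 or 64) on Sun JVMs.
        System.out.println("JVM data model = "
                + System.getProperty("sun.arch.data.model", "unknown") + "-bit");
    }
}
```

If this reports 32-bit even on 64-bit hardware, installing a 64-bit JDK is the first step; the ~3GB per-process ceiling only goes away once both the OS and the JVM are 64-bit.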