So I'm kind of excited by this since I'm an undergrad, and I'm hoping this really is a bug, meaning I've actually found a bug in the JVM!
So, the scary message that showed up in standard error:
# A fatal error has been detected by the Java Runtime Environment:
# EXCEPTION_ACCESS_VIOLATION (0xc0000005) at pc=0x000000005ec13ae1, pid=328, tid=68
# JRE version: 7.0_25-b17
# Java VM: Java HotSpot(TM) 64-Bit Server VM (23.25-b01 mixed mode windows-amd64 compressed oops)
# Problematic frame:
# C [zip.dll+0x3ae1] Java_java_util_zip_ZipFile_getZipMessage+0x1749
# Failed to write core dump. Minidumps are not enabled by default on client versions of Windows
# If you would like to submit a bug report, please visit:
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
I've got 6 or 7 of those log files generated; I've uploaded the most recent one as a gist here:
I can reliably reproduce this crash (the exact stack frame changes slightly each time, but it is always triggered by OptimizerNode.run() or one of the threads it spawns). Our application flow is one in which you, as a user, navigate through various dialogs and popups, setting up variables and configuring the program for a run. When you hit the button to start the run, a huge chunk of memory is allocated, a bunch of threads are spawned, and delegates are run. The program consumes an order of magnitude more system resources, in both CPU time and memory, once you hit the "run" button.
Shortly after hitting the run button, you'd see our spinner, sometimes a glimpse of the Processing PApplet doing some UI work, and then the program would hard-crash with the above message on stderr.
The good news is that I've found at least a workaround for our dev team, and possibly for your production environment (if you, reader, are hitting this error too):
Increasing the initial heap size (by passing -Xms128m to the JVM at launch; I believe the default is 64m) results in the program running just fine.
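To check whether the flag actually took effect, you can ask the JVM itself what it thinks its heap limits are. This is a minimal sketch (the class name HeapReport is mine, not from our code base); run it with and without -Xms128m and compare the numbers:

```java
public class HeapReport {
    // Heap currently committed by the JVM, in megabytes (grows toward max;
    // starts near the -Xms value).
    public static long initialHeapMb() {
        return Runtime.getRuntime().totalMemory() / (1024 * 1024);
    }

    // Upper bound the JVM will ever use, in megabytes (the -Xmx value).
    public static long maxHeapMb() {
        return Runtime.getRuntime().maxMemory() / (1024 * 1024);
    }

    public static void main(String[] args) {
        System.out.println("committed=" + initialHeapMb() + "m max=" + maxHeapMb() + "m");
    }
}
```

For example, `java -Xms128m HeapReport` should report a committed heap of roughly 128m.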
This is only a workaround for us, because we wrote this program expecting it to scale nicely from reasonably small problems (on the order of 200 MB) up to huge ones (on the order of dozens of GB of memory). We don't have the hardware or the data sets in our development environment to test our code on multi-gig inputs, but I suspect it will fail the same way there, even with 128m, or even up to 1024m, as the initial heap size. There is no realistic flow in which the user can tell us, before we start the JVM, how big their data will balloon once they hit the run button. (That is, for us, this is not a production-ready workaround.)
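One direction we could take, sketched below purely as an idea (the Relauncher class, the sizing heuristic, and com.example.RunMain are all my inventions, not our actual code): a small launcher JVM that estimates the data size after the user configures the run, then spawns the real application in a child JVM with heap flags computed from that estimate:

```java
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class Relauncher {
    // Build the child-JVM command line, sizing the heap from an estimate of
    // the problem's memory footprint. The 2x headroom factor and 128m floor
    // are placeholder assumptions, not measured values.
    static List<String> buildCommand(long estimatedDataMb, String mainClass) {
        long heapMb = Math.max(128, estimatedDataMb * 2);
        List<String> cmd = new ArrayList<>();
        cmd.add(System.getProperty("java.home") + File.separator
                + "bin" + File.separator + "java");
        cmd.add("-Xms" + heapMb + "m");
        cmd.add("-Xmx" + heapMb + "m");
        cmd.add("-cp");
        cmd.add(System.getProperty("java.class.path"));
        cmd.add(mainClass);
        return cmd;
    }

    public static void main(String[] args) throws IOException, InterruptedException {
        // Hypothetical entry point: relaunch the real app once the user's
        // configuration tells us roughly how big the run will be.
        Process child = new ProcessBuilder(buildCommand(512, "com.example.RunMain"))
                .inheritIO()
                .start();
        System.exit(child.waitFor());
    }
}
```

The catch, of course, is that this only helps if we can estimate the footprint up front, which is exactly what our current flow doesn't give us.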
This leads me to believe there's a bug in the memory-allocation code in and around the class loaders.
Thoughts? Can (or should) I post this to the Java Community Process page as a bug?