
Java HotSpot Virtual Machine


JNI - (NT) debugging across JNI from Java host process

843829 · Sep 20 2010 (edited Sep 21 2010)
Hi,

we have been using a C library from a Java host process through a JNI bridge for a while, and this has worked like a charm so far. We are now moving to a new internal version of the C library, and some of the Java unit (or integration) tests covering this integration now fail with a JVM crash (a typical EXCEPTION_ACCESS_VIOLATION (0xc0000005)). No doubt the native code is at fault, not the JNI bridge, which is of limited complexity and has not changed while the underlying C library has.

We do know how to debug the native code (setting breakpoints, etc.); unfortunately the problem occurs only after a rather long execution path in which the C code that now fails is called many times during the 'unit' tests. Rewriting the same tests directly in C would be quite expensive at this stage.

Question: is there a way to make sure the 'native' debugger (VC++ in this instance) intercepts the native crash like any other application crashing under NT? The crash seems to be rather inappropriately handled by the JVM, which produces an hs_xxx.log and exits, thus preventing the debugger from being triggered on the JVM crash: this behaviour is of little value in this particular case.

This is NT, Sun JVM 1.6 (_06 in this particular case; not sure if this helps much).
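For reference, HotSpot has switches that change its fatal-error handling so a native debugger can take over; both flags below exist in Sun JVM 1.6, but the class name and debugger path are placeholders, not part of the original setup:

```shell
# Pause the JVM on a fatal native error instead of writing
# hs_err_pid*.log and exiting. On Windows this shows a dialog box,
# giving you time to attach VC++ to the paused process.
# (com.example.JniTests is a hypothetical test entry point.)
java -XX:+ShowMessageBoxOnError -cp build\classes com.example.JniTests

# Alternatively, run an arbitrary command when a fatal error occurs;
# %p is replaced by the pid of the crashing JVM.
java -XX:OnError="windbg -p %p" -cp build\classes com.example.JniTests
```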

Many thanks in advance for any useful hints,

Adrien

Comments

Solomon Yakobson
Answer

2727166 wrote:

I am getting ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired.

Is this because I can't do both at the same time? Please let me know.

Yes, ALTER TABLE requires an exclusive lock.
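A minimal sketch of how the error arises (the table and partition names here are hypothetical, not from the original thread):

```sql
-- Session 1: uncommitted DML keeps a TM lock on the table.
insert into t1 values (1);

-- Session 2: any DDL against t1 needs an exclusive lock and, since
-- DDL does not wait by default, it fails immediately:
alter table t1 truncate partition p1;
-- ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired
```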

SY.

Marked as Answer by Arunadeepti Jakka · Sep 27 2020
Arunadeepti Jakka

Hi Solomon,

Thank you. Is there an alternative way to do this without locking the table?

I can't use DELETE, as the data volume is very large and it takes too long.

Thanks

Solomon Yakobson

You can try increasing DDL_LOCK_TIMEOUT:

alter session set DDL_LOCK_TIMEOUT = ...

but it doesn't guarantee your session will acquire the lock within that DDL_LOCK_TIMEOUT.
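For instance (the table and partition names are hypothetical, and 30 seconds is just an illustrative value; the parameter is in seconds, default 0):

```sql
-- Let DDL in this session wait up to 30 seconds for the exclusive
-- lock instead of failing immediately with ORA-00054.
alter session set ddl_lock_timeout = 30;

alter table t1 truncate partition p1;
```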

SY.

Jonathan Lewis

Are you using partition-extended syntax to insert into the subpartition?

Which version are you on?

Demo from 12.2.0.1

Session 1:

SQL> select table_name, partition_name, subpartition_name, num_rows from user_tab_subpartitions;

TABLE_NAME           PARTITION_NAME       SUBPARTITION_NAME      NUM_ROWS
-------------------- -------------------- -------------------- ----------
PT_COMPOSITE_1       P2                   SYS_SUBP44402                50
PT_COMPOSITE_1       P2                   SYS_SUBP44403                49
PT_COMPOSITE_1       P2                   SYS_SUBP44404               100
PT_COMPOSITE_1       P2                   SYS_SUBP44405               200
PT_COMPOSITE_1       P3                   SYS_SUBP44406               100
PT_COMPOSITE_1       P3                   SYS_SUBP44407               150
PT_COMPOSITE_1       P3                   SYS_SUBP44408                50
PT_COMPOSITE_1       P3                   SYS_SUBP44409               100

...

SQL> insert into pt_composite_1 subpartition (SYS_SUBP44408) select * from temp;

50 rows created.

-- note 44408 is from partition p3

-- go to session 2 and truncate another subpartition of p3

SQL> alter table pt_composite_1 truncate subpartition SYS_SUBP44406;

Table truncated.

No problem.

If you are inserting data into the table "knowing" that it will go into a specific segment, but without identifying that segment explicitly, Oracle has to be much more aggressive about locking.
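A sketch of that contrasting case, using the same table and subpartition names as the demo above (the assumption here is that the inserted rows happen to route to one subpartition, but Oracle cannot know that up front):

```sql
-- Without partition-extended syntax, Oracle discovers the target
-- segment only row by row, so it must take more restrictive locks:
insert into pt_composite_1 select * from temp;

-- A concurrent truncate of a sibling subpartition can then hit
-- ORA-00054, whereas naming the segment explicitly avoids it:
insert into pt_composite_1 subpartition (SYS_SUBP44408) select * from temp;
```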

Regards

Jonathan Lewis


Post Details

Locked on Oct 19 2010
Added on Sep 20 2010
3 comments
343 views