
Weblogic Eclipselink Sequence Table Connection Pool Sequence Separate transaction while JTA on main

sono99 · Nov 9 2014 (edited Nov 9 2014)

Hi,

And thanks in advance for your support.

In WebLogic 12, getting EclipseLink's sequencing mechanism to work when entities use tables for id generation seems to be complicated.

QUICK REFERENCE:

http://www.eclipse.org/eclipselink/api/2.5/org/eclipse/persistence/config/PersistenceUnitProperties.html

The concept:

While having EJBs, MDBs, etc. run on a Java EE container, be it GlassFish or WebLogic, it should be possible to have the main thread's transaction managed by the container as part of a JTA global transaction.

Namely, pumping messages to JMS queues, persisting entities etc.

Meanwhile, it should also be possible, while that transaction is ongoing, to read and update entity ids from the sequencing tables.

For this very purpose, EclipseLink provides persistence.xml properties, such as the now-deprecated eclipselink.jdbc.sequence-connection-pool set to "true".

This option largely avoids deadlocks by allowing EclipseLink to fetch a connection that is not JTA-managed, go to the database, and run its pseudo two-phase-locking "read table, update table" cycle to fetch a new sequence value.

The same mechanism under JTA is a disaster. A transaction that creates ten different entities might do ten reads and updates on this table, while a competing transaction might be trying to do the same. It is a guaranteed deadlock with minimal stress on the environment.
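The allocation cycle behind that contention can be sketched in plain Java (an illustration only, not EclipseLink's actual code; the ID_GEN / SEQ_NAME / SEQ_VALUE names match the SQL that shows up in the deadlock stack trace below). The key point is that each refill is an UPDATE followed by a SELECT on the same sequence row; when that runs on the JTA-managed connection, the row lock is held until the global transaction commits.

```java
// Simplified sketch of table-based id preallocation.
import java.util.HashMap;
import java.util.Map;

public class TableSequenceSketch {
    private final Map<String, Long> idGen = new HashMap<>(); // stands in for the ID_GEN table
    private final String seqName;
    private final int allocationSize;
    private long next = 0;       // next id to hand out from the cached range
    private long endOfRange = 0; // last id of the cached range

    public TableSequenceSketch(String seqName, int allocationSize) {
        this.seqName = seqName;
        this.allocationSize = allocationSize;
        idGen.put(seqName, 0L); // SEQ_VALUE starts at 0
    }

    public long nextId() {
        if (next == 0 || next > endOfRange) {
            // UPDATE ID_GEN SET SEQ_VALUE = SEQ_VALUE + ? WHERE SEQ_NAME = ?
            // (this statement takes the row lock that causes the contention)
            long updated = idGen.merge(seqName, (long) allocationSize, Long::sum);
            // SELECT SEQ_VALUE FROM ID_GEN WHERE SEQ_NAME = ?
            endOfRange = updated;
            next = updated - allocationSize + 1;
        }
        return next++;
    }
}
```

With an allocation size of 50, only every fiftieth id triggers the UPDATE+SELECT round trip; on a separate non-JTA connection each round trip also commits immediately, releasing the lock.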

Under glassfish, for example, tagging a persistence.xml with :

<persistence-unit name="MY_PU" transaction-type="JTA">

        <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>

        <jta-data-source>jdbc/DERBY_DS</jta-data-source>

        <non-jta-data-source>jdbc/DERBY_DS</non-jta-data-source>       

        <properties>           

            <property name="eclipselink.jdbc.sequence-connection-pool" value="true" />

        </properties>

</persistence-unit>

works miracles when entities use TABLE sequencing.
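For context, "TABLE sequencing" here means an id mapping roughly like the following sketch (the generator, table, and column names are assumptions chosen to match the ID_GEN table and MyEntity_Gen generator that appear in the stack traces below):

```java
import javax.persistence.*;

@Entity
public class MyEntity {

    @Id
    @TableGenerator(name = "MyEntity_Gen",
                    table = "ID_GEN",              // the shared sequence table
                    pkColumnName = "SEQ_NAME",     // one row per generator
                    valueColumnName = "SEQ_VALUE", // last allocated value
                    pkColumnValue = "MyEntity_Gen",
                    allocationSize = 50)
    @GeneratedValue(strategy = GenerationType.TABLE, generator = "MyEntity_Gen")
    private long id;
}
```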

Under WebLogic, say you are using the Derby embedded XA driver with two-phase commit; deploying the application immediately leads to:

Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.3.3.v20120629-r11760): org.eclipse.persistence.exceptions.DatabaseException

Internal Exception: java.sql.SQLException: Cannot call commit when using distributed transactions

Error Code: 0

  at org.eclipse.persistence.exceptions.DatabaseException.sqlException(DatabaseException.java:324)

  at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.basicCommitTransaction(DatabaseAccessor.java:426)

  at org.eclipse.persistence.internal.databaseaccess.DatasourceAccessor.commitTransaction(DatasourceAccessor.java:389)

  at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.commitTransaction(DatabaseAccessor.java:409)

  at org.eclipse.persistence.internal.sequencing.SequencingManager$Preallocation_Transaction_Accessor_State.getNextValue(SequencingManager.java:579)

  at org.eclipse.persistence.internal.sequencing.SequencingManager.getNextValue(SequencingManager.java:1067)

  at org.eclipse.persistence.internal.sequencing.ClientSessionSequencing.getNextValue(ClientSessionSequencing.java:70)

  at org.eclipse.persi

While WebLogic is right that there might be a distributed transaction ongoing, it is mistaken in assuming that the connection requested by EclipseLink for generating the id should be part of the global transaction.

EclipseLink provides other ways to configure the sequencing mechanism, for example by stating a non-JTA data source.

I have also attempted these properties, both with the original DERBY_DS data source that uses the XA driver, and later with a new data source I created on purpose to try to work around the sequencing contention.

For example:

<!--property name="eclipselink.jdbc.sequence-connection-pool.nonJtaDataSource" value="jdbc/DERBY_SEQUENCING_NON_JTA" /-->

            <!--property name="eclipselink.connection-pool.sequence.nonJtaDataSource" value="jdbc/DERBY_SEQUENCING_NON_JTA" /-->

This new DERBY_SEQUENCING_NON_JTA data source is explicitly configured to use a non-XA driver with the global-transactions flag disabled.
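For reference, the combination being attempted would look roughly like this (a sketch only; the property name is the one from the commented-out lines above, per the PersistenceUnitProperties javadoc linked at the top, and jdbc/DERBY_SEQUENCING_NON_JTA is the non-XA data source just described):

```xml
<persistence-unit name="MY_PU" transaction-type="JTA">
    <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
    <jta-data-source>jdbc/DERBY_DS</jta-data-source>
    <properties>
        <!-- dedicated sequencing pool, backed by a non-XA data source -->
        <property name="eclipselink.jdbc.sequence-connection-pool" value="true"/>
        <property name="eclipselink.jdbc.sequence-connection-pool.nonJtaDataSource"
                  value="jdbc/DERBY_SEQUENCING_NON_JTA"/>
    </properties>
</persistence-unit>
```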

Regardless, the only thing I get out of this is that the application deploys and is super fast, right up to the point where I stress it with a system test that introduces some degree of concurrency, and then I see the deadlocks on the sequencing table.

Meaning that the ongoing transactions are holding on to their locks on the sequencing table.

Is this a known issue?

Is there something I am missing in the configuration?

It really should not be this difficult to get EclipseLink to run its sequence reads and updates in a transaction separate from the main JTA transaction, but so far it looks impossible.

Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.3.3.v20120629-r11760): org.eclipse.persistence.exceptions.DatabaseException

Internal Exception: java.sql.SQLTransactionRollbackException: A lock could not be obtained within the time requested

Error Code: 30000

Call: UPDATE ID_GEN SET SEQ_VALUE = SEQ_VALUE + ? WHERE SEQ_NAME = ?

  bind => [2 parameters bound]

Query: DataModifyQuery(name="MyEntity_Gen" sql="UPDATE ID_GEN SET SEQ_VALUE = SEQ_VALUE + ? WHERE SEQ_NAME = ?")

Many thanks for your help.

Comments

Paulzip

This looks like a halfway house of a JSON document. Is the actual document JSON?

Paulzip

Here you go, something like this. I'm splitting on the braces, rather than the content.

with data(clb) as (
 select 
'{

adam smith

class:abcd

}

{

xxxyyyy

class:abcd

}

{

zzzz

class:abcd

}'
 from dual 
)
, sections(num, clb, start_, end_) as (
 select level, clb, instr(clb, '{', 1, level), instr(clb, '}', 1, level)
 from data
 connect by instr(clb, '}', 1, level) > 0
)
select num, substr(clb, start_, end_-start_+1) str
from sections
aetl

Hello Paulzip,

Your code is working correctly, but I didn't explain my problem well. Yes, it is actual JSON; here is a sample. I have a JSON file like this. When I tried to insert it, it raised a check constraint error, because there are two "_id" values. If I insert them one id at a time rather than all at once, it works. I want to split the braces at every "_class..." entry.

{
"_id" : "1000099721",
"id" : [
{
"value" : "55555",
"schemeName" : "MSISDN",
"schemeAgencyName" : "xx"
},
{
"value" : "12416408",
"schemeName" : "CustomerId",
"schemeAgencyName" : "xx"
},
{
"value" : "441630",
"schemeName" : "OTP",
"schemeAgencyName" : "xx"
}
],
"_class" : "model.salesorder.SalesOrderVBO"
}
{
"_id" : "1000099721",
"id" : [
{
"value" : "6666",
"schemeName" : "MSISDN",
"schemeAgencyName" : "xx"
},
{
"value" : "12416408",
"schemeName" : "CustomerId",
"schemeAgencyName" : "xx"
},
{
"value" : "441630",
"schemeName" : "OTP",
"schemeAgencyName" : "xx"
},
{
"value" : "ffa357ee-9759-42ab-8a98-30ea9d410319",
"schemeName" : "ShoppingCartId",
"schemeAgencyName" : "xx"
},
{
"value" : "1000099721",
"schemeName" : "OrderId",
"schemeAgencyName" : "xx"
}
],
"_class" : "model.salesorder.SalesOrderVBO"
}

BluShadow

So, as it's actual JSON you're dealing with, have you considered using the Oracle built-in functionality for JSON data?
JSON Developer's Guide

Paulzip

Then why didn't you say it was JSON rather than posting an example that wasn't? It just wastes the time of people like myself.

Which Oracle version?

aetl

Oracle 12.2

Paulzip
Answer

Again, your JSON isn't valid. It's an array, yet you don't have array brackets around it or commas between the array items. You need to take more care in your posts.

with data(jsn) as (
 select 
 '[ 
  {
  "_id":"1000099721",
  "id":[
   {
     "value":"55555",
     "schemeName":"MSISDN",
     "schemeAgencyName":"xx"
   },
   {
     "value":"12416408",
     "schemeName":"CustomerId",
     "schemeAgencyName":"xx"
   },
   {
     "value":"441630",
     "schemeName":"OTP",
     "schemeAgencyName":"xx"
   }
  ],
  "_class":"model.salesorder.SalesOrderVBO"
}
,{
  "_id":"1000099721",
  "id":[
   {
     "value":"6666",
     "schemeName":"MSISDN",
     "schemeAgencyName":"xx"
   },
   {
     "value":"12416408",
     "schemeName":"CustomerId",
     "schemeAgencyName":"xx"
   },
   {
     "value":"441630",
     "schemeName":"OTP",
     "schemeAgencyName":"xx"
   },
   {
     "value":"ffa357ee-9759-42ab-8a98-30ea9d410319",
     "schemeName":"ShoppingCartId",
     "schemeAgencyName":"xx"
   },
   {
     "value":"1000099721",
     "schemeName":"OrderId",
     "schemeAgencyName":"xx"
   }
  ],
  "_class":"model.salesorder.SalesOrderVBO"
}
]'
from dual
)
select c.*
from data d, 
   json_table (
     d.jsn
   , '$[*]'
     columns (
       order_pos for ordinality
     , id varchar2(30) path '$."_id"'
     , nested path '$[*]'
       columns (
         content varchar2(4000 byte) format json path '$'
       ) 
     )
   ) c
/

ORDER_POS    ID               CONTENT
1            1000099721       {"_id":"1000099721","id":[{"value":"55555","schemeName":"MSISDN","schemeAgencyName":"xx"},{"value":"12416408","schemeName":"CustomerId","schemeAgencyName":"xx"},{"value":"441630","schemeName":"OTP","schemeAgencyName":"xx"}],"_class":"model.salesorder.SalesOrderVBO"}
2            1000099721       {"_id":"1000099721","id":[{"value":"6666","schemeName":"MSISDN","schemeAgencyName":"xx"},{"value":"12416408","schemeName":"CustomerId","schemeAgencyName":"xx"},{"value":"441630","schemeName":"OTP","schemeAgencyName":"xx"},{"value":"ffa357ee-9759-42ab-8a98-30ea9d410319","schemeName":"ShoppingCartId","schemeAgencyName":"xx"},{"value":"1000099721","schemeName":"OrderId","schemeAgencyName":"xx"}],"_class":"model.salesorder.SalesOrderVBO"} 


Marked as Answer by aetl · Nov 27 2020
Post Details

Locked on Dec 7 2014
Added on Nov 9 2014