Retail Merchandising System (RMS)
Retail Integration Bus (RIB) 13.1.1
Store Inventory Management (SIM)
Platform: HP-UX Itanium
Platform Version: 11.31
The JMS and SUB hospital retry adapters of RIB-SIM are going down with the following errors:
Caused by: javax.ejb.EJBException: Unable to flush hospital.; Nested exception is: com.retek.platform.persistence.PersistenceException:
Exception Description: Error while Obtaining information about the database. Please look at the nested exception for more details.; Nested exception is: Exception [TopLink-4019] (Oracle TopLink Essentials - 2.0.1 (Build b09d-fcs (12/06/2007))): oracle.toplink.essentials.exceptions.DatabaseException
Exception Description: Error while Obtaining information about the database. Please look at the nested exception for more details.
Caused by: java.sql.SQLException: javax.resource.ResourceException: RollbackException: The resource was in use during abnormal shutdown of server processing, and during subsequent recovery was either unobtainable or failed to have recovery permissions. Consult logs if this issue continues to occur after the next recovery processing interval completes.
at oracle.oc4j.sql.spi.ManagedConnectionImpl.setupTransaction(ManagedConnectionImpl.java:841)
at oracle.oc4j.sql.spi.ConnectionHandle.oc4j_intercept(ConnectionHandle.java:305)
at oracle_jdbc_driver_LogicalConnection_Proxy.getMetaData()
The following note (which basically amounts to restarting the instances) was attempted but did not resolve the issue:
"Why Are The JMS Hospital Retry And Sub Hospital Retry TAFR Adapters Shutting Down? [ID 1339893.1]"
The customer carried out the following actions, with the same results each time:
1. Restarted only the RIB-SIM container. The error persists.
2. Restarted all RIB applications (opmnctl). The error persists.
3. Restarted all RIB containers, shut down the RIB database, then started the RIB database and the containers again. The error persists.
Try the steps below.
1) opmnctl stopall.... after this, make sure OPMN is not running.
2) If this is a production system, back up all the XML files you find in the log folder.
3) Delete the XML files from the RIB log folder.
Let me know if the error happens again.
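The back-up-then-delete suggested above can be sketched in shell. The paths here are assumptions: RIB_LOG and BACKUP default to scratch directories so the sketch can be tried safely; on the real system RIB_LOG would point at the RIB log folder (e.g. under $ORACLE_HOME/j2ee/rib-sim), and the commands should only run after `opmnctl stopall` with OPMN confirmed down.

```shell
#!/bin/sh
# Hedged sketch of the suggested cleanup; RIB_LOG/BACKUP are assumed paths.
RIB_LOG=${RIB_LOG:-/tmp/rib-log-demo}
BACKUP=${BACKUP:-/tmp/rib-xml-backup}

mkdir -p "$RIB_LOG" "$BACKUP"
touch "$RIB_LOG/hospital-state.xml"   # demo file standing in for adapter XML

# Back up every XML file (step 2), then delete it from the log folder (step 3)
for f in "$RIB_LOG"/*.xml; do
  [ -e "$f" ] || continue             # nothing matched the glob
  cp "$f" "$BACKUP"/ && rm "$f"
done
echo "xml files backed up to $BACKUP; log folder cleaned"
```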
I carried out your instructions but the problem persists, so that was not the cause. The connection to the RIB database through Oracle Application Server completes successfully. The instruction from Oracle Support is to do a complete stop/start of the application server, but I do not think that will solve anything.
Regarding your suggestion of article 1087537.1, I see that it is for version 13.2.4, and ours is 13.1.1. Even though the hospital is empty (RIB_MESSAGE and RIB_MESSAGE_FAILURE), these adapters go down. The strange thing is that last Wednesday the problem was solved by doing a switchover of the RIB application node (our application runs under Serviceguard). Yesterday one of the nodes was taken down for maintenance, and the adapters went down again.
I can see this issue was first reported for 13.0.2 and later in 184.108.40.206 & 13.2.4. But this issue should not occur if the hospital is empty. Can you please provide the error you are getting when the hospital is empty?
Sorry, there are errors in the hospital, but their ATTEMPT_COUNT = 1, i.e. they have not even been processed, because the adapter is down. What surprises me now is that I have RIB_MESSAGE_FAILURE errors of this type:
"Caused by: java.sql.SQLException: Attempt to use an invalid handle 'oracle_jdbc_driver_LogicalConnection_Proxy@bb5e2d'."
The JMS log tells me the following:
Caused by: javax.resource.ResourceException: RollbackException: The resource was in use during abnormal shutdown of server processing, and during subsequent recovery was either unobtainable or failed to have recovery permissions. Consult logs if this issue continues to occur after the next recovery processing interval completes.
at com.evermind.server.connector.ConnectionContext.setupForJTATransaction(ConnectionContext.java:374)
at com.evermind.server.connector.ConnectionContext.setupForTransaction(ConnectionContext.java:301)
at com.evermind.server.connector.ConnectionContext.setupForTransaction(ConnectionContext.java:286)
at oracle.j2ee.connector.OracleConnectionManager.lazyEnlist(OracleConnectionManager.java:285)
at oracle.oc4j.sql.spi.ManagedConnectionFactoryImpl.enlist(ManagedConnectionFactoryImpl.java:532)
at oracle.oc4j.sql.spi.ManagedConnectionImpl.setupTransaction(ManagedConnectionImpl.java:839)
It seems that at some point the adapter lost its connection to the database and was left in this error state.
It is tough to understand the issue without having access to the servers :) Even so, I will try.
"The resource was in use during abnormal shutdown of server processing, and during subsequent recovery was either unobtainable or failed to have recovery permissions"
Can you check with the DBA whether the XA configuration steps were executed during installation of RIB, especially the section "Verify that Database XA Resources are Configured for RIB" in rms-131-ig.pdf? Those steps give the XA recovery permissions to the servers.
I checked the section you recommended, and RIBAQ was missing some permissions:
GRANT EXECUTE ON SYS.DBMS_AQ TO RIBAQ;
GRANT EXECUTE ON SYS.DBMS_AQADM TO RIBAQ;
GRANT EXECUTE ON SYS.DBMS_AQIN TO RIBAQ;
GRANT EXECUTE ON SYS.DBMS_AQJMS TO RIBAQ;
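For reference, and as an assumption to verify against the IG section cited above: XA transaction recovery in Oracle also depends on the recovering user being able to read the pending-transaction views, which matches the "failed to have recovery permissions" text in the error. A hedged sketch of those grants (RIBAQ being the schema from the grants above):

```sql
-- Standard grants for Oracle XA transaction recovery; confirm the
-- exact list against rms-131-ig.pdf for your release.
GRANT SELECT ON SYS.DBA_PENDING_TRANSACTIONS TO RIBAQ;
GRANT SELECT ON SYS.PENDING_TRANS$ TO RIBAQ;
GRANT SELECT ON SYS.DBA_2PC_PENDING TO RIBAQ;
GRANT EXECUTE ON SYS.DBMS_SYSTEM TO RIBAQ;
```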
However, after granting these and rebooting, the problem persists. Do you know of any recommendations for running RIB under Serviceguard?
Last night we took certain actions and we hope the fix is permanent. We cleaned up the following and brought the adapters back up:
1. Pending transactions involving RIB and SIM were cleared from the database (dba_2pc_pending).
2. Deleted the *.lock and *.resources files from the folder "$ORACLE_HOME/j2ee/rib-sim/xa-log", including those on the counterpart RIB application node.
3. Cleaned the *.xml files from the folders "$ORACLE_HOME/j2ee/rib-sim/xa-log/OPMN_cmn1ap06.cm.com.ve_rib-rms.default_group.1" and "$ORACLE_HOME/j2ee/rib-sim/xa-log/OPMN_cmn2ap06.cm.com.ve_rib-rms.default_group.1"
4. Deleted the *.log files from the folder "$ORACLE_HOME/j2ee/rib-sim/log/rib-sim_default_group_1"
5. Cleaned up old files from the directory "$ORACLE_HOME/opmn/logs"
6. Wiped the persistence folder "$ORACLE_HOME/j2ee/rib-sim/persistence"
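The xa-log part of the cleanup (steps 1-3) can be sketched as follows. XA_LOG defaults to a scratch directory here so the sketch can be tried safely; on the real system it would be $ORACLE_HOME/j2ee/rib-sim/xa-log, with the OC4J instance stopped first. The dba_2pc_pending part is shown as comments, since clearing it is a DBA action in SQL*Plus (typically via DBMS_TRANSACTION.PURGE_LOST_DB_ENTRY).

```shell
#!/bin/sh
# Hedged sketch of steps 1-3; XA_LOG is an assumed path standing in for
# $ORACLE_HOME/j2ee/rib-sim/xa-log on a stopped instance.
XA_LOG=${XA_LOG:-/tmp/xa-log-demo}

mkdir -p "$XA_LOG"
touch "$XA_LOG/transactions.lock" "$XA_LOG/adapter.resources"  # demo state files

# Step 1 (DBA, in SQL*Plus):
#   SELECT local_tran_id, state FROM dba_2pc_pending;
#   EXEC DBMS_TRANSACTION.PURGE_LOST_DB_ENTRY('<local_tran_id>');

# Steps 2-3: remove transaction-manager state left by the crashed instance
rm -f "$XA_LOG"/*.lock "$XA_LOG"/*.resources "$XA_LOG"/*.xml
echo "xa-log cleared"
```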
Apparently there was some "garbage" in Oracle Application Server: it kept trying to recover certain transactions that remained permanently in "Recovering/Committing" status. Hopefully these actions are durable and this was the true root cause of the problem.
Thanks for your help! :)