Inserting records into a synchronized table is extremely slow

Hi,

We have a 12c Mobile Server on Linux.

On the Mobile Server we created about 30 tables and inserted about 500,000 records.

A script that deletes all these records and inserts new ones completes in 5-6 seconds.

We have now created publication items on these tables.

The script now runs about 10,000 times slower.

What could be the reason for this performance problem?

thanks

Zoltán

Answers

  • userBDBDMS-Oracle
    userBDBDMS-Oracle Member Posts: 787 Employee
    edited Mar 24, 2016 11:45AM

    What is the script doing?

    When the original records are inserted are they inserted on the server side or client side?

    Is there a publication set up when the original records are inserted?

    In general, performance questions are very difficult to deal with over the forum. There is just too much information needed, and there isn't a good mechanism to pass files, etc. over the forum.

  • user6438798
    user6438798 Member Posts: 12
    edited Mar 24, 2016 12:12PM

    All procedures always run on the server side.

    When the original records were inserted no publication was set up.

    We have input tables containing the new records, and the output tables. Publications are defined on the output tables.

    What the script is doing in one transaction:

    • on each table a temporary table is created and the difference is “inserted” here
    • based on the difference the relevant output records are deleted, and the new records are inserted into the output table

    This procedure is efficient if only a small number of records are modified. In our case, however, almost all records had changed, so all records were inserted into the output again.

    What is the proper place to deal with performance questions?

  • userBDBDMS-Oracle
    userBDBDMS-Oracle Member Posts: 787 Employee
    edited Mar 24, 2016 12:31PM

    Once you add publications, you add change tracking to those tables. These are basically triggers that get executed, and there is some overhead associated with them. From your description it seems like your output table is a form of change tracking for your application. As you delete and insert records in the output table, we keep track of the changes internally; we need this information to keep the client side correct. So I would expect a performance difference.

    Now, there could be tuning you need to do on the server side. The Oracle RDBMS has lots of tools to gather performance information and tune it.
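To make that concrete, one common starting point with the standard RDBMS tooling is to look for the most expensive statements in the shared SQL area. A hedged sketch (requires SELECT access to the V$ views; the `FETCH FIRST` row-limit syntax assumes a 12c database, on older releases use a ROWNUM subquery instead):

```sql
-- Hedged sketch: list the top statements by total elapsed time.
-- Requires access to V$SQL; the row-limit syntax assumes Oracle 12c.
SELECT sql_id,
       executions,
       elapsed_time / 1e6 AS elapsed_seconds,   -- V$SQL stores microseconds
       SUBSTR(sql_text, 1, 80) AS sql_text_head
  FROM v$sql
 ORDER BY elapsed_time DESC
 FETCH FIRST 10 ROWS ONLY;
```

Statements dominating this list (for instance, the per-row change-tracking trigger work) would be the first candidates for closer inspection.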

  • user6438798
    user6438798 Member Posts: 12
    edited Mar 24, 2016 2:17PM

    OK, we expected a performance penalty, maybe 10-15 times slower execution than before (without publications), but 10,000 times slower seems to be too much.

  • userBDBDMS-Oracle
    userBDBDMS-Oracle Member Posts: 787 Employee
    edited Mar 24, 2016 2:46PM

    In our internal measurements it is certainly not 10,000 times slower. Triggers generally cost in the single-digit percentages.

    There could be some tuning things you can do. It is hard to say what is going on without digging into your environment and looking at what you are doing. This is really something that Oracle consulting would get involved in. If that is of interest, I can point you in the right direction.

    If you provide a detailed test case, we can try a test in house and see what we measure. We would need a step-by-step test case for this to work. We would probably also need a way to create the data; we cannot pass 500K records over the forum. This would be the type of thing we would need to schedule, so we could not turn it around in a day or two.
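As a way to "create the data" without shipping rows, a row-generator query can synthesize the test data directly on the server. A minimal, hypothetical sketch (the table and column names are made up for illustration; the CONNECT BY row-generator idiom is the point):

```sql
-- Hypothetical sketch: generate 500,000 synthetic rows in place so the
-- test data never has to cross the network. SAMPLE_TAB and its columns
-- are illustrative names, not from this thread.
INSERT INTO sample_tab (id, payload)
SELECT level,                          -- generates 1 .. 500000
       DBMS_RANDOM.STRING('U', 10)     -- random 10-character payload
  FROM dual
CONNECT BY level <= 500000;
COMMIT;
```

A script like this, plus the DDL for the tables and publications, would make a test case reproducible without transferring any data files.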

  • user6438798
    user6438798 Member Posts: 12
    edited Mar 25, 2016 6:32AM

    We are now preparing the data and procedures for your test, but first I'm going to check them on my virtual machine to avoid any problems.

    By the way: we are going to control synchronization on the client.

    Could you send us sample applications, for instance one that starts msync from program code? (Or shall I open a new thread for this?)

  • userBDBDMS-Oracle
    userBDBDMS-Oracle Member Posts: 787 Employee
    edited Mar 25, 2016 9:52AM

    Controlling sync manually is done via APIs; this is all covered in the docs. The sample apps we have are all available in the install.

  • user6438798
    user6438798 Member Posts: 12
    edited Mar 30, 2016 10:07AM

    We were able to test the API calls on the client, OK. One question: can we somehow control the timeout for synchronization? (How long should we wait if the network is down?)

    Back to the original topic:

    We use 2 schemas and temporary tables during the update of the data tables on the Mobile Server.

    OBU_IN schema: the new version of data (collected from remote systems).

    OBU_OUT schema: tables in this schema are managed by the Mobile Server (one publication is created, and one publication item is defined for each table).

    Our update process for each table is (one transaction / table):

    Version 1:

    Phase 1: records of the IN and OUT tables are compared. If there is any difference, the PK of the record is inserted into the TEMP table.

    Phase 2: records of the OUT table are deleted according to the records of the TEMP table.

    Phase 3: records containing the new values are inserted into the OUT table.
    This last phase takes an extremely long time.

    Version 2:

    Phase 1: the same as in Version 1.

    Phase 2: all records that are no longer in the IN table are deleted from the OUT table.

    Phase 3: modified existing records are updated in the OUT table.

    Phase 4: new records are inserted into the OUT table.
    This does not take an extremely long time, but it is still much slower compared to the initial state, when no publications were defined on the OUT tables.

    Elapsed time:

    • Version 1: 16 hours
    • Version 2: 30 minutes
    • No publication defined on the OUT tables: 10 seconds

    We generated the following code, for example for table JARAT_TAB:

    delete from TMP_JARAT_TAB;

    insert into TMP_JARAT_TAB (VONAL, JARAT, JARAT_VERZIO, ERV_TOL, MH_SORSZAM)
      (select VONAL, JARAT, JARAT_VERZIO, ERV_TOL, MH_SORSZAM from OBU_OUT.OBU_JARAT_TAB
       minus
       select VONAL, JARAT, JARAT_VERZIO, ERV_TOL, MH_SORSZAM from OBU_IN.IF1_JARAT);

    delete from OBU_OUT.OBU_JARAT_TAB t1
     where exists (select 1 from TMP_JARAT_TAB t2
                    where t2.VONAL = t1.VONAL and t2.JARAT = t1.JARAT
                      and t2.JARAT_VERZIO = t1.JARAT_VERZIO and t2.ERV_TOL = t1.ERV_TOL
                      and t2.MH_SORSZAM = t1.MH_SORSZAM);

    delete from TMP_JARAT_TAB;

    insert into TMP_JARAT_TAB
      ((select t1.* from OBU_IN.IF1_JARAT t1, OBU_OUT.OBU_JARAT_TAB t2
         where t2.VONAL = t1.VONAL and t2.JARAT = t1.JARAT
           and t2.JARAT_VERZIO = t1.JARAT_VERZIO and t2.ERV_TOL = t1.ERV_TOL
           and t2.MH_SORSZAM = t1.MH_SORSZAM)
       minus
       (select * from OBU_OUT.OBU_JARAT_TAB));

    update OBU_OUT.OBU_JARAT_TAB t1
       set (ERV_IG, MH, MH_IND_NAP, MH_IND, MH_ERK_NAP, MH_ERK, KOCSIALLAS, KM, FEL, LE) =
           (select ERV_IG, MH, MH_IND_NAP, MH_IND, MH_ERK_NAP, MH_ERK, KOCSIALLAS, KM, FEL, LE
              from TMP_JARAT_TAB t2
             where t2.VONAL = t1.VONAL and t2.JARAT = t1.JARAT
               and t2.JARAT_VERZIO = t1.JARAT_VERZIO and t2.ERV_TOL = t1.ERV_TOL
               and t2.MH_SORSZAM = t1.MH_SORSZAM)
     where exists (select 1 from TMP_JARAT_TAB t2
                    where t2.VONAL = t1.VONAL and t2.JARAT = t1.JARAT
                      and t2.JARAT_VERZIO = t1.JARAT_VERZIO and t2.ERV_TOL = t1.ERV_TOL
                      and t2.MH_SORSZAM = t1.MH_SORSZAM);

    delete from TMP_JARAT_TAB;

    insert into TMP_JARAT_TAB
      (select * from IF1_JARAT
       minus
       select * from OBU_OUT.OBU_JARAT_TAB);

    insert into OBU_OUT.OBU_JARAT_TAB (select * from TMP_JARAT_TAB);
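Independent of the publication overhead, the correlated EXISTS and UPDATE subqueries above probe TMP_JARAT_TAB once per outer row; if that table is not indexed on the key columns, each probe is a full scan. A hedged sketch of an index that may help (assuming the five listed columns form the key and no such index exists yet; the index name is illustrative):

```sql
-- Hedged sketch: support the per-row lookups into the diff table so
-- the correlated subqueries above do index probes instead of scans.
CREATE INDEX tmp_jarat_tab_k_ix
    ON tmp_jarat_tab (vonal, jarat, jarat_verzio, erv_tol, mh_sorszam);
```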



    thanks

    Zoltán

  • userBDBDMS-Oracle
    userBDBDMS-Oracle Member Posts: 787 Employee
    edited Mar 30, 2016 11:07PM

    If you have publications on the OUT table, then all activity on that table is tracked. This is a component of what is causing the increase in time.

    Could you have a second OUT table that represents just the records that should go to the client?

    From your description you have lots of activity on the OUT table, with deleting and inserting of records.

    We do have network timeout params; these are covered in the docs.
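Building on the Version 2 idea, one way to reduce the tracked change volume is to collapse the update and insert phases into a single MERGE, so each surviving row generates at most one tracked operation instead of a delete plus an insert. A hedged, untested sketch using the JARAT_TAB key and payload columns from the generated code above (the delete phase for rows missing from the IN table would still run separately):

```sql
-- Hedged sketch: apply net changes to the published OUT table in one
-- statement. NOTE: as written, the MATCHED branch updates every matched
-- row; a WHERE clause comparing the payload columns would be needed to
-- skip rows that have not actually changed (and so avoid tracking them).
MERGE INTO obu_out.obu_jarat_tab t1
USING obu_in.if1_jarat t2
   ON (    t2.vonal        = t1.vonal
       AND t2.jarat        = t1.jarat
       AND t2.jarat_verzio = t1.jarat_verzio
       AND t2.erv_tol      = t1.erv_tol
       AND t2.mh_sorszam   = t1.mh_sorszam)
WHEN MATCHED THEN UPDATE SET
     t1.erv_ig = t2.erv_ig, t1.mh = t2.mh,
     t1.mh_ind_nap = t2.mh_ind_nap, t1.mh_ind = t2.mh_ind,
     t1.mh_erk_nap = t2.mh_erk_nap, t1.mh_erk = t2.mh_erk,
     t1.kocsiallas = t2.kocsiallas, t1.km = t2.km,
     t1.fel = t2.fel, t1.le = t2.le
WHEN NOT MATCHED THEN INSERT
     (vonal, jarat, jarat_verzio, erv_tol, mh_sorszam,
      erv_ig, mh, mh_ind_nap, mh_ind, mh_erk_nap, mh_erk,
      kocsiallas, km, fel, le)
     VALUES
     (t2.vonal, t2.jarat, t2.jarat_verzio, t2.erv_tol, t2.mh_sorszam,
      t2.erv_ig, t2.mh, t2.mh_ind_nap, t2.mh_ind, t2.mh_erk_nap, t2.mh_erk,
      t2.kocsiallas, t2.km, t2.fel, t2.le);
```

This also removes the need for the intermediate TMP table loads in the update/insert phases, which are themselves full MINUS scans.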

  • user6438798
    user6438798 Member Posts: 12
    edited Apr 13, 2016 5:32AM

    Hi,

    Earlier I mentioned that we had a successful test with the API calls; a small C program was created. There is one question about it:

    We use synchronization in both directions:

    oseSetNumOption(sess, OSE_OPT_TRANSPORT_DIRECTION, OSE_SYNC_DIR_SENDRECEIVE);

    There are 2 publications. When we select only one of them to synchronize:

       oseSelectPub(oseSess sess, const char *pub)

    only the records of this publication / SQLite database are synchronized.

    The problem is that both databases are locked during the synchronization. Is this a bug, or can we switch off this behavior?

    thanks

    Zoltán

This discussion has been closed.