

Oracle updates blocked on huge tables

3016062 Member Posts: 1
edited Aug 19, 2015 9:52PM in General Database Discussions

I am updating a table with ~400 columns, measuring ~600 GB, in an Oracle RDS database hosted on AWS. I fire UPDATE statements (by ID) against this table, updating all columns (via Hibernate), and this has been working for 4+ months now.

Recently, I have hit an issue (3-4 times so far): after the UPDATE, when the transaction commits, the thread blocks waiting on I/O from Oracle, which eventually makes all the threads wait. As an immediate fix, I rebooted the RDS instance and everything was back to normal!

It looks like my queries execute fine until the RDS instance gets "hot" (i.e. after it has served a certain number of UPDATE queries against this table); a freshly booted RDS instance works well, but after some time all the threads get blocked on the same table, bringing the throughput to zero.

Points to note:

  1. The table has ~400 columns. It's a sparse table (for most rows, some columns are null).
  2. The table has 80+ million rows and measures ~600 GB.
  3. The table is not partitioned.
  4. The UPDATE query is based on the PRIMARY KEY of the table (an auto-generated ID field).
  5. I verified temp tablespace usage to be quite low. Details are as below:

Any pointers on this issue would be really helpful. Thanks!
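For reference, temp usage per session can be checked with a query along these lines. This is a sketch against Oracle's standard `V$TEMPSEG_USAGE` and `V$SESSION` views; it assumes you have SELECT access to them (the RDS master user does).

```sql
-- Sketch: current temp segment usage per session
SELECT s.sid,
       s.username,
       u.tablespace,
       u.segtype,
       u.blocks * t.block_size / 1024 / 1024 AS mb_used
FROM   v$tempseg_usage u
       JOIN v$session       s ON s.saddr = u.session_addr
       JOIN dba_tablespaces t ON t.tablespace_name = u.tablespace
ORDER  BY mb_used DESC;
```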


  • Unknown
    edited Aug 19, 2015 7:50PM

It appears to me that a bug exists in some Java code.

    Can you bypass the Java application & issue SQL directly against the database when the slowdown exists?

    What is happening in the DB during the slowness?

    Is the application 3-tier?

    Is connection pooling being used?

    You have a mystery & we have no clues.
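    One way to gather clues during the slowness is to look at what each active session is waiting on, and whether anything is blocking it. A sketch using Oracle's `V$SESSION` view (real columns; adjust the filters for your environment):

    ```sql
    -- Sketch: active sessions, their current wait event, and any blocker
    SELECT sid,
           serial#,
           sql_id,
           event,
           wait_class,
           seconds_in_wait,
           blocking_session
    FROM   v$session
    WHERE  status = 'ACTIVE'
      AND  username IS NOT NULL
    ORDER  BY seconds_in_wait DESC;
    ```

    If `blocking_session` is populated, that SID is the one to investigate; if the sessions are instead waiting on I/O events, that points at the storage layer rather than locking.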

  • JohnWatson2
    JohnWatson2 Member Posts: 4,433 Silver Crown
    edited Aug 19, 2015 9:52PM

    Bugs in the Java code? This is probably an example of bugs in the Java programmers: another application designed by Java programmers who do not understand relational databases. Some evidence:

    1. "table having ~400 columns"

    Almost always, if a table has that many columns it means the designer didn't normalize his data.

    2. "I am firing UPDATE (by ID) on this table updating all columns"

    Updating all columns? In that case, you could do a delete and insert. Almost certainly your programmers are updating columns that haven't been changed. I bet they are doing SELECT *... as well.

    3. "Its a sparse table (for most rows, some columns are null)."

    Further evidence that the data ain't normalized.

    Rather than rebooting your RDS database server, you might get better results from rebooting your Hibernate processes. Reboot the programmers, too.
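    To illustrate the point about updating only changed columns, here is a sketch with a hypothetical table and column names (`big_table`, `col1`..`col400` are placeholders, not from the original post):

    ```sql
    -- What Hibernate emits by default: every mapped column, changed or not
    UPDATE big_table
    SET    col1 = :col1,
           col2 = :col2,
           /* ... ~400 columns ... */
           col400 = :col400
    WHERE  id = :id;

    -- What a targeted statement looks like: only the column that actually changed
    UPDATE big_table
    SET    col7 = :col7
    WHERE  id = :id;
    ```

    Hibernate can generate the second form if the entity is annotated with `@DynamicUpdate` (org.hibernate.annotations.DynamicUpdate), at the cost of building the SQL per statement instead of caching one static UPDATE.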

This discussion has been closed.