I think I must be delusional, but I have a single table in our production database that I cannot seem to gather stats on.
I've tried on three separate attempts and after 6 hours, I have to kill the job before it impacts the performance of the database during normal work hours.
Last night it ran for six hours, and while it was running I could see in Toad that it was actively processing and spinning the whole time.
I also confirmed there was nothing else holding any locks on the table.
Here are the stats:
The table has approx. 6 million records and is about 13.4 GB.
It has 540 columns, 20 of which are LOBs (not that it matters, but just for info).
The same table in our test database is only about 4.5 GB and has about 2 million records.
Gathering stats in test takes only approx. 5 minutes.
The command I am using is:
exec dbms_stats.gather_table_stats ('ARADMIN','T2444',cascade=>TRUE,estimate_percent=>dbms_stats.auto_sample_size,granularity=>'ALL',method_opt=>'for all columns size auto',degree=>8);
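One thing that may be worth testing: with 540 columns, method_opt=>'for all columns size auto' asks Oracle to consider histograms on every column, and cascade=>TRUE gathers all index stats in the same job. A lighter-weight variant to compare against (the narrower method_opt is an assumption about which columns you actually filter on, so adjust to taste):

```sql
-- Sketch: limit histogram work to indexed columns and defer index stats.
-- Assumes query predicates mostly hit indexed columns.
BEGIN
  dbms_stats.gather_table_stats(
    ownname          => 'ARADMIN',
    tabname          => 'T2444',
    estimate_percent => dbms_stats.auto_sample_size,
    method_opt       => 'FOR ALL INDEXED COLUMNS SIZE AUTO',
    cascade          => FALSE,  -- gather index stats in a separate, smaller job
    degree           => 8
  );
END;
/
```

If this version finishes quickly, the time is going into histograms and/or index stats rather than the base table scan.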
Any ideas why this takes so long to run?
One odd thing: while I specify a degree of parallelism of 8, when I look in Toad I see 16 processes running for the gather stats job.
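For what it's worth, 16 processes with degree=>8 is not necessarily wrong: parallel operations that redistribute rows use two sets of PX slaves (producers and consumers), so a DOP of 8 can show 16 slave processes. While the job is running, this can be confirmed with something like:

```sql
-- Count PX slaves per query coordinator and server set
-- (run from another session while the gather is active):
SELECT qcsid, server_set, COUNT(*) AS slaves
FROM   v$px_session
GROUP  BY qcsid, server_set;
```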
I looked at the note on the OTN link, along with the other bug/note referenced by Osama_mustafa, but they seem to be related to partitioning.
In our case, the table I am gathering stats on is not partitioned.
I use granularity=>'ALL' in my standard command even when the table is not partitioned, but changing this didn't make any difference in the test environment.
If the table is not partitioned, then this option is ignored.
As it turned out, I let it run overnight and it finally finished in a little more than 8 hours (I think).
This was done in a new test environment with no one else in the database.
This still seems excessive, but I also found out that this table has many lobs, which may be contributing to the problem.
I also found out that this table has fine-grained access control, but again, I don't think that would make that much of a difference.
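To check rather than guess, the policies attached to the table can be listed from the data dictionary (assumes access to the DBA views):

```sql
-- List FGAC/VPD policies on the table:
SELECT policy_name, function, enable
FROM   dba_policies
WHERE  object_owner = 'ARADMIN'
AND    object_name  = 'T2444';
```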
I beg to differ. Fine grained access control can make all the difference in the world. It is also quite possible that the stats job is slow because of the LOBs. But there is no point in guessing. If you have the Diagnostic Pack license, then you may find the answer in ASH. Otherwise you can trace the stats gathering process(es). That will tell you which statements are slow, and from that you should be able to figure out which parameters may help speed the process up.
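If you go the tracing route, here is a sketch of enabling an extended SQL trace on the session running the gather (look up the SID and SERIAL# in v$session first; the bind values below are placeholders):

```sql
-- Enable a 10046-style trace with wait events on the stats session:
BEGIN
  dbms_monitor.session_trace_enable(
    session_id => :sid,
    serial_num => :serial,
    waits      => TRUE,
    binds      => FALSE
  );
END;
/

-- ...and turn it off again afterwards:
BEGIN
  dbms_monitor.session_trace_disable(
    session_id => :sid,
    serial_num => :serial
  );
END;
/
```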
Be sure to enable waits when tracing, and also include the recursive/system statements in the tkprof report.
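The tkprof step would look something like this (the file names are placeholders for whatever trace file the session produced):

```
tkprof ora_trace_12345.trc gather_stats.prf waits=yes sys=yes sort=exeela
```

waits=yes pulls the wait events into the report, sys=yes keeps the recursive SYS statements (which is where dbms_stats does most of its work), and sort=exeela puts the slowest statements at the top.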