I am surprised it's bigger by 200%; hence my question from the original post:
Deletes will not reduce the number of blocks used. Even if you deleted ALL rows, the number of blocks used would remain the same. And a full table scan of that now-empty table will take just as long as it did when it was full, since Oracle has to visit every block to see if the block has any rows that satisfy the query.
So if the second table at one time had a lot more rows and then had a lot more deletes, those 'once used' blocks would still be part of the count.
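You can see this for yourself with a quick sketch (the table name here is hypothetical, and the exact block counts will vary with block size and PCTFREE):

```sql
-- Hypothetical demo table built from a dictionary view.
CREATE TABLE hwm_demo AS SELECT * FROM all_objects;

EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'HWM_DEMO');
SELECT blocks FROM user_tables WHERE table_name = 'HWM_DEMO';

DELETE FROM hwm_demo;   -- delete every row
COMMIT;

EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'HWM_DEMO');
SELECT blocks FROM user_tables WHERE table_name = 'HWM_DEMO';
-- BLOCKS is unchanged: the high-water mark stays where it was,
-- and a full scan still reads every block below it.

TRUNCATE TABLE hwm_demo;  -- TRUNCATE, by contrast, resets the HWM
```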
Also, you are now saying this:
we need to update/insert the table, instead of truncate/insert
So what happened to the 'deletes'? Are you saying you don't delete rows anymore?
Justin Mungal wrote: Because there have been both myths and miscommunications about fragmentation, here and elsewhere. Sometimes the miscommunication has indeed been one of definition.
sybrand_b wrote: Can you tell me why people around here freak out whenever they hear the word fragmentation? It seems like it all depends on your definition of fragmentation.
10 columns as opposed to 24 columns.
What is the average length of each row?
Likely you are comparing apples with pears.
Apart from that, disk is cheap, and 654 MB is not that big these days.
If you start micromanaging the database by, God forbid, listening to myths and fairy tales to 'defragment' the database, whatever that may mean,
that will cost your employer far more than buying another disk.
Use the statistics in the dictionary and/or use dbms_space to see the actual space occupied.
As you don't provide any details on your database, and the answer is version and database organisation dependent, I can't provide a complete answer.
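As a sketch of that suggestion (the table name is hypothetical; run as the owning user with SERVEROUTPUT on), you can compare the optimizer statistics with what DBMS_SPACE reports for the segment:

```sql
-- Statistics from the dictionary: allocated blocks, row count, average row length.
SELECT blocks, num_rows, avg_row_len
FROM   user_tables
WHERE  table_name = 'BIG_TABLE';

-- DBMS_SPACE.UNUSED_SPACE reports blocks allocated vs. blocks above the HWM.
DECLARE
  l_total_blocks  NUMBER;
  l_total_bytes   NUMBER;
  l_unused_blocks NUMBER;
  l_unused_bytes  NUMBER;
  l_file_id       NUMBER;
  l_block_id      NUMBER;
  l_last_block    NUMBER;
BEGIN
  DBMS_SPACE.UNUSED_SPACE(
    segment_owner             => USER,
    segment_name              => 'BIG_TABLE',
    segment_type              => 'TABLE',
    total_blocks              => l_total_blocks,
    total_bytes               => l_total_bytes,
    unused_blocks             => l_unused_blocks,
    unused_bytes              => l_unused_bytes,
    last_used_extent_file_id  => l_file_id,
    last_used_extent_block_id => l_block_id,
    last_used_block           => l_last_block);
  DBMS_OUTPUT.PUT_LINE('Allocated blocks: ' || l_total_blocks);
  DBMS_OUTPUT.PUT_LINE('Blocks above HWM: ' || l_unused_blocks);
END;
/
```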
Senior Oracle DBA
If you delete half of the rows in a table, and it still occupies the same amount of space, couldn't one consider that fragmented? A real-world implication of this is that the HWM is in the same place, and so full table scans pre- and post-delete will take the same amount of time.

Some people get upset if other people say what great gurus they are while spreading myths or otherwise suspect advice. If not upset, they at least go on a quest (http://xkcd.com/386/) to make internet advice better.
I'm not saying that it even matters; it purely depends on your situation and data access patterns. I'm just trying to understand why this is such a source of controversy.
It does matter if it makes a significant performance difference, or especially if the definition of fragmentation includes something about not being able to reuse space.
So for example, a classic argument was about weekly index rebuilds. In one set of cases, the performance does increase for some brief period of time, then gets worse. So why not rebuild every week? Well, what if simply changing some screwy code or configuration actually fixes the problem? And what if it is true for only a few out of thousands of indices? You might be creating an artificially large window of downtime, when the business trend is toward 24/7.
In the end, advice that may actually be good may not remain good over time, or may become bad when overgeneralized (this is true of any "best practice"). Someone who bases their livelihood on being paid for this advice may become upset and lash out when called on it. Something that applies to one site probably doesn't apply to all sites.
The thing is, you don't have to guess or take advice at face value. You can measure and evaluate. Those things have costs and limitations too. There are many times it makes perfect sense to have a consultant come in and tell you what to do. If you get a bad consultant, that stops making sense. A lot of the fragmentation controversy came from online communication allowing people to see that.
Edited by: Justin Mungal on May 3, 2013 9:31 AM

The arithmetic is based on an article by Jonathan Lewis: http://jonathanlewis.wordpress.com/2007/11/23/table-rebuilds/. Note that the article gives an example of why a table might be larger than expected. If the OP means this kind of thing by replacing truncates with updates, that might be one explanation. And one answer may indeed be to reload the table with low PCTFREE, then change PCTFREE to an appropriate value going forward.
Hopefully it will be obvious why such a procedure should be the exception rather than the rule, though of course it is still in the DBA's purview to figure out where it should apply. With modern enhancements to Oracle and the cost spread of modern equipment, there's really not much to worry about in general; any problems often do wind up being unfortunate SQL. But it can still make a difference when the volume of data is large relative to the hardware.
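A hedged sketch of that reload procedure (the table and index names are hypothetical; note that MOVE takes an exclusive lock and leaves indexes UNUSABLE, so schedule it accordingly):

```sql
-- Rebuild the table tightly packed, leaving no free space in each block.
ALTER TABLE big_table MOVE PCTFREE 0;

-- MOVE invalidates the table's indexes, so they must be rebuilt.
ALTER INDEX big_table_pk REBUILD;

-- Restore a PCTFREE appropriate for the expected future update activity,
-- so new blocks keep room for rows to grow.
ALTER TABLE big_table PCTFREE 10;
```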
No worries, you were giving a reasonable reply to Sybrand. It may seem out of order for him to say anything at all about fragmentation at that point of this thread, but we've seen that progression enough times that his reply seems reasonable enough to me, if no one else.
Personally I think the type of thread hijack you did isn't bad at all (as compared to someone asking a new question in some long-dead thread). It's the kind of interesting detour that can illuminate many issues, even those that should have been put to bed long ago but refuse to sleep. I'm probably in the minority there though.