Hello guys, I've been reading about UNIFORM vs AUTOALLOCATE extent allocation.
I've read the following articles.
From what I understood, AUTOALLOCATE trumps UNIFORM in all scenarios (unless I am missing something).
In the thread "AUTOALLOCATE vs UNIFORM SIZE"
Kh$n wrote the following about the benefits of AUTOALLOCATE and UNIFORM size allocation:
Benefits of AUTOALLOCATE
* Prevents space fragmentation.
Benefits of UNIFORM extent sizes
* Prevents fragmentation.
(I don't understand the difference between those two kinds of fragmentation prevention. Are those benefits one and the same?)
Even in scenarios where we know exactly how much data will be loaded, there is always a chance of extent wastage, and without extent trimming that space will be unusable.
Can someone please explain in which cases we use uniform extent allocation?
Suppose we use uniform extent allocation and have a lot of unused space left over from the extent allocation. Can that space be reclaimed using the shrink space command for tables and indexes?
A smart human using UNIFORM extent sizes can always beat automatic segment space management. The issue is that the degree of benefit is generally radically less than the value of that smart human's time.
If we define "fragmentation" as "contiguous free space in a data file that is not large enough for any object in the tablespace to use for its next extent when there would be enough space if all the contiguous blocks of free space were combined", it is theoretically possible in some very pathological cases to have a small amount of fragmentation in a tablespace using automatic extent management. It is not possible to have that in a tablespace with uniform extents. Practically, it is virtually never worth a DBA's time to contemplate this question because it is so unlikely to happen and because the amount of space that is lost is so trivial. But a dedicated human could potentially save a few MB of space in their tablespace.
A tablespace using automatic extent management may have some wasted space at the end that is not large enough to accommodate the next extent for any object (but which is not fragmentation). By definition, the amount of waste must be less than the size of the next extent, so at most on the order of a couple MB. A DBA allocating just the right size in a tablespace with uniform extents can ensure that there is an exact integer number of extents available in the tablespace. It is rarely worth the cost of a DBA's time to consider this in order to avoid wasting a few MB, however.
If you are excessively concerned about every last byte of space used on disk and not at all concerned about the cost of the DBA's time to measure and monitor, it's entirely possible to come up with a set of tablespaces with uniform extents where you can put your objects and end up using slightly less space as long as you know the sizes of tables in advance. Once you factor in the cost of DBA time and the cost of disk, though, this rarely makes sense from a practical matter.
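To make the comparison concrete, here is a hedged sketch of the two tablespace definitions being discussed. The tablespace names, data file paths, and sizes are all made up for illustration; adjust them to your environment.

```sql
-- AUTOALLOCATE: Oracle chooses the extent sizes itself (typically 64 KB
-- at first, then larger extents as a segment grows).
CREATE TABLESPACE users_auto
  DATAFILE '/u01/oradata/users_auto01.dbf' SIZE 100M
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE;

-- UNIFORM: every extent in the tablespace is exactly the size you specify,
-- so any free "hole" is always usable by the next extent request.
CREATE TABLESPACE users_uni
  DATAFILE '/u01/oradata/users_uni01.dbf' SIZE 100M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;
```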
Thank you for the response Justin, I have a few questions though.
Theoretically, how can there be fragmentation and unused space with AUTOALLOCATE, since it supports extent trimming? Wouldn't the unused space be reused? Whereas with uniform extent allocation the fragmented extents cannot be reused, right, since there is no extent trimming with UNIFORM?
Well, let's imagine a simple tablespace using automatic extent management. We create table A and load it with 1 MB of data, so it allocates sixteen 64 KB extents. We create table B and load it with 2 MB of data, so it takes the next 2 MB of space in the data file: again sixteen 64 KB extents, plus one 1 MB extent. Now we truncate A and load a tiny bit of data so that we only need one 64 KB extent, and make the table read-only. So we have a "hole" in the tablespace that is 15 * 64 KB in size. We repeat the same exercise with tables C & D, so now we have two "holes", each 15 * 64 KB in size. Now, if either B or D wants another extent, there isn't 1 MB of contiguous free space in the data file. There is more than 1 MB in total (30 * 64 KB), but it's broken up in a way that it can't be used for a new 1 MB extent.
Now, as I said, this is generally more of a theoretical problem than a practical one. In a real tablespace, it's likely that, first off, you'd have plenty of tables that are still allocating 64 KB extents that would happily use the "holes" that we created. And it is highly unlikely that anyone really cares about wasting a few MB of space when people are buying disk in terms of TB these days. But it is theoretically possible.
In a tablespace with uniform extents, every extent in the tablespace will be uniform so any "hole" is, by definition, some integer number of extents large. Each "hole", then, can always be used by any object in the tablespace to allocate its next extent.
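If you want to check for yourself whether the free space in a tablespace is one contiguous run or broken into "holes", a query along these lines lists each free chunk (the tablespace name here is hypothetical):

```sql
-- Each row is one contiguous chunk of free space in the tablespace.
-- Many small, non-adjacent chunks indicate the kind of "holes" described above.
SELECT file_id,
       block_id,
       bytes / 1024 AS free_kb
FROM   dba_free_space
WHERE  tablespace_name = 'USERS_AUTO'   -- hypothetical tablespace name
ORDER  BY file_id, block_id;
```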
Also see Ask Tom Uniform Size on LMT and Jonathan's explanation further down for what kind of strange things you have to do to see an autoallocate problem. In my experience, most commercial apps have lots of tables, many of which have varying sizes and usage, so autoallocate is perfectly reasonable. I can imagine some very homogenous app that would benefit from uniform, but basically Justin's description is spot-on.
If you are controlling aircraft or nuke plants, then you might have different requirements, but Oracle wouldn't be appropriate anyways. Very tiny databases might also want more specific control.
>A smart human using UNIFORM extent sizes can always beat automatic segment space management
Guess you confused AutoAllocate with ASSM.
> So we have a "hole" in the tablespace that is 15 * 64kb in size. We repeat the same exercise with tables C & D so now we have 2 "holes" each of 15 * 64kb in size
Table C could have used 15 of the "holes" released by Table A when inserting the initial data.
Hemant K Chitale
> >A smart human using UNIFORM extent sizes can always beat automatic segment space management
> Guess you confused AutoAllocate with ASSM.
Yup, I meant to say automatic extent management, not segment space management.
> Table C could have used 15 of the "holes" released by Table A when inserting the initial data.
Yup, got my order of operations inverted. I thought I had filled C before truncating A but on a second look, I apparently reversed those steps. Just goes to show how difficult it is to construct a scenario where automatic extent management generates fragmentation.
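With the order of operations corrected (load C before truncating A), the scenario could be sketched roughly as below. All object names, paths, and the data-generation queries are illustrative only, and the exact extent sizes Oracle picks depend on version and segment space management, so treat this as a sketch rather than a guaranteed reproduction:

```sql
CREATE TABLESPACE frag_demo
  DATAFILE '/u01/oradata/frag_demo01.dbf' SIZE 50M
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE;

-- Table A: roughly 1 MB of data, expected to land in sixteen 64 KB extents.
CREATE TABLE a TABLESPACE frag_demo AS
  SELECT rownum AS n, rpad('x', 1000, 'x') AS pad
  FROM   dual CONNECT BY level <= 1000;

-- Table B: roughly 2 MB, expected to take 64 KB extents plus a 1 MB extent.
CREATE TABLE b TABLESPACE frag_demo AS
  SELECT rownum AS n, rpad('x', 1000, 'x') AS pad
  FROM   dual CONNECT BY level <= 2000;

-- Load C *before* truncating A, so C cannot reuse A's released extents.
CREATE TABLE c TABLESPACE frag_demo AS SELECT * FROM b;

-- Now truncate A, releasing its extents and leaving a "hole", then
-- reload a tiny amount so A keeps only a single small extent.
TRUNCATE TABLE a DROP STORAGE;
INSERT INTO a SELECT 1, rpad('x', 1000, 'x') FROM dual;
COMMIT;
```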
Hello guys, thank you for your responses. Can you please clarify some of the doubts I have?
1. I understand how fragmentation occurs in a UNIFORM tablespace (if the last extent is not completely filled, that unused space is wasted, since it cannot be used). But in AUTOALLOCATE tablespaces, even after extent trimming (barring truncates and deletes), how does fragmentation occur? How are there still holes in the extents when there are no deletes and truncates?
2. In UNIFORM tablespaces, is it possible to reclaim the unused space in the extents using "alter table ... shrink space" and "alter table ... deallocate unused"?
3. In a UNIFORM tablespace, say there are 2 extents of 2 MB each:
extent 1 has 1 MB filled and the other 1 MB unused,
extent 2 has 1 MB filled and the other 1 MB unused.
So now we have a total of 2 MB unused. When a table needs 2 MB of space for inserts, does it create a new 2 MB extent, or does it use the unused space from the first two extents? (This is what happens in AUTOALLOCATE, right? The holes present in the previous extents get reused.)
Extent trimming, to the best of my knowledge, is something that only happens when you are using parallel query to do large loads, not something that happens during normal OLTP type operations. As with anything called "automatic" in Oracle, though, the internals are subject to change across versions (and patchsets) and are not necessarily documented, so it is entirely possible for behaviors to change over time. Relying on specific internal behaviors is generally not a good idea.
The example I gave (assuming you reverse the truncating of A and the loading of C, as Hemant pointed out) produces "fragmentation" when you're using automatic extent management. It's not a particularly realistic scenario, but it is possible. If you never delete data, never truncate tables (and, presumably, never shrink tables), extents would never be deallocated and there would, therefore, never be holes. That is just as true of ancient dictionary managed tablespaces as of locally managed tablespaces, whether you're using uniform or autoallocated extents.
Shrinking a table has nothing to do with defragmenting a tablespace. It is simply compacting the data in the table and then potentially deallocating extents. You can do that with any locally managed tablespace. There is still the possibility, of course, that you have just enough data in the table that you need to allocate 1 extra extent when you only need space for 1 row in 1 block. So there may be some number of MB of "wasted" space per segment (though, again, this is generally not something that is a practical concern since the data in tables generally changes over time and it's generally not worth the effort of worrying about a few MB).
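For reference, the space-reclamation commands being discussed look like this; the table name is hypothetical, and (as noted above) these work in any locally managed tablespace, uniform or autoallocated:

```sql
-- SHRINK SPACE compacts the rows, moves the high water mark down, and
-- deallocates freed extents. It requires row movement to be enabled first.
ALTER TABLE my_table ENABLE ROW MOVEMENT;
ALTER TABLE my_table SHRINK SPACE;

-- DEALLOCATE UNUSED only releases space above the high water mark;
-- it does not compact the data within the segment.
ALTER TABLE my_table DEALLOCATE UNUSED;
```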
For your third question, assuming both extents are part of the same segment, assuming that the space is actually usable based on things like the PCTUSED setting of the table, and assuming a nice, simple conventional path insert in a single-user environment, Oracle would use the free space in the extents for new inserts before allocating a new extent. Oracle generally doesn't allocate new extents unless it needs to (there are caveats to this -- if the only blocks with free space have a relatively large fraction of their space used, such that a particular new insert only fits in 1 of the 1 million blocks in the currently allocated extents, Oracle will potentially give up before finding the 1-in-a-million block that it would need and may allocate a new extent).
One caveat to add. If you are doing Parallel INSERTs, each Parallel Insert slave allocates a new extent. Therefore, there can be over-allocation of space, particularly with a high UNIFORM extent size.
Fairly recently introduced, the High Watermark Brokering feature aims to eliminate or reduce that problem.
I found a reference to High Watermark Brokering here: http://www.freelists.org/post/oracle-l/HV-enqueue,2 and here: Oracle related stuff: Parallel DML - Conventional (non-direct-path) Inserts As Select
Hemant K Chitale
The only time I actually use UNIFORM is to deal with rapidly growing segments and the HW-related enqueues in this aspect. AUTOALLOCATE may decide on an extent size that fills up far too quickly, causing frequent allocation of new extents and thus incurring the enqueue. These are pretty extreme and rare cases, when the app is inserting large amounts of data at a rapid pace combined with a less-than-ideal design, i.e. lack of partitioning, cramming all the inserts into a single segment.
Only in that situation have I seen a UNIFORM extent tablespace with a large extent size contribute meaningfully to a process.
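For that rare case, the idea is simply to pick a large uniform extent size so a rapidly growing segment allocates extents far less often, reducing contention on the HW (high water mark) enqueue. A sketch, with a made-up name, path, and sizes:

```sql
-- Large uniform extents: a segment growing at, say, 1 MB/s allocates a new
-- extent roughly once a minute instead of many times per second.
CREATE TABLESPACE fast_growth
  DATAFILE '/u01/oradata/fast_growth01.dbf' SIZE 10G
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 64M;
```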
Is there any advantage to having fewer large extents rather than a larger number of smaller extents?
When there is a lot of empty space in an extent, that empty space is also scanned during a table scan, which is a performance issue. But if none of the extents have holes in them, does the number of extents (a few big extents vs. many small extents) pose any performance threat?