Hi everyone, I've presented this to Oracle Support and there doesn't seem to be a simple way to accomplish what I'm looking for. In my scenario, my production database (Oracle Enterprise Database 22.214.171.124), which replicates to a standby DB via Data Guard, has an issue: someone created the tablespace, data files and tables only after the application had already started storing data. As a result, a BLOB column ended up in the USERS tablespace and its data files when it should have been placed in tablespace XXX. So while I can execute
ALTER TABLE userAccount.TablespaceXXX MOVE lob (MYLOBCOLUMN) STORE AS lobsegment (TABLESPACE XXX);
to move the column back to the correct tablespace and datafiles, there are two problems.
1. The application must be down or the move will cause blocked sessions, and we are talking about 240GB to be moved. It causes blocked sessions because the BLOB in question holds file attachments and email attachments (inbound and outbound), a major feature of the application. The application cannot save this data while the segment is in flight being moved, so we are faced with an extended outage in a 24x5 environment.
2. I've tested the move in two other environments (Dev and QA) and it works fine. However, my USERS tablespace then ends up holding 230GB less data, yet the allocated size of the data files cannot be reduced because of the high water mark. So shrinking doesn't work and RESIZE doesn't work.
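For what it's worth, the segment pinning the high water mark can usually be identified from the data dictionary. A minimal sketch against the standard DBA_EXTENTS view (the FETCH FIRST syntax assumes 12c or later):

```sql
-- Sketch: find the segments whose last extent sits highest in the USERS
-- tablespace -- these are what prevent ALTER DATABASE DATAFILE ... RESIZE.
-- Group or filter by file_id instead to see the blocker per data file.
SELECT owner, segment_name, segment_type, file_id,
       block_id + blocks - 1 AS last_block
  FROM dba_extents
 WHERE tablespace_name = 'USERS'
 ORDER BY last_block DESC
 FETCH FIRST 5 ROWS ONLY;
```

Moving (or dropping) whatever shows up at the top is what actually lowers the high water mark and lets a RESIZE succeed.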
The only solution offered for shrinking USERS was to manually move every object from tablespace USERS to a new tablespace USERS2, then drop USERS and make USERS2 the new default. With over 800 tables to move, that is difficult and high-effort. So my question, which you may be able to surmise from the above, is:
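If you do end up going the USERS-to-USERS2 route, the 800 MOVE statements don't have to be typed by hand; they can be generated from the dictionary. A minimal sketch (it assumes USERS2 already exists; note that a table MOVE leaves its indexes UNUSABLE, so a rebuild pass has to follow, and LOB columns need their own MOVE LOB clause):

```sql
-- Sketch: generate one MOVE per table currently in USERS. Spool the
-- output, review it, then run it.
SELECT 'ALTER TABLE "' || owner || '"."' || table_name ||
       '" MOVE TABLESPACE users2;' AS move_ddl
  FROM dba_tables
 WHERE tablespace_name = 'USERS';

-- And the index rebuilds that must follow the moves:
SELECT 'ALTER INDEX "' || owner || '"."' || index_name ||
       '" REBUILD TABLESPACE users2;' AS rebuild_ddl
  FROM dba_indexes
 WHERE tablespace_name = 'USERS' OR status = 'UNUSABLE';
```

This doesn't remove the downtime problem (each MOVE still locks its table), but it does remove the manual effort.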
1. Is there a script that can be run to a) move the BLOB column to tablespace XXX and b) automatically shrink the USERS data files/tablespace to, say, 20GB more than the actual size of the data now in it, which would be about 60GB?
2. Is there a way to do this without blocking user sessions, so the application can continue to run?
3. Is there an alternative approach that hasn't been discussed here to minimize downtime for my users and get the data where it should be?
If you are asking why I am even bothering: my actual USERS tablespace holds about 40GB of content, and XXX holds 230GB. Accessing a BLOB attachment today means reading from USERS first, then XXX, and returning the result, which will be slower. If all the XXX data is in one place, the reads/queries are self-contained in one tablespace, which should improve performance. I expect this XXX tablespace to double or triple over the next 12 months, so moving the data now will save me grief later.
"Getting access to the blob for attachments means it reads from USERS first, then XXX and returns the results which will be slower."
Have you proven this to be true in the dev/QA environments where you have moved this lob segment from the USERS tablespace to the XXX tablespace?
If you absolutely have to move the data, is there a reason you cannot use DBMS_REDEFINITION? See http://mewithoracle.wordpress.com/2012/08/29/using-table-online-redefinition-for-migrating-lob-segment-to-securefile-in-oracle-11g/ for an example that includes conversion to a compressed SecureFile LOB.
Thanks for the reply. The link provided does not address the questions in my post; it only shows how to compress the LOB. While useful, it doesn't answer the question about moving the LOB segment from A to B and correcting the high water mark afterwards. It also doesn't identify whether sessions will be blocked during the move, or whether the move can be done online during normal use, since we cannot shut the application down for 8 hours or more given your numbers for a 6GB file. Since our production environment is close to 450GB, with about 70% being file attachments, I'm looking for a solution that accounts for real-time use and impacts operations as little as possible.
DBMS_REDEFINITION works online, so while it may slow things down a bit for users (especially if you run it with many parallel threads) it shouldn't interrupt access at all. When you pre-create the interim table prior to using DBMS_REDEFINITION.START_REDEF_TABLE, you specify the tablespace(s) you want to use for the new table/lobsegment, so that would be a way to get the data moved where you want it without downtime.
If you're space-constrained, this won't work for you -- you'll need to have the space available for two full copies of the table+LOB data during the redefinition process, and since you're concerned about the HWM and have a need to shrink the original tablespace, this may not be an option. Basically it creates a materialized view against the source table, populating the existing data (and, most importantly, any concurrent DML that occurs on the original table during the redefinition) into the interim table, then at the very end when you finish the redefinition it swaps the two table names in the data dictionary, after which you drop the old table. All transparent to users other than the additional load on the DB.
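To make that concrete, the flow looks roughly like this. This is a sketch, not your exact DDL: the table name ATTACHMENTS, its column list, and the interim name ATTACHMENTS_INT are placeholders; only the DBMS_REDEFINITION calls are the real API.

```sql
-- 1) Pre-create the interim table, placing the LOB where you want it.
CREATE TABLE useraccount.attachments_int (
  id          NUMBER,
  mylobcolumn BLOB
)
TABLESPACE users
LOB (mylobcolumn) STORE AS SECUREFILE (TABLESPACE xxx);

-- 2) Run the online redefinition.
DECLARE
  num_errors PLS_INTEGER;
BEGIN
  DBMS_REDEFINITION.can_redef_table('USERACCOUNT', 'ATTACHMENTS',
                                    DBMS_REDEFINITION.cons_use_pk);
  DBMS_REDEFINITION.start_redef_table('USERACCOUNT', 'ATTACHMENTS',
                                      'ATTACHMENTS_INT');
  -- Copy indexes, triggers, constraints and grants onto the interim table.
  DBMS_REDEFINITION.copy_table_dependents(
      'USERACCOUNT', 'ATTACHMENTS', 'ATTACHMENTS_INT',
      DBMS_REDEFINITION.cons_orig_params,
      TRUE, TRUE, TRUE, TRUE, num_errors);
  -- Swap the names in the dictionary; only a brief lock at this point.
  DBMS_REDEFINITION.finish_redef_table('USERACCOUNT', 'ATTACHMENTS',
                                       'ATTACHMENTS_INT');
END;
/

-- 3) After verifying, drop the interim table (which now holds the old data).
-- DROP TABLE useraccount.attachments_int PURGE;
```

Concurrent DML against the original table keeps flowing into the interim copy throughout; only the final FINISH_REDEF_TABLE takes a short lock.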
I've used it to unpartition and repartition large tables, online, and it's great. But I've made no effort to shrink the original tablespace afterwards, so I have no idea whether it updates the HWM. My guess is that it doesn't.