Which is just about the same example I gave him in my earlier post! ;)
My application is site specific and deployed at many sites around the world, each having its own instance of the DB. Many sites do not have DBAs present, so monitoring and maintaining the DB is not easy.
Then you are doing it wrong.
I have been monitoring and maintaining DBs located all over the world remotely for 20 years. If you were employed at any of the major corporations virtually ALL of the monitoring and maintenance would be done remotely.
The application has many transactional tables that are year specific. They use a 'YearID' column to separate each year's worth of data (see sample table below). Many of the larger customers (or customers that have been on the system for many years) have seen their performance degrade on these tables. The band-aid solution was to move previous years' data out of these tables into a historical instance of the DB. This is very time consuming and wasteful. As I read more about the table partitioning option, I thought it might be something that could solve the problem.
And that is why we were asking the questions we asked. It is imperative that you determine what the problem is, not just what you think it is. If you have not really confirmed that the performance degradation was due to old data getting in the way then the first thing you should do is volume testing to confirm that is your only issue.
A possible risk when you begin using partitioning is that you will degrade the performance of some queries that used to work fine. So you will need to perform extensive testing anyway to confirm that you have not introduced new problems.
1) What would be the best partitioning option to use knowing how the tables are set up?
That question can't be answered based on the one table that you provided. As one of the other responders hinted, there are likely referential integrity issues to take into account, as well as the question of the number and types (global or local) of indexes to be used.
The RANGE-Interval code example I provided is the type of partitioning you COULD use for that one table.
2) Can it be set up so that it automatically creates a new partition of the table when a new value of the 'YearID' column is entered into the table? Is this where the interval option is used? I would not be able to do this manually at every site as a new year ('YearID') comes.
Yes - that is what my interval partitioning example would do. The CAVEAT is that ANY data with a future date will cause Oracle to create a new partition. If your user accidentally enters a date for the year 4185, Oracle will dutifully create a new partition for it. Someone with evil intentions could easily insert data that would cause hundreds of partitions to be created.
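A minimal sketch of what that interval setup could look like. The table and column names here are assumptions for illustration, not the poster's actual schema:

```sql
-- Hypothetical table partitioned by a numeric YearID-style column.
CREATE TABLE sales_data (
    sale_id  NUMBER       NOT NULL,
    year_id  NUMBER(4)    NOT NULL,
    amount   NUMBER(12,2)
)
PARTITION BY RANGE (year_id)
INTERVAL (1)
(
    -- one seed partition is required; Oracle creates later ones on demand
    PARTITION p2015 VALUES LESS THAN (2016)
);

-- This row silently creates a brand-new partition for year 4185 --
-- exactly the runaway-partition caveat described above.
INSERT INTO sales_data (sale_id, year_id, amount)
VALUES (1, 4185, 10);
```

A CHECK constraint restricting year_id to a sane range is one way to guard against that runaway-partition problem.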
You not only CAN create partitions manually at every site every year but you can do it remotely and easily with some simple scripts. You can create partitions for the next 20 years in advance if you want and, using deferred segment creation, those won't take up any space until they are used.
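For example, a small PL/SQL script along these lines (table name assumed) could pre-create the next 20 years of partitions; with deferred segment creation in effect, the empty partitions take up no space until rows arrive:

```sql
-- Assumes deferred_segment_creation = TRUE and a range-partitioned
-- (not interval, no MAXVALUE partition) table named sales_data.
-- ADD PARTITION only works when each new upper bound is above the
-- table's current highest bound.
BEGIN
    FOR yr IN 2025 .. 2044 LOOP
        EXECUTE IMMEDIATE
            'ALTER TABLE sales_data ADD PARTITION p' || yr ||
            ' VALUES LESS THAN (' || TO_CHAR(yr + 1) || ')';
    END LOOP;
END;
/
```

A script like this can be pushed to every site from one central location, which is the point about remote maintenance.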
Your "can't do it remotely" argument doesn't fly.
3) Once the partitioning option is turned on, are there specific maintenance/management tasks that a DBA would have to do on this (remember, no DBAs at my sites)?
In the real world there is no such thing as a DB that doesn't need to be monitored or maintained. Like most things Oracle, when it works properly there is little or nothing to be done. When it doesn't work there can be a lot to be done.
1. You haven't said how this 'old' data, or old partitions, will be removed from the system and archived.
2. You haven't said how backup and recovery is done on these systems if you don't have a DBA and can't do it remotely
3. You haven't said how Oracle or OS patches are applied and maintained on these systems without a DBA
When your new system is deployed you will immediately get complaints from some of your sites about performance or something, so you will need to be able to deal with them remotely. And if it turns out there is an issue you didn't discover during testing and you need to change the partitioning, you will have a major problem.
Partitioning is nice but you need to make sure you get it right the first time.
Thanks for all the great info. Knowing the prereqs and the caveats to partitioning really helps me get an idea of what my options are. Unfortunately I was thrown into a problem that existed for years but was ignored until now. I'm hoping that I can play with this some to see what effect this will have on performance.
Here is an approach you can use to sort of 'ease' into it. There is very little risk to this approach. Basically you use range partitioning (manual, not interval) but just create one partition - MAXVALUE.
That sets up the table as partitioned but doesn't make you go through having to actually move any data. You just create a new partitioned table and do an EXCHANGE PARTITION with the existing table. Voila! You now have a partitioned table with only a data dictionary update.
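A sketch of those two steps (all object names here are hypothetical):

```sql
-- 1. Create an empty partitioned twin of the existing table with a
--    single MAXVALUE partition.
CREATE TABLE sales_data_part
PARTITION BY RANGE (year_id)
(
    PARTITION p_max VALUES LESS THAN (MAXVALUE)
)
AS SELECT * FROM sales_data WHERE 1 = 0;

-- 2. Swap the existing heap table's segment into that partition:
--    a data dictionary update, no rows are physically moved.
ALTER TABLE sales_data_part
    EXCHANGE PARTITION p_max WITH TABLE sales_data;

-- 3. Rename so the application keeps using the original table name.
RENAME sales_data TO sales_data_old;
RENAME sales_data_part TO sales_data;
```

For the exchange to succeed the column lists of the two tables must match; INCLUDING INDEXES and WITHOUT VALIDATION are common additions to the EXCHANGE clause depending on what the table carries.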
Then for one of your sites where you think partitioning would help, do a SPLIT PARTITION to create a new partition for some period of data.
If you run into any issue you can quickly recover by doing a MERGE PARTITION to put the data back into the MAXVALUE partition again (a DROP PARTITION would delete the data along with it).
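Sketched out (again with hypothetical names), the split and its undo look like:

```sql
-- Carve the 2023 data out of the catch-all partition at the beta site.
ALTER TABLE sales_data
    SPLIT PARTITION p_max AT (2024)
    INTO (PARTITION p2023, PARTITION p_max);

-- If problems show up, merge the pieces back so everything lives in
-- the MAXVALUE partition again. (A DROP PARTITION would delete the
-- rows, so MERGE is the safe way back.)
ALTER TABLE sales_data
    MERGE PARTITIONS p2023, p_max INTO PARTITION p_max;
```

Note that splitting and merging can invalidate indexes on the affected partitions, so an UPDATE INDEXES clause or a rebuild afterwards may be needed.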
Once you get any issues worked out for your 'beta' site you can then proceed to a roll-out of the partitioning to your other sites.
Having some sites with more than one partition and others with only one doesn't really carry any maintenance risk. The only issues will be with the multiple partitions if you didn't test properly.