With all disks of the same size and the same allocation unit size, there should not be much impact on ASM performance.
Also, rather than keeping several diskgroups, I would suggest keeping one diskgroup for data and another for the FRA, with the control files and redo log files multiplexed across these two diskgroups.
The size of the DG does not have an impact on performance as long as the LUNs carved out are spread across all disks available on the array. From an operational support perspective it is much easier to manage the usage of one diskgroup with all datafiles in that DG. The FRA should have its own diskgroup to keep the recovery-related files in a separate area.
- Natik Ameen
You have the option to migrate online within the same disk group just by adding/removing disks.
We are using external redundancy, and I am hearing from the storage vendor that it is better to restrict LUNs to less than 2 TB because of the OS queue depth size.
An ASM disk cannot be larger than 2 TB.
ASM validates this while adding a disk; if the disk size is more than 2 TB, the operation fails with an ORA-15099 error.
The voting/OCR diskgroup is typically a normal redundancy DG (2 failgroups) plus a quorum failgroup - which means a minimum of 3 disks.
I move storage (and have done numerous times) between storage servers and SANs by using the following basic ASM command:
alter diskgroup <DG_NAME> add failgroup <FG_NAME> disk '<NEW_DISK>' drop disk <DISK_NAME> rebalance power <SETTING>
So basically (for normal or high redundancy): add the new disk(s) to the failgroup, drop the existing disk(s) from the failgroup, and rebalance. For external redundancy, omit the failgroup clause and simply add the new disk(s) and drop the old disk(s).
It works. Works well. And works without requiring a single second of downtime from any of the db instances using ASM/that diskgroup.
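As a concrete sketch of the command template above - the diskgroup, failgroup, and disk names here are hypothetical, so substitute your own:

```sql
-- Hypothetical names: DATA diskgroup, FG1 failgroup, multipath device paths
alter diskgroup DATA
  add failgroup FG1 disk '/dev/mapper/new_san_disk01'  -- new disk on the new SAN
  drop disk DATA_0003                                  -- old disk being retired
  rebalance power 4;

-- Monitor the rebalance; only unmap the old LUN from the host
-- once the operation no longer appears here
select operation, state, power, est_minutes from v$asm_operation;
```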
Thanks very much for your inputs - I will try it on UAT tomorrow and will let you know.
Below is my approach; please give your inputs as well.
We have created a new DG (normal redundancy) for the voting disk/OCR:
$GRID_HOME/bin/ocrconfig -add +ASM_VOCR
$GRID_HOME/bin/ocrconfig -delete +ASM_OCRVOTE
$GRID_HOME/bin/crsctl replace votedisk +ASM_VOCR
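After the commands above, it is worth confirming that the OCR and voting files actually landed in the new diskgroup. These are standard clusterware checks (diskgroup names as in the post):

```shell
# Verify the OCR now lives in +ASM_VOCR and its integrity check passes
$GRID_HOME/bin/ocrcheck

# List the voting files and confirm they are in +ASM_VOCR
$GRID_HOME/bin/crsctl query css votedisk
```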
I have a RAC One Node DB.
I have 7 diskgroups with very unevenly sized LUNs - 200 GB, 250 GB, 500 GB, and 1 TB. Since I have a downtime window to carry out the activity, I am doing the migration in the following manner:
1. Create new diskgroups from 1 TB LUNs on the new storage
2. Mount the DB
3. Backup the datafiles as copies with format "NEWDG",
plus the controlfile/logfile/tempfile/archivelogs and so on
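The copy-based move in step 3 can be sketched in RMAN roughly as follows - +NEWDG is the hypothetical target diskgroup, and the controlfile/redo/temp files still have to be handled separately as noted above:

```sql
-- Run from RMAN with the database mounted (hypothetical target +NEWDG)
BACKUP AS COPY DATABASE FORMAT '+NEWDG';

-- Point the controlfile at the copies instead of the originals
SWITCH DATABASE TO COPY;
```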
I have performed this task hundreds of times.
Why not map the new LUNs, rebalance, and remove the old LUNs?
If you can map the new and old LUNs on the same host, you can perform this work without downtime.
1) Map the new LUNs
2) Configure the permissions of the new LUNs
3) Add the new LUNs to the existing diskgroup
4) After the rebalance process finishes, remove the old LUNs from ASM
5) Remove the LUNs from the host.
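Steps 3 and 4 above can be sketched in SQL like this - the disk path and disk name are hypothetical:

```sql
-- Step 3: add the new LUN to the existing diskgroup (hypothetical path)
alter diskgroup DATA add disk '/dev/mapper/new_lun01';

-- Watch the rebalance triggered by the add
select operation, state, est_minutes from v$asm_operation;

-- Step 4: once no rebalance is running, drop the old disk from ASM
alter diskgroup DATA drop disk DATA_0000;
```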
You can do it in one step:
alter diskgroup <dg_name>
  add disk '<path...>'           -- new luns
  drop disk <asmdisk_name...>    -- old asmdisks
  rebalance power 9 wait;
1. Set rebalance power to 0 for the diskgroup
2. Add the new LUNs
3. Drop the old LUNs
4. Set rebalance power to 9 for the diskgroup with the "wait" option
5. Wait for the rebalance process to finish; then you can remove the old LUNs
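The staged sequence above can be sketched as follows - deferring the rebalance with power 0 lets the add and drop be absorbed into a single data movement in step 4 (disk names and paths are hypothetical):

```sql
-- Steps 1-3: add and drop with rebalancing deferred (power 0)
alter diskgroup DATA add disk '/dev/mapper/new_lun01', '/dev/mapper/new_lun02'
  rebalance power 0;
alter diskgroup DATA drop disk DATA_0000, DATA_0001
  rebalance power 0;

-- Step 4: run one rebalance and block until it completes
alter diskgroup DATA rebalance power 9 wait;

-- Step 5: the old LUNs can now be unmapped from the host
```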
No change is necessary on the database.
The only necessary step on the cluster is to move the voting disk, and it must be done online with all nodes active.
1) Diskgroup +VOTE: 3 LUNs (old)
2) Move the voting disk from +VOTE to diskgroup +DATA
3) Change diskgroup +VOTE, adding the new LUNs to the diskgroup and removing the old LUNs
4) Move the voting disk from +DATA back to +VOTE
An easy task with no downtime and no reconfiguration except adding/removing ASM disks and moving the voting disk.
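The voting-disk moves in steps 2 and 4 use the standard clusterware command (diskgroup names as in the post; the LUN swap in +VOTE in between is an ordinary alter diskgroup add/drop):

```shell
# Step 2: park the voting files in +DATA while +VOTE is rebuilt
$GRID_HOME/bin/crsctl replace votedisk +DATA

# ... swap the LUNs in +VOTE and wait for the rebalance to finish ...

# Step 4: move the voting files back to +VOTE
$GRID_HOME/bin/crsctl replace votedisk +VOTE
```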
I moved the OCR and voting files last week (on an ASM diskgroup) from one storage chassis to another, and I want to echo Levi's and my previous comments - it is easy using the ASM alter diskgroup command.
3 basic steps - move the disks of the two normal redundancy failgroups, then move the quorum disk. E.g.
- alter diskgroup add failgroup <mirror1> disk .. drop .. rebalance ..
- alter diskgroup add failgroup <mirror2> disk .. drop .. rebalance ..
- alter diskgroup add quorum disk .. drop .. rebalance ..
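Filled in with hypothetical disk names, the three templates above look roughly like this for a normal redundancy voting diskgroup:

```sql
-- Hypothetical names: +VOTE diskgroup, two mirror failgroups plus a quorum failgroup
alter diskgroup VOTE add failgroup MIRROR1 disk '/dev/mapper/new_vote1'
  drop disk VOTE_0000 rebalance power 4;

alter diskgroup VOTE add failgroup MIRROR2 disk '/dev/mapper/new_vote2'
  drop disk VOTE_0001 rebalance power 4;

alter diskgroup VOTE add quorum failgroup QUORUM1 disk '/dev/mapper/new_vote3'
  drop disk VOTE_0002 rebalance power 4;
```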
Excellent thanks Billy/Levi