7 Replies Latest reply on Apr 6, 2016 5:38 PM by Cindys-Oracle

    Moving EMC storage bay to new secondary datacenter




      Next month we will be moving our EMC storage bay to our new datacenter. Since this is my first time moving a storage bay, I'm not sure what to do on our Solaris boxes.

      All our Solaris systems are Solaris 10 with only ZFS as the filesystem. All the pools are mirrors of two LUNs: one LUN on our bay in the primary datacenter and the other one on our bay in the secondary datacenter.

      So I was thinking that, as far as ZFS is concerned, the "cleanest" way to go about this is to offline the LUN that resides on the bay being moved. This way ZFS won't try to access it anymore.
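
      For what it's worth, a minimal sketch of that ZFS side (the pool name "tank" and the device name below are made up; substitute your own from zpool status):

      ```shell
      # Check the mirror layout first (pool name "tank" is hypothetical):
      zpool status tank

      # Take the mirror half that lives on the moving bay offline;
      # the pool stays up (DEGRADED) on the remaining LUN:
      zpool offline tank c2t600601601234d0

      # After the bay is back up and the LUN is visible again:
      zpool online tank c2t600601601234d0

      # ZFS resilvers only the blocks written while the device was offline:
      zpool status tank
      ```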

      But on the OS level, I'm not sure.

      Do I leave everything as is, so that when the bay shuts down Solaris loses connectivity to the LUNs residing on that bay, and when it comes up again Solaris takes care of it and "reconfigures" the LUNs, after which I can simply online the LUNs in ZFS and let them resilver?


      Do I take the LUNs offline with luxadm, cfgadm, ...... before the storage bay shuts down? And if so, will these LUNs be visible again after the storage bay boots up?
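
      If you do want to detach them at the OS level first, a sketch with cfgadm (the controller c3 and the WWN below are placeholders, and whether cfgadm or luxadm applies depends on your HBA stack):

      ```shell
      # List attachment points and find the fabric devices on the EMC bay:
      cfgadm -al

      # Unconfigure a LUN before the bay shuts down (placeholders!):
      cfgadm -c unconfigure c3::50060160bb102e00

      # Once the bay is back online, reconfigure and rebuild /dev links:
      cfgadm -c configure c3::50060160bb102e00
      devfsadm -C
      ```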


      Is there another way to go about this?


      If someone has any thoughts on this, feel free.





        • 1. Re: Moving EMC storage bay to new secondary datacenter

          Hi Pascal,


          You are asking very good questions, but the most important one is:

          Is it a requirement that your ZFS storage pools remain active when the storage (one mirrored LUN) is moved?


          Thanks, Cindy

          • 2. Re: Moving EMC storage bay to new secondary datacenter

            Hi Cindys,


            I know, that would make things infinitely easier. But alas, yes, there are a bunch of pools on production systems that we can't shut down, so any thoughts would be helpful.





            • 3. Re: Moving EMC storage bay to new secondary datacenter

              I'm not the zfs expert that Cindy is, but I would suggest a starting point of something like this:

              * Offline the disk

              * Disconnect it using cfgadm or something like that

              * Perform the move

              * Reconnect it

              * Online the disk


              If that does not work for some reason, you could use the zdb command to pull the headers on the disks when they pop up, and then use zpool replace to add the correct disk back into the correct pool.  All this does is save you some resilver time, though, because the pool will change while the disk is offline.  You might be just as well off detaching the mirror disk, moving the array, and then reattaching it.  You'd be resilvering all the disks, but the complexity of the move would be much lower.
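
              The detach/reattach variant could look like this (device names are hypothetical; note that the first device named in zpool attach is the existing mirror half you are attaching to):

              ```shell
              # Drop the mirror half on the moving bay entirely:
              zpool detach tank c3tMOVINGd0

              # ... physically move the array ...

              # Reattach it to the surviving half; this triggers a full resilver:
              zpool attach tank c2tPRIMARYd0 c3tMOVINGd0
              zpool status tank
              ```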


              I think there are a lot of ways to do this, just wanted to throw out some ideas on how to proceed.


              Before you do the move, though, your first task should be a zfs send and receive of all the pools, so you can recover them if the single mirror dies somewhere along the way.
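
              A sketch of that safety copy, assuming a Solaris 10 release recent enough for zfs send -R, and a hypothetical backup host and pool:

              ```shell
              # Recursive snapshot of everything in the pool:
              zfs snapshot -r tank@premove

              # Replicate the whole hierarchy to another machine
              # (host "backuphost" and pool "backuppool" are placeholders):
              zfs send -R tank@premove | ssh backuphost zfs receive -Fd backuppool
              ```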


              Sending and Receiving ZFS Data



              Sending and Receiving ZFS Data (Solaris 11)



              How to Perform System Archival and Recovery Procedures with Oracle Solaris 11



              About Backing Up an Oracle Solaris System With Zones Installed



              How to Backup and Restore the Solaris 10 ZFS Root Pool (Doc ID 1020257.1)


              Thanks, Ted

              • 4. Re: Re: Moving EMC storage bay to new secondary datacenter

                If the pools must remain online then I'm a bit concerned. ZFS tracks its active pool devices with device IDs, and not all third-party storage generates or fabricates device IDs. This means that if the storage is moved while the pool is live, ZFS might not be able to find the pool devices because the device IDs change. If ZFS can't find its pool devices, you can imagine that this is a problem for a live pool and can be a catastrophic pool failure. If the pool is mirrored, then I'm less concerned, particularly if the primary mirrored device remains stable, but please make sure that you have current backups of the pool data and that you have identified and labeled the pool devices externally.


                I would highly recommend taking an outage and exporting the pools before you move the storage, so that ZFS has an opportunity to re-read any device information that might have changed. Enterprise storage like EMC is probably fine, but I've seen enough pool damage in the external ZFS community to be paranoid. I wrote a blog recently about the devID problem here.
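
                The export/import route with an outage window is short (pool name hypothetical):

                ```shell
                # During the outage, before the storage moves:
                zpool export tank

                # ... move the bay ...

                # Back on the host: list what is importable, then import by name.
                # Import re-reads device paths and IDs:
                zpool import
                zpool import tank
                ```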


                Before you move any hardware, please be sure that you capture all of the zdb -l output for the pool devices as I provide below, appending an s0 even if the pool is made from d0 devices, just in case you need to recreate the original pool device IDs.

                Thanks, Cindy


                # zdb -l /dev/dsk/c0t5000C500335E2F03d0s0


                LABEL 0


                    timestamp: 1459917047 date = Tue Apr  5 22:30:47 MDT 2016

                    version: 37

                    name: 'pond'

                    state: 0

                    txg: 1921475

                    pool_guid: 6724568105275182337

                    hostid: 2243798384

                    hostname: 'tardis'

                    top_guid: 9481525418723968645

                    guid: 11583200912312915994

                    vdev_children: 1


                    vdev_tree:
                        type: 'mirror'

                        id: 0

                        guid: 9481525418723968645

                        whole_disk: 0

                        metaslab_array: 27

                        metaslab_shift: 31

                        ashift: 9

                        asize: 298500227072

                        is_log: 0

                        create_txg: 4


                        children[0]:
                            type: 'disk'

                            id: 0

                            guid: 11583200912312915994

                            path: '/dev/dsk/c0t5000C500335E2F03d0s0'

                            devid: 'id1,sd@n5000c500335e2f03/a'

                            phys_path: '/scsi_vhci/disk@g5000c500335e2f03:a'

                            devchassis: '/dev/chassis/SYS/MB/HDD7/disk'

                            chassissn: '1120BDRCD7'

                            location: '/SYS/MB/HDD7'

                            whole_disk: 1

                            DTL: 121

                            create_txg: 4


                        children[1]:
                            type: 'disk'

                            id: 1

                            guid: 2142076025814073444

                            path: '/dev/dsk/c0t5000C500335FAFA7d0s0'

                            devid: 'id1,sd@n5000c500335fafa7/a'

                            phys_path: '/scsi_vhci/disk@g5000c500335fafa7:a'

                            devchassis: '/dev/chassis/SYS/MB/HDD2/disk'

                            chassissn: '1120BDRCD7'

                            location: '/SYS/MB/HDD2'

                            whole_disk: 1

                            DTL: 273

                            create_txg: 4

                            msgid: 'ZFS-8000-QJ'


                LABEL 1 - CONFIG MATCHES LABEL 0



                LABEL 2 - CONFIG MATCHES LABEL 0



                LABEL 3 - CONFIG MATCHES LABEL 0



                • 5. Re: Moving EMC storage bay to new secondary datacenter

                  Another option, if you have a set of spare devices, would be to create a 3-way mirror and then outright detach the pool devices to be moved. That way you have an additional safety net for the live pools during the move. If the storage move goes well, you can re-attach the original (but detached) devices that were moved and detach the temporary mirrored devices. Thanks, Cindy
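
                  Sketched with hypothetical device names (SPARE being the temporary third copy):

                  ```shell
                  # Grow the 2-way mirror to 3-way; wait for the resilver to finish:
                  zpool attach tank c2tPRIMARYd0 c4tSPAREd0

                  # Now it is safe to drop and move the LUN on the bay:
                  zpool detach tank c3tMOVINGd0
                  # ... move the array ...

                  # Reattach the moved LUN, then retire the temporary copy:
                  zpool attach tank c2tPRIMARYd0 c3tMOVINGd0
                  zpool detach tank c4tSPAREd0
                  ```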

                  • 6. Re: Moving EMC storage bay to new secondary datacenter

                    Tedw, thanks for your input. Alas, we don't have the space for the zfs receive; it's a lot of data :-(



                    Yeah, I forgot to mention that we only have 2-way mirrors and a few concatenations of 2-way mirrors (don't ask why, this happened before I got here).

                    The 3-way mirror is not an option; we're talking several TBs of data and we don't have the disk space :-(

                    Thanks for pointing me to your blog, I will definitely be reading it before the move. The external label is also a very good idea, something I planned to do a long time ago but never got around to, you know how it is ;-)

                    Normally EMC does not tamper with the device IDs upon a shutdown/restart of a storage bay, so this should be fine.

                    Anyhow, I'm gonna test some procedures and I'll update you guys on my findings ;-)





                    • 7. Re: Moving EMC storage bay to new secondary datacenter

                      It's good to know that EMC persists the device IDs, and running a test is a very good idea. I would also recommend keeping a copy of the zdb -l output for the pools that are on the devices to be moved.


                      Thanks, Cindy