
    Storage initiators issue - assistance needed

    BVV

      Hi All,

       

      I am trying to connect another server to my repository. When I go to Storage and try to add this server as a storage initiator, I get the warning: StorageDeviceOfflineEvent - Physical disk is missing

       

      Description:  (04/28/2014 11:10:35:768 AM) OVMEVT_007005D_000 Discover storage elements on server [server04] did not return physical disk [NAS (1)] for storage array [Thecus].

       

       

      I do not understand why I can see my storage as two records, NAS (1) and NAS (2). Why do I get this warning on only one of them?

       

      Have you encountered such an issue before? Can you advise how to handle it?

       

      Thank you in advance for your assistance.

       

      Best regards,

       

      BVV

        • 1. Re: Storage initiators issue - assistance needed
          budachst

          If you click on the little triangle to reveal the details of your storage items, it will show you the mapper ID. It might be that there is a stale one in there which is not valid anymore. OVMM has the habit of not removing stale physical device items, even though they may no longer exist.

           

          So check their details and see which one is the "real" one by comparing their mapper IDs.
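
          Outside of the GUI, a quick way to compare them (just a minimal sketch, assuming the standard device-mapper/multipath tools on an OVM server; the host names are simply the ones from this thread) would be:

          # list the device-mapper entries and multipath maps each server actually sees
          [root@server01 ~]# ls -l /dev/mapper
          [root@server01 ~]# multipath -ll

          [root@server04 ~]# ls -l /dev/mapper
          [root@server04 ~]# multipath -ll

          A mapper ID that OVMM still lists but that neither server shows is most likely such a stale entry.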

          • 2. Re: Storage initiators issue - assistance needed
            BVV

            Hi budachst,

             

            I checked, and in fact the two storage records have different IDs and paths. I also checked on the servers:

             

            [root@server01 mapper]# ll /dev/mapper

            total 0

            brw-rw---- 1 root disk 252,   0 Apr  9 13:42 14e4153000000000000000000010000005a2f000000001000

            crw------- 1 root root  10, 236 Apr  9 13:42 control

            brw-rw---- 1 root disk 252,   1 Apr  9 13:43 ovspoolfs

             

            [root@server04 ~]# ll /dev/mapper/

            total 0

            brw-rw---- 1 root disk 252,   2 Apr 28 12:09 14e4153000000000000000000010000005a2f000000001000

            brw-rw---- 1 root disk 252,   0 Apr 28 11:57 36001c230c21fac001af0d650fab47b50

            crw------- 1 root root  10, 236 Apr 28 11:57 control

            brw-rw---- 1 root disk 252,   1 Apr 30 14:22 ovspoolfs

             

            The underlined ID corresponds to the storage that causes no issue, BUT it is the one that has no repository on it. So if I understand it correctly, VM Manager uses the storage that is in fact not "physically" used??

             

            How can I fix it? Do I need to create a new repository and lose all the machines? I can see the NAS (2) partition when I try to create another repository, but it does not allow me to create one there.

             

            Thank you.

             

            Best regards,

            BVV

            • 3. Re: Storage initiators issue - assistance needed
              budachst

              Can you post the output of df -h as well? I suspect that one of the devices is your poolfs, where OVM keeps its cluster information…

              You do know that you'll need two LUNs/devices to operate two servers in a clustered pool, don't you?

               

              What type of storage are you exporting from your Thecus?

              • 4. Re: Storage initiators issue - assistance needed
                BVV

                Hi budachst,

                 

                below is the output of df -h.

                 

                [root@server01 ~]# df -h

                Filesystem            Size  Used Avail Use% Mounted on

                /dev/sda2              65G   13G   49G  21% /

                /dev/sda1              99M   28M   67M  30% /boot

                tmpfs                 852M   16K  852M   1% /dev/shm

                none                  852M  280K  851M   1% /var/lib/xenstored

                192.168.1.55:/raid0/data/pool01

                                      406G  331M  405G   1% /nfsmnt/109657f7-dd89-492d-967b-a76bb628e95c

                /dev/mapper/ovspoolfs

                                       10G  263M  9.8G   3% /poolfsmnt/0004fb0000050000e2db239d42d30d4e

                /dev/mapper/14e4153000000000000000000010000005a2f000000001000

                                      3.7T  2.4T  1.4T  63% /OVS/Repositories/0004fb00000300003db8401f22a98247

                 

                [root@server04 ~]# df -h

                Filesystem            Size  Used Avail Use% Mounted on

                /dev/sda2              65G  822M   61G   2% /

                /dev/sda1              99M   30M   64M  32% /boot

                tmpfs                 530M     0  530M   0% /dev/shm

                none                  530M   40K  530M   1% /var/lib/xenstored

                192.168.1.55:/raid0/data/pool01

                                      406G  331M  405G   1% /nfsmnt/109657f7-dd89-492d-967b-a76bb628e95c

                /dev/mapper/ovspoolfs

                                       10G  263M  9.8G   3% /poolfsmnt/0004fb0000050000e2db239d42d30d4e

                 

                Thecus serves as iSCSI storage.

                 

                I am not exactly sure what you mean by "need two LUNs/devices to operate two servers in a clustered pool"?

                 

                Thank you,

                 

                Best regards,

                 

                BVV

                • 5. Re: Storage initiators issue - assistance needed
                  budachst

                  Seems that you have created a storage repository on server01:

                   

                  /dev/mapper/14e4153000000000000000000010000005a2f000000001000  3.7T  2.4T  1.4T  63% /OVS/Repositories/0004fb00000300003db8401f22a98247

                   

                  All you'd have to do is present the existing repository to the second server as well - that should be all. You can't, of course, create a second repo on top of an already existing one.

                  Create a repo once and present it to all the servers that need it.
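
                  Once it is presented, server04 should see the same mapper device and mount it under /OVS/Repositories, just like server01 does in your df -h output. A quick check on server04 (a sketch, re-using the device ID and repository path from above):

                  [root@server04 ~]# ls -l /dev/mapper/14e4153000000000000000000010000005a2f000000001000
                  [root@server04 ~]# df -h | grep /OVS/Repositories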

                  • 6. Re: Storage initiators issue - assistance needed
                    BVV

                    The problem is that when I try to present this repository, I can't see server04 on the list, although the two other servers are present and can be assigned...

                    • 7. Re: Storage initiators issue - assistance needed
                      budachst

                      Does the storage show up in the storage tab of server04 in OVMM? If not, you could try restarting the ovs-agent on server04 and rediscovering the server.

                      Also, while performing the rediscover, take a look at the ovs-agent's log - that may provide a hint as to why the storage is not recognized on that server.

                       

                      …or simply reboot your server04 and check its storage tab after the restart/rediscover.
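
                      For reference, restarting the agent and watching its log while you trigger the rediscover could look roughly like this (just a sketch, assuming the default OVM Server 3.x service name and log location):

                      [root@server04 ~]# service ovs-agent restart
                      [root@server04 ~]# tail -f /var/log/ovs-agent.log
                      # then run "Rediscover Server" for server04 in OVMM and watch for storage discovery errors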

                      • 8. Re: Storage initiators issue - assistance needed
                        BVV

                        Only NAS (2) is visible in Physical Disks of server04, while the other two servers see both NAS (1) and NAS (2).

                         

                        I even reinstalled the whole OS on server04 and still no luck... :-(

                        • 9. Re: Storage initiators issue - assistance needed
                          budachst

                          What does

                           

                          iscsiadm -m node session

                           

                          give on server01 and server04?

                          • 10. Re: Storage initiators issue - assistance needed
                            BVV

                            ovs_agent.log:

                             

                            [2014-05-04 21:35:54 4007] DEBUG (ocfs2:270) Trying to mount 192.168.1.55:/raid0/data/pool01 to /nfsmnt/109657f7-dd89-492d-967b-a76bb628e95c

                            [2014-05-04 21:35:59 4007] DEBUG (ocfs2:295) 192.168.1.55:/raid0/data/pool01 mounted to /nfsmnt/109657f7-dd89-492d-967b-a76bb628e95c

                            [2014-05-04 21:35:59 4007] DEBUG (ocfs2:221) dmsetup output:

                            [2014-05-04 21:36:00 4007] DEBUG (ocfs2:162) cluster debug: {'/sys/kernel/debug/o2dlm': [], '/sys/kernel/debug/o2net': ['connected_nodes', 'stats', 'sock_containers', 'send_tracking'], '/sys/kernel/debug/o2hb': ['failed_regions', 'quorum_regions', 'live_regions', 'livenodes'], 'service o2cb status': 'Driver for "configfs": Loaded\nFilesystem "configfs": Mounted\nStack glue driver: Loaded\nStack plugin "o2cb": Loaded\nDriver for "ocfs2_dlmfs": Loaded\nFilesystem "ocfs2_dlmfs": Mounted\nChecking O2CB cluster "5597aa8129e50c07": Offline\n'}

                            [2014-05-04 21:36:12 4007] DEBUG (ocfs2:162) cluster debug: {'/sys/kernel/debug/o2dlm': [], '/sys/kernel/debug/o2net': ['connected_nodes', 'stats', 'sock_containers', 'send_tracking'], '/sys/kernel/debug/o2hb': ['0004FB0000050000E2DB239D42D30D4E', 'failed_regions', 'quorum_regions', 'live_regions', 'livenodes'], 'service o2cb status': 'Driver for "configfs": Loaded\nFilesystem "configfs": Mounted\nStack glue driver: Loaded\nStack plugin "o2cb": Loaded\nDriver for "ocfs2_dlmfs": Loaded\nFilesystem "ocfs2_dlmfs": Mounted\nChecking O2CB cluster "5597aa8129e50c07": Online\n  Heartbeat dead threshold: 61\n  Network idle timeout: 60000\n  Network keepalive delay: 2000\n  Network reconnect delay: 2000\n  Heartbeat mode: Global\nChecking O2CB heartbeat: Active\n  0004FB0000050000E2DB239D42D30D4E /dev/dm-2\nNodes in O2CB cluster: 0 1 2 \n'}

                            [2014-05-04 21:36:12 4007] DEBUG (ocfs2:270) Trying to mount /dev/mapper/ovspoolfs to /poolfsmnt/0004fb0000050000e2db239d42d30d4e

                            [2014-05-04 21:36:13 4007] DEBUG (ocfs2:295) /dev/mapper/ovspoolfs mounted to /poolfsmnt/0004fb0000050000e2db239d42d30d4e

                            [2014-05-04 21:36:13 4320] INFO (notificationserver:213) NOTIFICATION SERVER STARTED

                            [2014-05-04 21:36:13 4322] INFO (remaster:140) REMASTER SERVER STARTED

                            [2014-05-04 21:36:13 4324] INFO (monitor:23) MONITOR SERVER STARTED

                            [2014-05-04 21:36:13 4326] INFO (ha:89) HA SERVER STARTED

                            [2014-05-04 21:36:13 4328] INFO (stats:26) STAT SERVER STARTED

                            [2014-05-04 21:36:13 4330] INFO (xmlrpc:306) Oracle VM Agent XMLRPC Server started.

                            [2014-05-04 21:36:13 4330] INFO (xmlrpc:315) Oracle VM Server version: {'release': '3.2.2', 'date': '201302181801', 'build': '520'}, hostname: server04, ip: 192.168.1.54

                            [2014-05-04 21:36:13 4320] DEBUG (notificationserver:237) Trying to connect to manager.

                            [2014-05-04 21:36:13 4320] DEBUG (notificationserver:239) Connected to manager.

                            [2014-05-04 21:36:14 4320] INFO (notificationserver:267) Service started.

                            [2014-05-04 21:36:15 4453] DEBUG (service:76) call start: get_api_version

                            [2014-05-04 21:36:15 4453] DEBUG (service:76) call complete: get_api_version

                            [2014-05-04 21:36:15 4454] DEBUG (service:76) call start: discover_server

                            [2014-05-04 21:36:16 4454] DEBUG (service:76) call complete: discover_server

                            [2014-05-04 21:36:16 4468] DEBUG (service:76) call start: discover_hardware

                            [2014-05-04 21:36:17 4468] DEBUG (service:76) call complete: discover_hardware

                            [2014-05-04 21:36:17 4483] DEBUG (service:76) call start: discover_network

                            [2014-05-04 21:36:17 4483] DEBUG (service:76) call complete: discover_network

                            [2014-05-04 21:36:17 4484] DEBUG (service:76) call start: discover_storage_plugins

                            [2014-05-04 21:36:18 4484] DEBUG (service:76) call complete: discover_storage_plugins

                            [2014-05-04 21:36:18 4324] DEBUG (monitor:36) Cluster state changed from [Unknown] to [DLM_Ready]

                            [2014-05-04 21:36:18 4324] INFO (notification:47) Notification sent: {CLUSTER} {MONITOR} Cluster state changed from [Unknown] to [DLM_Ready]

                            [2014-05-04 21:36:18 4487] DEBUG (service:74) call start: discover_physical_luns('',)

                            [2014-05-04 21:36:18 4320] INFO (notificationserver:139) Sending notification: {CLUSTER} {MONITOR} Cluster state changed from [Unknown] to [DLM_Ready]

                            [2014-05-04 21:36:19 4487] DEBUG (service:76) call complete: discover_physical_luns

                            [2014-05-04 21:36:19 4505] DEBUG (service:74) call start: discover_physical_luns('14e4153000000000000000000010000005a2f000000001000',)

                            [2014-05-04 21:36:19 4505] DEBUG (service:76) call complete: discover_physical_luns

                            [2014-05-04 21:36:19 4517] DEBUG (service:76) call start: discover_repository_db

                            [2014-05-04 21:36:19 4517] DEBUG (service:76) call complete: discover_repository_db

                            [2014-05-04 21:36:20 4518] DEBUG (service:74) call start: storage_plugin_listMountPoints('oracle.ocfs2.OCFS2.OCFS2Plugin', {'status': '', 'admin_user': '', 'admin_host': '', 'uuid': '0004fb000009000014e2c1ddf554966d', 'total_sz': 0, 'admin_passwd': '******', 'free_sz': 0, 'name': '0004fb000009000014e2c1ddf554966d', 'access_host': '', 'storage_type': 'FileSys', 'alloc_sz': 0, 'access_grps': [], 'used_sz': 0, 'storage_desc': ''})

                            [2014-05-04 21:36:20 4518] INFO (storageplugin:109) storage_plugin_listMountPoints(oracle.ocfs2.OCFS2.OCFS2Plugin)

                            [2014-05-04 21:36:20 4518] DEBUG (service:76) call complete: storage_plugin_listMountPoints

                            [2014-05-04 21:36:20 4522] DEBUG (service:74) call start: storage_plugin_listMountPoints('oracle.generic.NFSPlugin.GenericNFSPlugin', {'status': '', 'admin_user': None, 'admin_host': None, 'uuid': '0004fb000009000011712a9a90c98a29', 'total_sz': 0, 'admin_passwd': '******', 'free_sz': 0, 'name': '0004fb000009000011712a9a90c98a29', 'access_host': '192.168.1.55', 'storage_type': 'FileSys', 'alloc_sz': 0, 'access_grps': [], 'used_sz': 0, 'storage_desc': ''})

                            [2014-05-04 21:36:20 4522] INFO (storageplugin:109) storage_plugin_listMountPoints(oracle.generic.NFSPlugin.GenericNFSPlugin)

                            [2014-05-04 21:36:20 4522] DEBUG (nfs_linux:47) cmd = /usr/sbin/showmount --export 192.168.1.55

                            [2014-05-04 21:36:20 4522] DEBUG (service:76) call complete: storage_plugin_listMountPoints

                            [2014-05-04 21:36:21 4529] DEBUG (service:76) call start: get_yum_config

                            [2014-05-04 21:36:21 4529] DEBUG (service:76) call complete: get_yum_config

                            [2014-05-04 21:36:21 4530] DEBUG (service:76) call start: discover_cluster

                            [2014-05-04 21:36:21 4530] DEBUG (service:76) call complete: discover_cluster

                            [2014-05-04 21:36:22 4320] INFO (notificationserver:139) Sending notification: May  4 21:36:22 {NETWORK} net : ADD : eth0 (0)

                             

                            [2014-05-04 21:36:25 4320] INFO (notificationserver:139) Sending notification: May  4 21:36:25 {NETWORK} net : ADD : eth1 (1)

                             

                            [2014-05-04 21:36:27 4541] DEBUG (service:76) call start: discover_cluster

                            [2014-05-04 21:36:27 4541] DEBUG (service:76) call complete: discover_cluster

                            [2014-05-04 21:36:28 4320] INFO (notificationserver:139) Sending notification: May  4 21:36:28 {NETWORK} net : ADD : bond0 (1)

                            • 11. Re: Storage initiators issue - assistance needed
                              BVV

                              [root@server01 ~]# iscsiadm -m node session

                              192.168.1.55:3260,1 iqn.2013-10.bi.accre:RAID1.iscsi0.vg0.iscsi1

                               

                              [root@server04 ~]# iscsiadm -m node session

                              192.168.1.55:3260,1 iqn.2013-10.bi.accre:RAID1.iscsi0.vg0.iscsi1

                              • 12. Re: Storage initiators issue - assistance needed
                                budachst

                                This is really strange… what do you get when you click on the triangle of NAS(1) and NAS(2) on each server?
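
                                It might also be worth rescanning the iSCSI session on server04 and comparing the attached disks with server01, e.g. (a sketch using standard open-iscsi commands):

                                [root@server04 ~]# iscsiadm -m session --rescan
                                [root@server04 ~]# iscsiadm -m session -P 3 | grep -A 2 "Attached scsi disk"

                                If the second LUN doesn't show up there either, the problem is below OVMM, on the iSCSI/Thecus side.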

                                • 13. Re: Storage initiators issue - assistance needed
                                  BVV

                                  Is there any way I can send you the screenshot? OTN refuses to upload a jpeg file...

                                  • 14. Re: Storage initiators issue - assistance needed
                                    budachst

                                    I think if you just copy the text from the windows, it will suffice. Just make sure that it's obvious which text belongs to which storage item on which server.

                                    …way too many "which"s
