
    Zpool create throws Segmentation Fault (core dumped) on Solaris 10 Galaxy

    1005748
      Hi All,


      I am facing a segmentation fault while creating a pool via the zpool command on Solaris 10.
      It's a Galaxy machine.

      OS version : 5.10

      # zpool create -f storage c5t5d0
      Segmentation Fault - core dumped



      Can anyone help me out?

      Thanks,
      Basant

      Edited by: 1002745 on Apr 26, 2013 5:54 AM
        • 1. Re: Zpool create throws Segmentation Fault (core dumped) on Solaris 10 Galaxy
          Cindys-Oracle
          Hi--

          Galaxy is an x4200, I think. Can you identify the full Solaris 10 release name:

          # cat /etc/release

          Is there any intervening layer, like some virtualization feature or volume management software?

          Thanks, Cindy
          • 2. Re: Zpool create throws Segmentation Fault (core dumped) on Solaris 10 Galaxy
            1005748
            Hello Cindy,

            Thanks for looking into this !!!

            Yes, it is a Sun X4200 Galaxy machine, and:


            # cat /etc/release
            Solaris 10 10/08 s10x_u6wos_07b X86
            Copyright 2008 Sun Microsystems, Inc. All Rights Reserved.
            Use is subject to license terms.
            Assembled 27 October 2008
            #

            Is there any intervening layer, like some virtualization feature or volume management software?

            ->> I didn't get this. Please tell me the software names?


            Thanks,
            Basant
            • 3. Re: Zpool create throws Segmentation Fault (core dumped) on Solaris 10 Galaxy
              Cindys-Oracle
              Hi Basant,

              Virtualization software like VirtualBox or VMware, or a volume management product like Veritas or SVM.

              Can you provide the disk format output?

              Also, a quick test of the following would help isolate this as a device-related problem:

              1. Create a 200 MB file, like this:

              # mkdir /disks

              # mkfile 200m /disks/file.1

              2. Create a test pool:

              # zpool create test /disks/file.1
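
              If the create on the file succeeds, a quick sketch of verifying and cleaning up the test pool afterwards (the pool and file names match the steps above):

              # zpool status test
              # zpool destroy test
              # rm /disks/file.1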

              Let us know the results.

              Thanks, Cindy
              • 4. Re: Zpool create throws Segmentation Fault (core dumped) on Solaris 10 Galaxy
                1005748
                Hi Cindy,

                # format output :

                # format
                Searching for disks...done


                AVAILABLE DISK SELECTIONS:
                0. c4t0d0 <DEFAULT cyl 17747 alt 2 hd 255 sec 63>
                /pci@79,0/pci1022,7458@11/pci1000,3060@2/sd@0,0
                1. c5t3d0 <FUJITSU-MAW3300NCSUN300G-1703-279.40GB>
                /pci@0,0/pci10de,5d@d/pci10b5,8114@0/pci1000,10b0@8/sd@3,0
                2. c5t4d0 <FUJITSU-MAW3300NCSUN300G-1703-279.40GB>
                /pci@0,0/pci10de,5d@d/pci10b5,8114@0/pci1000,10b0@8/sd@4,0
                3. c5t5d0 <FUJITSU-MAW3300NCSUN300G-1703-279.40GB>
                /pci@0,0/pci10de,5d@d/pci10b5,8114@0/pci1000,10b0@8/sd@5,0
                4. c5t8d0 <SEAGATE-ST330000LSUN300G-045A-279.40GB>
                /pci@0,0/pci10de,5d@d/pci10b5,8114@0/pci1000,10b0@8/sd@8,0
                5. c5t9d0 <SEAGATE-ST330000LSUN300G-045A-279.40GB>
                /pci@0,0/pci10de,5d@d/pci10b5,8114@0/pci1000,10b0@8/sd@9,0
                6. c5t10d0 <FUJITSU-MAW3300NCSUN300G-1703-279.40GB>
                /pci@0,0/pci10de,5d@d/pci10b5,8114@0/pci1000,10b0@8/sd@a,0
                7. c5t11d0 <SEAGATE-ST330000LSUN300G-045A-279.40GB>
                /pci@0,0/pci10de,5d@d/pci10b5,8114@0/pci1000,10b0@8/sd@b,0
                Specify disk (enter its number): 1
                selecting c5t3d0
                [disk formatted]


                FORMAT MENU:
                disk - select a disk
                type - select (define) a disk type
                partition - select (define) a partition table
                current - describe the current disk
                format - format and analyze the disk
                fdisk - run the fdisk program
                repair - repair a defective sector
                label - write label to the disk
                analyze - surface analysis
                defect - defect list management
                backup - search for backup labels
                verify - read and display labels
                inquiry - show vendor, product and revision
                volname - set 8-character volume name
                !<cmd> - execute <cmd>, then return
                quit
                format> p


                PARTITION MENU:
                0 - change `0' partition
                1 - change `1' partition
                2 - change `2' partition
                3 - change `3' partition
                4 - change `4' partition
                5 - change `5' partition
                6 - change `6' partition
                select - select a predefined table
                modify - modify a predefined partition table
                name - name the current table
                print - display the current table
                label - write partition map and label to the disk
                !<cmd> - execute <cmd>, then return
                quit
                partition> p
                Current partition table (original):
                Total disk sectors available: 585921081 + 16384 (reserved sectors)

                Part Tag Flag First Sector Size Last Sector
                0 usr wm 256 279.39GB 585921081
                1 unassigned wm 0 0 0
                2 unassigned wm 0 0 0
                3 unassigned wm 0 0 0
                4 unassigned wm 0 0 0
                5 unassigned wm 0 0 0
                6 unassigned wm 0 0 0
                8 reserved wm 585921082 8.00MB 585937465

                partition>


                # mkdir /storage
                OK

                # mkfile 200 /storage/file

                OK

                # zpool create test /storage/file
                Segmentation Fault - core dumped
                #


                Thanks,
                Basant
                • 5. Re: Zpool create throws Segmentation Fault (core dumped) on Solaris 10 Galaxy
                  Cindys-Oracle
                  Okay, thanks. I'm somewhat confused now by these results.

                  This is an older Solaris 10 release and I'm remembering another problem that is fixed in a later release.

                  Please try this workaround, which disables device-in-use checking. It is very important that you make sure you are using an unused disk, because the device error checking will be disabled:

                  1. Disable device-in-use checking:

                  # NOINUSE_CHECK=1; export NOINUSE_CHECK

                  2. Retry the zpool create:

                  # zpool create -f storage c5t5d0
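
                  As a side note, the variable can also be set for just the one command, so device-in-use checking is not left disabled for the rest of the shell session; a minimal sketch, using the same disk as above:

                  # NOINUSE_CHECK=1 zpool create -f storage c5t5d0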

                  Let us know the results.

                  Thanks, Cindy
                  • 6. Re: Zpool create throws Segmentation Fault (core dumped) on Solaris 10 Galaxy
                    1005748
                    Hi Cindy,

                    Thanks for the prompt reply.
                    I will let you know the result of zpool after disabling the device check on Monday (29 April), because I have left the office.
                    Below are some of my understanding and queries; please have a look:


                    # format
                    Searching for disks...done

                    AVAILABLE DISK SELECTIONS:
                    0. c4t0d0 <DEFAULT cyl 17747 alt 2 hd 255 sec 63>
                    /pci@79,0/pci1022,7458@11/pci1000,3060@2/sd@0,0
                    1. c5t3d0 <FUJITSU-MAW3300NCSUN300G-1703-279.40GB>
                    /pci@0,0/pci10de,5d@d/pci10b5,8114@0/pci1000,10b0@8/sd@3,0
                    2. c5t4d0 <FUJITSU-MAW3300NCSUN300G-1703-279.40GB>
                    /pci@0,0/pci10de,5d@d/pci10b5,8114@0/pci1000,10b0@8/sd@4,0
                    3. c5t5d0 <FUJITSU-MAW3300NCSUN300G-1703-279.40GB>
                    /pci@0,0/pci10de,5d@d/pci10b5,8114@0/pci1000,10b0@8/sd@5,0
                    4. c5t8d0 <SEAGATE-ST330000LSUN300G-045A-279.40GB>
                    /pci@0,0/pci10de,5d@d/pci10b5,8114@0/pci1000,10b0@8/sd@8,0
                    5. c5t9d0 <SEAGATE-ST330000LSUN300G-045A-279.40GB>
                    /pci@0,0/pci10de,5d@d/pci10b5,8114@0/pci1000,10b0@8/sd@9,0
                    6. c5t10d0 <FUJITSU-MAW3300NCSUN300G-1703-279.40GB>
                    /pci@0,0/pci10de,5d@d/pci10b5,8114@0/pci1000,10b0@8/sd@a,0
                    7. c5t11d0 <SEAGATE-ST330000LSUN300G-045A-279.40GB>
                    /pci@0,0/pci10de,5d@d/pci10b5,8114@0/pci1000,10b0@8/sd@b,0
                    Specify disk (enter its number):

                    ----------------

                    From the format output, only the c4t0d0 disk is being used as the internal disk and for internal storage.
                    The rest of the disks are external, and I would like to use them for storage (to expand the storage size).

                    In fact, I would like to create storage for 6 disks, like:
                    # zpool create -f storage c5t3d0 c5t8d0 c5t10d0

                    ----------


                    Could you please explain what the following does:

                    # NOINUSE_CHECK=1; export NOINUSE_CHECK

                    I ask because I need to perform these steps on live servers (production servers).

                    Thanks,
                    Basant
                    • 7. Re: Zpool create throws Segmentation Fault (core dumped) on Solaris 10 Galaxy
                      Cindys-Oracle
                      Actually, I had a chance to confirm that if the zpool create on a file also core dumped, then it's probably not a device-in-use issue,
                      so let's skip that test for now.

                      When you can get back to the office, provide the results of this command:

                      # pstack core

                      It would have been faster to start with this. :-)
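
                      If no core file shows up in the current directory, coreadm shows where core dumps are being written, and file can confirm which command produced a given core file; a quick optional check:

                      # coreadm
                      # file core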

                      Thanks, Cindy
                      • 8. Re: Zpool create throws Segmentation Fault (core dumped) on Solaris 10 Galaxy
                        1005748
                        Hi Cindy,

                        I have the pstack output of the core file. Please have a look:
                        ---------------------------------------------
                        core 'core' of 5613: zpool create -f storage c5t3d0 c5t8d0 c5t10d0
                        ----------------- lwp# 1 / thread# 1 --------------------
                        fef13855 nvlist_next_nvpair (8047eca, 0) + 45
                        fef37c0f zfs_valid_proplist (80dd0c8, 1, 8047eca, 0, 0, 0) + 3e
                        fef44893 zpool_create (80dd0c8, 8047ed4, 80cee50, 0) + 13e
                        080540a3 ???????? (6, 8047e10)
                        0805924f main (7, 8047e0c, 8047e2c) + 134
                        08053292 ???????? (7, 8047ec4, 8047eca, 8047ed1, 8047ed4, 8047edc)
                        ----------------- lwp# 2 / thread# 2 --------------------
                        fedb8fe3 doorreturn (0, 0, 0, 0) + 23
                        fea80d3d door_create_func (0) + 29
                        fedb59a9 thrsetup (fe760200) + 4e
                        fedb5c90 lwpstart (fe760200, 0, 0, fe86eff8, fedb5c90, fe760200)
                        ----------------- lwp# 3 / thread# 3 --------------------
                        fedb5ceb __lwp_park (814c5b0, 814c5c0, 0) + b
                        fedb04f0 cond_wait_queue (814c5b0, 814c5c0, 0) + 5e
                        fedb09c4 condwait (814c5b0, 814c5c0) + 64
                        fedb0a06 cond_wait (814c5b0, 814c5c0) + 21
                        feb12bc8 subscriber_event_handler (80f1f88) + 3f
                        fedb59a9 thrsetup (fe760a00) + 4e
                        fedb5c90 lwpstart (fe760a00, 0, 0, fe75eff8, fedb5c90, fe760a00)
                        ----------------- lwp# 4 / thread# 4 --------------------
                        fedb8717 __pollsys (fe65ff78, 1, 0, 0) + 7
                        fed5f4f2 poll (fe65ff78, 1, ffffffff) + 52
                        feeaf300 watch_mnttab (0) + af
                        fedb59a9 thrsetup (fe761200) + 4e
                        fedb5c90 lwpstart (fe761200, 0, 0, fe65fff8, fedb5c90, fe761200)
                        ----------------- lwp# 5 / thread# 5 --------------------
                        fedb5ceb __lwp_park (814c550, 814c560, 0) + b
                        fedb04f0 cond_wait_queue (814c550, 814c560, 0) + 5e
                        fedb09c4 condwait (814c550, 814c560) + 64
                        fedb0a06 cond_wait (814c550, 814c560) + 21
                        feb12bc8 subscriber_event_handler (80f1b88) + 3f
                        fedb59a9 thrsetup (fe761a00) + 4e
                        fedb5c90 lwpstart (fe761a00, 0, 0, fe46eff8, fedb5c90, fe761a00)
                        #

                        -----------------------------------------

                        Thanks,
                        Basant
                        • 9. Re: Zpool create throws Segmentation Fault (core dumped) on Solaris 10 Galaxy
                          1005748
                          Hi Cindy,

                          Did you get time to have a look at the pstack output of the core file?

                          Any suggestions?

                          Thanks,
                          Basant
                          • 10. Re: Zpool create throws Segmentation Fault (core dumped) on Solaris 10 Galaxy
                            800381
                            I'm not familiar with the ZFS source, but that looks like a strange place to SEGV. From the function names, it looks like the process is parsing properties - something I think it would do before doing the actual creation.

                            This is just a shot in the dark, but I'm wondering if you're getting a library mismatch or something similar from your environment.

                            What do "pldd core", "pargs core", and "pargs -e core" show? I'm wond
                            • 11. Re: Zpool create throws Segmentation Fault (core dumped) on Solaris 10 Galaxy
                              1005748
                              Hi,

                              I have run zpool create with the c5t11d0 disk.

                              Please have a look at the results of these commands:

                              # pldd core
                              core 'core' of 23551: zpool create -f storage c5t11d0
                              /lib/libumem.so.1
                              /lib/libzfs.so.2
                              /lib/libnvpair.so.1
                              /lib/libdevid.so.1
                              /lib/libefi.so.1
                              /usr/lib/libdiskmgt.so.1
                              /lib/libuutil.so.1
                              /lib/libc.so.1
                              /lib/libm.so.2
                              /lib/libdevinfo.so.1
                              /lib/libgen.so.1
                              /lib/libavl.so.1
                              /lib/libnsl.so.1
                              /lib/libuuid.so.1
                              /lib/libadm.so.1
                              /lib/libkstat.so.1
                              /lib/libsysevent.so.1
                              /usr/lib/libvolmgt.so.1
                              /lib/libsec.so.1
                              /lib/libsocket.so.1
                              /lib/libdoor.so.1
                              /lib/libiscsitgt.so.1
                              /lib/libscf.so.1
                              /usr/lib/libxml2.so.2
                              /lib/libpthread.so.1
                              /lib/libz.so.1
                              /usr/lib/libshare.so.1
                              /lib/libmeta.so.1
                              /lib/libmd.so.1
                              /lib/libmp.so.2
                              #

                              # pargs core
                              core 'core' of 23551: zpool create -f storage c5t11d0
                              argv[0]: zpool
                              argv[1]: create
                              argv[2]: -f
                              argv[3]: storage
                              argv[4]: c5t11d0


                              # pargs -e core
                              core 'core' of 23551: zpool create -f storage c5t11d0
                              envp[0]: HOME=/
                              envp[1]: LOGNAME=root
                              envp[2]: MAIL=/var/mail//root
                              envp[3]: PATH=/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin
                              envp[4]: SHELL=/sbin/sh
                              envp[5]: SSH_CLIENT=10.48.41.161 58252 22
                              envp[6]: SSH_CONNECTION=10.48.41.161 58252 10.48.41.142 22
                              envp[7]: SSH_TTY=/dev/pts/1
                              envp[8]: TERM=vt100
                              envp[9]: TZ=Europe/Brussels
                              envp[10]: USER=root
                              #


                              Thanks,
                              Basant
                              • 12. Re: Zpool create throws Segmentation Fault (core dumped) on Solaris 10 Galaxy
                                Cindys-Oracle
                                Hi Basant,

                                We haven't seen a zpool create operation core dump in zfs_valid_proplist before and it might take some time to look through old Solaris 10 issues.

                                Did any of these disks come from a Solaris system that is later than Solaris 10 10/08 with pools of later versions? That's the only thing that comes to mind at the moment. The idea being that some later pool version with future properties is on this disk. If so, we might need to wipe the disk with dd and try the zpool create again.
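
                                 If it comes to that, the usual way is to zero out the start of the disk; a rough sketch only, assuming c5t5d0 is the unused disk (please double-check the device name first, and note that an EFI label also keeps a backup copy at the end of the disk, which this does not clear):

                                 # dd if=/dev/zero of=/dev/rdsk/c5t5d0p0 bs=1024k count=100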

                                Let me know if this idea is possible.

                                Thanks, Cindy
                                • 13. Re: Zpool create throws Segmentation Fault (core dumped) on Solaris 10 Galaxy
                                  1005748
                                  Hi Cindy,

                                  Did any of these disks come from a Solaris system that is later than Solaris 10 10/08 with pools of later versions?

                                  ->> I am not aware of these kinds of things, because this server has been running for 2 to 3 years. I don't know about the above info.

                                  If so, we might need to wipe the disk with dd and try the zpool create again.

                                  ->> Actually, I tried converting the disk label from EFI to VTOC. In this step I deleted the partition, created it again, and then changed the label.
                                  But finally I saw it automatically go back to an EFI label. I could not understand this strange situation.

                                  For the process of wiping the disk with dd, is there any other procedure?
                                  Will that procedure harm the server?

                                  Please suggest.
                                  I am looking forward to your reply.

                                  Thanks in Adv,
                                  Basant
                                  • 14. Re: Zpool create throws Segmentation Fault (core dumped) on Solaris 10 Galaxy
                                    Cindys-Oracle
                                    Hi Basant,

                                    Yes, the EFI-to-VTOC label reverting back to EFI sounds vaguely familiar, but I'm just not remembering
                                    from so long ago.

                                    Let's recap:

                                    -No virtualization or volume manager, correct?
                                    -I wonder if there was some other file system format on this disk.

                                    The zfs_valid_proplist core dump is strange and I can't find any bugs related to this specific issue.

                                    Here's another idea. Can you change the disk back to VTOC with one large slice 0 and create
                                    a UFS file system, like this:

                                    # newfs /dev/rdsk/c5t5d0s0

                                    If the above completes successfully, try this:

                                    # zpool create -f test c5t5d0s0
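
                                     If the label keeps reverting from VTOC back to EFI, relabeling the disk in format's expert mode is one way to force an SMI (VTOC) label; roughly, from memory (the exact prompts may vary by release):

                                     # format -e c5t5d0
                                     format> label
                                     [0] SMI Label
                                     [1] EFI Label
                                     Specify Label type[1]: 0
                                     format> quit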

                                     Wiping the disk label with dd is a little dangerous because it doesn't do any error checking, so please try the above first.