3 Replies Latest reply: Jun 10, 2011 2:51 PM by Nik RSS

    Enabling Auditing in Sun Storage Tek 6140


      I would like to know how to enable auditing on the Sun StorageTek 6140. In the same regard, I have the following questions.

      1. How do I enable auditing for the /opt/SUNWsesscs/cli/bin/sscs command?

      2. I found the FMS log under /var/opt/SUNWsefms/log, but it only audits commands such as service, not sscs.

      3. Also, if someone makes changes through the CAM GUI, where are those details captured, i.e., which log?

      In the case of NetApp, there is an audit log that captures all activities/commands entered from the console, an admin host, or FilerView when auditlog.enable is set to on.

      Please suggest.

        • 1. Re: Enabling Auditing in Sun Storage Tek 6140
          You can configure CAM to send e-mail notifications about any configuration changes and other alerts on the array.
          See CAM -> Notification.

          The MAIN problem: CAM sits in the middle between the administrator and the array, so someone else could install their own copy of CAM, register your array, and manage it. To prevent this, you should change the default password on the array.

          • 2. Re: Enabling Auditing in Sun Storage Tek 6140
            Hi Nik,

            OK, can you send me a link to documentation covering the commands under /opt/SUNWsefms/bin, i.e., supportData, sunmc, service, register, rasagent, ras_admin, lsscs, diag, csmservice?

            I have Oracle documentation on sscs only.

            • 3. Re: Enabling Auditing in Sun Storage Tek 6140
              I don't know of official, comprehensive documentation for these commands.

              supportData is used to collect support data: a zip archive with the configuration, states, and events on the array.
              # ./supportData
              usage supportData [-d <identifier> [-l]] <-p <path> -o <output file> [-a]| -c <case id>>
              Valid identifiers are the deviceKey, array name, controller ip number or controller DNS name.
              Use the command 'ras_admin device_list' to find a valid array name and ip number.
              To collect CAM application data instead of array data, use 'localhost' for the identifier.
              If no device is specified, support data from all arrays as well as the application logs will be collected.
              If a device is specified and the -l option is selected, the application logs will be included.
              The -a option will avoid collection of the host data.
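              Putting the usage above together, a typical invocation might look like the following sketch. This only assembles the command line; the path /opt/SUNWsefms/bin and the array name Array-15 are assumptions for illustration, so substitute a real identifier from 'ras_admin device_list' before running it on a CAM host.

              ```shell
              # Sketch: build a supportData command line from the usage above.
              # SD and ARRAY are assumed values for illustration only.
              SD=/opt/SUNWsefms/bin/supportData
              ARRAY=Array-15

              # Collect support data from one array, including the CAM
              # application logs (-l); -p is the output path, -o the filename.
              CMD="$SD -d $ARRAY -l -p /var/tmp -o ${ARRAY}-support.zip"
              echo "$CMD"
              ```

              Omitting -d would collect support data from all registered arrays plus the application logs, per the usage text.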
              *service* - a command for performing special service procedures on the array.
              # ./service

              usage service -d <identifier> -c <command> [-q <qualifier> -t <target>]

              Use the command 'ras_admin device_list' to find a valid identifier.

              ex: service -d Array-15 -c redistribute
              Note: service -d <identifier> -c list lists the available commands for the target device

              # ./service -d XXXXX -c list
              Executing the list command on XXXXX

              Available commands:

              service -d <deviceid> -c fail -t <a|b|tXctrlY|tXdriveY>
              service -d <deviceid> -c revive -t <a|b|tXctrlY|tXdriveY>
              service -d <deviceid> -c reconstruct -t <tXdriveY>
              service -d <deviceid> -c redistribute -q volumes
              service -d <deviceid> -c locate -t <tXdriveY|tX|array|off>
              service -d <deviceid> -c set -q array <name=newname>
              service -d <deviceid> -c set -q redundancy -t <simplex|duplex>
              service -d <deviceid> -c set -q nvsram <region=0xXX> <offset=0xXX> <value=0xXX> [host=0xXX]
              service -d <deviceid> -c read -q nvsram <region=0xXX> [host=0xXX]
              service -d <deviceid> -c reset -t <a|b|tXctrlY|tXbatY|mel|rls|ddc|soc>
              service -d <deviceid> -c reset -q driveChannel -t <channel>
              service -d <deviceid> -c reset -q usm -t <volume>
              service -d <deviceid> -c degrade -q driveChannel -t <channel>
              service -d <deviceid> -c initialize -t <dacstore|txdriveY|vdiskID>
              service -d <deviceid> -c contact [-t <a|b>]
              service -d <deviceid> -c download -t ddc -p <path> -o <output file>
              service -d <deviceid> -c unassign -t <driveid>
              service -d <deviceid> -c print -t <profile|rls|mel>
              service -d <deviceid> -c save -t <iom|mel|profile|rls|state|soc> -p <path> -o <filename>
              service -d <deviceid> -c migrate -q <export|import|cancel|force> -t <virtual disk>
              service -d <deviceid> -c migrate -q <exportDependencies|importDependencies> -t <virtual disk>
              service -d <deviceid> -c replace -q list [-t <disk>]
              service -d <deviceid> -c replace -t <current disk> -q <replacement disk>
              service -d <deviceid> -c adopt [-t <tXdriveY>]
              service -d <deviceid> -c secure -q create -p <outputpath> -o <filename> prefix=<prefix>
              service -d <deviceid> -c secure -q export -p <outputpath> -o <filename>
              service -d <deviceid> -c secure -q import -p <key file including path>
              service -d <deviceid> -c secure -q erase disk=<diskid[,diskid,...]>
              service -d <deviceid> -c recover label=<vol name> manager=<a|b> vdiskID=<vdisk idx> raidLevel=<0|1|3|5|6> capacity=<bytes> segmentSize=<bytes> offset=<blocks> readAhead=<0|1> [drives=<tXdriveY,tXdriveY...> vdiskLabel=<vdisk label>]
              Completion Status: Success
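              Tying the listing above together, a common sequence is to first list what the target device supports and then save a log for review. The sketch below only assembles the command lines; Array-15, /var/tmp, and the /opt/SUNWsefms/bin path are assumed values, so adjust them for your environment before running anything.

              ```shell
              # Sketch: assemble service commands from the usage list above.
              # SVC and ARRAY are assumed values for illustration only.
              SVC=/opt/SUNWsefms/bin/service
              ARRAY=Array-15

              # 1. List the commands this device supports:
              LIST_CMD="$SVC -d $ARRAY -c list"
              # 2. Save the major event log (mel) to a file for later review:
              SAVE_CMD="$SVC -d $ARRAY -c save -t mel -p /var/tmp -o ${ARRAY}-mel.log"
              echo "$LIST_CMD"
              echo "$SAVE_CMD"
              ```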