25 Replies, latest reply on Sep 8, 2016 12:45 PM by Dude!
      • 15. Re: ASM normal redundancy

        ASM does not mirror disks.

        • 16. Re: ASM normal redundancy

          1) What are advantages for having each disk an own failure group, compare to having separate failure groups?

          Not sure what you mean by separate failure groups? You can create failure groups that consist of more than one disk device, which is usually done to provide controller path redundancy. ASM mirrors only between failure groups, never within one. For example, if you have 4 disks on controller A and another 4 disks on controller B, then for normal redundancy you would put each controller's disks into their own failure group, so that mirroring happens between controllers A and B and not between storage devices attached to the same controller.
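
          As a sketch, a disk group laid out that way could be created like this (disk group name, failure group names, and device paths are hypothetical):

```sql
-- Normal-redundancy disk group with one failure group per controller,
-- so ASM mirrors extents across controllers, not within one.
CREATE DISKGROUP data NORMAL REDUNDANCY
  FAILGROUP controller_a DISK
    '/dev/mapper/ctla_d1', '/dev/mapper/ctla_d2',
    '/dev/mapper/ctla_d3', '/dev/mapper/ctla_d4'
  FAILGROUP controller_b DISK
    '/dev/mapper/ctlb_d1', '/dev/mapper/ctlb_d2',
    '/dev/mapper/ctlb_d3', '/dev/mapper/ctlb_d4';
```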


          2) Suppose if I have two failure groups ( FAL 1, FAL 2) with two disks each - If one failure group (FAL 1) is crashed (two disks became offline) - Do i lose my data?

          No, of course not. That's the reason for having redundancy. ASM mirrors file extents between disk failure groups using the available free disk space. If one failure group is dead, the data is still accessible in the other failure group. You can even have more than two failure groups with normal redundancy, in which case ASM attempts to split the data evenly among all failure groups, writing extents in round-robin fashion.
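
          To see how your disks map to failure groups and how much free space each still has, you can query the ASM instance, for instance:

```sql
-- Failure group membership, capacity and state per disk; run while
-- connected to the ASM instance.
SELECT g.name AS diskgroup, d.failgroup, d.name AS disk,
       d.total_mb, d.free_mb, d.mode_status
FROM   v$asm_disk d
JOIN   v$asm_diskgroup g ON d.group_number = g.group_number
ORDER  BY g.name, d.failgroup, d.name;
```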


          3) Also as per my current environment architecture (normal redundancy) i.e., every disk an own failure group - In this case If two disk became offline - Do i lose my data? or my ASM stop working?

          That's the same answer as before. When a device fails, it goes offline. By default, ASM drops a disk 3.6 hours after it is taken offline. Since every disk is normally its own failure group and you usually have more than just 2 disks, ASM redundancy will continue to function, but overall capacity will be reduced. If too many devices go offline and redundancy can no longer be maintained, you will essentially run out of disk space. Btw, for best performance you should have at least 4 disks in a disk group.
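
          The 3.6 hour default is the disk_repair_time disk group attribute (11g and later). If a transient outage might last longer, you can raise it before ASM drops the disk; the disk group name here is hypothetical:

```sql
-- Allow offline disks up to 8 hours before ASM drops them.
ALTER DISKGROUP data SET ATTRIBUTE 'disk_repair_time' = '8h';

-- Once the transient failure is fixed, bring the disks back online.
ALTER DISKGROUP data ONLINE ALL;
```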

          • 17. Re: ASM normal redundancy
            SUPRIYO DEY

            Yes, I know ASM mirrors extents. But doing it again at the ASM level is a performance penalty when the storage has already done it.

            • 18. Re: ASM normal redundancy

              Performance penalty for what? Can you show a reference or explain? As far as I understand, it's more expensive, but there is no performance penalty. The more disks involved, the better the performance, at least for reading data and as far as the I/O bus can handle it. It's about I/O kernel contention.

              • 19. Re: ASM normal redundancy

                In ASM you can configure preferred read failure groups, which lets a node read from the extent copy closest to it, even if that copy is a secondary extent. That can be more efficient.
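
                As a sketch, each ASM instance can be told which failure groups to prefer for reads (the disk group name DATA, failure group SITE_A, and instance SID +ASM1 below are hypothetical):

```sql
-- Prefer the local failure group for reads on this ASM instance,
-- e.g. in an extended distance cluster. Format: diskgroup.failgroup
ALTER SYSTEM SET asm_preferred_read_failure_groups = 'DATA.SITE_A'
  SCOPE = SPFILE SID = '+ASM1';
```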

                • 21. Re: ASM normal redundancy

                  The documentation outlines that ASM and RAID striping are complementary to each other. When a SAN or disk array provides striping, it can be used in a manner which is complementary to ASM. ASM mirroring has a small overhead on the server (especially on write performance).

                  • 22. Re: ASM normal redundancy
                    SUPRIYO DEY

                    Yes, write performance.

                    • 23. Re: ASM normal redundancy

                      More overhead means more expensive, but not necessarily a performance penalty.

                      • 24. Re: ASM normal redundancy
                        SUPRIYO DEY

                        Performing the same operation twice, at the ASM and the storage level: mirroring extents.

                        • 25. Re: ASM normal redundancy

                          .... which produces more overhead on write performance, but not necessarily a performance penalty. Just because there is somewhat more overhead does not mean writing is slower. In practice, I have never seen any software or hardware mirroring solution that reduced write performance below that of a single drive. Most mirroring solutions, however, increase read performance.
