Veritas™ Services and Operations Readiness Tools (SORT)
Custom Reports Using Data Collectors
Product: Storage Foundation for Oracle
Platform: Linux
Product version: 5.1
Product component: Volume Manager
Check category: All
Check category: Availability
Check description: Checks for the use of instant snapshots and SmartMove. Data corruption can occur with Storage Foundation 5.1 when you use instant snapshots and SmartMove with certain combinations of I/O and administrative operations.
Check procedure:
Check recommendation: If you plan to use instant snapshots (either full-sized or space-optimized), turn off SmartMove. Enter:
# vxdefault set usefssmartmove none
To verify that SmartMove is turned off, enter:
# vxdefault list usefssmartmove
Learn More...
About SmartMove
Check category: Availability
Check description: Checks whether the vxconfigbackupd daemon is running on the system.
Check procedure:
Check recommendation: It is recommended that you start the vxconfigbackupd daemon. The vxconfigbackupd daemon monitors changes to the disk group configuration in Volume Manager (VxVM) and stores its output in the configuration directory. This assists in recovering lost or corrupt disk groups or volumes when there are no backup copies of their configuration. Restart the vxconfigbackupd daemon by running the following command:
# /etc/vx/bin/vxconfigbackupd &
Learn More...
About vxconfigbackupd daemon
Check category: Availability
Check description: Checks whether all the disks in the VxVM disk group are visible on the cluster node.
Check procedure:
Check recommendation: Make sure that all VxVM disks have been discovered. Do the following:
1. Run an operating system-specific disk discovery command such as lsdev (AIX), ioscan (HP-UX), fdisk (Linux), or format or devfsadm (Solaris).
2. Run the vxdctl enable command:
# vxdctl enable
Check category: Availability
Check description: Checks for valid Volume Manager (VxVM) licenses on the cluster systems.
Check procedure:
Check recommendation: Use the /opt/VRTS/bin/vxlicinst utility to install a valid VxVM license key.
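For example, assuming you have obtained a valid license key from Veritas (the key below is a placeholder), a typical sequence is to install the key and then confirm the installed licenses:
# /opt/VRTS/bin/vxlicinst -k <license_key>
# /opt/VRTS/bin/vxlicrep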
Check category: Availability
Check description: On the local system where the DiskGroup resource is offline, it checks whether the unique disk identifiers (UDIDs) for the disks match those on the online systems.
Check procedure:
Check recommendation: Make sure that the UDIDs for the disks on the cluster nodes match. To find the UDID for a disk, enter the following command:
# vxdisk -s list disk_name
Note: The check does not handle SRDF replication. In the case of SRDF replication, the user should use the 'clearclone=1' attribute (available in SFHA 6.0.5 onwards), which clears the clone flag and updates the disk UDID.
Check category: Availability
Check description: Verifies that all the disks in the disk group in a campus cluster have site names. Also verifies that all volumes on the disk group have the same number of plexes on each site in the campus cluster.
Check procedure:
Check recommendation: Make sure that the site name is added to each disk in a disk group. To verify the site name, enter the following command:
# vxdisk -s list disk_name
On each site in the campus cluster, make sure that all volumes on the disk group have the same number of plexes. To verify the plex and subdisk information of a volume created on a disk group, enter the following command:
# vxprint -g disk_group
Check category: Availability
Check description: Checks whether all the disks are visible to all the nodes in a cluster.
Check procedure:
Check recommendation: Make sure that all the disks are connected to all the nodes in the cluster. Run operating system-specific disk discovery commands such as lsdev (AIX), ioscan (HP-UX), fdisk (Linux), or format or devfsadm (Solaris).
If the disks are not visible, connect the disks to the nodes.
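As an illustrative sketch for a Linux node (adjust the discovery command for other platforms as listed above), after connecting the disks you might run:
# fdisk -l
# vxdctl enable
# vxdisk -o alldgs list
The last command lists all disks, including those in deported disk groups, so you can confirm that the new disks are visible on every node.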
Check category: Availability
Check description: Checks whether duplicate disk groups are configured on the specified nodes.
Check procedure:
Check recommendation: To facilitate successful failover, make sure that there is only one disk group name configured for the specified node. To list the disk groups on a system, enter the following command:
# vxdg list
Check category: Availability
Check description: Checks for a missing Array Support Library (ASL) for the disk array connected to the system.
Check procedure:
Check recommendation: It is recommended that you install the missing ASL for the connected disk array on the system.
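To see which ASLs are currently available to DMP and how the attached arrays are being claimed, the following commands can be used as a starting point (the missing ASL package itself must be obtained from the array vendor or the Veritas SORT site):
# vxddladm listsupport all
# vxdmpadm listenclosure all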
Check category: Availability
Check description: Checks the completeness of boot volumes, and verifies that the plex size is at least equal to the volume size.
Check procedure:
Check recommendation: Fix the boot configuration for the boot volumes that are present on the system boot disk group; the volume size should not be greater than the plex size.
Check category: Availability
Check description: Data corruption may occur on Cross-platform Data Sharing (CDS) disks in SFHA versions prior to 5.1SP1 when backup labels get written to the user data block location on the disk.
Check procedure:
Check recommendation: Please refer to the technote in the documents section for the set of recommendations.
Learn More...
Refer to technote for more information
Check category: Availability
Check description: On the AIX, HP-UX, and Linux platforms, any disk larger than 1 terabyte that is initialized as a CDS (Cross-platform Data Sharing) disk under VxVM may experience block-level data corruption at the 1 terabyte boundary. The data corruption pattern matches the CDS disk's backup label, which is similar to the Solaris SMI label. Resizing a CDS disk from less than 1 terabyte to more than 1 terabyte can also lead to this issue. The issue applies only to pre-5.1 SP1 releases.
Check procedure:
Check recommendation: Upgrade VxVM to version 5.1SP1RP1 or later, which supports disks larger than 1 terabyte across platforms with the EFI-GPT label. VxVM 5.1SP1RP1 also allows CDS migration to other UNIX platforms. As a workaround, move (relocate) the user data to a CDS disk smaller than 1 terabyte and then migrate to other UNIX platforms if required.
Learn More...
vxdisk: manual page
Check category: Availability
Check description: Checks for non-mirrored concatenated volumes consisting of multiple disks.
Check procedure:
Check recommendation: It is recommended that you mirror the concatenated volumes. That way, if one of the disks in the volume fails, you will not lose the volume.
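As a hedged example, a mirror can usually be added to an existing concatenated volume with a single vxassist command (<diskgroup> and <volume> are placeholders):
# vxassist -g <diskgroup> mirror <volume>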
Learn More...
Adding a mirror to a volume
Check category: Availability
Check description: Checks for concat volumes whose LUNs span two or more disk arrays.
Check procedure:
Check recommendation: Reconfigure the volume(s) so that all LUNs on the volume are exported by a single storage array. When a concat volume's component LUNs span two or more arrays, failure of any one array brings the entire volume offline. Therefore, it is recommended that all LUNs in a concatenated volume be exported from one array.
The high-level procedure is as follows:
1. Decide which array you want the volume to reside on (referred to hereafter as Array1).
2. Identify and record the name and size of the LUN(s) to be replaced, that is, those LUN(s) exported by arrays other than Array1.
3. Export new, or unused existing, LUN(s) to the server from Array1. Each LUN will typically be the same size and redundancy as the LUN to be replaced.
4. Initialize those LUNs as VxVM disks, typically with the vxdisksetup command.
5. Use the vxdg adddisk command to add those VxVM disks to the disk group.
6. Use the vxconfigbackup command to back up the disk group configuration.
7. Use the vxsd mv command to move the contents of the old LUN onto the new LUN. This command operates online while the volume is active and in use.
8. Optionally, remove the replaced LUN(s) from the disk group. An example command sequence for steps 4 through 7 follows.
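The following is a minimal sketch of steps 4 through 7; the device, disk, and subdisk names are placeholders, and vxdisksetup typically resides in /etc/vx/bin:
# /etc/vx/bin/vxdisksetup -i <new_device>
# vxdg -g <diskgroup> adddisk <new_diskname>=<new_device>
# vxconfigbackup <diskgroup>
# vxsd -g <diskgroup> mv <old_subdisk> <new_subdisk>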
Learn More...
vxdisksetup: man page
Check category: Availability
Check description: Checks whether the disk arrays on which the boot disks reside are listed in the hardware compatibility list (HCL) as supported for storage area network (SAN) bootability. Note: This check is only performed on Linux and Solaris systems.
Check procedure:
Check recommendation: Disks in the boot disk group do not support SAN bootability. The disk arrays where the disks in a disk group reside should support SAN bootability and should be listed in the HCL as supporting it.
Check category: Availability
Check description: Verifies whether mirroring is done with disks coming from a single disk array.
Check procedure:
Check recommendation: Ensure that the mirror copies are placed across storage enclosures. To do so, move the subdisks in one data plex to subdisks in a different enclosure using the following command:
# vxsd mv <old sub-disk> <new sub-disk>
This arrangement ensures continued availability of the volume if either of the enclosures becomes unavailable.
Learn More...
How to create mirrors across enclosures?
Check category: Availability
Check description: Checks whether the system has any disabled HBA controllers.
Check procedure:
Check recommendation: Enable all the disabled HBA controllers on the system.
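To list the controllers and their states, and then enable a disabled controller (the controller name is system-specific and shown here as a placeholder), you can typically use:
# vxdmpadm listctlr all
# vxdmpadm enable ctlr=<controller_name>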
Learn More...
Displaying information about I/O controllers
Check category: Availability
Check description: Checks for Volume Manager (VxVM) disks in an error state.
Check procedure:
Check recommendation: Inspect the hardware configuration to confirm that the hardware is functional and configured properly.
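Once the hardware has been corrected, a common sequence to re-verify the disks is sketched below; vxreattach attempts to reattach disks to their disk groups if the failure was transient:
# vxdisk -o alldgs list
# vxdctl enable
# /etc/vx/bin/vxreattach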
Learn More...
Displaying disk information
Check category: Availability
Check description: Checks whether the hot-relocation daemon is running on the system and whether any disks in the disk group are marked as spare disks. The check will pass if there are no spare disks irrespective of the vxrelocd daemon state.
Check procedure:
Check recommendation: The hot relocation feature increases the overall availability in case of disk failures. You may either keep hot relocation turned on to take advantage of this functionality, or turn it off. Recommendations are:
Case 1: The spare flag is set to ON for at least one disk in the disk group and vxrelocd is not running: it is recommended that you start the vxrelocd daemon on the system. If a disk fails, the hot relocation feature can then try to use the disks marked as spare.
Case 2: The spare flag is set to ON for at least one disk in the disk group and vxrelocd is running: if a disk fails, hot relocation may occur.
You can start vxrelocd using the following command:
# nohup vxrelocd root
Learn More...
What is hot relocation?
Check category: Availability
Check description: Checks for mirrored volumes that do not have a mirrored Dirty Region Log (DRL).
Check procedure:
Check recommendation: Ensure that you mirror the DRL for faster read operation during recovery after a disk failure. A mirrored DRL also ensures availability of the DRL if the disk fails where the DRL resides.
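A mirrored DRL can usually be added with vxassist; the following sketch assumes the volume does not already have a DRL (<diskgroup> and <volume> are placeholders):
# vxassist -g <diskgroup> addlog <volume> logtype=drl nlog=2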
Learn More...
How to create a Volume with DRL enabled
Check category: Availability
Check description: Checks for mirrored volumes whose plexes (mirrors) are on the same disk controllers.
Check procedure:
Check recommendation: The mirrored volumes are not mirrored across controllers. A single controller failure compromises the volume. Create a plex (mirror) on a different controller. Attach the new plex to the volume (for a total of three plexes). Detach one of the two original plexes.
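As an illustrative sketch (the controller and plex names are placeholders), you can add a mirror restricted to a different controller and then detach one of the original plexes:
# vxassist -g <diskgroup> mirror <volume> ctlr:<controller>
# vxplex -g <diskgroup> det <plex>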
Learn More...
How to create mirrors across controllers
Check category: Availability
Check description: Checks that packages installed across all the nodes in a cluster are consistent.
Check procedure:
Check recommendation: Ensure that packages installed on all the nodes in a cluster are consistent and package versions are identical. Inconsistent packages can cause errors in application fail-over.
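On Linux, for example, one way to compare the installed Veritas packages across nodes is to run the following on each node and compare the output:
# rpm -qa 'VRTS*' | sort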
Check category: Availability
Check description: Checks for large RAID-5 volumes with a size greater than !param!HC_CHK_RAID5_LOG_VOL_SIZE!/param! (set in sortdc.conf) that do not have mirrored RAID-5 logs.
Check procedure:
Check recommendation: It is recommended to create a mirrored RAID-5 log for each large RAID-5 volume. A mirror of the RAID-5 log protects against loss of log information due to disk failure.
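For example, two RAID-5 log plexes can typically be added to an existing RAID-5 volume with vxassist (<diskgroup> and <volume> are placeholders):
# vxassist -g <diskgroup> addlog <volume> nlog=2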
Learn More...
Creating RAID-5 Volumes
Check category: Availability
Check description: Checks to see that the root mirrors are set up correctly.
Check procedure:
Check recommendation: The root volumes of the boot disk group are not mirrored properly. It is recommended that you fix the mirroring of the root disk.
Check category: Best practices
Check description: Checks that the value in the hostid field of the /etc/vx/volboot file is the same as the value returned by the hostname command.
Check procedure:
Check recommendation: The hostid value in the /etc/vx/volboot file does not match the output of the hostname command. To change the hostid value:
1. Make sure the system does not have any deported disk groups. After you update the hostid in the /etc/vx/volboot file to match the hostname, the hostid in the disk headers on the deported disk groups may not match the hostid in the /etc/vx/volboot file. Importing these disk groups would fail.
2. Run the vxdctl init command. This command does not interrupt any Volume Manager services and is safe to run in a production environment. Enter:
# vxdctl init
Unless you have a strong reason to have two different values, the hostid in the /etc/vx/volboot file should match the hostname. That way, the hostid is unique in a given domain and Storage Area Network.
At boot time, Volume Manager (VxVM) auto-imports the disk groups whose hostid value in their disk headers matches the hostid value in the /etc/vx/volboot file. Keep the hostid value unique so that no other host in a given domain and SAN can auto-import the disk group; otherwise, data may be corrupted.
Check category: Best practices
Check description: Checks whether the disk group's detach policy is set to global when the shared disk group consists of disks from an A/P disk array.
Check procedure:
Check recommendation: When Dynamic Multi-Pathing (DMP) is used to manage multipathing on A/P arrays, set the detach policy to global. This ensures that all nodes correctly coordinate their use of the active path.
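As a hedged example, the policy can be set and then verified on the disk group as follows (<diskgroup> is a placeholder):
# vxdg -g <diskgroup> set diskdetpolicy=global
# vxdg list <diskgroup>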
Learn More...
Click here for the reference from the VxVM Administrator's Guide
Check category: Best practices
Check description: On systems with VxVM version 4.0 or greater and disk group version 110 or greater, checks whether any disks in a disk group are configured as portable using the Cross-platform Data Sharing (CDS) feature.
Check procedure:
Check recommendation: Ensure that any disk groups present on the system have disks that are configured as portable and compatible with CDS; this optimizes the portability and migration of data between platforms.
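To check whether a disk group and its disks are CDS-enabled, and to convert a non-CDS disk group if needed, the following commands are a typical starting point (names are placeholders; vxcdsconvert may require the volumes to be quiesced, so review its manual page first):
# vxdg list <diskgroup>
# vxdisk list <diskname>
# vxcdsconvert -g <diskgroup> group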
Check category: Best practices
Check description: Checks whether each disk group has the optimum number of configuration backup copies and whether the disks have enough space for them.
Check procedure:
Check recommendation: The disk group failures can be summarized in any of the following six cases:
Case I: The VxVM backup configuration copy count should be set to at least 5. For details on increasing the number of configuration backup copies, refer to the technote.
Case II: The backup directory is missing from the host. Recreate the directory and restart the vxconfigbackupd daemon. For details, refer to the technote.
Case III: The binconfig file is empty in the disk group configuration backup copy; that is, the configuration backup copy is invalid. Restart the vxconfigbackupd daemon. For details, refer to the technote.
Case IV: None of the backup configuration copies have a valid binconfig file. All of the disk group backup configuration copies need to be recreated. Restart the vxconfigbackupd daemon. For details, refer to the technote.
Case V: The number of backup configuration copies is less than the specified default. Consider increasing the number of disk group backup configuration copies. For details, refer to the technote.
Case VI: Not enough free space is available to store the configuration backup copies.
Check category: Best practices
Check description: Checks if each disk group has enough configuration copies. Volume Manager (VxVM) stores metadata in the private region of LUNs (or disks) that comprise a disk group rather than on the server. Therefore, each disk group is independent and portable, which is ideal for performing server migrations, cluster failover, array-based replication, or off-host processing via snapshots. For redundancy, VxVM keeps copies of this metadata on multiple LUNs. Although VxVM actively monitors the number of copies, in some situations the number of copies may drop below VxVM's recommended target. For example, this can happen when you use array-based replication or array-based snapshots. While single LUN disk groups are a valid configuration, they pose an availability risk because VxVM can only keep one copy of the metadata.
Check procedure:
Check recommendation: Increase the number of configuration copies.
Check category: Best practices
Check description: Checks whether the disk group has enough spare disk space available for hot-relocation to occur in case of a disk failure.
Check procedure:
Check recommendation: The disk group(s) do not have enough spare space available for hot-relocation if a disk fails. Make sure that the disk groups on the system have enough disk space available for hot-relocation. It is recommended that you designate additional disks as hot-relocation spares.
To add a disk to a disk group, enter:
# vxdg -g [diskgroup] adddisk disk=diskname
To designate a disk as a hot-relocation spare, enter:
# vxedit [-g diskgroup] set spare=on diskname
Learn More...
About hot-relocation
Check category: Best practices
Check description: Checks whether the installed Storage Foundation / InfoScale products are at the latest software patch level.
Check procedure:
Check recommendation: To avoid known risks or issues, it is recommended that you install the latest versions of the Storage Foundation / InfoScale products on the system.
Check category: Best practices
Check description: Checks for mirrored-stripe volumes and recommends relaying them out as striped-mirror volumes for improved redundancy and enhanced recovery time.
Check procedure:
Check recommendation: Ensure that the size of any mirrored-stripe volumes on the system is smaller than the expected default. Reconfigure large mirrored-stripe volumes as striped-mirror volumes to improve redundancy and enhance recovery time after a failure.
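A hedged example of converting a mirrored-stripe volume to a striped-mirror layout with vxassist (the conversion runs online; <diskgroup> and <volume> are placeholders):
# vxassist -g <diskgroup> convert <volume> layout=stripe-mirror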
Learn More...
Striping plus mirroring
Check category: Performance
Check description: Checks whether Solid State Drives (SSDs) or flash drives are attached to the server. It also recommends the right version of Storage Foundation and High Availability / InfoScale software that have the SmartIO feature to bring better performance, reduced storage cost and better storage utilization.
Check procedure:
Check recommendation: The recommendation is summarized in the following cases:
Case 1: SSDs or flash drives are detected on a Linux system with a Storage Foundation software version earlier than 6.1 installed. It is recommended that you upgrade the Storage Foundation software to version 6.1 or higher, which enables you to use the SmartIO feature. SmartIO improves performance, reduces storage costs, and brings better storage utilization for the applications running on the servers.
Case 2: SSDs or flash drives are detected on an AIX/Solaris system with a Storage Foundation software version earlier than 6.2 installed. It is recommended that you upgrade the Storage Foundation software to version 6.2 or higher, which enables you to use the SmartIO feature. SmartIO improves performance, reduces storage costs, and brings better storage utilization for the applications running on the servers.
Case 3: SSDs or flash drives are detected on a Linux system with Storage Foundation software version 6.1 installed, but the SmartIO feature is not detected. It is recommended that you use the SmartIO feature, which improves performance, reduces storage costs, and brings better storage utilization for the applications running on the servers. Please refer to the documentation link(s).
Case 4: SSDs or flash drives are detected on a system with Storage Foundation software version 6.2 or higher installed, but the SmartIO feature is not detected. It is recommended that you use the SmartIO feature, which improves performance, reduces storage costs, and brings better storage utilization for the applications running on the servers. Please refer to the documentation link(s).
Case 5: Storage Foundation software version 6.2 or higher is found on an AIX/Linux/Solaris system without any SSDs or flash drives. SSDs or flash drives are more efficient because they provide faster data access and have a smaller footprint than traditional spinning disks. The data center uses solid-state technologies in many form factors: in-server, all-flash arrays, all-flash appliances, and mixed with traditional HDD arrays. Each form factor offers a different value proposition. SSDs also have many connectivity types: PCIe, FC, SATA, and SAS. It is recommended that you use the SmartIO feature, which offers data efficiency on your SSDs through I/O caching and thereby improves performance, reduces storage costs, and brings better storage utilization for the applications running on the servers.
Case 6: Storage Foundation software version 6.1 is found on a Linux system without any SSDs or flash drives. SSDs or flash drives are more efficient because they provide faster data access and have a smaller footprint than traditional spinning disks. The data center uses solid-state technologies in many form factors: in-server, all-flash arrays, all-flash appliances, and mixed with traditional HDD arrays. Each form factor offers a different value proposition. SSDs also have many connectivity types: PCIe, FC, SATA, and SAS. It is recommended that you use the SmartIO feature, which offers data efficiency on your SSDs through I/O caching and thereby improves performance, reduces storage costs, and brings better storage utilization for the applications running on the servers.
Case 7: Storage Foundation software version 6.1 is found on an AIX/Solaris system without any SSDs or flash drives. It is recommended that you upgrade the Storage Foundation software to version 6.2 or higher and use the SmartIO feature, which offers data efficiency on your SSDs through I/O caching and thereby improves performance, reduces storage costs, and brings better storage utilization for the applications running on the servers. SSDs or flash drives are more efficient because they provide faster data access and have a smaller footprint than traditional spinning disks. The data center uses solid-state technologies in many form factors: in-server, all-flash arrays, all-flash appliances, and mixed with traditional HDD arrays. Each form factor offers a different value proposition. SSDs also have many connectivity types: PCIe, FC, SATA, and SAS.
Check category: Performance
Check description: Checks a volume's I/O access times and identifies volumes with I/O access times greater than the user-defined parameter !param!HC_CHK_IO_ACCESS_MS!/param! of the sortdc.conf file.
Check procedure:
Check recommendation: It is recommended that you work to improve I/O access times. Verify that multiple volumes do not use the same underlying physical storage, consider an online relayout to enhance performance, or check for hardware configuration problems by comparing the iostat output with the vxstat output.
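For example, you can compare VxVM statistics with OS-level statistics on Linux and, if needed, change the volume layout online; the disk group, volume name, and column count below are placeholders:
# vxstat -g <diskgroup> -i 5 <volume>
# iostat -x 5
# vxassist -g <diskgroup> relayout <volume> layout=stripe ncol=4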
Learn More...
Performing online relayout
Check category: Performance
Check description: Checks whether the Volume Manager (VxVM) buffer size set on the system is greater than or equal to the recommended threshold value for the system.
Check procedure:
Check recommendation: The vol_maxio parameter on the system is less than the default threshold value. It is recommended that you increase the VxVM buffer size using the vol_maxio parameter.
Learn More...
VxVM maximum I/O size
Check category: Performance
Check description: Checks whether the Dirty Region Log (DRL) size of any volume deviates from the default DRL size.
Check procedure:
Check recommendation: The dirty region log (DRL) size of the volume differs from the default DRL size. Make sure this is the intended configuration. A large DRL reduces performance for random writes because DRL updates increase when the region size is small. A smaller DRL increases the region size, which increases the recovery time after a crash. In general, the larger the region size, the better the performance of an online volume for a legacy DRL. It is recommended that you set the default size for the DRL log. To remove the existing DRL and add a new one, enter:
# vxassist -g <diskgroup> remove log <volume>
# vxassist -g <diskgroup> addlog <volume>
You can run these commands when the volume is unmounted and does not have any I/O activity.
Learn More...
About dirty region logging
Check category: Performance
Check description: Checks for mirrored volumes that do not have a DRL.
Check procedure:
Check recommendation: Ensure that you create a DRL for any large mirrored volumes. A DRL tracks those regions that have changed, and helps with data recovery after a system crash. The DRL uses the tracking information to recover only those portions of the volume that need to be recovered. Without a DRL, recovery involves copying the full content of the volume between its mirrors; this process is lengthy and I/O intensive.
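As a hedged example, a DRL can usually be added to an existing mirrored volume with vxassist (<diskgroup> and <volume> are placeholders):
# vxassist -g <diskgroup> addlog <volume> logtype=drl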
Learn More...
How to add DRL to a mirrored volume
Check category: Utilization
Check description: Determines whether the storage enclosure on your system is ready to use the Storage Foundation / InfoScale thin provisioning feature.
Check procedure:
Check recommendation: The recommendations are:
Case I: The VxFS file system resides on thin provisioned storage, but the system does not have a Storage Foundation Enterprise / InfoScale Storage license installed. You need a Storage Foundation Enterprise / InfoScale Storage license to mirror the storage.
Case II: The VxFS file system storage enclosure appears to support thin provisioning but the Storage Foundation / InfoScale software does not detect it as thin provisioned storage. Ensure you have the correct Array Support Libraries (ASLs) installed for this storage.
Case III: Your system appears to be attached to a storage enclosure that supports thin provisioning, and the necessary Storage Foundation / InfoScale product is installed; however, the VxFS file system does not reside on thin provisioned storage. Possible reasons are:
a) Thin provisioning is not used.
b) Thin provisioning is not enabled on the storage enclosure.
c) Your version of the storage enclosure may be an old version that does not support thin provisioning.
For reasons (a) or (b), check the thin provisioning support with your storage vendor.
For reason (c), if you are considering migrating to thin provisioned storage, consider the following:
Note: If you have 5.0 MP3 with HF1, you may not need RP1.
Case IV: The VxFS file system's disk group version is less than 110. It is recommended that you upgrade to a disk group version greater than 110 before attempting to migrate.
Case V: The Storage Foundation / InfoScale with thin provisioning feature does not support the storage enclosure on which the VxFS file system resides.
Learn More...
White Paper on: Storage Foundation / InfoScale and Thin Provisioning
Check category: Utilization
Check description: Checks for unused volumes on hosts with no mounted file systems and no input/output.
Check procedure:
Check recommendation: If the volume is not in use, consider removing it to reclaim storage.
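Assuming you have confirmed that the volume is unmounted and idle, one common way to remove it is with vxedit (<diskgroup> and <volume> are placeholders):
# vxedit -g <diskgroup> -rf rm <volume>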
Learn More...
How to remove a volume
Check category: Utilization
Check description: Checks whether the disk group's configuration database free space is reaching a critical low point, less than !param!HC_CHK_CONF_DB_FULL_SIZE_PRCNT!/param!, which is set in sortdc.conf file.
Check procedure:
Check recommendation: The configuration database of the disk group(s) is almost full. When the percentage of used space for the configuration database reaches the threshold value, it is recommended that you split the disk group(s). To split the disk group, enter:
# vxdg split <source-diskgroup> <target-diskgroup> <object>
<object> can be a volume or a disk. For more information, see the Learn More links.
Learn More...
Displaying disk group information
Check category: Utilization
Check description: Checks whether the disk is underutilized. It lists those disks whose percentage of used space is lower than the percentage specified in the user-defined parameter !param!HC_CHK_DISK_USAGE_PERCENT!/param!, which is set in the sortdc.conf file.
Check procedure:
Check recommendation: Underutilized disks were found. It is recommended that you use all the storage disk(s) available to the system.
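If you instead decide to remove an unused disk and return it to the free pool, a typical sequence (assuming the disk no longer contains subdisks; the names are placeholders) is:
# vxdg -g <diskgroup> rmdisk <diskname>
# /etc/vx/bin/vxdiskunsetup <device>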
Learn More...
Removing disks
Check category: Utilization
Check description: Checks whether the size of any VxFS file system differs from that of its underlying volume, and whether the difference is greater than the size specified in the user-defined parameter !param!HC_CHK_FS_VOLS_SIZE_DIFF_THRESHOLD!/param!, which is set in the sortdc.conf file.
Check procedure:
Check recommendation: To make the best use of volume space, the file system should be the same size as the volume or volume set.
The failure can be summarized in one of the following cases:
Case I: The file system is smaller than the underlying volume by more than the threshold parameter HC_CHK_FS_VOLS_SIZE_DIFF_THRESHOLD.
You should either grow the file system using the fsadm command or shrink the volume using the vxassist command.
Case II: The file system is larger than the underlying volume. This can happen if an incorrect command (vxassist) was run to shrink the volume after the file system was created.
Run the following commands.
To grow the file system:
# fsadm [-F vxfs] [-b <newsize>] [-r rawdev] mount_point
To shrink the volume:
# vxassist -g <mydg> shrinkby <vol> <len>
or
# vxassist -g <mydg> shrinkto <vol> <newlen>
Check category: Utilization
Check description: For multi-volume file systems with storage tiering, checks whether any tier is full or low on space.
Check procedure:
Check recommendation: The storage tier for the multi-volume file system has little available space on it. It is recommended that you add more volumes to this tier using the vxvoladm command.
Learn More...
About Multi Volume Filesystem
Check category: Utilization
Check description: Checks for unused objects (such as plexes and volumes present in a disk group) and violated objects (such as disabled, detached or failed plexes, stopped or disabled volumes, disabled logs, and volumes needing recovery).
Check procedure:
Check recommendation: The disk groups on the system contain unused or violated objects. It is recommended that you either remove these objects or make them re-usable.
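For example, you can display the object states, attempt recovery of stopped volumes, and then remove objects that are genuinely unused (the disk group and object names are placeholders):
# vxprint -g <diskgroup> -ht
# vxrecover -g <diskgroup> -s
# vxedit -g <diskgroup> -rf rm <object>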
Learn More...
Displaying volume and plex states