sf-win_x64-CP8_SFW_601

 Basic information
Release type: Patch
Release date: 2016-12-12
OS update support: None
Technote: None
Documentation: None
Popularity: 1435 viewed
Download size: 21.57 MB
Checksum: 2664239305

 Applies to one or more of the following products:
Storage Foundation 6.0.1 On Windows x64

 Obsolete patches, incompatibilities, superseded patches, or other requirements:

This patch supersedes the following patches:
Patch name                                Release date
sf-win_x64-CP7_SFW_601 (obsolete)         2015-06-29
sf-win_x64-CP6_SFW_601 (obsolete)         2014-11-24
sf-win_x64-CP5_SFW_601 (obsolete)         2014-07-28
sf-win_x64-CP4_SFW_601 (obsolete)         2014-04-29
sfw-win_x64-CP3_SFW_601 (obsolete)        2014-01-28
sfw-win_x64-CP2_SFW_601 (obsolete)        2013-10-29
sfw-win_x64-CP1_SFW_601A (obsolete)       2013-07-29

 Fixes the following incidents:
3061942, 3086900, 3099805, 3124269, 3146196, 3231600, 3322266, 3345684, 3347495, 3352705, 3360992, 3435678, 3447110, 3450291, 3456751, 3458775, 3460423, 3499335, 3524590, 3579881, 3610931, 3618960, 3623654, 3687468, 3736357, 3746359, 3771896, 3862347, 3867065, 3887746

 Patch ID:
None.

Readme file
                          * * * READ ME * * *
              * * * Veritas Storage Foundation 6.0.1 * * *
                      * * * Patch 6.0.1.800 * * *
                         Patch Date: 2016-12-06


This document provides the following information:

   * PATCH NAME
   * OPERATING SYSTEMS SUPPORTED BY THE PATCH
   * PACKAGES AFFECTED BY THE PATCH
   * BASE PRODUCT VERSIONS FOR THE PATCH
   * SUMMARY OF INCIDENTS FIXED BY THE PATCH
   * DETAILS OF INCIDENTS FIXED BY THE PATCH
   * INSTALLATION PRE-REQUISITES
   * INSTALLING THE PATCH
   * REMOVING THE PATCH


PATCH NAME
----------
Veritas Storage Foundation 6.0.1 Patch 6.0.1.800


OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
Windows Server 2008 x64
Windows Server 2008 R2 x64



BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
   * Veritas Storage Foundation 6.0.1


SUMMARY OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
Patch ID: SFW CP8
* 3061942 (3061941) Two issues related to storage migration of a volume with SmartMove enabled.
* 3086900 (3086895) Mount points configured under Failover Cluster are deleted after upgrading to SFW 6.0.1.
* 3099805 (3111073) SFW and SFW HA 6.0.1 are unable to identify Hitachi HUS VM LUNs as thin reclaimable.
* 3124269 (3155620) Tagging of snapshot disks fails during the fire drill operation, because of which disk import also fails.
* 3146196 (3146194) In some cases, mount points configured under FOC are lost during a failover.
* 3231600 (3231593) Memory leak occurs for SFW VSS provider while taking a VSS snapshot.
* 3322266 (3322262) In Failover Cluster Manager, refreshing a virtual machine configuration results in storage-related errors.
* 3345684 (3345527) Cluster disk group resource faults after adding or removing a disk from a disk group if all of its disks are not available for reservation.
* 3352705 (3352702) "vxdmpadm disk list" may display a disk name multiple times, and the command may crash.
* 3360992 (3360987) Server crashes during high write I/O operations on mirrored volumes.
* 3347495 (3347491) After a failover, VEA sometimes does not show the drive letter or mounted folder paths of a successfully-mounted volume.
* 3435678 (3435675) Rhs.exe may crash while performing an operation on the VMDg resource.
* 3450291 (3450059) SFW cannot form the correct Enclosure for Hitachi Unified Storage 150 arrays.
* 3458775 (3458773) After a VxSVC restart on a fast failover configuration, VMDg/MountV resources may fault, and failover to other cluster nodes may also fail.
* 3456751 (3456746) The VxSvc service crashes with heap corruption in VRAS.dll.
* 3460423 (3460421) The Primary node hangs if TCP and compression are enabled.
* 3447110 (3424478) Two scenarios where missing disks cannot be removed from a disk group.
* 3499335 (3497449) A cluster disk group loses access to the majority of its disks due to a SCSI error.
* 3524590 (3524586) VEA hangs and eventually crashes when a user tries to access "View Historic Bandwidth Usage" graph.
* 3623654 (3623653) Disk group import fails with the following error: "Unexpected kernel error in configuration update"
* 3618960 (3618959) In some cases, mount points configured under Microsoft Failover Cluster are lost.
* 3610931 (3575793) When the Primary server disconnects the replication link (RLINK), Volume Replicator causes I/O delays.
* 3579881 (3579878) When you take the replication service group offline on the Secondary host, the Secondary host stops responding.
* 3687468 (3687466) In some cases, mount points configured under Microsoft Failover Cluster are lost.
* 3736357 (3735641) A failover cluster deports the faulted cluster quorum disk group.
* 3746359 (3746355) Disk group import fails after a Disk group with Fast Mirror Resync volumes is split.
* 3771896 (3771877) Storage Foundation for Windows does not provide array-specific support for Infinidat arrays other than DSM.
* 3862347 (3862346) After reclaiming storage on disks that support thin provisioning and storage reclamation, SQL Server data might get corrupted.
* 3867065 (3867064) After you install CP7, the I/O performance is degraded.
* 3887746 (3887735) Issue 1: Disk drives that are created on a Volume Manager Disk Group (VMDg) are not seen in Failover Cluster Manager. Issue 2: A VVRRVG resource cannot be failed over to a system where the disk drives that are created on a VMDg are not available.


DETAILS OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
This patch fixes the following incidents:

Patch ID: SFW CP8

* 3061942 (Tracking ID: 3061941)

SYMPTOM:
Two issues related to storage migration of a volume with SmartMove enabled.

DESCRIPTION:
In some cases, the following two issues are observed when storage migration is performed for a volume (using operations such as subdisk move, mirror resync, or mirror attach) and SmartMove is enabled:
1. If the VCS MountV resource for the volume is brought offline, the migration task completes abnormally and the volume may report data corruption.
2. If data migration is performed for a volume of size 2 TB or greater, the task never reaches 100% completion because of an integer overflow while handling large offsets.

RESOLUTION:
The two issues mentioned above have been fixed. The first has been fixed by adding correct handling of error conditions so that the task is aborted if the volume is taken offline. The second has been fixed by using a 64-bit variable for the offset computation.


File Name / Version:
vxconfig.dll / 6.0.10001.308

* 3086900 (Tracking ID: 3086895)

SYMPTOM:
Mount points configured under Failover Cluster are deleted after upgrading to SFW 6.0.1.

DESCRIPTION:
This issue occurs in a Microsoft Failover Cluster environment after upgrading to SFW 6.0.1. SFW 6.0.1 stores mount point information in the cluster database under the "VolMountInfo" key. By design, if mount points are not present in the cluster database when the VMDg resource comes online, SFW deletes them from the mount database, assuming that they were deleted from the other cluster node. After upgrading to SFW 6.0.1, when the VMDg resource comes online, SFW deletes the mount points configured using a previous version of SFW because the "VolMountInfo" key was not part of earlier versions.

RESOLUTION:
The VMDg resource now checks whether the "VolMountInfo" key is already present in the cluster registry database. If it is not present, the VMDg resource updates the database with the valid mount points. NOTE: After upgrading the node, you must install this hotfix before rebooting or bringing the service group online.

File Name / Version:
vxres.dll / 6.0.10002.308

* 3099805 (Tracking ID: 3111073)

SYMPTOM:
SFW and SFW HA 6.0.1 are unable to identify Hitachi HUS VM LUNs as thin reclaimable.

DESCRIPTION:
This issue occurs because SFW and SFW HA 6.0.1 cannot identify Hitachi Unified Storage VM (HUS VM) LUNs as thin reclaimable LUNs; thin provisioning and storage reclamation support was not available for Hitachi HUS VM LUNs.

RESOLUTION:
The issue has been resolved by enhancing ddlprov.dll to add thin provisioning and storage reclamation support for the Hitachi HUS VM array.

File Name / Version:
ddlprov.dll / 6.0.10003.308

* 3124269 (Tracking ID: 3155620)

SYMPTOM:
Tagging of snapshot disks fails during the fire drill operation, because of which disk import also fails.

DESCRIPTION:
This issue occurs while performing fire drill operations with hardware replication agents, which involve tagging the snapshot disks so that they can be imported separately from the original disks. Because of an issue with SFW, it does not write the tags to the disks, yet it proceeds without reporting an error. The import operation on the snapshot disks then fails because no disks are present with the specified tag.

RESOLUTION:
This was an existing issue where SFW did not write to disks that are marked as read-only. The issue has been resolved by allowing the fire drill tag to be written to a disk even if the disk is marked as read-only.

File Name / Version:
vxconfig.dll / 6.0.10004.308

* 3146196 (Tracking ID: 3146194)

SYMPTOM:
In some cases, mount points configured under FOC are lost during a failover.

DESCRIPTION:
This issue may occur while performing a failover in a Microsoft Failover Cluster (FOC) environment. During this, Microsoft's "GetVolumePathNamesForVolumeName" function, which is used by Volume Manager Disk Group (VMDg) resource mount handling, fails to return mount point information even though the mount points exist on the system. This happens because of an issue with the "GetVolumePathNamesForVolumeName" function. As a result of this behavior, the VMDg resource removes the mount points from the cluster database during the volume arrival notification and then from the system during a failover.

RESOLUTION:
This issue has been resolved by modifying the present handling of mount points during volume arrival. Because of this change, you need to perform the following workaround in case of a dynamic disk group join operation where the target disk group is under the VMDg resource: After performing the dynamic disk group join operation, reset the VMDg resource property.

File Name / Version:
vxres.dll / 6.0.10005.308

* 3231600 (Tracking ID: 3231593)

SYMPTOM:
Memory leak occurs for SFW VSS provider while taking a VSS snapshot.

DESCRIPTION:
This issue occurs during a VSS snapshot operation when VSS is loading and unloading providers. The SFW VSS provider connects to the VEA database during the loading and disconnects during the unloading of providers. Because of an issue in the VEA database cleanup during the unloading, a memory leak occurs.

RESOLUTION:
This issue has been resolved so that the SFW VSS provider no longer connects to and disconnects from VEA during every load and unload operation. Instead, it creates a connection at the beginning and disconnects when the Veritas VSS Provider Service (vxvssprovider.exe) is stopped.

File Name / Version:
vxvssprovider.exe / 6.0.10006.308

* 3322266 (Tracking ID: 3322262)

SYMPTOM:
In Failover Cluster Manager, refreshing a virtual machine configuration results in storage-related errors.

DESCRIPTION:
In Failover Cluster Manager, this issue occurs while trying to refresh a virtual machine configuration whose underlying volumes are managed by the SFW VMDg resource. This operation fails with storage-related errors because the Microsoft code for this operation is limited to a physical disk resource and the VMDg resource type cannot handle some of the control codes.

RESOLUTION:
This issue has been fixed by implementing the required control codes for the VMDg resource type. However, because of Microsoft's support for a single disk, note that this fix works only for virtual machines whose disk group (VMDg resource) has one data volume residing on MBR disks.

File Name / Version:
vxres.dll / 6.0.10008.308
cluscmd.dll / 6.0.10008.308

* 3345684 (Tracking ID: 3345527)

SYMPTOM:
Cluster disk group resource faults after adding or removing a disk from a disk group if all of its disks are not available for reservation.

DESCRIPTION:
This issue occurs when you add or remove a disk from a dynamic disk group and the majority of its disks, but not all, are available for reservation. After the disk is added or removed, SFW checks whether all the disks are available for reservation. In this case, because not all the disks are available, the cluster disk group resource faults.

RESOLUTION:
This issue has been resolved so that, after a disk is added or removed from a disk group, SFW now checks only for a majority of disks available for reservation instead of all.

File Name / Version:
vxconfig.dll / 6.0.10009.308

* 3352705 (Tracking ID: 3352702)

SYMPTOM:
"vxdmpadm disk list" may display the disk name multiple times and it may crash by itself.

DESCRIPTION:
On some systems, "vxdmpadm disk list" may display a disk name multiple times, and the command may sometimes crash. This happens because of incorrect logic in vxcmd.dll, where vxdmpadm tries to access invalid memory.

RESOLUTION:
This issue has been resolved by correcting the logic in the vxcmd library.

File Name / Version:
vxcmd.dll / 6.0.10010.308

* 3360992 (Tracking ID: 3360987)

SYMPTOM:
Server crashes during high write I/O operations on mirrored volumes.

DESCRIPTION:
This issue occurs when heavy write I/O operations are performed on mirrored volumes. During such high I/O operations, the server crashes due to a problem managing the memory for data buffers.

RESOLUTION:
This issue has been resolved by appropriately mapping the system address space described by the memory descriptor list (MDL) for the write I/Os on mirrored volumes.

File Name / Version:
vxio.sys / 6.0.10011.308

* 3347495 (Tracking ID: 3347491)

SYMPTOM:
After a failover, VEA sometimes does not show the drive letter or mounted folder paths of a successfully-mounted volume.

DESCRIPTION:
This issue may occur after a failover when VEA does not show the drive letter or mounted folder paths of a volume even though the volume is successfully mounted with the expected drive letter or folder paths. During a failover, when a disk group gets imported, SFW mounts all volumes of the disk group by querying the mount points using the Microsoft API GetVolumePathNamesForVolumeName(). Sometimes, this API fails to return the correct drive letter or mounted folder paths, because of which VEA fails to display them.

RESOLUTION:
To resolve this issue, a retry logic has been added to the handling of the Microsoft API GetVolumePathNamesForVolumeName() so that the operation is retried if the mount path returned is empty. The operation is retried every 100 milliseconds for "n" attempts (5 by default), which can be configured using the registry. This retry logic is disabled by default.

NOTE: Using the following workaround has a performance impact on the service group offline and failover operations. During a service group offline or failover operation, the disk group deport operation may be delayed by up to "n/2" seconds, where "n" is the number of volumes in the disk group.

To use the workaround, do the following:
1. Enable the retry logic by changing the value of the registry entry "RetryEnumMountPoint" from 0 to 1 under the registry key 
- HKEY_LOCAL_MACHINE\SOFTWARE\VERITAS\VxSvc\CurrentVersion\VolumeManager
2. Configure the number of retry attempts by changing the value of the registry entry "RetryEnumMPAttempts" under the registry key
- HKEY_LOCAL_MACHINE\SOFTWARE\VERITAS\VxSvc\CurrentVersion\VolumeManager
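
For example, a minimal sketch of enabling the workaround with the built-in reg.exe utility (the REG_DWORD value type and the retry count of 10 are assumptions; confirm the entry type on your system before changing it):

reg add "HKLM\SOFTWARE\VERITAS\VxSvc\CurrentVersion\VolumeManager" /v RetryEnumMountPoint /t REG_DWORD /d 1 /f
reg add "HKLM\SOFTWARE\VERITAS\VxSvc\CurrentVersion\VolumeManager" /v RetryEnumMPAttempts /t REG_DWORD /d 10 /f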


File Name / Version:
mount.dll / 6.0.10012.308

* 3435678 (Tracking ID: 3435675)

SYMPTOM:
Rhs.exe may crash while performing an operation on the VMDg resource.

DESCRIPTION:
When the VMDg resource is configured in a Microsoft Failover Cluster, any operation on this resource that requires disk group or disk information may crash the Resource Hosting Subsystem (RHS.exe) process. This happens because of the deallocation of an invalid memory location.

RESOLUTION:
This issue has been fixed by correcting the deallocation of memory.

File Name / Version:
vxres.dll / 6.0.10013.308

* 3450291 (Tracking ID: 3450059)

SYMPTOM:
SFW cannot form the correct Enclosure for Hitachi Unified Storage 150 arrays.

DESCRIPTION:
This issue occurs for Hitachi Unified Storage 150 disk arrays. Because SFW does not provide Enclosure support for Hitachi Unified Storage 150 arrays, it cannot form the correct Enclosure for this array. Because of this, the VEA GUI incorrectly shows disks of two different Enclosures under one Enclosure. Moreover, mirroring across disks by Enclosure cannot be performed.

RESOLUTION:
This issue has been resolved by enhancing SFW with Enclosure support for Hitachi Unified Storage 150 arrays.

File Name / Version:
Hitachi.dll / 6.0.10014.308

* 3458775 (Tracking ID: 3458773)

SYMPTOM:
After a VxSVC restart on a fast failover configuration, VMDg/MountV resources may fault, and failover to other cluster nodes may also fail.

DESCRIPTION:
This issue occurs on restart of the VxSVC service after the read-only (RO) to read-write (RW) conversion. After the service restarts, re-import of fast failover enabled disk groups may fail, which causes the VMDg or their dependent MountV resources to fault. Because the reservation thread is not stopped in this scenario, failover of cluster disk groups to other cluster nodes fails with reservation errors.

RESOLUTION:
This issue has been resolved by ignoring the Host ID check for fast failover enabled disk groups. Therefore, the disk group re-import is successful. If re-import fails for any other reason, then the reservation thread is stopped so that failover to other nodes is successful.

File Name / Version:
vxconfig.dll / 6.0.10016.308

* 3456751 (Tracking ID: 3456746)

SYMPTOM:
The VxSvc service crashes with heap corruption in VRAS.dll.

DESCRIPTION:
VRAS discards a malformed packet it receives because the size of the packet is too large. While freeing the IpmHandle pointer, it encounters an issue and eventually crashes.

RESOLUTION:
This hotfix resolves the crash that occurred during the handling of the malformed packet.

File Name / Version:
vras.dll / 6.0.10017.308

* 3460423 (Tracking ID: 3460421)

SYMPTOM:
The Primary node hangs if TCP and compression are enabled.

DESCRIPTION:
During replication, this issue occurs if TCP and compression of data are enabled and the resources are low on the Secondary node. Because of low resources, decompression of data on the Secondary fails repeatedly, causing the TCP buffer to fill up. In such a case, if network I/Os are performed on the Primary and a transaction is initiated, the Primary node hangs.

RESOLUTION:
This hotfix resolves the system hang caused by VVR.

File Name / Version:
vxio.sys / 6.0.10018.308

* 3447110 (Tracking ID: 3424478)

SYMPTOM:
Two scenarios where missing disks cannot be removed from a disk group.

DESCRIPTION:
The issue where missing disks cannot be removed from a disk group occurs in the following two scenarios:
1. When you try to remove a missing disk from a disk group using the vxdg rmdisk command. In the command, you must provide the name of the disk group from which the missing disk is to be removed. Despite providing the correct disk group name, the command fails because of a bug in the internal check performed on the disk group name.
2. When a disk group contains an even number of disks and half of them are missing. In this case, if there are any volumes on the non-missing disks, removing the missing disks is not allowed; the operation fails with the "Cannot remove last disk in dynamic disk group" error. This happens because the remove operation incorrectly compares the number of disks to be removed with the number of non-missing disks. If the numbers are equal, the operation tries to remove the complete disk group. However, the presence of volume resources prevents the removal of the disk group, and of the intended missing disks as well.

RESOLUTION:
The issues in both scenarios have been resolved as follows:
1. The first issue has been resolved by modifying the way a missing disk can be removed from a disk group. While using the vxdg rmdisk command, you can remove a missing disk either by specifying only its display name (for example, "Missing Disk (disk#)") or by specifying both its internal name and the name of the disk group to which it belongs (see the sketch below).
2. The second issue has been resolved so that the remove operation now compares the number of disks to be removed with the total number of disks in the disk group, not with the number of non-missing disks.
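
For illustration, a minimal sketch of the two vxdg rmdisk invocation forms described above ("MyDg" and "Harddisk2" are hypothetical names, and the -g<DiskGroupName> option spelling is assumed from the SFW command-line convention; verify against your release):

vxdg rmdisk "Missing Disk (disk2)"
vxdg -gMyDg rmdisk Harddisk2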

File Name / Version:
vxconfig.dll / 6.0.10019.308
vxdg.exe / 6.0.10019.308

* 3499335 (Tracking ID: 3497449)

SYMPTOM:
A cluster disk group loses access to the majority of its disks due to a SCSI error.

DESCRIPTION:
This issue occurs while processing a request to renew or query the SCSI reservation on a disk belonging to a cluster disk group. The operation fails because of the following error in the SCSI command: "Unit Attention - inquiry parameters changed (6/3F/03)". Because of this, the cluster disk group loses access to the majority of its disks.

RESOLUTION:
This issue has been resolved by retrying the SCSI reservation renew or query request.

File Name / Version:
vxio.sys / 6.0.10020.308

* 3524590 (Tracking ID: 3524586)

SYMPTOM:
VEA hangs and eventually crashes when a user tries to access "View Historic Bandwidth Usage" graph.

DESCRIPTION:
In the VVR GUI, if the "View Historic Bandwidth Usage" graph is launched from an RDS and the historic dataset is large, VEA hangs and eventually crashes.

RESOLUTION:
This issue is fixed by improving the handling of the third-party objects that consume a lot of memory. Certain enhancements are also implemented for when a user refreshes the graph.

File Name / Version:
vvr.jar / N/A

* 3623654 (Tracking ID: 3623653)

SYMPTOM:
Disk group import fails with the following error: "Unexpected kernel error in configuration update"

DESCRIPTION:
When the preferred plex for a volume is not set, and a Data Change Map (DCM) plex already resides on a solid-state device (SSD), disk group import fails with the following error: "Unexpected kernel error in configuration update". This happens because the DCM plex gets set as the preferred plex.

RESOLUTION:
The issue has been resolved by modifying the code such that the DCM plex is not considered as the preferred plex, even if the preferred plex is not set.

File Name / Version:
vxconfig.dll / 6.0.10025.308

* 3618960 (Tracking ID: 3618959)

SYMPTOM:
In some cases, mount points configured under Microsoft Failover Cluster are lost.

DESCRIPTION:
This issue may occur while assigning mount points in a Microsoft Failover Cluster environment. During this, Microsoft's "GetVolumePathNamesForVolumeName" function, which is used by Volume Manager Disk Group (VMDg) resource mount handling, fails to return mount point information even though the mount points exist on the system. This happens because of an issue with the "GetVolumePathNamesForVolumeName" function. As a result of this behavior, the VMDg resource removes the mount points from the cluster database during the volume arrival notification.

RESOLUTION:
This issue has been resolved by modifying the present handling of the existing mount points and new mount points during volume arrival.

File Name / Version:
vxres.dll / 6.0.10024.308

* 3610931 (Tracking ID: 3575793)

SYMPTOM:
When the Primary server disconnects the RLINK, Volume Replicator causes I/O delays.

DESCRIPTION:
While sending the data acknowledgement to the Primary server, if a timeout occurs, the Primary disconnects the RLINK and initiates the error handler. While the error handler is active, Volume Replicator causes I/O delays on the Primary volume.

RESOLUTION:
The error-handling code has been modified to resolve the I/O delays.

File Name / Version:
vxio.sys / 6.0.10023.308

* 3579881 (Tracking ID: 3579878)

SYMPTOM:
When you take the replication service group offline on the Secondary host, the Secondary host stops responding.

DESCRIPTION:
After Volume Replicator sends the writes to the data volumes on the Secondary, it sends a data acknowledgement to the Primary. If this data acknowledgement remains stuck in the queue, and another write is executed simultaneously, the Secondary stops responding.

RESOLUTION:
This issue has been resolved by ensuring that the data acknowledgement messages do not wait in the queue.

File Name / Version:
vxio.sys / 6.0.10022.308

* 3687468 (Tracking ID: 3687466)

SYMPTOM:
In some cases, mount points configured under Microsoft Failover Cluster are lost.

DESCRIPTION:
This issue may occur while assigning mount points in a Microsoft Failover Cluster environment or during a failover. During this, Microsoft's "GetVolumePathNamesForVolumeName" function, which is used by Volume Manager Disk Group (VMDg) resource mount handling, fails to return mount point information even though the mount points exist on the system. This happens because of an issue with the "GetVolumePathNamesForVolumeName" function. As a result of this behavior, the VMDg resource removes the mount points from the cluster database during the volume arrival notification or failover.

RESOLUTION:
This issue has been resolved by modifying the present handling of the Microsoft function GetVolumePathNamesForVolumeName.

File Name / Version:
cluscmd.dll / 6.0.10028.309
vxres.dll / 6.0.10028.309

* 3736357 (Tracking ID: 3735641)

SYMPTOM:
If half the number of disks that are available in the cluster quorum disk group are disconnected, the quorum disk group faults and the failover cluster deports it.

DESCRIPTION:
In the event of a change in the cluster configuration, the failover cluster updates the cluster configuration data on the disks that form the quorum disk group. During this update, if half the number of disks that are available in the cluster quorum disk group are disconnected, the quorum disk group faults and the failover cluster deports it.

RESOLUTION:
This hotfix resolves the issue by modifying the SFW behavior.

File Name / Version:
vxio.sys / 6.0.10026.308

* 3746359 (Tracking ID: 3746355)

SYMPTOM:
When you split a disk group that has Fast Mirror Resync (FMR) volumes, the disk group import fails.

DESCRIPTION:
When you split a disk group with FMR volumes, the vxio driver tries to log errors with stale information on the disk. Normally, the logging does not succeed when the disk group is split. However, in some cases, the logging might be successful, causing the disk group import to fail.

RESOLUTION:
This hotfix resolves the issue by ignoring the disabled DCO volumes.

File Name / Version:
vxio.sys / 6.0.10027.308

* 3771896 (Tracking ID: 3771877)

SYMPTOM:
Storage Foundation does not provide array-specific support for Infinidat arrays other than DSM.

DESCRIPTION:
Storage Foundation does not provide any array-specific support for Infinidat arrays, except DSM. As a result, Storage Foundation is unable to perform any operations related to enclosures, thin provisioning reclamation, and track alignment on the LUNs created on Infinidat arrays.

RESOLUTION:
This hotfix addresses the issue by providing support for enclosures, thin provisioning reclamation and track alignment for Infinidat arrays.

Known issue: When the reclaim operation for the disk group is in progress and you disconnect a disk path, the reclaim operation fails for the last disk in the disk group.
Workaround: Retry the disk group reclaim operation.

File Name / Version:
NFINIDAT.dll / 6.0.10029.308
ddlprov.dll / 6.0.10029.308

* 3862347 (Tracking ID: 3862346)

SYMPTOM:
After reclaiming storage on disks that support thin provisioning and storage reclamation, SQL Server data might get corrupted.

DESCRIPTION:
A file system bitmap is acquired during a reclaim storage operation. The reclaim region and the region that is represented by the file system bitmap may not be exactly aligned. Therefore, some region beyond the reclaim boundary may get reclaimed, and this region may be in active use. In such a scenario, the data in the region that is in active use can get corrupted.

RESOLUTION:
This hotfix addresses the issue by updating the boundary condition check to consider that the file system map may not completely match the reclaim region. 
  
File Name / Version:
vxconfig.dll / 6.0.10030.308 
vxio.sys / 6.0.10030.308

* 3867065 (Tracking ID: 3867064)

SYMPTOM:
After you install CP7, the I/O performance is degraded.

DESCRIPTION:
For any striped volume, after you install CP7, an internal driver causes degraded I/O performance.

RESOLUTION:
This hotfix addresses the issue by correcting the driver's handling of striped volumes.

File Name / Version:
vxio.sys / 6.0.15303.309

* 3887746 (Tracking ID: 3887735)

SYMPTOM:
Issue 1: In Failover Cluster Manager, even if a VMDg resource is online, details of the disk drives that are created on the VMDg cannot be obtained. Issue 2: A VVRRVG resource failover operation fails if the disk drive details of a VMDg are not available on the target system.

DESCRIPTION:
Issue 1: After a Volume Manager Disk Group resource is added to a Failover Cluster, the list of disk drives that are created on the Volume Manager Disk Group is not obtained. Issue 2: The VVRRVG resource failover operation fails if the disk drive details of a VMDg are not available on the target system. These issues occur if more than 255 disks are added to the disk group. Adding more than 255 disks to a disk group results in data loss due to incorrect data type conversion. As a result, the details of disk drives that are created on a VMDg cannot be obtained in Failover Cluster Manager.

RESOLUTION:
This hotfix modifies the SFW behavior when more than 255 disks are added to a disk group, and resolves the incorrect data type conversion that was caused by the excessive number of disks.

File Name / Version:
vxres.dll / 6.0.16102.309



INSTALLING THE PATCH
--------------------
What's new in this CP
=====================|

The following hotfixes have been added in this CP:
 - Hotfix_6_0_10028_309_3687468
 - Hotfix_6_0_10026_308_3736357
 - Hotfix_6_0_10027_308_3746359
 - Hotfix_6_0_10029_308_3771896
 - Hotfix_6_0_10030_308_3862347
 - Hotfix_6_0_15303_309_3867065
 - Hotfix_6_0_16102_309_3887746
 
For more information about these hotfixes, see the "FIXED_INCIDENTS" section in this Readme.


Install instructions
====================|

Download the appropriate cumulative public patch (CP) executable file to a temporary location on your system.

Each cumulative public patch includes the individual hotfixes that contain enhancements and fixes related to reported issues.
See the "FIXED_INCIDENTS" section for details.

Before you begin
----------------:
[1] Ensure that the logged-on user has privileges to install the CP on the systems.

[2] One or more hotfixes that are included with this CP may require a reboot.
Before proceeding with the installation ensure that the system can be rebooted.

[3] Symantec recommends that you close the Cluster Manager (Java Console) and the Veritas Enterprise Administrator (VEA) Console before installing this CP.

[4] Ensure that you close the Windows Event Viewer before proceeding with the installation.

[5] Before installing the CP on Windows Server Core systems, ensure that the Visual Studio 2005 x86 redistributable is installed on the systems.



To install the CP in the verbose mode
-------------------------------------:

Perform the following steps:

[1] Double-click the CP executable file to start the CP installation. 

The installer performs the following tasks:
    - Extracts all the individual hotfix executable files
      The files are extracted at %commonprogramfiles(x86)%\Veritas Shared\WxRTPrivates\<HotfixName>
    - Runs the pre-install tasks
    - Installs all the hotfixes sequentially
    - Runs the post-install tasks
The installation progress status is displayed in the command window.

[2] After all the hotfixes are installed, the installer prompts you to restart the system.
Type Y to restart the system immediately, or type N to restart the system later. 
You must restart the system for the changes to take effect.

Note that the installer prompts for a system restart only if hotfixes that require a reboot are included in the CP and are installed.

To install the CP using the command line
----------------------------------------:

Use the VxHFBatchInstaller.exe utility to install a CP from the command line.
The syntax options for this utility are as follows:

vxhfbatchinstaller.exe /CP:<CPName> [/PreInstallScript:<PreInstallScript.pl>] [/silent [/forcerestart]]

where,
    - CPName is the cumulative patch executable file name without the platform, architecture, and .exe extension.
For example, if the CP executable name is CP8_SFW_601_W2K8_x64.exe, specify it as CP8_SFW_601.

    - PreInstallScript.pl is the Perl script that includes the pre-installation steps. These steps forcefully kill the required services and processes in case a graceful stop request does not succeed.
    Symantec recommends that you use this option and script only in case the CP installer fails repeatedly while performing the pre-installation tasks.

    - /silent indicates the installation is run in a non-verbose mode; the installer does not prompt for any inputs during the installation.

    - /forcerestart indicates that the system is automatically restarted, if required, after the installation is complete.


Perform the following steps:
[1] From the command prompt, navigate to the directory where the CP executable file is located and then run the file to extract the contents to a default location on the system. 
The installer displays a list of hotfixes that are included in the CP.
    - On 64-bit systems, the hotfix executable files are extracted to:
      "%commonprogramfiles(x86)%\Veritas Shared\WxRTPrivates\<CPName>"
      
[2] In the same command window, run the following command to begin the CP installation in the silent mode:
vxhfbatchinstaller.exe /CP:<CPName> /silent

For example, to install an SFW 6.0.1 CP for Windows Server 2008, the command is:
vxhfbatchinstaller.exe /CP:CP8_SFW_601 /silent

The installer performs the following tasks:

    - Extracts all the individual hotfix executable files
      The files are extracted at %commonprogramfiles(x86)%\Veritas Shared\WxRTPrivates\<HotfixName>
    - Runs the pre-install tasks
    - Installs all the hotfixes sequentially
    - Runs the post-install tasks
The installation progress status is displayed in the command window.

[3] After all the hotfixes are installed, the installer displays a message for restarting the system.
You must restart the system for the changes to take effect.

Note that the installer prompts for a system restart only if hotfixes that require a reboot are included in the CP and are installed. The installer automatically restarts the system if you had specified the /forcerestart option in step 2 earlier.


VxHFBatchInstaller usage example
---------------------------------:

[+] Install CP in silent mode, restart automatically:

vxhfbatchinstaller.exe /CP:CP8_SFW_601 /silent /forcerestart
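
[+] Install CP in silent mode, using the pre-install script (as noted above, use this option only if the CP installer repeatedly fails during the pre-installation tasks; the script file name here is illustrative):

vxhfbatchinstaller.exe /CP:CP8_SFW_601 /PreInstallScript:PreInstallScript.pl /silent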

Post-install steps
==================|
The following section describes the steps that must be performed after installing the hotfixes included in this CP.

Ensure that the VIP_PATH environment variable is set to "C:\Program Files\Veritas\Veritas Object Bus\bin" and NOT to "C:\<INSTALLDIR_BASE>\Veritas Object Bus\bin" (assuming that C:\ is the default installation drive).
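
For example, a minimal sketch to verify and, if needed, correct the value from an elevated command prompt (the machine-level scope set by the setx /M switch is an assumption about how VIP_PATH is configured; open a new command prompt to see the change):

echo %VIP_PATH%
setx VIP_PATH "C:\Program Files\Veritas\Veritas Object Bus\bin" /M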

Known issues
============|
The following section describes the issues related to the individual hotfixes that were included in the previous 6.0.1 CPs:

[1] Hotfix_6_0_10004_308_3124269, Hotfix_6_0_10003_308_3099805, and Hotfix_6_0_10001_308_3061942

 - These hotfixes were initially part of CP1 and were re-archived in CP2 to ensure that VIP_PATH is set to the correct value.

 - In CP1, after installing/uninstalling these hotfixes, the VIP_PATH environment variable was incorrectly set to "C:\<INSTALLDIR_BASE>\Veritas Object Bus\Bin" instead of "C:\Program Files\Veritas\Veritas Object Bus\Bin".



REMOVING THE PATCH
------------------
NO


SPECIAL INSTRUCTIONS
--------------------
This fix is provided without warranty of any kind including the warranties of title or implied warranties of merchantability, 
fitness for a particular purpose and non-infringement. Symantec disclaims all liability relating to or arising out of this fix. 
It is recommended that the fix be evaluated in a test environment before implementing it in your production environment. 
When the fix is incorporated into a Storage Foundation for Windows maintenance release, the resulting Hotfix or Service Pack 
must be installed as soon as possible. Symantec Technical Services will notify you when the maintenance release (Hotfix or Service Pack) 
is available if you sign up for notifications from the Symantec support site http://www.symantec.com/business/support and/or 
from Symantec Operations Readiness Tools (SORT) http://sort.symantec.com.

Additional notes
================|

[+] To confirm the list of cumulative patches installed on a system, run the following command from the directory where the CP files are extracted:
vxhfbatchinstaller.exe /list

The output of this command displays a list of cumulative patches and the hotfixes that are installed as part of a CP. 
This command also displays the hotfixes that are included in a CP but are not installed on the system.

[+] To confirm the installation of the hotfixes, perform one of the following:
    - Run the following command:
      vxhf.exe /list
      The output of this command lists the hotfixes installed on the system.
    - In the Windows Add/Remove program, click "View installed updates" to view the list of the hotfixes installed on the system.

[+] The CP installer (vxhfbatchinstaller.exe) creates and stores logs at:
"%allusersprofile%\Veritas\VxHF\VxHFBatchInstaller.txt"

[+] To confirm the latest cumulative patch installed on a system, run the following command from the directory where the CP files are extracted:
vxhfbatchinstaller.exe /cplevel

The output of this command displays the latest CP that is installed, the CP status, and a list of all hotfixes that were a part of the CP but not installed on the system.

[+] The hotfix installer (vxhf.exe) creates and stores logs at:
"%allusersprofile%\Veritas\VxHF\VxHFBatchInstaller.txt"

[+] For general information about the hotfix installer (vxhf.exe), please refer to the following technote:
http://www.symantec.com/docs/TECH73446

[+] To view a list of hotfixes already installed on a system, please refer to the steps mentioned in the following technote:
http://www.symantec.com/docs/TECH73438

[+] For information on uninstalling a hotfix, please refer to the following technotes:
http://www.symantec.com/docs/TECH225604
http://www.symantec.com/docs/TECH73443

[+] For general information about the CP, please refer to the following technote:
http://www.symantec.com/docs/TECH209086


OTHERS
------
NONE