sfw-win-CP21_SFW_51SP2

 Basic information
Release type: P-patch
Release date: 2014-05-15
OS update support: None
Technote: TECH173500-Storage Foundation for Windows High Availability, Storage Foundation for Windows and Veritas Cluster Server 5.1 Service Pack 2 (SP2) Cumulative Patches
Documentation: None
Popularity: 6008 viewed
Download size: 331.44 MB
Checksum: 273384356

 Applies to one or more of the following products:
Storage Foundation 5.1SP2 On Windows 32-bit
Storage Foundation 5.1SP2 On Windows IA64
Storage Foundation 5.1SP2 On Windows x64

 Obsolete patches, incompatibilities, superseded patches, or other requirements:

This patch supersedes the following patches:
Patch name                          Release date
sfw-win-CP20_SFW_51SP2 (obsolete)   2014-01-28

 Fixes the following incidents:
2087139, 2203640, 2207263, 2218963, 2245816, 2267265, 2290214, 2318276, 2321015, 2327428, 2364591, 2368399, 2372049, 2372164, 2397382, 2426197, 2440099, 2477520, 2512482, 2530236, 2536009, 2536342, 2554039, 2564914, 2587638, 2604814, 2610786, 2614448, 2635097, 2643293, 2670150, 2676164, 2711856, 2738430, 2766206, 2851054, 2860593, 2864040, 2894296, 2905123, 2911830, 2913240, 2914038, 2928801, 2940962, 2963812, 2975132, 3081465, 3104954, 3105641, 3146554, 3164349, 3190483, 3211093, 3226396, 3265897, 3283659, 3316851, 3319824, 3365283, 3372380, 3419601, 3463697, 3478319, 3490438, 3497450

 Patch ID:
None.

Readme file
                          * * * READ ME * * *
             * * * Veritas Storage Foundation 5.1 SP2 * * *
                         * * * P-patch 21 * * *
                         Patch Date: 2014-05-15


This document provides the following information:

   * PATCH NAME
   * OPERATING SYSTEMS SUPPORTED BY THE PATCH
   * PACKAGES AFFECTED BY THE PATCH
   * BASE PRODUCT VERSIONS FOR THE PATCH
   * SUMMARY OF INCIDENTS FIXED BY THE PATCH
   * DETAILS OF INCIDENTS FIXED BY THE PATCH
   * INSTALLATION PRE-REQUISITES
   * INSTALLING THE PATCH
   * REMOVING THE PATCH


PATCH NAME
----------
Veritas Storage Foundation 5.1 SP2 P-patch 21


OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
Windows 2003 32-bit
Windows 2003 IA64
Windows 2003 X64
Windows 2008 32-bit
Windows 2008 IA64
Windows 2008 X64
Windows Server 2008 R2 X64



BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
   * Veritas Storage Foundation 5.1 SP2


SUMMARY OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
Patch ID: SFW 5.1SP2 CP21
* 2203640 (2203640) This hotfix addresses a Volume Manager plug-in issue due to which the DR wizard is unable to discover a cluster node during configuration.
* 2207263 (2207263) This hotfix addresses a deadlock issue where disk group deport hangs after taking a backup from NBU.
* 2245816 (2245816) Issue with Volume turning RAW due to Write failure in fsys.dll module
* 2267265 (2267265) Attempting to bring the VMDg resource online during MoveGroup operation on a Windows Failover Cluster (WFC) results in an RHS deadlock timeout error
* 2087139 (2087139) While trying to stop the vxvm service, hotfix installer intermittently fails to stop the service.
* 2290214 (2290214) This hotfix addresses an issue in the SFW component, Veritas VxBridge Service (VxBridge.exe), that causes a memory corruption or a crash in a VxBridge client process in the clustering environment.
* 2321015 (2321015) This hotfix fixes bug check 0x3B in VXIO.
* 2318276 (2318276) This hotfix replaces a fix for an issue in vxio which breaks Network Address Translation (NAT) support in VVR. It also removes the limitation of using different private IP addresses on the primary and secondary host in a NAT environment.
* 2364591 (2364591) This hotfix adds Thin Provisioning Reclaim support for EMC VMAX array.
* 2368399 (2368399) This hotfix addresses an issue where any failure while the volume shrink operation is in progress may cause file system corruption and data loss.
* 2397382 (2397382) VVR primary hangs and a BSOD is seen on the secondary when stop and pause replication operations are performed on the configured RVGs.
* 2218963 (2218963) This hotfix adds thin provisioning support for HP XP arrays to SFW 5.1 SP2.
* 2426197 (2426197) This hotfix addresses an issue where the disks fail to appear after the storage paths are reconnected, until either the vxvm service is restarted or the system is rebooted.
* 2440099 (2440099) If SFW fails to find the original product license, it now tries to find the license with a different API.
* 2477520 (2477520) After installing cumulative patch 1 (CP1) over SFW 5.1 SP2, user was not able to configure dynamic cluster quorum in Microsoft Cluster.
* 2512482 (2512482) This hotfix addresses an issue where cluster storage validation check fails with error 87 on a system that has SFW installed on it. 
This happens due to system volume being offline.
* 2372049 (2372049) Unable to create enclosures for SUN Storage Tek (STK) 6580/6780 array.
* 2554039 (2554039) This hotfix addresses the issue where an orphan task appears in VEA from MISCOP_RESCAN_TRACK_ALIGNMENT.
* 2536342 (2536342) This hotfix addresses the issue where incorrect WMI information is logged into Cluster class for Dynamic disks.
* 2564914 (2564914) This hotfix addresses the issue where VxVDS.exe process does not release handles post CP2 updates.
* 2372164 (2372164) This hotfix addresses the issue where multiple buffer overflows occur in SFW vxsvc.exe. This results in a vulnerability that allows remote attackers to execute arbitrary code on vulnerable installations of Symantec Veritas Storage Foundation. Authentication is not required to exploit this vulnerability.
* 2536009 (2536009) Mirror creation failed with auto selection of disk and track alignment enabled, even though enough space was there.
* 2530236 (2530236) This hotfix addresses an issue where the Resource Hosting Subsystem (RHS) process crashes when the system is restarted.
* 2604814 (2604814) This hotfix addresses an issue where the Volume Manager Diskgroup (VMDg) resource in a Microsoft Cluster Server (MSCS) cluster may fault when you add or remove multiple empty disks (typically 3 or more) from a dynamic disk group.
* 2587638 (2587638) If a disk group contains a large number of disks, the resulting delay in disk reservation can cause the defender node to lose the disks to the challenger node.
* 2614448 (2614448) This hotfix addresses an issue where Windows displays the format volume dialog box when you use SFW to create volumes.
* 2610786 (2610786) This hotfix addresses an issue where the disk group import operation fails to complete if one or more disks in the disk group are not readable.
* 2635097 (2635097) This hotfix addresses an issue related to Veritas Volume Replicator (VVR) where replication hangs and the VEA Console becomes unresponsive if the replication is configured over the TCP protocol and VVR compression feature is enabled.
* 2643293 (2643293) This hotfix provides Thin Provisioning Reclaim support for Fujitsu ETERNUS DX80 S2/DX90 S2 arrays.
* 2670150 (2670150) This hotfix addresses the issue where multiple entries for the warning ERROR_MORE_DATA(234) get logged for the control code CLUSCTL_RESOURCE_STORAGE_GET_DISK_INFO.
* 2676164 (2676164) This hotfix addresses an issue related to the SFW component, vxres.dll, that causes a crash in the Resource Hosting Subsystem (RHS) process.
* 2711856 (2711856) VVR replication between two sites fails if VxSAS service is configured with a local user account that is a member of the local administrators group.
* 2327428 (2327428) This hotfix addresses an issue with the product installer component that causes a failure in SFW Thin Provisioning space reclamation.
* 2738430 (2738430) An active cluster dynamic disk group faults after a clear SCSI reservation operation is performed.
* 2766206 (2766206) Some VVR operations fail if the RVG contains a large number of volumes.
* 3372380 (3372380) Performing a disk group import/deport operation in a cluster environment fails due to the VxVDS refresh operation.
* 2851054 (2851054) Memory leak in Veritas Storage Agent service (vxpal.exe)
* 2864040 (2864040) The vxprint CLI crashes when used with '-l' option
* 2894296 (2894296) A VMDg resource on MSFC fails to get the correct MountVolumeInfo value
* 2905123 (2905123) Not able to create volumes; VEA very slow for refresh and rescan operations
* 2914038 (2914038) Storage Agent crashes on startup
* 2911830 (2911830) Not able to create a volume with more than 256 disks
* 2928801 (2928801) The vxtune rlink_rdbklimit command does not work as expected
* 2860593 (2860593) The vxprint command may fail when it is run multiple times
* 2913240 (2913240) MountV resource faults because SFW removes a volume due to delayed device removal request.
* 2940962 (2940962) Provisioned size of disks is reflected incorrectly in striped volumes after thin provisioning reclamation
* 2963812 (2963812) VMDg resource in MSCS times out and faults because DG offline operation takes long time to complete
* 2975132 (2975132) The Primary node hangs if TCP and compression are enabled
* 3081465 (3081465) In FoC GUI, VMDg resource belatedly changes status from Online Pending to Online after the resource is online
* 3104954 (3104954) RLINK pause operation fails and system hangs when pausing an RLINK
* 3105641 (3105641) I/O errors may occur while using the vxdisk list, vxdisk diskinfo, or vxassist rescan command
* 3146554 (3146554) Unable to perform thin provisioning and storage reclamation operations on non-track aligned volumes created on arrays that support these operations
* 3164349 (3164349) During disk group import or deport operation, VMDg resources may result in a fault if VDS Refresh is also in progress
* 3190483 (3190483) Cluster disk group resource faults after adding or removing a disk from a disk group if not all of its disks are available for reservation.
* 3211093 (3211093) VVR Primary server may crash while replicating data using TCP multi-connection.
* 3226396 (3226396) Error occurs while importing a dynamic disk group as a cluster disk group if a disk is missing.
* 3265897 (3265897) Moving of subdisk fails for a striped volume with stripe unit size greater than 512 blocks.
* 3283659 (3283659) Rhs.exe process stops unexpectedly when the VMDg resource is brought online.
* 3316851 (3316851) If the EMC Symmetrix array firmware is upgraded to version 5876, SFW is unable to discover the LUNs as thin reclaimable.
* 3319824 (3319824) Data corruption may occur if the volume goes offline while a resynchronization operation is in progress.
* 3365283 (3365283) Server crashes during high write I/O operations on mirrored volumes.
* 3478319 (3478319) Storage Agent log file (vm_vxisis.log) gets flooded with informational messages during cluster monitor cycles
* 3419601 (3419601) BSOD occurs if Windows runs out of the system worker threads
* 3463697 (3463697) Storage Agent log file (vm_vxisis.log) gets flooded with informational messages during cluster monitor cycles
* 3490438 (3490438) VEA GUI incorrectly displays wrong LUN serial numbers for a disk
* 3497450 (3497450) Cluster disk group loses access to majority of its disks due to a SCSI error


DETAILS OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
This patch fixes the following Symantec incidents:

Patch ID: SFW 5.1SP2 CP21

* 2203640 (Tracking ID: 2203640)

SYMPTOM:
This hotfix addresses a Volume Manager plug-in issue due to which the DR wizard is unable to discover a cluster node during configuration.

DESCRIPTION:
While creating a disaster recovery configuration, the Disaster Recovery Configuration Wizard fails to discover the cluster node at the primary site where the service group is online.

You may see the following error on the System Selection page:
V-52410-49479-116
An unexpected exception: 'Attempted to read or write protected memory. This is often an indication that other memory is corrupt. 'Failed to discover 'Veritas Volume Manager' on node '<primarysitenodename>'.

This issue occurs because the Volume Manager plug-in attempts to free unallocated memory and fails, resulting in a memory crash.

RESOLUTION:
The code fix in the Volume Manager plug-in addresses the memory crash.

Binary / Version:
VMPlugin.dll / 5.1.20001.105

* 2207263 (Tracking ID: 2207263)

SYMPTOM:
This hotfix addresses a deadlock issue where disk group deport hangs after taking a backup from NBU.

DESCRIPTION:
Disk group deport operation hangs due to deadlock situation in the storage agent. The VDS provider makes PRcall to other providers after acquiring the Veritas Enterprise Administrator (VEA) database lock.

RESOLUTION:
Before making a PRcall to other providers, release the VEA database lock.

Binary / Version:
vdsprov.dll / 5.1.20001.87

* 2245816 (Tracking ID: 2245816)

SYMPTOM:
Issue with Volume turning RAW due to Write failure in fsys.dll module

DESCRIPTION:
This hotfix updates the fsys provider to handle an NTFS shrink bug. By default, the Windows operating system fails any sector-level read or write larger than 32 MB on Windows Server 2003. Hence, I/Os are split into multiple smaller I/Os in the WriteSector function of the fsys provider.

RESOLUTION:
The WriteSector function in the fsys provider now splits large I/Os into multiple smaller I/Os.

Binary / Version:
fsys.dll / 5.1.20009.87
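
The chunking idea above can be pictured with a minimal C++ sketch: keep every WriteFile call at or below 32 MB so that no single sector-level write exceeds the Windows Server 2003 limit. The 32 MB constant and the demonstration file are assumptions for the example; the actual WriteSector code in the fsys provider is not reproduced here.

    #include <windows.h>
    #include <vector>
    #include <cstdio>

    // Hedged sketch only: split a large write into chunks of at most 32 MB,
    // because larger sector-level writes can fail on Windows Server 2003.
    static const DWORD kMaxChunkBytes = 32 * 1024 * 1024;

    bool WriteInChunks(HANDLE h, const BYTE* data, DWORD totalBytes)
    {
        DWORD offset = 0;
        while (offset < totalBytes)
        {
            DWORD chunk = totalBytes - offset;
            if (chunk > kMaxChunkBytes)
                chunk = kMaxChunkBytes;

            DWORD written = 0;
            if (!WriteFile(h, data + offset, chunk, &written, NULL) || written != chunk)
                return false;        // caller decides whether to retry or fail the I/O
            offset += chunk;
        }
        return true;
    }

    int main()
    {
        // Demonstration against an ordinary file; a raw volume would instead be
        // opened with CreateFile on the volume device path.
        HANDLE h = CreateFile("chunked_write_demo.bin", GENERIC_WRITE, 0, NULL,
                              CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
        if (h == INVALID_HANDLE_VALUE) return 1;

        std::vector<BYTE> data(1024 * 1024, 0xAB);   // 1 MB of sample data
        bool ok = WriteInChunks(h, data.data(), (DWORD)data.size());
        CloseHandle(h);
        printf("write %s\n", ok ? "succeeded" : "failed");
        return ok ? 0 : 1;
    }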

* 2267265 (Tracking ID: 2267265)

SYMPTOM:
This hotfix addresses the following issues:
Issue 1
Attempting to bring the VMDg resource online during MoveGroup operation on a Windows Failover Cluster (WFC) results in an RHS deadlock timeout error: RHS] RhsCall::DeadlockMonitor: Call ONLINERESOURCE timed out for resource.

Issue 2
Request to port implementation of CLUSCTL_RESOURCE_STORAGE_GET_MOUNTPOINTS to Storage Foundation for Windows 5.1 Service Pack 2.

DESCRIPTION:
Issue 1
During VMDg resource online attempt, it appears that the Resource Monitor (RHS) fails to receive notification resulting in an RHS deadlock timeout error: 
[RHS] RhsCall::DeadlockMonitor: Call ONLINERESOURCE timed out for resource

Resources timed out due to delayed offline and online operations. The client access list is created by communicating with the cluster service after online and offline operations. To open a connection with the cluster, the Microsoft OpenCluster API is used. The delay in cluster offline and online operations occurred because the OpenCluster API was taking too much time.

Issue 2
Distributed File System Replication (DFSR) in a Microsoft cluster is not working properly with the Volume Manager Disk Group (VMDg) resource.

The DFSR resource was modified to identify and pick up third-party disk resources while building the volume ID table; however, the volume path is not getting validated when processing the volume.

The path names for the disk resource are fetched from the disk resource handle using the resource control CLUSCTL_RESOURCE_STORAGE_GET_MOUNTPOINTS. This control code should return the list of path names hosted for the specified disk partition.

RESOLUTION:
Issue 1
Earlier, the RPC mechanism was used with the OpenCluster API. The issue was resolved by using the LPC mechanism with the OpenCluster API.

Issue 2
Added code to properly handle the resource control CLUSCTL_RESOURCE_STORAGE_GET_MOUNTPOINTS.

Binary / Version:
vxres.dll / 5.1.20011.87

* 2087139 (Tracking ID: 2087139)

SYMPTOM:
While trying to stop the vxvm service, hotfix installer intermittently fails to stop the service.

DESCRIPTION:
Hotfix installer intermittently fails to perform prerequisite operations. The failure occurs when the installer tries to stop the vxvm service and reports an error. When QUERY VXVM is used to query the state of the service, the status is shown as stopped.

While stopping the vxvm service, the iSCSI and Scheduler providers perform certain operations which result in the failure.

RESOLUTION:
The operations in the iSCSI and Scheduler providers are now aborted while the stop operation is performed for the vxvm service.

Binary / Version:
iscsi.dll / 5.1.20012.88
scheduler.dll / 5.1.20012.88
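
For reference, the service state mentioned in the description can be checked with the standard Service Control Manager APIs. The minimal C++ sketch below assumes the service name is "vxvm", as used above; it is an illustration only and not part of the hotfix installer.

    #include <windows.h>
    #include <cstdio>

    #pragma comment(lib, "advapi32.lib")

    // Query the current state of the vxvm service, similar to what the
    // QUERY VXVM check described above reports. The service name "vxvm" is
    // taken from the text above.
    int main()
    {
        SC_HANDLE hScm = OpenSCManager(NULL, NULL, SC_MANAGER_CONNECT);
        if (!hScm) return 1;

        SC_HANDLE hSvc = OpenService(hScm, "vxvm", SERVICE_QUERY_STATUS);
        if (!hSvc) { CloseServiceHandle(hScm); return 1; }

        SERVICE_STATUS_PROCESS ssp = {0};
        DWORD bytesNeeded = 0;
        if (QueryServiceStatusEx(hSvc, SC_STATUS_PROCESS_INFO,
                                 (LPBYTE)&ssp, sizeof(ssp), &bytesNeeded))
        {
            // dwCurrentState is SERVICE_STOPPED, SERVICE_RUNNING,
            // SERVICE_STOP_PENDING, and so on.
            printf("vxvm service state: %lu\n", ssp.dwCurrentState);
        }

        CloseServiceHandle(hSvc);
        CloseServiceHandle(hScm);
        return 0;
    }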

* 2087139 (Tracking ID: 2087139)

SYMPTOM:
While trying to stop the vxvm service, hotfix installer intermittently fails to stop the service.

DESCRIPTION:
Hotfix installer intermittently fails to perform prerequisite operations. The failure occurs when the installer tries to stop the vxvm service and reports an error. When QUERY VXVM is used to query the state of the service, the status is shown as stopped. While stopping the vxvm service, the iSCSI and Scheduler providers perform certain operations which result in the failure.

RESOLUTION:
The operations in the iSCSI and Scheduler providers are now aborted while the stop operation is performed for the vxvm service.

Binary / Version:
cluster.dll / 5.1.20012.88

* 2290214 (Tracking ID: 2290214)

SYMPTOM:
This hotfix addresses an issue in the SFW component, Veritas VxBridge Service (VxBridge.exe), that causes a memory corruption or a crash in a VxBridge client process in the clustering environment.

DESCRIPTION:
In a clustering environment, a VxBridge client process may either crash or there could be a memory corruption. This occurs because VxBridge.exe tries to read beyond the memory allocated to a [in, out, string] parameter by its client.

As a result, the cluster may become unresponsive. Users may not be able to access the clustered applications and cluster administrators may not be able to connect to the cluster using the Cluster Management console.

RESOLUTION:
This issue is fixed in the VxBridge.exe process.
Instead of allocating the maximum memory, VxBridge.exe now allocates only the minimum required amount of memory to the [in, out, string] parameters.
Because of the minimum memory requirement, any excess memory allocated by the VxBridge clients does not cause any issues.

Binary / Version:
VxBridge.exe / 5.1.20013.87

* 2321015 (Tracking ID: 2321015)

SYMPTOM:
This hotfix fixes bug check 0x3B in VXIO.

DESCRIPTION:
The bug check 0x3B may happen when removing a disk of a cluster dynamic disk group.

RESOLUTION:
This hotfix fixes bug check 0x3B in VXIO.

Binary / Version:
vxio.sys / 5.1.20014.87

* 2318276 (Tracking ID: 2318276)

SYMPTOM:
This hotfix replaces a fix for an issue in vxio which breaks Network Address Translation (NAT) support in VVR. It also removes the limitation of using different private IP addresses on the primary and secondary host in a NAT environment.

DESCRIPTION:
VVR sends heartbeats to the remote node only if the local IP address mentioned on the RLINK is online on that node.

In a NAT environment, the primary host communicates with the NAT IP of the secondary, and since the NAT IP is never online on the secondary node, the VVR secondary does not send heartbeats to the primary host. As a result, the primary does not send a connection request to the secondary.

It is also observed that in a NAT environment, when the primary's private IP address is the same as the secondary's private IP, VVR incorrectly concludes that the primary and secondary nodes are one and the same.

RESOLUTION:
Removed the check which made it mandatory for the local IP address to be online on a node. Also, fixed the issue which prevents configuring VVR in a NAT environment when the private IP addresses of the primary and secondary hosts are the same.

Binary / Version:
vras.dll / 5.1.20018.87
vxio.sys / 5.1.20018.87

* 2364591 (Tracking ID: 2364591)

SYMPTOM:
This hotfix addresses the following issues:

Issue 1
This hotfix adds Thin Provisioning Reclaim support for EMC VMAX array.

Issue 2
This hotfix addresses an issue of Storage Agent crash when SCSI enquiries made to disks fail.

Issue 3
Mirror creation failed with auto selection of disk and track alignment enabled, even though enough space was there.

DESCRIPTION:
Issue 1
Added Thin Provisioning Reclaim support for EMC VMAX array on Storage Foundation and High Availability for Windows (SFW HA) Service Pack 2.

Issue 2
During startup after SFW 5.1 SP2 installation, Storage Agent uses SCSI enquiries to get information from disks. In some cases it is observed that Storage Agent crashes while releasing memory for the buffer passed to collect information.

Issue 3
There was a logic error in the way a free region of a disk is track-aligned. Small free regions may end up with a negative size. Even though a free region large enough for the desired allocation may exist, the negative sizes reduce the computed total free space and cause the allocation to fail.

RESOLUTION:
Issue 1
Made changes in DDL provider to reclaim support for EMC VMAX array as a Thin Reclaim device.

Issue 2
Aligned the buffer records on 16 byte boundaries. This ensures that the data structures passed down between native drivers and providers are in sync. Additionally, it also saves the effort of relying on data translation done by WOW64 when code is running on 32-bit emulation mode.

Issue 3
Fixed the logic error in track-aligning the free space so that no region has a negative size.

Binary / Version:
ddlprov.dll / 5.1.20020.87
pnp5.dll / 5.1.20020.87

* 2368399 (Tracking ID: 2368399)

SYMPTOM:
This hotfix addresses an issue where any failure while the volume shrink operation is in progress may cause file system corruption and data loss.

DESCRIPTION:
The volume shrink operation allows you to decrease the size of dynamic volumes.
When you start the volume shrink operation, it begins to move used blocks so as to accommodate them within the specified target shrink size for the volume.

However, this operation is not transactional. If there are any issues encountered during block moves or if the operation is halted for any reason (for example, the host reboots or shuts down, the operating system goes in to a hung state, or a stop error occurs), it can result in a file system corruption and may cause data loss.
The state of the volume changes to 'Healthy, RAW'.

RESOLUTION:
With this hotfix, a warning message is displayed each time you initiate a volume shrink operation. The message recommends that you make a backup copy of the data on the target volume (the volume that you wish to shrink) before you perform the volume shrink operation.

Depending on how you initiate the volume shrink operation (either VEA or command line), you have to perform an additional step, as described below:

    If you initiate the volume shrink operation from the VEA console, click OK on the message prompt to proceed with the volume shrink operation.

    If you initiate the volume shrink operation from the command prompt, the command fails with the warning message. Run the command again with the force (-f) option.

Note: 
Hotfix_5_1_20024_87_2368399 has been repackaged to address an installation issue that was present in an older version of this hotfix, which was released in an earlier CP.

Binary / Version:
vxassist.exe / 5.1.20024.87
climessages.dll / 5.1.20024.87
vxvmce.jar / NA
vmresourcebundle.en.jar / NA

* 2397382 (Tracking ID: 2397382)

SYMPTOM:
This hotfix addresses the following issues:
Issue 1
VVR primary hangs and a BSOD is seen on the secondary when stop and pause replication operations are performed on the configured RVGs.

Issue 2
Fixed a memory leak issue in the Veritas Volume Replicator (VVR) compression module.

DESCRIPTION:
Issue 1
When stopping or pausing VVR replication in TCP/IP mode, a BSOD is seen with STOP ERROR 0x12E "INVALID_MDL_RANGE".
The BSOD occurs on the VVR secondary system due to a TCP receive bug caused by a mismatch between the Memory Descriptor List (MDL) and the underlying buffer it describes.

Issue 2
During heavy VVR compression activity, when the memory upper limit is reached, some I/O compression operations fail, resulting in a memory leak.

RESOLUTION:
Issue 1
The issue related to BSOD error has been fixed.

Issue 2
Fixed the memory leak issue for IO compression error scenarios.

Binary / Version:
vxio.sys / 5.1.20025.87

* 2218963 (Tracking ID: 2218963)

SYMPTOM:
This hotfix adds thin provisioning support for HP XP arrays to SFW 5.1 SP2.
This hotfix also fixes a data corruption problem that can happen while moving sub disks of a volume when the Smart Move mirror resync feature is enabled.

DESCRIPTION:
Hotfix_5_1_20005_87_2218963 adds thin provisioning support for HP XP arrays to SFW 5.1 SP2.

When there are multiple sub disks on a disk and the sub disk that is being moved is not aligned to 8 bytes, then there is a possibility of missing some disk blocks while syncing with the new location. This may result in data corruption.

Hotfix_5_1_20005_87_2218963 fixes the starting Logical Cluster Numbers used while syncing sub disk clusters so that no block is left out.

RESOLUTION:
This hotfix adds thin provisioning support for HP XP arrays and also fixes a data corruption issue.

Binary / Version:
vxvm.dll / 5.1.20005.87
ddlprov.dll / 5.1.20005.87
vxconfig.dll / 5.1.20005.87

* 2426197 (Tracking ID: 2426197)

SYMPTOM:
This hotfix addresses an issue where the disks fail to appear after the storage paths are reconnected, until either the vxvm service is restarted or the system is rebooted.

DESCRIPTION:
This issue occurs only when SFW and EMC PowerPath 5.5 are installed on a system.

When you disconnect and reconnect disks managed using the EMC PowerPath multipathing solution, the disks fail to appear on the system and are inaccessible. The VEA GUI shows the disk arrival events, but the disk type is displayed as unknown and the status is offline.

This occurs because the SFW vxpal component creates access handles on the gatekeeper devices that remain open forever. Therefore when the storage is reconnected, the disks remain unreadable as the previous stale handles are not closed.

RESOLUTION:
The open handles issue is fixed in the PnP handler logic.

Binary / Version:
pnp5.dll / 5.1.20028.87

* 2440099 (Tracking ID: 2440099)

SYMPTOM:
If SFW fails to find the original product license, it now tries to find the license with a different API.

DESCRIPTION:
A licensing issue occurs where SFW fails to perform basic operations because it cannot find the installed license on a system. This occurs even when a valid license key is installed on the system.

RESOLUTION:
SFW now uses the registry and different APIs to find licenses if the older APIs fail.

Binary / Version:
sysprov.dll / 5.1.20029.87

* 2477520 (Tracking ID: 2477520)

SYMPTOM:
After installing cumulative patch 1 (CP1) over SFW 5.1 SP2, user was not able to configure dynamic cluster quorum in Microsoft Cluster.

DESCRIPTION:
After installing cumulative patch 1 (CP1) over SFW 5.1 SP2, user was not able to configure dynamic cluster quorum in Microsoft Cluster due to an incorrect error code being returned to the cluster service.

RESOLUTION:
The proper error code is now returned to the cluster service in case the storage (that is, a drive letter or volume GUID/name) belongs to the Disk Group resource but the path is not valid.

Binary / Version:
vxres.dll / 5.1.20032.87

* 2512482 (Tracking ID: 2512482)

SYMPTOM:
This hotfix addresses an issue where cluster storage validation check fails with error 87 on a system that has SFW installed on it. 
This happens due to system volume being offline.

DESCRIPTION:
SFW disables the automount feature on a system which leaves the default system volume offline after each reboot. 
Cluster validation checks for access to all volumes and fails for the offline system volume with error 87.

RESOLUTION:
Resolved the issue by bringing the system volume online during system boot so that the cluster validation check succeeds.

Binary / Version:
vxboot.sys / 5.1.20033.87

* 2372049 (Tracking ID: 2372049)

SYMPTOM:
Unable to create enclosures for SUN Storage Tek (STK) 6580/6780 array.

DESCRIPTION:
There was no support for SUN STK 6580/6780 array in the VDID library.

RESOLUTION:
Added code to recognize SUN STK 6580/6780 array in VDID library.

Binary / Version:
sun.dll / 5.1.20021.87

* 2554039 (Tracking ID: 2554039)

SYMPTOM:
This hotfix addresses the issue where an orphan task appears in VEA from MISCOP_RESCAN_TRACK_ALIGNMENT.

DESCRIPTION:
After importing a disk group, a task appears in the VEA task bar and never disappears. This appears to be triggered by a rescan fired from the ddlprov.dll when the VDID for a device changes.  Since it is an internal task, it probably should not appear in the VEA at all.

RESOLUTION:
The orphan task object which was created for Rescan has been removed.


Binary / Version:
vxvm.dll / 5.1.20035.87

* 2536342 (Tracking ID: 2536342)

SYMPTOM:
This hotfix addresses the issue where incorrect WMI information is logged into Cluster class for Dynamic disks.

DESCRIPTION:
When a disk group containing all GPT disks is created, SFW creates and publishes a signature into WMI affecting the Signature and ID fields for the MSCluster_Disk class. This ID/Signature changes on every reboot. If two disk groups are created with all GPT disks, then they have the same signature and ID.
This causes issues with Microsoft's SCVMM product, which uses the signature/ID to identify individual resources.

With this configuration, as soon as a Hyper-V machine is put on an all-GPT disk group, all the other GPT disk groups are marked as 'in Use' and SCVMM is unable to use those disks for other Hyper-V machines.

RESOLUTION:
A unique signature is now generated and filled into the diskinfo structure of GPT disks. This signature is used as the ID while populating WMI information into the MSCluster_Disk WMI class.


Binary / Version:
cluscmd.dll / 5.1.20037.87

* 2564914 (Tracking ID: 2564914)

SYMPTOM:
This hotfix addresses the issue where VxVDS.exe process does not release handles post CP2 updates.

DESCRIPTION:
After the installation of 5.1 SP2 CP2, a high number of open handle counts are seen on the VxVDS Process.

RESOLUTION:
The handle leaks have been fixed.

Binary / Version:
vxvds.exe / 5.1.20038.87

* 2372164 (Tracking ID: 2372164)

SYMPTOM:
This hotfix addresses the issue where multiple buffer overflows occur in SFW vxsvc.exe. This results in a vulnerability that allows remote attackers to execute arbitrary code on vulnerable installations of Symantec Veritas Storage Foundation. Authentication is not required to exploit this vulnerability.

DESCRIPTION:
The specific flaw exists within the vxsvc.exe process. The problem affecting the part of the server running on TCP port 2148 is an integer overflow in the function vxveautil.value_binary_unpack, where a 32-bit field holds a value that, through some calculation, can be used to create a heap buffer smaller than required to hold user-supplied data. This can be leveraged to cause an overflow of the heap buffer, allowing the attacker to execute arbitrary code in the context of SYSTEM.

RESOLUTION:
The issue has been addressed in this hotfix.

Binary / Version:
vxveautil.dll / 3.3.1068.0
vxvea3.dll / 3.3.1068.0
vxpal3.dll / 3.3.1068.0
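
The overflow described above is the classic pattern where a 32-bit size calculation wraps around and produces an undersized heap buffer. The C++ snippet below is a generic illustration of that pattern and of a checked allocation; it is not the actual vxsvc.exe or vxveautil code.

    #include <cstdint>
    #include <cstdlib>

    // Generic illustration only; not the actual vxsvc.exe code.
    // If 'count' comes from an untrusted 32-bit field, count * elemSize can
    // silently wrap around and allocate a buffer far smaller than the data
    // later copied into it, corrupting the heap.
    void* unsafe_alloc(uint32_t count, uint32_t elemSize)
    {
        uint32_t bytes = count * elemSize;   // may overflow (wrap) silently
        return std::malloc(bytes);           // undersized buffer
    }

    // Checked version: reject the request if the multiplication would overflow.
    void* checked_alloc(uint32_t count, uint32_t elemSize)
    {
        if (elemSize != 0 && count > UINT32_MAX / elemSize)
            return nullptr;                  // refuse instead of wrapping
        return std::malloc((size_t)count * (size_t)elemSize);
    }

    int main()
    {
        void* p = checked_alloc(0x20000000u, 16u);  // would wrap in the unsafe path
        std::free(p);                               // p is nullptr; free(nullptr) is a no-op
        return 0;
    }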

* 2536009 (Tracking ID: 2536009)

SYMPTOM:
Issue 1
Mirror creation failed with auto selection of disk and track alignment enabled, even though enough space was there.

Issue 2
When using the GUI or CLI to create a new volume with a mirror and DRL logging included, the DRL log is track aligned but the data volume is no longer track aligned.

Issue 3
On a VVR-GCO configuration, when the primary site goes down, the application service group fails over to the secondary site. It is observed that MountV resource probes online on a failed node after a successful auto takeover operation.

Issue 4
Following issues were fixed for Storage Foundation for Windows (SFW) with DMP DSM and SFW SCSI-3 enabled setting:    
1. Reconnecting previously detached storage array in a campus cluster causes all MSCS Volume Manager Disk Group (VMDg) resources to fail and rescan operation gets hung at 28%. 
2. Deporting a cluster disk group may take a long time.

Issue 5
Data corruption while performing subdisk Move operation.

Issue 6
During Windows Failover Cluster Move Group operation, cluster disk group import fails with Error 3 (DG_FAIL_NO_MAJORITY 0x0003).

Issue 7
This hotfix addresses an issue in the Volume Manager (VM) component to support VCS hotfix Hotfix_5_1_20029_2536009.

Note:
Fixes for issues #1, #2, #3, #4, #5, #6 were released earlier as Hotfix_5_1_20026_87_2406683. It is now a part of this hotfix.

DESCRIPTION:
Issue 1
There was a logic error in the way a free region of a disk is track-aligned. Small free regions may end up with a negative size. Even though a free region large enough for the desired allocation may exist, the negative sizes reduce the computed total free space and cause the allocation to fail.

Issue 2
During space allocation, the free space was aligned before being allocated to the data plex and the DRL log. When the DRL is placed first and its size is not a multiple of the track size, the data plex that follows is not track aligned.

Issue 3
After a node crash on a VVR-GCO configuration, the mount points under VCS control are not deleted, causing them to probe online on the disk group (which gets auto-imported after the reboot) and leading to a concurrency violation.

For global service groups, concurrency violations are not resolved by the VCS engine automatically. Hence the MountV offline is not initiated.

Issue 4
In an SFW DMP DSM environment with SFW SCSI-3 settings, reconnecting a detached disk of an imported cluster disk group can cause the SCSI-3 release reservation logic to get into a loop, or the operation may take a long time to complete.

Issue 5
When there are multiple subdisks on a disk and the subdisk that is being moved is not aligned to 8 bytes,  then there is a possibility of missing some disk blocks while syncing with the new location. This may result in data corruption.

Issue 6
During Windows Failover Cluster Move Group operation, successful disk group deport happens; however, subsequent import attempt fails with error "DG_FAIL_NO_MAJORITY 0x0003." 

If disks are added to an existing cluster disk group which is online on node A and when this disk group is moved to other node B, then node B is unable to online the cluster disk group. This happens because the system view of the disk is not being synchronized with the modified disk group information and node B where the cluster disk group is moved is not able to reflect the correct information.

Issue 7
As a result of the modified MountV agent offline function, there is a possibility that the volumes can be mounted externally even after the MountV offline is complete.

RESOLUTION:
Issue 1
Fixed the logic error in track-aligning the free space so that no region has a negative size.

Issue 2
The order of allocation has been reversed. Now, the free space is assigned to the data plex first and then to the DRL log.  Therefore, the data plex is always aligned, and the DRL log may or may not
be aligned.

Issue 3
The stale mount points are now deleted when a cluster disk group is imported after a node reboot.

Issue 4
Corrected the logic in vxconfig.dll to do SCSI-3 release reservation effectively.

Issue 5
Fixed the starting Logical Cluster Numbers (LCNs) used while syncing subdisk clusters so that no block is left out.

Issue 6
The Windows NT disk cache is now updated for all the disks in case of an unexpected number of live disks.

Issue 7
The disk group deport operation is modified to support the fix provided in VCS hotfix Hotfix_5_1_20029_2536009.

VM now dismounts and flushes the volumes cleanly during disk group deport.


Binary / Version:
vxconfig.dll / 5.1.20036.87
vxconfig.dll / 5.1.20026.87

* 2530236 (Tracking ID: 2530236)

SYMPTOM:
This hotfix addresses an issue where the Resource Hosting Subsystem (RHS) process crashes when the system is restarted.

DESCRIPTION:
This issue occurs when VVR is configured in a Microsoft Failover Clustering environment.
RHS.exe reports a crash for mscsrvgresource.dll while restarting the system.

The System Event log may display the following:
ERROR	    1230(0x000004ce)	Microsoft-Windows-FailoverClustering Cluster resource '<resourcename>' (resource type '', DLL 'mscsrvgresource.dll') either crashed or deadlocked. The Resource Hosting Subsystem (RHS) process will now 
attempt to terminate, and the resource will be marked to run in a separate monitor.

The cluster log may display the following:
ERR   [RHS]: caught exception c0000005 in call OPENRESOURCE for <resourcename>.

This issue occurred because of an incorrect Microsoft API call made by a VVR component.

RESOLUTION:
The VVR component now uses a different API to address the crash.


Binary / Version:
mscsrvgresource.dll / 5.1.20039.87

* 2604814 (Tracking ID: 2604814)

SYMPTOM:
This hotfix addresses an issue where the Volume Manager Diskgroup (VMDg) resource in a Microsoft Cluster Server (MSCS) cluster may fault when you add or remove multiple empty disks (typically 3 or more) from a dynamic disk group.

DESCRIPTION:
This issue occurs on 64-bit systems where SFW is used to manage storage in a Microsoft Cluster Server (MSCS) environment.
When adding or removing disks from a dynamic disk group, the disk group resource (VMDg) faults and the cluster group fails over. The VEA console shows the disk group in a deported state.

The cluster log displays the following message:
ERR   [RES] Volume Manager Disk Group <resourcename>: LDM_RESLooksAlive: *** FAILED for <resourcename>, status =
0, res = 0, dg_state = 35

The system event log contains the following message:
ERROR 1069 (0x0000042d)	clussvc	<systemname>	Cluster resource <resourcename> in Resource Group <groupname> failed.

MSCS uses the Is Alive / Looks Alive polling intervals to check the availability of the storage resources. While disks are being added or removed, SFW holds a lock on the disk group until the operation is complete. However, if the Is Alive / Looks Alive query arrives at a time the disk add/remove operation is in progress, SFW ignores the lock it holds on the disk group and incorrectly communicates the loss of the disk group (where the disks are being added/removed) to MSCS. As a result, MSCS faults the storage resource and initiates a fail over of the group.

RESOLUTION:
The issue is fixed in the SFW component that holds the lock on the disk group. As a result, SFW now responds to the MSCS Is Alive / Looks Alive query only after the disk add/remove operation is complete.


Binary / Version:
cluscmd64.dll / 5.1.20041.87

* 2587638 (Tracking ID: 2587638)

SYMPTOM:
This hotfix addresses the following issues:

Issue 1
If a disk group contains a large number of disks, the resulting delay in disk reservation can cause the defender node to lose the disks to the challenger node.

Issue 2
VVR Primary goes into a hang state while initiating a connection request to the Secondary. This is observed when the Secondary machine returns an error message which the Primary is unable to understand.

Issue 3
A reboot of an EMC CLARiiON storage processor results in Storage Foundation for Windows dynamic disks being flagged as removed, resulting in I/O failures.

Issue 4
I/O operations hang in the SFW driver, vxio, where VVR is configured for replication.

Issue 5
VVR causes a fatal system error and results in a system crash (bug check).

Note:
Fix for issue #1, #2, and #3 was released earlier as Hotfix_5_1_20040_87_2570602 and fix for issue #5 was released earlier as Hotfix_5_1_20043_87_2535885. They are now a part of this hotfix.

DESCRIPTION:
Issue 1
In a split brain scenario, the active node (defender node) and the passive nodes (challenger nodes) try to gain control over the majority of the disks. 

If the disk group contains a large number of disks, then it takes a considerable amount of time for the defender node to reserve the disks.

The delay may result in the defender node losing the reservation to the challenger nodes.

This delay occurs because the SCSI-3 disk reservation algorithm performs the disk reservation operation in a serial order.

Issue 2
VVR Primary initiates a connection request to the Secondary and waits for an acknowledgement. The Secondary may reply back with an ENXIO signaling that the Secondary Replicated Volume Group's (RVG's) SRL was not found. The Primary machine is unable to understand this error message and continues to remain in the same waiting state for an acknowledgement reply from the Secondary. 

Any transaction from VOLD is blocked since the RLINK is still waiting for an acknowledgement from the Secondary. This leads to new I/Os getting piled up in the vxio as a transaction is already in progress.

Issue 3
In a multipath configuration, each system has one or more storage paths from multiple storage processors (SP). If a storage processor is rebooted, vxio.sys (the Storage Foundation for Windows (SFW) component) may sometimes mark the disks as removed and fail the I/O transactions even though the disks are accessible from another SP. Because the I/O failed, the clustering solution fails over the disks to the passive node.

The following errors are reported in the Windows Event logs: 

INFORMATION Systemname vxio: <Hard Disk> read error at block 5539296 due to disk removal  

INFORMATION Systemname vxio: Disk driver returned error c00000a3 when vxio tried to read block 257352 on <hard disk>

WARNING     Systemname vxio: Disk <Hard Disk> block 9359832 (mountpoint X:): Uncorrectable write error 

Issue 4
I/O operations on a system appear to hang due to the SFW driver, vxio.sys, in an environment where VVR is configured.

When an application performs an I/O operation on volumes configured for replication, VVR writes the I/O operation 
to the SRL log volume and completes the I/O request packet (IRP). The I/O is written asynchronously to the data volumes. 

If the application initiates another I/O operation whose extents overlap with any of the I/O operations that are queued 
to be written to the data volumes, the new I/O is kept pending until the queued I/O requests are complete.

Upon completion, the queued I/O signals the waiting I/O to proceed. However, in certain cases, due to a race condition,
the queued I/O fails to send the proceed signal to the waiting I/O. The waiting I/O therefore remains in the waiting state forever.

Issue 5
VVR may sometimes cause a bug check on a system.

The crash dump file contains the following information:
ATTEMPTED_SWITCH_FROM_DPC (b8)
A wait operation, attach process, or yield was attempted from a DPC routine.
This is an illegal operation and the stack track will lead to the offending code and original DPC routine.

The SFW component, vxio, processes internally generated I/O operations in the disk I/O completion routine itself.
However, the disk I/O completion routine runs at the DPC/dispatch level. Therefore, any function in vxio that requires a context switch does not get processed.
The broken function calls result in vxio causing a system crash.

RESOLUTION:
Issue 1
The SCSI-3 reservation algorithm has been enhanced to address this issue.

The algorithm now tries to reserve the disks in parallel thus improving the throughput of the total time required for SCSI reservation on the defender node.

Issue 2
If the Secondary is unable to find the SRL, it now returns a proper error message that the Primary understands. The Primary then sets the SRL header error on the RLINK and pauses it.

Issue 3
The device removal handling logic has been enhanced to address this issue. The updated SFW component vxio.sys includes the fix.

Issue 4
The race condition in the vxio driver has been fixed.

Issue 5
The vxio component is enhanced to ensure that the internally generated I/O operations are not processed as part of the DPC level disk I/O completion routine.


Binary / Version:
vxio.sys / 5.1.20045.87
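
The change for Issue 1 (reserving the disks in parallel rather than one after another) can be pictured with a small C++ sketch. ReserveDiskScsi3() below is a hypothetical placeholder for the per-disk SCSI-3 reservation call; the real SFW reservation routine is not shown here.

    #include <future>
    #include <vector>
    #include <cstdio>

    // Hypothetical placeholder for the per-disk SCSI-3 reservation call.
    bool ReserveDiskScsi3(int diskIndex)
    {
        // ... issue the persistent reservation to disk 'diskIndex' ...
        return true;
    }

    // Reserve every disk in parallel so that a disk group with many disks no
    // longer makes the defender node lose the reservation race.
    bool ReserveAllDisksParallel(const std::vector<int>& disks)
    {
        std::vector<std::future<bool>> results;
        results.reserve(disks.size());
        for (int disk : disks)
            results.push_back(std::async(std::launch::async, ReserveDiskScsi3, disk));

        bool allReserved = true;
        for (auto& r : results)
            allReserved = r.get() && allReserved;
        return allReserved;
    }

    int main()
    {
        std::vector<int> disks = {0, 1, 2, 3};
        std::printf("reserved all: %d\n", ReserveAllDisksParallel(disks) ? 1 : 0);
        return 0;
    }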

* 2614448 (Tracking ID: 2614448)

SYMPTOM:
This hotfix addresses an issue where Windows displays the format volume dialog box when you use SFW to create volumes.

DESCRIPTION:
While creating volumes using the New Volume Wizard from the Veritas Enterprise Administrator (VEA), if you choose to assign a drive letter or mount the volume as an NTFS folder, Windows displays the format volume dialog box.

This issue occurs because as part of volume creation, SFW creates raw volumes, assigns mount points and then proceeds with formatting the volumes.
Mounting raw volumes explicitly causes Windows to invoke the format dialog box.

Note that the volume creation is successful and you can cancel the Windows dialog box and access the volume.

RESOLUTION:
SFW now assigns drive letters or mount paths only after the volume format task is completed.

Binary / Version:
vxvm.dll / 5.1.20042.87

* 2610786 (Tracking ID: 2610786)

SYMPTOM:
This hotfix addresses an issue where the disk group import operation fails to complete if one or more disks in the disk group are not readable.

DESCRIPTION:
As part of the disk group import operation, the SFW component, vxconfig, performs a disk scan to update the disk group configuration information.


It acquires a lock on the disks in order to read the disk properties. 
In case one or more disks in the disk group are not readable, the scan operation returns without releasing the lock on the disks. 
This lock blocks the disk group import operation.

RESOLUTION:
This issue has been addressed in the vxconfig component. The disk scan operation now releases the lock on the disks even if one or more disks are not readable.

Binary / Version:
vxconfig.dll / 5.1.20044.87

* 2635097 (Tracking ID: 2635097)

SYMPTOM:
This hotfix addresses an issue related to Veritas Volume Replicator (VVR) where replication hangs and the VEA Console becomes unresponsive if the replication is configured over the TCP protocol and VVR compression feature is enabled.

DESCRIPTION:
This issue may occur when replication is configured over TCP and VVR compression is enabled.

VVR stores the incoming write requests from the primary system in a dedicated memory pool, NMCOM, on the secondary system.

When the NMCOM memory is exhausted, VVR keeps trying to process an incoming I/O request until it gets the required memory from the NMCOM pool on the secondary.

Sometimes the NMCOM memory pool may get filled up with out of sequence I/O packets. As a result, the waiting I/O request fails to acquire the memory it needs and goes into an infinite loop.

VVR cannot process the out of sequence packets until the waiting I/O request is executed. The waiting I/O request cannot get the memory as it is occupied by the out of sequence I/O packets. This results in a logjam.

The primary may initiate a disconnect if it fails to receive an acknowledgement from the secondary. But the waiting I/O request is in an infinite loop, and hence the disconnect also goes into a waiting state.

In such a case, if a transaction is initiated it will not succeed and will also stall all the new incoming I/O threads, resulting in a server hang.

RESOLUTION:
This issue is fixed in VVR.
The error condition has been resolved by exiting from the loop if an RLINK disconnect is initiated.

Binary / Version:
vxio.sys / 5.1.20047.87
vvr.dll / 5.1.20047.87
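
The resolution amounts to making the memory-wait loop abortable once an RLINK disconnect is initiated. The C++ sketch below illustrates only that pattern; all names are placeholders and none of this is the actual VVR code.

    #include <atomic>
    #include <chrono>
    #include <thread>
    #include <cstdio>

    // Placeholder names illustrating the abortable-wait pattern only.
    std::atomic<bool> g_rlinkDisconnectRequested(false);

    bool TryAllocateFromNmcomPool(size_t bytes)
    {
        // Placeholder for "get memory from the NMCOM pool"; always fails here
        // so that the loop-exit path is exercised in the demo.
        (void)bytes;
        return false;
    }

    // Previously such a loop could spin forever while out-of-sequence packets
    // held the pool; it now exits once a disconnect is initiated.
    bool WaitForNmcomMemory(size_t bytes)
    {
        while (!TryAllocateFromNmcomPool(bytes))
        {
            if (g_rlinkDisconnectRequested.load())
                return false;                 // abandon the wait, avoid the logjam
            std::this_thread::sleep_for(std::chrono::milliseconds(10));
        }
        return true;
    }

    int main()
    {
        g_rlinkDisconnectRequested = true;    // simulate a disconnect
        std::printf("got memory: %d\n", WaitForNmcomMemory(4096) ? 1 : 0);
        return 0;
    }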

* 2643293 (Tracking ID: 2643293)

SYMPTOM:
This hotfix provides Thin Provisioning Reclaim support for Fujitsu ETERNUS DX80 S2/DX90 S2 arrays.

DESCRIPTION:
Added Thin Provisioning Reclaim support for Fujitsu ETERNUS DX80 S2/DX90 S2 array on SFW 5.1 SP2.

RESOLUTION:
Made changes in DDL provider to claim Fujitsu ETERNUS DX80 S2/DX90 S2 array LUN as a Thin Reclaim device.

Binary / Version:
ddlprov.dll / 5.1.20046.87

* 2670150 (Tracking ID: 2670150)

SYMPTOM:
This hotfix addresses the issue where multiple entries for the warning ERROR_MORE_DATA(234) get logged for the control code CLUSCTL_RESOURCE_STORAGE_GET_DISK_INFO.

DESCRIPTION:
In a few cases, while performing the Move Group operation in a Microsoft Clustering environment, customers have seen that multiple entries for the warning ERROR_MORE_DATA(234) get logged for the control code CLUSCTL_RESOURCE_STORAGE_GET_DISK_INFO.
These warning messages do not affect the functionality of the Volume Manager Disk Group (VMDg) resources and, therefore, can be ignored.

RESOLUTION:
Log entry for the ERROR_MORE_DATA(234) warning for the control code CLUSCTL_RESOURCE_STORAGE_GET_DISK_INFO has been removed from the cluster log.
Note: 
This hotfix is applicable to Windows Server 2003 only.

Binary / Version:
vxres.dll / 5.1.20048.87
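
For context, ERROR_MORE_DATA(234) is the documented way for ClusterResourceControl to report that the caller's output buffer was too small; callers normally just repeat the call with the size returned in the last parameter, which is why these warnings are harmless. The hedged C++ sketch below shows that standard two-call pattern; the resource name is a placeholder and this is not the SFW code.

    #include <windows.h>
    #include <clusapi.h>
    #include <vector>
    #include <cstdio>

    #pragma comment(lib, "clusapi.lib")

    int wmain()
    {
        HCLUSTER hCluster = OpenCluster(NULL);            // local cluster
        if (!hCluster) return 1;

        // Placeholder resource name.
        HRESOURCE hRes = OpenClusterResource(hCluster, L"VMDg-Resource-Name");
        if (!hRes) { CloseCluster(hCluster); return 1; }

        std::vector<BYTE> buf(1024);
        DWORD cbReturned = 0;
        DWORD status = ClusterResourceControl(hRes, NULL,
            CLUSCTL_RESOURCE_STORAGE_GET_DISK_INFO,
            NULL, 0, buf.data(), (DWORD)buf.size(), &cbReturned);

        if (status == ERROR_MORE_DATA)
        {
            // Buffer too small: grow to the reported size and call again.
            buf.resize(cbReturned);
            status = ClusterResourceControl(hRes, NULL,
                CLUSCTL_RESOURCE_STORAGE_GET_DISK_INFO,
                NULL, 0, buf.data(), (DWORD)buf.size(), &cbReturned);
        }

        wprintf(L"status = %lu, bytes returned = %lu\n", status, cbReturned);

        CloseClusterResource(hRes);
        CloseCluster(hCluster);
        return (status == ERROR_SUCCESS) ? 0 : 1;
    }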

* 2676164 (Tracking ID: 2676164)

SYMPTOM:
This hotfix addresses an issue related to the SFW component, vxres.dll, that causes a crash in the Resource Hosting Subsystem (RHS) process.

DESCRIPTION:
This issue occurs when there is a failure in the cluster connection calls made by SFW.
The failed connection causes a crash in the RHS.exe process due to an exception in vxres.dll.

RESOLUTION:
The RHS process was crashing because an uninitialized variable was passed during the cluster connection calls.
The component has been updated to address the issue.

Binary / Version:
vxres.dll / 5.1.20049.87

* 2711856 (Tracking ID: 2711856)

SYMPTOM:
VVR replication between two sites fails if VxSAS service is configured with a local user account that is a member of the local administrators group.

DESCRIPTION:
While configuring VVR replication between the Primary and Secondary sites in Windows Server 2008, the replication fails with the following error:
Permission denied for executing this command. Please verify the VxSAS service is running in proper account on all hosts in RDS.

This happens if the user used for the VxSAS service is a member of the administrators group and the User Access Control (UAC) is enabled.

RESOLUTION:
The vras.dll file has been modified to resolve the issue; it now checks the members of the administrators group to determine whether the specified user has administrative permissions.

Binary / Version:
vras.dll / 5.1.20052.87
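
The kind of membership check described in the resolution can be illustrated with NetLocalGroupGetMembers, which enumerates the members of a local group. The sketch below is a generic illustration and not the actual vras.dll code; the group name "Administrators" and the sample user are assumptions (the group name is localized on non-English Windows).

    #include <windows.h>
    #include <lm.h>
    #include <wchar.h>
    #include <cstdio>

    #pragma comment(lib, "netapi32.lib")

    // Compare a "DOMAIN\\user" string against the members of the local
    // Administrators group. Generic illustration only.
    bool IsLocalAdministrator(const wchar_t* domainAndName)
    {
        LOCALGROUP_MEMBERS_INFO_3* members = NULL;
        DWORD entriesRead = 0, totalEntries = 0;
        bool found = false;

        NET_API_STATUS rc = NetLocalGroupGetMembers(
            NULL, L"Administrators", 3, (LPBYTE*)&members,
            MAX_PREFERRED_LENGTH, &entriesRead, &totalEntries, NULL);

        if (rc == NERR_Success)
        {
            for (DWORD i = 0; i < entriesRead && !found; i++)
                if (_wcsicmp(members[i].lgrmi3_domainandname, domainAndName) == 0)
                    found = true;
        }
        if (members)
            NetApiBufferFree(members);
        return found;
    }

    int wmain()
    {
        // Placeholder account name for the demonstration.
        wprintf(L"member: %d\n", IsLocalAdministrator(L"MYHOST\\vxsasuser") ? 1 : 0);
        return 0;
    }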

* 2327428 (Tracking ID: 2327428)

SYMPTOM:
This hotfix addresses an issue with the product installer component that causes a failure in SFW Thin Provisioning space reclamation.

DESCRIPTION:
This issue occurs after you add or remove SFW features or repair the product installation from Windows Add/Remove Programs.

The SFW Thin Provisioning space reclamation begins to fail after rebooting the systems. This issue occurs because the product installation component erroneously modifies a vxio service registry key.

RESOLUTION:
This issue has been fixed in the product installation component.
The updated component no longer modifies the registry key.

Note:

The hotfix installation steps vary depending on the following cases:
- Product installed but issue did not occur
- Product installed and issue occurred

Perform the steps depending on the case.

Case A: Product installed but issue did not occur
This case applies if you have installed SFW or SFW HA in your environment but have not yet encountered this issue. This could be because you have not added SFW features or run a product Repair from Windows Add/Remove Programs.

Perform the following steps:
1. Install this hotfix on all systems. See "To install the hotfix using the GUI" or "To install the hotfix using the command line" section in this readme.

2. After replacing the file on all the systems, you can add or remove SFW features or run a product repair on the systems.


Case B: Product installed and issue occurred
This case applies if you have installed the product and the issue has occurred on the systems.

Perform the following steps:
1. Install this hotfix on all systems. See "To install the hotfix using the GUI" or "To install the hotfix using the command line" section in this readme. 

2. After installing the hotfix, perform one of the following:
   - From the Windows Add/Remove Programs, either add or remove an SFW feature or run a product Repair on the system. 
   - Set the registry key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\vxio\Tag to 8 in decimal (a sketch for doing this programmatically appears after this entry). 

3. Select the desired options on the product installer and complete the workflow.
Reboot the system when prompted.

4. Repeat these steps on all the systems where this issue has occurred.


Binary / Version:
VM.dll / 5.1.20002.267
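
For convenience, step 2 of Case B (setting the vxio Tag value) can also be done programmatically with the registry APIs. The C++ sketch below is a hedged illustration of that single step only; run it elevated and reboot when prompted by the installer, exactly as described above.

    #include <windows.h>
    #include <cstdio>

    #pragma comment(lib, "advapi32.lib")

    // Set HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\vxio\Tag to 8
    // (decimal), equivalent to editing the value by hand in the registry editor.
    int main()
    {
        HKEY hKey = NULL;
        LONG rc = RegOpenKeyEx(HKEY_LOCAL_MACHINE,
                               "SYSTEM\\CurrentControlSet\\Services\\vxio",
                               0, KEY_SET_VALUE, &hKey);
        if (rc != ERROR_SUCCESS)
        {
            printf("RegOpenKeyEx failed: %ld\n", rc);
            return 1;
        }

        DWORD tag = 8;   // the value this hotfix expects, in decimal
        rc = RegSetValueEx(hKey, "Tag", 0, REG_DWORD,
                           (const BYTE*)&tag, sizeof(tag));
        RegCloseKey(hKey);

        printf(rc == ERROR_SUCCESS ? "Tag set to 8\n" : "RegSetValueEx failed\n");
        return rc == ERROR_SUCCESS ? 0 : 1;
    }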

* 2738430 (Tracking ID: 2738430)

SYMPTOM:
This hotfix addresses the following issues:

Issue 1
An active cluster dynamic disk group faults after a clear SCSI reservation operation is performed.

Issue 2
VVR replication never resumes when the Replicator Log is full and DCM gets activated for the Secondary host resynchronization.

Issue 3
Snapback causes too many blocks to be resynchronized for the snapback volume.

Note:
Fixes for issues #1 and #2 were released earlier as Hotfix_5_1_20055_87_2740872. They are now a part of this hotfix.

DESCRIPTION:
Issue 1
This error occurs when a clear SCSI reservation operation, such as bus reset or SCSI-3 clear reservation, is performed on an 
active cluster dynamic disk group. During the operation, the cluster dynamic disk group faults.

An error message similar to the following is logged in the Event Viewer:
Cluster or private disk group has lost access to a majority of its disks. Its reservation thread has been stopped.

Issue 2
This issue occurs while performing a VVR replication. During replication, if the Storage Replicator Log (SRL) becomes full, then Data Change Map (DCM) gets activated for the Secondary host resynchronization. Because the resynchronization is a time-consuming process, VVR stops sending blocks and the replication never resumes even though it is active and shows the status as "connected autosync resync_started".

Issue 3
During a snapback operation, SFW determines the changed blocks to be synced using a per-plex bitmap and a global accumulator.
It updates the accumulator with the per-plex bitmap for the changed blocks and syncs all the changed blocks from the original volume to the snapback volume.

If another volume is then snapped back, SFW again updates the accumulator with that volume's per-plex bitmap.
However, the accumulator still holds the older entries from the previous snapback operation, and extra blocks are therefore copied to the new snapback volume.

For example, consider the following scenario:
1. Create two snapshots of volume F: (G: and H:)
2. Make G: writable and copy files to G:
3. Snapback G: using data from F:
4. Snapback H: using data from F:

Step 4 should have nothing to resynchronize since there is no change to F: and H: above.
However, vxio trace shows that the number of dirty regions to be synchronized is the same as step 3.
This is because the global accumulator is updated at step 3 and this causes extra blocks to be resynced.

RESOLUTION:
Issue 1
To resolve this issue, SFW retries the SCSI reservation operation.

Issue 2
The vxio.sys file has been modified to resolve this issue.

Issue 3
During the snapback operation, the per-plex map is now updated to correctly reflect the changed blocks to be synced for the snapback operation.

Binary / Version:
vxio.sys / 5.1.20055.87
vxconfig.dll / 5.1.20036.87
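
Issue 3 boils down to the global accumulator not being rebuilt between snapback operations. The toy C++ illustration below uses std::vector<bool> as a stand-in for the real region bitmaps and is not the actual SFW code; it only shows why H: should have nothing left to resynchronize in the scenario above.

    #include <vector>
    #include <cstdio>

    typedef std::vector<bool> RegionBitmap;   // stand-in for the real bitmaps

    size_t CountDirty(const RegionBitmap& bm)
    {
        size_t n = 0;
        for (bool b : bm) if (b) ++n;
        return n;
    }

    // Corrected bookkeeping: start from a clean accumulator for each snapback
    // (the fix), instead of OR-ing into the accumulator left over from the
    // previous snapback operation (the bug).
    RegionBitmap BuildAccumulator(const RegionBitmap& perPlexBitmap)
    {
        RegionBitmap accumulator(perPlexBitmap.size(), false);
        for (size_t i = 0; i < perPlexBitmap.size(); ++i)
            accumulator[i] = perPlexBitmap[i];
        return accumulator;
    }

    int main()
    {
        RegionBitmap snapbackG(8, false);     // G: had changes in regions 0-3
        for (size_t i = 0; i < 4; ++i) snapbackG[i] = true;
        RegionBitmap snapbackH(8, false);     // H: has no changes

        std::printf("regions to sync for G: %u\n",
                    (unsigned)CountDirty(BuildAccumulator(snapbackG)));
        std::printf("regions to sync for H: %u\n",
                    (unsigned)CountDirty(BuildAccumulator(snapbackH)));
        return 0;
    }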

* 2766206 (Tracking ID: 2766206)

SYMPTOM:
Issue 1
Some VVR operations fail if the RVG contains a large number of volumes.

Issue 2
Storage Agent crashes and causes SFW operations to stop responding in a 
VVR configuration.

Issue 3
Some VVR operations fail if VxSAS is configured with a domain user account.

DESCRIPTION:
Issue 1
This issue occurs while performing certain VVR operations, such as creating or deleting a Secondary, in a Replicated Volume Group (RVG) with a large number of volumes (more than 16 volumes). In such cases, if the combined length of all volume names under the RVG is greater than 512 bytes, then some VVR operations fail.

Issue 2
On a Veritas Storage Foundation for Windows (SFW) computer configured with Veritas Volume Replicator (VVR), the Storage Agent (vxvm service) crashes and causes the SFW operations to stop responding. This happens because of memory corruption in Microsoft's Standard Template Library (STL), which is used by code in SFW.
For more information on STL memory corruption, see http://support.microsoft.com/kb/813810.

The following messages are logged in the Event Viewer:

The Service Control Manager tried to take a corrective action (Restart the service) after the unexpected termination of the Veritas Storage Agent service, but this action failed with the following error: %%1056

- The Veritas Storage Agent service terminated unexpectedly. It has done this 1 time(s). The following corrective action will be taken in 60000 milliseconds: Restart the service.

Issue 3
This issue occurs while performing certain VVR operations, such as adding a Secondary 
host to an already configured VVR RDS. If Veritas Volume Replicator Security Service (VxSAS) is 
configured with a domain user account, then some VVR operations fail because VVR cannot 
authenticate the user credentials.

RESOLUTION:
Issue 1
This issue has been resolved by modifying the code to handle a larger number of volumes (up to 215 volumes).

Issue 2
To resolve this issue, the SFW code has been modified to not use 
STL in frequently-exercised VVR modules. 

Issue 3
This issue has been resolved by using the fully qualified domain name for authenticating 
the user credentials.

Binary / Version:
vras.dll / 5.1.20058.88
vvr.dll / 5.1.20058.88

* 3372380 (Tracking ID: 3372380)

SYMPTOM:
This hotfix addresses the following issues:

Issue 1
Performing a disk group import/deport operation in a cluster environment fails due to the VxVDS refresh operation.

Issue 2
This hotfix blocks Thin Provisioning Reclaim support for a snapshot volume and for a volume that has one or more snapshots.

Issue 3
This hotfix provides Thin Provisioning Reclaim support for Huawei S5600T arrays.

Issue 4
The VDS Dynamic software provider causes an error in VDS.

Issue 5
The Dynamic Disk Group Split and Join operations take a long time to complete.

Issue 6
In some cases, data corruption occurs after a volume is expanded.

Note:
Fixes for issues #1 and #2 were released earlier as Hotfix_5_1_20034_87_2400260. Fix for issue #3 was released earlier as Hotfix_5_1_20051_87_2683797. Fixes for issues #4 and #5 were released earlier as Hotfix_5_1_20059_87_2834385b. These fixes are now a part of this hotfix.

DESCRIPTION:
Issue 1
VxVDS refresh operation interferes with disk group import/deport operation resulting in timeout and delay. This happens as both refresh and import/deport disk group processes try to read disk information at the same time.

Issue 2
It is observed that if a volume or its snapshot is reclaimed, then performing the snapback operation on such a volume causes data corruption.

Issue 3
Added Thin Provisioning Reclaim support for Huawei S5600T array on SFW 5.1 SP2.

Issue 4
This issue occurs while performing the DG DEPORT operation. This happens because the DG DEPORT alerts were not handled successfully by VxVDS. The following error message is displayed: "Unexpected provider failure. Restarting the service may fix the problem."

Issue 5
The hotfix addresses an issue where the VDS Refresh operation overlaps with the Dynamic Disk Group Split and Join (DGSJ) operations, which causes a latency. This issue is observed after upgrading SFW.

Issue 6
In some cases, this issue occurs when you use the Expand Volume command to increase a dynamic volume's size. Sometimes, SFW grows a volume beyond the maximum size of the file system that the volume contains. Because of this, file system corruption occurs.

RESOLUTION:
Issue 1
This issue has been resolved by aborting the refresh operation if a disk group import/deport operation is in progress. Instead of performing a full refresh on all the disk groups, the refresh operation is now performed only on the disk groups that are being imported or deported.

Issue 2
The Thin Provisioning Reclaim operation is now blocked on a snapshot volume and on a volume that has one or more snapshots.

Issue 3
The DDL provider has been modified to claim Huawei S5600T array LUNs as Thin Reclaim devices.

Issue 4
The hotfix fixes the DG DEPORT component, which now handles all alerts successfully.

Issue 5
This hotfix fixes the binaries that cause the latency.

Issue 6
This issue has been resolved so that now SFW expands a volume based on the maximum allowed size of the file system it contains.


Binary / Version: 
vxvm.dll / 5.1.00086.87
vxvm_msgs.dll / 5.1.00086.87

* 2851054 (Tracking ID: 2851054)

SYMPTOM:
Memory leak in Veritas Storage Agent service (vxpal.exe)

DESCRIPTION:
This hotfix addresses a memory leak issue in Veritas Storage Agent service (vxpal.exe) when the mount point information is requested from either the MSCS or VCS cluster.

RESOLUTION:
This hotfix fixes a binary that causes a memory leak in the Veritas Storage Agent service (vxpal.exe). 

Binary / Version:
mount.dll / 5.1.20060.87

* 2864040 (Tracking ID: 2864040)

SYMPTOM:
The vxprint CLI crashes when used with the '-l' option

DESCRIPTION:
This hotfix addresses an issue where the vxprint CLI crashes when used with the '-l' option. The issue occurs when the read policy on a mirrored volume is set to a preferred plex.

RESOLUTION:
The hotfix fixes the binary that caused vxprint CLI to crash.

Binary / Version:
vxprint.exe / 5.1.20062.87

* 2894296 (Tracking ID: 2894296)

SYMPTOM:
A VMDg resource on MSFC fails to get the correct MountVolumeInfo value

DESCRIPTION:
This hotfix addresses an issue where the MountVolumeInfo property of the VMDg resource does not populate correctly. This occurs because the VMDg resource contains a raw volume.

RESOLUTION:
The hotfix fixes the binary that caused the incorrect population of the VMDg resource's MountVolumeInfo property.

Binary / Version:
vxres.dll / 5.1.20063.87

* 2905123 (Tracking ID: 2905123)

SYMPTOM:
Not able to create volumes; VEA very slow for refresh and rescan operations

DESCRIPTION:
In SFW, this issue occurs while creating a volume or performing refresh or rescan operations from VEA. During this, if the system has several disks with OEM partitions, volume creation fails and VEA takes a very long time to perform the refresh and rescan operations. Because FtDisk provider locks the database for a long time while processing OEM partitions, lock contention occurs in other operations and providers, which eventually makes all the operations slow.

RESOLUTION:
This issue has been resolved by optimizing the way FtDisk provider acquires lock and releases it.

Binary / Version:
ftdisk.dll / 5.1.20064.87

* 2914038 (Tracking ID: 2914038)

SYMPTOM:
Storage Agent crashes on startup

DESCRIPTION:
This hotfix addresses an issue where Storage Agent crashes during startup because of an unhandled exception while accessing an invalid pointer.

RESOLUTION:
This issue has been resolved by handling the exception in the code.

Binary / Version:
vdsprov.dll / 5.1.20066.87

* 2911830 (Tracking ID: 2911830)

SYMPTOM:
Not able to create a volume with more than 256 disks

DESCRIPTION:
This issue occurs while creating a volume with more than 256 disks. This happens because the license provider check limits the maximum number of disks allowed per volume to 256, regardless of the volume layout. Note that this limit is imposed to control the maximum number of records (such as plex, subdisk, disk, volume, and RVG records) that get stored in the private region per disk group (the maximum limit for this is 2922 records).

RESOLUTION:
This issue has been resolved by increasing the limit of the maximum number of allowed disks per volume to 512. As long as the maximum number of records per private region does not cross its limit, the new limit of disks per volume is valid.

Binary / Version:
sysprov.dll / 5.1.20067.87

* 2928801 (Tracking ID: 2928801)

SYMPTOM:
The vxtune rlink_rdbklimit command does not work as expected

DESCRIPTION:
This issue occurs when using the vxtune rlink_rdbklimit command to set a value for the RLINK_READBACK_LIMIT tunable. The command fails because vxtune.exe incorrectly stores an invalid value for RLINK_READBACK_LIMIT instead of the one provided by the user. This happens because the value is internally converted into kilobytes instead of bytes.
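
For reference, the tunable is set by passing a value to the command, for example (the value shown is illustrative only and is an assumption, not taken from this readme; refer to the VVR documentation for the supported range and units):

vxtune rlink_rdbklimit 4096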

RESOLUTION:
This issue has been resolved by correcting the code that does the kilobyte to byte conversion.

Binary / Version:
vxtune.exe / 5.1.20069.88

* 2860593 (Tracking ID: 2860593)

SYMPTOM:
The vxprint command may fail when it is run multiple times

DESCRIPTION:
The CORBA clients, such as vxprint, use a CSF (Common Services Framework) API called CsfRegisterEvent to register for events with the VxSVC service. This issue occurs when you run the vxprint command multiple times and the CsfRegisterEvent API fails. However, the issue is intermittent and may not always happen. The vxprint command fails with the following error: V-107-58644-930

RESOLUTION:
This issue has been resolved by retrying the underlying function that the CsfRegisterEvent API calls in csfsupport3.dll.

Binary / Version:
csfsupport3.dll / 3.3.1071.0

* 2913240 (Tracking ID: 2913240)

SYMPTOM:
Issue 1
MountV resource faults because SFW removes a volume due to a delayed device removal request.

Issue 2
Expanding a volume using the "mirror across disks by Enclosure" option assigns disks to the wrong plexes.

Issue 3
Basic quorum resource (physical disk resource) faults while the Disk Group resource tries to get the DgID.

DESCRIPTION:
Issue 1
This issue occurs when SFW removes a volume in response to a delayed device removal request. Because of this, the VCS MountV resource faults.

Issue 2
This issue occurs when expanding a volume using the "mirror across disks by Enclosure" option. During this, the Expand Volume command assigns disks to the wrong plexes, splitting the plexes across enclosures. This happens if the arrays have long and similar names except for the last few numbers; for example, EMC000292602920 and EMC000292602853. The function to compare names does not handle such names in the strings correctly and, therefore, treats different arrays as the same.

Issue 3
When the Volume Manager Disk Group resource tries to get the dynamic disk group ID (DgID) information, SFW clears the reservation of all the disks, including those that are part of Microsoft Cluster Server (MSCS). However, Microsoft Failover Cluster (MSFC) disk resources do not try to re-reserve the disks and, therefore, they fault.

RESOLUTION:
Issue 1
This issue has been resolved by not disabling the mount manager interface instance if it is active when the device removal request arrives.

Issue 2
This issue has been resolved by modifying the name comparison function. 

Issue 3
This issue has been resolved. Now, while clearing the disk reservations, SFW skips the offline and basic disks.

Binary / Version:
vxio.sys / 5.1.20068.87 
vxconfig.dll / 5.1.20068.87

* 2940962 (Tracking ID: 2940962)

SYMPTOM:
Provisioned size of disks is reflected incorrectly in striped volumes after thin provisioning reclamation

DESCRIPTION:
This hotfix addresses an issue where thin provisioning (TP) reclamation in striped volumes shows incorrect provisioned size for disks. This issue is observed on Hitachi and HP arrays.

RESOLUTION:
This issue has been fixed by increasing the map size for striped volumes.

Binary / Version:
vxconfig.dll / 6.0.00018.362

* 2963812 (Tracking ID: 2963812)

SYMPTOM:
VMDg resource in MSCS times out and faults because the DG offline operation takes a long time to complete

DESCRIPTION:
In a Microsoft Cluster Server (MSCS) configuration, this issue occurs while bringing a disk group offline. During this, the lock volume operation takes a long time to complete due to which the disk group offline operation also takes a long time. Because of this, the Volume Manager Disk Group (VMDg) resource times out and eventually faults.

RESOLUTION:
To resolve this issue, as per Microsoft's recommendation, the volume offline handling is changed to skip the lock volume operation while bringing the cluster DG offline.
Earlier, the following steps were used to offline the data volume:
1. Open a volume handle.
2. Take a lock on the volume (FSCTL_LOCK_VOLUME).
3. Flush and dismount the volume (FSCTL_DISMOUNT_VOLUME).
4. Release the volume handle (this unlocks the volume).
Now, the following steps are used in a clustered environment:
1. Open a volume handle.
2. Flush and dismount the volume (FSCTL_DISMOUNT_VOLUME).
3. Release the volume handle.

Binary / Version:
vxconfig.dll / 5.1.20071.87

* 2975132 (Tracking ID: 2975132)

SYMPTOM:
The Primary node hangs if TCP and compression are enabled

DESCRIPTION:
During a replication, this issue occurs if TCP and compression of data are enabled and the resources are low at the Secondary node. Because of low resources, decompression of data on the Secondary fails repeatedly, causing the TCP buffer to fill up. In such case, if network I/Os are performed on the Primary and a transaction is initiated, then the Primary node hangs.

RESOLUTION:
To resolve this issue, VVR now disconnects the RLINK if decompression fails repeatedly.

Binary / Version:
vxio.sys / 5.1.20072.87

* 3081465 (Tracking ID: 3081465)

SYMPTOM:
In the FoC GUI, the VMDg resource is slow to change its status from Online Pending to Online after the resource is brought online

DESCRIPTION:
In the Microsoft Failover Cluster (FoC) GUI, this issue occurs when, after a Volume Manager Disk Group (VMDg) resource is 
brought online, the resource takes some time to change the status from Online Pending to Online.
After the VMDg resource comes online, it receives the CLUSCTL_RESOURCE_STORAGE_GET_MOUNTPOINTS control code for each volume in it. 
Because the resource takes time to process this control code, there's a delay in changing the resource status in FoC GUI.

RESOLUTION:
This hotfix reduces the delay in status change by improving the processing of the CLUSCTL_RESOURCE_STORAGE_GET_MOUNTPOINTS 
control code.

Binary / Version:
cluscmd.dll / 5.1.20073.88
cluscmd64.dll / 5.1.20073.88
vxres.dll / 5.1.20073.88
vxbridge.exe / 5.1.20073.88

* 3104954 (Tracking ID: 3104954)

SYMPTOM:
RLINK pause operation fails and system hangs when pausing an RLINK

DESCRIPTION:
This issue occurs when you perform the RLINK pause operation during which VVR kernel tries to process the pause command for a specified length of time. In some cases, if VVR is not able to process the pause command, then the command is completed with an error. This process may leave behind some flags, which forces VVR to stop reading back write requests from the Replicator Log. Eventually, the Replicator Log fills up and the DCM is activated. Because the SRL to DCM flush cannot proceed, new I/Os get throttled indefinitely, leading to a system hang.

RESOLUTION:
VVR now ensures that any flags that were set while trying to pause the RLINK are cleaned up if the command is going to fail.

Binary / Version:
vxio.sys / 5.1.20074.88
vxconfig.dll / 5.1.20074.88

* 3105641 (Tracking ID: 3105641)

SYMPTOM:
I/O errors may occur while using the vxdisk list, vxdisk diskinfo, or vxassist rescan command

DESCRIPTION:
This issue may occur while using the vxdisk list, vxdisk diskinfo, or vxassist rescan command if the cluster disk group has a GPT disk. In this case, the use of such commands causes disk rescan, which eventually causes un-reservation and re-reservation of all the disks in the disk group. If application I/Os or vxconfig task I/Os result in disk I/Os while its reservation is cleared, then such disk I/Os may fail.

RESOLUTION:
This issue has been resolved by not initiating a disk rescan for a disk having a GPT signature while running the vxdisk list and vxdisk diskinfo commands.

Binary / Version:
vxcmd.dll / 5.1.20075.87

* 3146554 (Tracking ID: 3146554)

SYMPTOM:
Unable to perform thin provisioning and storage reclamation operations on non-track aligned volumes created on arrays that support these operations

DESCRIPTION:
This issue occurs when you try to perform thin provisioning and storage reclamation operations on non-track aligned volumes created on arrays that support these operations. Ideally, these operations are not allowed on Huawei Symantec's Oceanspace S5600T array because the array does not support it. However, because of a bug in the vxvm.dll provider, SFW throws an error for these operations even for the other arrays as it fails to identify that the non-track aligned volumes were created on arrays other than Huawei S5600T.

RESOLUTION:
This issue has been resolved by enhancing the vxvm.dll provider so that it correctly identifies non-track aligned volumes created on the Huawei S5600T 
array and blocks the thin provisioning and storage reclamation operations only on this array.

Binary / Version:
vxvm.dll / 5.1.20076.87

* 3164349 (Tracking ID: 3164349)

SYMPTOM:
During disk group import or deport operation, VMDg resources may result in a fault if VDS Refresh is also in progress

DESCRIPTION:
This issue occurs if the VDS Refresh operation is in progress and, at the same time, a dynamic disk group import or deport operation is performed and the Volume Manager Disk Group (VMDg) resources in a cluster are in the process of coming online or going offline. In this case, because of the resource contention caused by the two simultaneous operations, the VMDg resources take a longer time and may eventually time out and result in a fault.

RESOLUTION:
This issue has been resolved by automatically aborting the VDS Refresh operation when a dynamic disk group import or deport operation is initiated.

Binary / Version:
vxcmd.dll / 5.1.20077.87
vxvds.exe / 5.1.20077.87

* 3190483 (Tracking ID: 3190483)

SYMPTOM:
Cluster disk group resource faults after adding or removing a disk from a disk group if not all of its disks are available for reservation.

DESCRIPTION:
This issue occurs when you add or remove a disk from a dynamic disk group and if the majority of its disks are available for reservation, but not all. After the disk is added or removed, SFW checks to see if all the disks are available for reservation. In this case, because not all the disks are available, the cluster disk group resource faults.

RESOLUTION:
This issue has been resolved so that, after a disk is added or removed from a disk group, SFW now checks only for a majority of disks available for reservation instead of all.

Binary / Version:
vxconfig.dll / 5.1.20078.87

* 3211093 (Tracking ID: 3211093)

SYMPTOM:
VVR Primary server may crash while replicating data using TCP multi-connection.

DESCRIPTION:
This issue may occur when VVR replication is happening in the TCP multi-connection mode. During the replication, the Primary may receive a duplicate data acknowledgement (ACK) for a message it had sent to the Secondary. Because of an issue in VVR's multi-threaded receiver code, VVR processes the message twice on the Secondary and sends two data acknowledgements to the Primary for the same message. Because of this, the Primary server crashes.

RESOLUTION:
The issue in the multi-threaded receiver code in VVR has been fixed by modifying the vxio.sys binary.

Binary / Version:
vxio.sys / 5.1.20079.87
vxconfig.dll / 5.1.20079.87

* 3226396 (Tracking ID: 3226396)

SYMPTOM:
Error occurs while importing a dynamic disk group as a cluster disk group if a disk is missing.

DESCRIPTION:
This issue occurs while importing a dynamic disk group as a cluster disk group if one of the disks in the cluster disk group is missing. Because the conversion from a dynamic disk group to a cluster disk group wrongly interprets the missing disk as not connected to a shared bus, it results in an error with the following message: "disk is not on a shared bus."

RESOLUTION:
This issue has been resolved so that now, during the conversion to a cluster disk group, the shared bus validation is skipped for the missing disks.

Binary / Version:
vxvm.dll / 5.1.20080.87

* 3265897 (Tracking ID: 3265897)

SYMPTOM:
Moving a subdisk fails for a striped volume with a stripe unit size greater than 512 blocks.

DESCRIPTION:
This issue occurs while performing the Move Subdisk operation for a striped volume with stripe unit size greater than 512 blocks (256 KB). Because of the maximum size limit of 512 blocks of admin I/Os per call in the vxio driver, the operation to move the subdisk fails while trying to perform the admin I/Os.

RESOLUTION:
This issue has been resolved by increasing the maximum size limit of admin I/Os per call to 16384 blocks.

Binary / Version:
vxio.sys / 5.1.20081.87 
vxconfig.dll / 5.1.20081.87

* 3283659 (Tracking ID: 3283659)

SYMPTOM:
Rhs.exe process stops unexpectedly when the VMDg resource is brought online.

DESCRIPTION:
In a Microsoft Failover Clustering environment, the Resource Hosting Subsystem (Rhs.exe) process stops unexpectedly when the SFW VMDg resource is brought online. This issue occurs while processing the CLUSCTL_RESOURCE_STORAGE_GET_MOUNTPOINTS control code, for which the VMDg resource DLL (vxres.dll) is expected to report the required output buffer size after converting it from a WCHAR count to a BYTE count. If the GetVolumePathNamesForVolumeNameW function fails with the ERROR_MORE_DATA error, vxres.dll does not convert the output buffer size to bytes. The cluster then sends a buffer of half the required size on the next call, because of which GetVolumePathNamesForVolumeNameW overruns the buffer and causes an access violation.

RESOLUTION:
This issue has been fixed so that now the VMDg resource returns the required output buffer size by converting it from WCHAR to BYTE even if the GetVolumePathNamesForVolumeNameW function fails.

Binary / Version:
vxres.dll / 5.1.20082.87

* 3316851 (Tracking ID: 3316851)

SYMPTOM:
If the EMC Symmetrix array firmware is upgraded to version 5876, SFW is unable to discover the LUNs as thin reclaimable.

DESCRIPTION:
This hotfix addresses an issue where SFW is unable to identify thin provisioning reclaimable LUNs on an EMC Symmetrix array if the array firmware is upgraded to version 5876.

RESOLUTION:
The SFW library is enhanced to identify thin reclaimable LUNs on EMC Symmetrix arrays with firmware version 5876.

Binary / Version:
ddlprov.dll / 5.1.20083.87

* 3319824 (Tracking ID: 3319824)

SYMPTOM:
Data corruption may occur if the volume goes offline while a resynchronization operation is in progress.

DESCRIPTION:
This issue occurs when SFW SmartMove is enabled and data resynchronization operations, such as subdisk move, mirror resync, or mirror attach, are being performed on a volume. If the volume goes offline (for example, because the MountV resource of the volume went offline) while a resync operation is in progress, the resync task completes abnormally and the volume may report data corruption. The task progress on the GUI may suddenly jump to 100%, but the actual task does not complete. This issue occurs due to improper error handling by SFW.

RESOLUTION:
This issue has been fixed by adding correct handling of error conditions.

This hotfix is applicable for x64 platform only.

Binary / Version:
vxconfig.dll / 5.1.20084.87

* 3365283 (Tracking ID: 3365283)

SYMPTOM:
Server crashes during high write I/O operations on mirrored volumes.

DESCRIPTION:
This issue occurs when heavy write I/O operations are performed on mirrored volumes. During such high I/O operations, the server crashes due to a problem managing the memory for data buffers.

RESOLUTION:
This issue has been resolved by appropriately mapping the system address space described by the MDL for the write I/Os on mirrored volumes.

Binary / Version:
vxio.sys / 5.1.20085.87

* 3478319 (Tracking ID: 3478319)

SYMPTOM:
Storage Agent log file (vm_vxisis.log) gets flooded with informational messages during cluster monitor cycles

DESCRIPTION:
This issue occurs during cluster monitor cycles where the Storage Agent log file (vm_vxisis.log) gets flooded with informational and error messages. This happens because, by default, some informational messages are logged at Error level.

RESOLUTION:
This issue has been resolved by logging all informational messages at Information level (instead of Error level). 

Binary / Version: vxpal3.dll / 3.3.1074.0

* 3419601 (Tracking ID: 3419601)

SYMPTOM:
BSOD occurs if Windows runs out of system worker threads

DESCRIPTION:
This issue occurs while bringing offline a disk group resource that has a large number of volumes. When the disk group resource is brought offline, Windows Plug and Play generates a large number of volume removal notifications at once. Each of these notifications is handled by a system worker thread, which gets blocked when the system runs out of worker threads. Because of this, the system hangs and the Blue Screen of Death (BSOD) error occurs.

RESOLUTION:
This issue has been resolved so that vxio's private system threads are now used instead of the system worker threads. 

Binary / Version: vxio.sys / 5.1.20087.88

* 3463697 (Tracking ID: 3463697)

SYMPTOM:
Storage Agent log file (vm_vxisis.log) gets flooded with informational messages during cluster monitor cycles

DESCRIPTION:
This issue occurs during cluster monitor cycles where the Storage Agent log file (vm_vxisis.log) gets flooded with informational and error messages. This happens because, by default, some informational messages are logged at Error level.

RESOLUTION:
This issue has been resolved by logging all informational messages at Information level (instead of Error level). 

Binary / Version:
cluscmd.dll / 5.1.20088.87
mount.dll / 5.1.20088.87

* 3490438 (Tracking ID: 3490438)

SYMPTOM:
VEA GUI displays wrong LUN serial numbers for a disk

DESCRIPTION:
This issue occurs in the Veritas Enterprise Administrator (VEA) GUI while performing a rescan operation, where VEA displays wrong LUN serial numbers for a disk. The issue occurs because of an inconsistency while reading the serial numbers from a disk. However, the SCSICMD (scsicmd.exe) utility provides the correct serial number information.

RESOLUTION:
This issue has been resolved by reducing the buffer size for querying LUN serial numbers for a disk, as used in the SCSICMD utility. 

Binary / Version: ddlprov.dll / 5.1.20089.87

* 3497450 (Tracking ID: 3497450)

SYMPTOM:
Cluster disk group loses access to the majority of its disks due to a SCSI error

DESCRIPTION:
This issue occurs while processing a request to renew or query the SCSI reservation on a disk belonging to a cluster disk group. The operation fails because of the following error in the SCSI command: Unit Attention - inquiry parameters changed (6/3F/03). Because of this, the cluster disk group loses access to the majority of its disks.

RESOLUTION:
This issue has been resolved by retrying the SCSI reservation renew or query request. 

Binary / Version: vxio.sys / 5.1.20090.87



INSTALLING THE PATCH
--------------------
What's new in this CP
=====================|

The following hotfixes have been added in this CP:
 - Hotfix_3_3_1074_3478319
 - Hotfix_5_1_20087_88_3419601
 - Hotfix_5_1_20088_87_3463697
 - Hotfix_5_1_20089_87_3490438
 - Hotfix_5_1_20090_87_3497450
 
For more information about these hotfixes, see the "DETAILS OF INCIDENTS FIXED BY THE PATCH" section in this readme.


Install instructions
====================|

Download the appropriate cumulative public patch (CP) executable file to a temporary location on your system.
You can install the CP in a verbose mode or in a non-verbose mode. Instructions for both options are provided below.

Each cumulative public patch includes the individual hotfixes that contain enhancements and fixes related to reported issues.
See "Errors/Problems Fixed" section for details.

Before you begin
----------------:

[1] In case of Windows Server 2003, this hotfix requires that Microsoft Core XML Services (MSXML) 6.0 is pre-installed on your system. Download and install MSXML 6.0 before installing the hotfix.
Refer to the following link for more information:
http://www.microsoft.com/downloads/details.aspx?FamilyId=993c0bcf-3bcf-4009-be21-27e85e1857b1

Microsoft has posted service pack and/or security updates for Core XML Services 6.0. Please contact Microsoft or refer to the Microsoft website to download and install the latest updates to Core XML Services 6.0.

Refer to the following link for more information:
http://www.microsoft.com/downloads/details.aspx?FamilyId=70C92E77-9E5A-41B1-A9D2-64443913C976

[2] Ensure that the logged-on user has the following privileges to install the CP on the systems:
    - Local administrator privileges
    - Debug privileges

[3] One or more hotfixes that are included with this CP may require a reboot.
Before proceeding with the installation, ensure that the system can be rebooted.

[4] Symantec recommends that you close the Cluster Manager (Java Console) and the Veritas Enterprise Administrator (VEA) Console before installing this CP.

[5] One or more hotfixes that are included in this hotfix may require stopping the Veritas Storage Agent (vxvm) service. This causes the Volume Manager Disk Group (VMDg) resources in a cluster environment to fault.
Before proceeding with the installation, ensure that the cluster disk groups that contain the VMDg resource are taken offline or moved to another node in the cluster.
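
For example, a service group can be moved to another node from the command line; the group and node names below are placeholders, and these commands assume the respective cluster CLIs (cluster.exe for a Microsoft cluster, hagrp for VCS) are available:

cluster group "Group Name" /move:NodeName
hagrp -switch GroupName -to NodeName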

[6] Ensure that you close the Windows Event Viewer before proceeding with the installation.

[7] Hotfix_5_1_20012_88_2087139a may fail to install due to some stray rhs.exe processes that keep running even after the cluster service has been stopped. In such a case, you should manually terminate all the running rhs.exe processes, confirm that the clussvc service is stopped, and then retry installing the hotfix.
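
For example, the stray processes can be terminated and the cluster service state can be verified from an elevated command prompt using standard Windows commands (shown only as a suggestion):

taskkill /F /IM rhs.exe
sc query clussvc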

[8] Hotfix_5_1_20048_87_2670150 installation requires stopping the Storage Agent (vxvm) service which will cause the 'Volume Manager Disk Group' (VMDg) resources in a cluster environment (MSCS or VCS) to fault. If this hotfix is being applied to a server in cluster, make sure any cluster groups containing a VMDg resource are taken offline or moved to another node in the cluster before proceeding.

[9] Hotfix_5_1_20058_88_2766206 installation requires stopping the Storage Agent (vxvm) service, which will cause the Volume Manager Disk Group (VMDg) resources in a cluster environment (MSCS or VCS) to fault. If this hotfix is being applied to a server in a cluster, make sure any cluster groups containing a VMDg resource are taken offline or moved to another node in the cluster before proceeding. You should install the latest CP before installing this hotfix.



To install in the verbose mode
------------------------------:

In the verbose mode, the cumulative patch (CP) installer prompts you for inputs and displays the installation progress status in the command window.

Perform the following steps:

[1] Double-click the CP executable file to extract the contents to a default location on the system.
The installer displays a list of hotfixes that are included in the CP.
    - On 32-bit systems, the hotfix executable files are extracted to:
      "%commonprogramfiles%\Veritas Shared\WxRTPrivates\<CPName>"
    - On 64-bit systems, the hotfix executable files are extracted to:
      "%commonprogramfiles(x86)%\Veritas Shared\WxRTPrivates\<CPName>"

The installer also lists the hotfixes that require a reboot of the system after the installation. 
If system reboot is not an option at this time, you can choose not to install these hotfixes. 
In such a case, exit the installation and then launch the CP installer again from the command line using the /exclude option.
See "To install in a non-verbose (silent) mode" section for the syntax.

[2] When the installer prompts whether you want to continue with the installation, type Y to begin the hotfix installation.
The installer performs the following tasks:
    - Extracts all the individual hotfix executable files
      On 32-bit systems the files are extracted at %commonprogramfiles%\Veritas Shared\WxRTPrivates\<HotfixName>
      On 64-bit systems the files are extracted at %commonprogramfiles(x86)%\Veritas Shared\WxRTPrivates\<HotfixName>
    - Runs the pre-install tasks
    - Installs all the hotfixes sequentially
    - Runs the post-install tasks
The installation progress status is displayed in the command window.

[3] After all the hotfixes are installed, the installer prompts you to restart the system.
Type Y to restart the system immediately, or type N to restart the system later. 
You must restart the system for the changes to take effect.

Note that the installer prompts for a system restart only if hotfixes that require a reboot are included in the CP and are installed.

To install in the non-verbose (silent) mode
-------------------------------------------:

In the non-verbose (silent) mode, the cumulative patch (CP) installer does not prompt you for inputs and directly proceeds with the installation tasks. 
The installer displays the installation progress status in the command window.

Use the VxHFBatchInstaller.exe utility to install a CP from the command line.
The syntax options for this utility are as follows:

vxhfbatchinstaller.exe /CP:<CPName> [/Exclude:<HF1.exe>,<HF2.exe>...] [/PreInstallScript:<PreInstallScript.pl>] [/silent [/forcerestart]]

where,
    - CPName is the cumulative patch executable file name without the platform, architecture, and .exe extension.
For example, if CP executable name is CP21_SFW_51SP2_W2K8_x64.exe, specify it as CP21_SFW_51SP2.

    - HF1.exe, HF2.exe,... represent the executable file names of the hotfixes that you wish to exclude from the installation. Note that the file names are separated by commas, with no space after a comma. The CP installer skips the mentioned hotfixes during the installation.

    - PreInstallScript.pl is the Perl script that includes the pre-installation steps. These steps forcefully kill the required services and processes in case a graceful stop request does not succeed.
    Symantec recommends that you use this option and script only in case the CP installer fails repeatedly while performing the pre-installation tasks.

    - /silent indicates the installation is run in a non-verbose mode; the installer does not prompt for any inputs during the installation.

    - /forcerestart indicates that the system is automatically restarted, if required, after the installation is complete.


Perform the following steps:

[1] From the command prompt, navigate to the directory where the CP executable file is located and then run the file to extract the contents to a default location on the system. 
The installer displays a list of hotfixes that are included in the CP.
    - On 32-bit systems, the hotfix executable files are extracted to:
      "%commonprogramfiles%\Veritas Shared\WxRTPrivates\<CPName>"
    - On 64-bit systems, the hotfix executable files are extracted to:
      "%commonprogramfiles(x86)%\Veritas Shared\WxRTPrivates\<CPName>"

The installer also lists the hotfixes that require a reboot of the system after the installation. If system reboot is not an option at this time, you can choose not to install these hotfixes. In such a case, launch the CP installer from the command line using the /exclude option.

[2] When the installer prompts whether you want to continue with the installation, type N to exit the installer.

[3] In the same command window, run the following command to begin the CP installation in the non-verbose mode:
vxhfbatchinstaller.exe /CP:<CPName> /silent

For example, to install the SFW 5.1 SP2 x64 CP for Windows Server 2008, the command is:
vxhfbatchinstaller.exe /CP:CP21_SFW_51SP2 /silent

The installer performs the following tasks:

    - Extracts all the individual hotfix executable files
      On 32-bit systems the files are extracted at %commonprogramfiles%\Veritas Shared\WxRTPrivates\<HotfixName>
      On 64-bit systems the files are extracted at %commonprogramfiles(x86)%\Veritas Shared\WxRTPrivates\<HotfixName>
    - Runs the pre-install tasks
    - Installs all the hotfixes sequentially
    - Runs the post-install tasks
The installation progress status is displayed in the command window.

[4] After all the hotfixes are installed, the installer displays a message for restarting the system.
You must restart the system for the changes to take effect.

Note that the installer prompts for a system restart only if hotfixes that require a reboot are included in the CP and are installed. The installer automatically restarts the system if you had specified the /forcerestart option in step 3 earlier.

VxHFBatchInstaller usage examples
---------------------------------:

[+] Install CP in silent mode, exclude hotfixes Hotfix_5_1_20014_87_2321015_w2k8_x64.exe and Hotfix_5_1_20018_87_2318276_w2k8_x64.exe:

vxhfbatchinstaller.exe /CP:CP21_SFW_51SP2 /Exclude:Hotfix_5_1_20014_87_2321015_w2k8_x64.exe,Hotfix_5_1_20018_87_2318276_w2k8_x64.exe /silent

[+] Install CP in silent mode, restart automatically:

vxhfbatchinstaller.exe /CP:CP21_SFW_51SP2 /silent /forcerestart


Known issues
============|

The following section describes the issues related to the individual hotfixes that are included in this CP.

[1] Hotfix_5_1_20005_87_2218963
The following issues may occur:

- Changing the drive letter of a volume when a reclaim task for that volume is in progress will abort the reclaim task. The reclaim task will appear to have completed successfully, but not all of the unused storage will be reclaimed.

Workaround:
If this happens, perform another reclaim operation on the volume to release the rest of the unused storage.

- Reclaim operations on a striped volume that resides on thin provisioned disks in HP XP arrays may not reclaim as much space as you expect. Reclaiming is done in contiguous allocation units inside each stripe unit. The allocation unit size for XP arrays is large compared to a volume's stripe unit size, so free allocation units are often split across stripe units. In that case they are not contiguous and cannot be reclaimed.

- If you use the SFW installer to change the enabled feature set after SFW is already installed, reclaiming free space from a thin provisioned disk no longer works. The installer incorrectly changes the Tag variable in the vxio service registry key from 8 to 12. That allows LDM to intercept and fail the reclaim requests SFW sends to the disks. This is a problem only on Windows Server 2008.

Workaround:
To work around this problem, manually change the value of HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\vxio\Tag back to 8 and reboot after changing the enabled SFW features on Windows Server 2008.
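
For example, assuming the built-in reg.exe utility is used, the value can be reset from an elevated command prompt as follows (the key path and value are taken from the workaround above); a reboot is still required afterward:

reg add "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\vxio" /v Tag /t REG_DWORD /d 8 /f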

[2] Hotfix_5_1_20024_87_2368399
This issue is applicable only if you uninstall this hotfix.
If you perform the volume shrink operation from the VEA Console after removing this hotfix, VEA still displays the warning message. This occurs because the stale files residing in the VEA cache are not removed during the uninstallation.

Workaround:
Perform the following steps after you have removed the hotfix.

1. Close the Veritas Enterprise Administrator (VEA) Console.
2. Stop the Veritas Storage Agent service.
Type the following at the command prompt:
net stop vxvm
3. Delete the Client extensions cache directories from the system:
    On Windows Server 2003, delete the following:
    - %allusersprofile%\Application Data\Veritas\VRTSbus\cedownloads
    - %allusersprofile%\Application Data\Veritas\VRTSbus\Temp\extensions

    On Windows Server 2008, delete the following:
    - %allusersprofile%\Veritas\VRTSbus\Temp\extensions
    - %allusersprofile%\Veritas\VRTSbus\cedownloads

4. Start the Veritas Storage Agent service.
Type the following at the command prompt:
    net start vxvm
5. Repeat steps 1 to 4 on all the systems where you have uninstalled this hotfix.
6. Launch VEA to perform the volume shrink operation.


[3] Hotfix_5_1_20086_87_3372380

SFW DSM for Huawei does not work as expected for Huawei S5600T Thin Provisioning (TP) LUNs. 
In case the active paths are disabled, the I/O fails over to the standby paths. When the active paths are restored, the I/O should fail back to the active paths. However, in the case of Huawei S5600T TP LUNs, the I/O continues to run on both the active and the standby paths even after the active paths are restored. This issue occurs because the Huawei S5600T TP LUN does not support A/A-A explicit trespass.

The SFW DSM for Huawei functions properly for Huawei S5600T non-TP LUNs.

Workaround: 

To turn off the A/A-A explicit trespass, run the following commands from the command line:
   vxdmpadm setdsmalua explicit=0 harddisk5
   vxdmpadm setarrayalua explicit=0 harddisk5

-------------------------------------------------------+


REMOVING THE PATCH
------------------
NO


SPECIAL INSTRUCTIONS
--------------------
This fix is provided without warranty of any kind including the warranties of title or implied warranties of merchantability, 
fitness for a particular purpose and non-infringement. Symantec disclaims all liability relating to or arising out of this fix. 
It is recommended that the fix be evaluated in a test environment before implementing it in your production environment. 
When the fix is incorporated into a Storage Foundation for Windows maintenance release, the resulting Hotfix or Service Pack 
must be installed as soon as possible. Symantec Technical Services will notify you when the maintenance release (Hotfix or Service Pack) 
is available if you sign up for notifications from the Symantec support site http://www.symantec.com/business/support and/or 
from Symantec Operations Readiness Tools (SORT) http://sort.symantec.com.

Additional notes
================|

[+] To confirm the list of cumulative patches installed on a system, run the following command from the directory where the CP files are extracted:
vxhfbatchinstaller.exe /list

The output of this command displays a list of cumulative patches and the hotfixes that are installed as part of a CP. 
This command also displays the hotfixes that are included in a CP but are not installed on the system.

[+] To confirm the installation of the hotfixes, perform one of the following:
    - Run the following command:
      vxhf.exe /list
      The output of this command lists the hotfixes installed on the system.
    - In the Windows Add/Remove program, click "View installed updates" to view the list of the hotfixes installed on the system.

[+] The CP installer (vxhfbatchinstaller.exe) creates and stores logs at:
"%allusersprofile%\Application Data\Veritas\VxHF\VxHFBatchInstaller.txt"

[+] The hotfix installer (vxhf.exe) creates and stores logs at:
"%allusersprofile%\Application Data\Veritas\VxHF\VxHF.txt"

[+] For general information about the hotfix installer (vxhf.exe), please refer to the following technote:
http://www.symantec.com/docs/TECH73446

[+] To view a list of hotfixes already installed on a system, please refer to the steps mentioned in the following technote:
http://www.symantec.com/docs/TECH73438

[+] For information on uninstalling a hotfix, please refer to the steps mentioned in the following technote:
http://www.symantec.com/docs/TECH73443


OTHERS
------
NONE