vcs-win-CP19_SFWHA_51SP2A
Obsolete
The latest patch(es): sfha-win-CP21_SFWHA_51SP2

 Basic information
Release type: P-patch
Release date: 2013-08-26
OS update support: None
Technote: TECH173500 - Storage Foundation for Windows High Availability, Storage Foundation for Windows and Veritas Cluster Server 5.1 Service Pack 2 Cumulative Patches
Documentation: None
Popularity: 6466 viewed
Download size: 380.21 MB
Checksum: 2809052970

 Applies to one or more of the following products:
Storage Foundation HA 5.1SP2 On Windows 32-bit
Storage Foundation HA 5.1SP2 On Windows IA64
Storage Foundation HA 5.1SP2 On Windows x64

 Obsolete patches, incompatibilities, superseded patches, or other requirements:

This patch is obsolete. It is superseded by: Release date
vcs-win-CP20_SFWHA_51SP2 (obsolete) 2014-01-28

This patch supersedes the following patches: Release date
vcs-win-CP18_SFWHA_51SP2 (obsolete) 2013-05-27
vcs-win-CP17_SFWHA_51SP2 (obsolete) 2013-03-01
vcs-win-CP16_SFWHA_51SP2 (obsolete) 2012-11-07
vcs-win-CP15_SFWHA_51SP2 (obsolete) 2012-10-29
vcs-CP14_SFWHA_51SP2 (obsolete) 2012-07-30

 Fixes the following incidents:
2087139, 2177225, 2192886, 2203640, 2207263, 2207404, 2218963, 2220352, 2225878, 2231216, 2237219, 2239785, 2245816, 2248307, 2251689, 2271435, 2290214, 2318276, 2321015, 2327428, 2330902, 2338437, 2364591, 2368399, 2370962, 2371250, 2372049, 2372164, 2376010, 2378712, 2385965, 2391453, 2392336, 2397382, 2400260, 2401163, 2406683, 2407816, 2413405, 2415517, 2422907, 2426197, 2437434, 2440099, 2482820, 2495109, 2497664, 2512482, 2513842, 2522314, 2528853, 2530236, 2535885, 2536009, 2536342, 2554039, 2564914, 2570602, 2587638, 2597410, 2604814, 2610786, 2614448, 2635097, 2643293, 2650336, 2683797, 2687350, 2688194, 2692139, 2695917, 2711856, 2713126, 2722108, 2722228, 2738430, 2740872, 2766206, 2770008, 2790100, 2817466, 2822519, 2834385, 2851054, 2858362, 2860196, 2860593, 2864040, 2898414, 2905123, 2905178, 2911830, 2913240, 2914038, 2928801, 2940962, 2963812, 2975132, 3053006, 3054619, 3081465, 3104954, 3105641, 3146554, 3164349, 3188998, 3190483, 3211093, 3226396, 3265897

 Patch ID:
None.

Readme file
Date: 2013-08-26
OS: Windows
OS Version: 2003, 2008, 2008 R2
Packages:

===============================================================================
Architecture/OS  Windows Server 2003 		 Windows Server 2008 / 2008 R2
===============================================================================
x86 	         CP19_SFWHA_51SP2_W2k3_x86.exe 	 CP19_SFWHA_51SP2_W2k8_x86.exe
-------------------------------------------------------------------------------
x64 	         CP19_SFWHA_51SP2_W2k3_x64.exe 	 CP19_SFWHA_51SP2_W2k8_x64.exe
-------------------------------------------------------------------------------
ia64 	         CP19_SFWHA_51SP2_W2k3_ia64.exe  CP19_SFWHA_51SP2_W2k8_ia64.exe
-------------------------------------------------------------------------------

Etrack Incidents: 
2220352, 2207404, 2237219, 2231216, 2271435, 2338437, 2385965, 2391453, 2407816, 2422907, 
2203640, 2378712, 2207263, 2245816, 2087139, 2290214, 2321015, 2330902, 2318276, 2364591, 
2368399, 2397382, 2406683, 2218963, 2413405, 2251689, 2177225, 2225878, 2371250, 2437434, 
2426197, 2440099, 2376010, 2415517, 2512482, 2372049, 2400260, 2497664, 2522314, 2495109, 
2401163, 2239785, 2482820, 2392336, 2513842, 2536009, 2554039, 2536342, 2564914, 2530236, 
2372164, 2597410, 2528853, 2370962, 2570602, 2604814, 2614448, 2610786, 2587638, 2535885, 
2650336, 2635097, 2643293, 2692139, 2695917, 2683797, 2713126, 2688194, 2722108, 2711856, 
2740872, 2722228, 2687350, 2327428, 2738430, 2248307, 2770008, 2766206, 2790100, 2817466, 
2834385, 2851054, 2822519, 2898414, 2858362, 2864040, 2905123, 2914038, 2911830, 2928801, 
2860593, 2860196, 2905178, 2913240, 3053006, 3054619, 2940962, 2963812, 2975132, 3081465,
3188998, 3104954, 3105641, 3146554, 3164349, 3190483, 3211093, 3226396, 3265897, 2192886 




Fixes Applied for Products
==========================|

Storage Foundation and High Availability Solutions (SFW HA) 5.1 SP2 for Windows


What's new in this CP
=====================|

The following hotfixes have been added in this CP:
 - Hotfix_5_1_20078_87_3190483
 - Hotfix_5_1_20079_87_3211093
 - Hotfix_5_1_20080_87_3226396
 - Hotfix_5_1_20081_87_3265897
 - Hotfix_5_1_20053_2192886

The following hotfix has been removed from this CP:
 - Hotfix_5_1_20049_2898414

For more information about these hotfixes, see the "Errors/Problems Fixed" section in this readme.


Install instructions
====================|

Download the appropriate cumulative patch (CP) executable file to a temporary location on your system. You can install the CP in a verbose mode or in a non-verbose mode. Instructions for both options are provided below.

Each CP includes the individual hotfixes that contain enhancements and fixes related to reported issues.
See "Errors/Problems Fixed" section for details.

Before you begin
----------------:

[1] On Windows Server 2003, this CP requires that Microsoft Core XML Services (MSXML) 6.0 be pre-installed on your systems. Download and install MSXML 6.0 before installing the CP.
Refer to the following link for more information:
http://www.microsoft.com/downloads/details.aspx?FamilyId=993c0bcf-3bcf-4009-be21-27e85e1857b1&displaylang=en

Microsoft has posted service pack and/or security updates for Core XML Services 6.0. Contact Microsoft or refer to the Microsoft website to download and install the latest updates for Core XML Services 6.0.

Refer to the following link for more information:
http://www.microsoft.com/downloads/details.aspx?FamilyId=70C92E77-9E5A-41B1-A9D2-64443913C976&displaylang=en

[2] Ensure that the logged-on user has the following privileges to install the CP on the systems:
	- Local administrator privileges
	- Debug privileges

[3] One or more hotfixes that are included with this CP may require a reboot.
Before proceeding with the installation ensure that the system can be rebooted.

[4] Symantec recommends that you close the Cluster Manager (Java Console) and the Veritas Enterprise Administrator (VEA) Console before installing this CP.

[5] One or more hotfixes that are included in this CP may require stopping the Veritas Storage Agent (vxvm) service. This causes the Volume Manager Disk Group (VMDg) resources in a cluster environment to fault.

Before proceeding with the installation, ensure that the cluster groups that contain a VMDg resource are taken offline or moved to another node in the cluster. (See the example commands after this list.)

[6] Ensure that you close the Windows Event Viewer before proceeding with the installation.

[7] Hotfix_5_1_20012_88_2087139a may fail to install due to some stray rhs.exe processes that keep running even after the cluster service has been stopped. In such a case, you should manually terminate all the running rhs.exe processes, confirm that the clussvc service is stopped, and then retry installing the hotfix.

[8] Hotfix_5_1_20002_2220352 requires the following two additional steps after installation:
- To support non-scoped file shares, the agents read a specific registry key. You must create the registry key after installing this hotfix.
See the topic "Configuring the registry parameter to support non-scoped file shares" in the post-install steps section in this readme.

- After the hotfix installation is complete, offline the FileShare service group on the node where it was online and then bring it online again. The enhancement will take effect once the service group is online again. 

[9] Hotfix_5_1_20038_2688194 installation may fail if the CP installer fails to stop the MSMQ service and its dependent services. In such a case, manually stop these services and then retry installing the hotfix.

[10] Hotfix_5_1_20058_88_2766206 installation requires stopping the Storage Agent (vxvm) service, which will cause the Volume Manager Disk Group (VMDg) resources in a cluster environment (MSCS or VCS) to fault. If this hotfix is being applied to a server in a cluster, make sure any cluster groups containing a VMDg resource are taken offline or moved to another node in the cluster before proceeding. You should install the latest CP before installing this hotfix.
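
For reference, a cluster group (service group) that contains a VMDg resource can typically be taken offline or switched from the VCS command line before you run the CP installer. This is a minimal sketch only; the group name SQL_SG and the node names NODE1 and NODE2 are placeholders:

   hagrp -offline SQL_SG -sys NODE1
   hagrp -switch SQL_SG -to NODE2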



To install in the verbose mode
------------------------------:

In the verbose mode, the cumulative patch (CP) installer prompts you for inputs and displays the installation progress status in the command window.

Perform the following steps:

[1] Double-click the CP executable file to extract the contents to a default location on the system.
The installer displays a list of hotfixes that are included in the CP.
	- On 32-bit systems, the hotfix executable files are extracted to:
	  "%commonprogramfiles%\Veritas Shared\WxRTPrivates\<CPName>"
	- On 64-bit systems, the hotfix executable files are extracted to:
	  "%commonprogramfiles(x86)%\Veritas Shared\WxRTPrivates\<CPName>"

The installer also lists the hotfixes that require a reboot of the system after the installation. If system reboot is not an option at this time, you can choose not to install these hotfixes. In such a case, exit the installation and then launch the CP installer again from the command line using the /exclude option.
See "To install in a non-verbose (silent) mode" section for the syntax.

[2] When the installer prompts whether you want to continue with the installation, type Y to begin the hotfix installation.
The installer performs the following tasks:
	- Extracts all the individual hotfix executable files
	  On 32-bit systems the files are extracted at %commonprogramfiles%\Veritas Shared\WxRTPrivates\<HotfixName>
	  On 64-bit systems the files are extracted at %commonprogramfiles(x86)%\Veritas Shared\WxRTPrivates\<HotfixName>
	- Runs the pre-install tasks
	- Installs all the hotfixes sequentially
	- Runs the post-install tasks
The installation progress status is displayed in the command window.

[3] After all the hotfixes are installed, the installer prompts you to restart the system.
Type Y to restart the system immediately, or type N to restart the system later. You must restart the system for the changes to take effect.

Note that the installer prompts for a system restart only if hotfixes that require a reboot are included in the CP and are installed.

To install in the non-verbose (silent) mode
-------------------------------------------:

In the non-verbose (silent) mode, the cumulative patch (CP) installer does not prompt you for inputs and directly proceeds with the installation tasks. The installer displays the installation progress status in the command window.

Use the VxHFBatchInstaller.exe utility to install a CP from the command line.
The syntax options for this utility are as follows:

vxhfbatchinstaller.exe /CP:<CPName> [/Exclude:<HF1.exe>,<HF2.exe>...] [/PreInstallScript:<PreInstallScript.pl>] [/silent [/forcerestart]]

where,
	- CPName is the cumulative patch executable file name without the platform, architecture, and .exe extension.
For example, if the CP executable file name is CP19_SFWHA_51SP2_W2K8_x64.exe, specify it as CP19_SFWHA_51SP2.

	- HF1.exe, HF2.exe, ... represent the executable file names of the hotfixes that you wish to exclude from the installation. Note that the file names are separated by commas, with no space after a comma. The CP installer skips the mentioned hotfixes during the installation.

	- PreInstallScript.pl is the Perl script that includes the pre-installation steps. These steps forcefully kill the required services and processes in case a graceful stop request does not succeed.
Symantec recommends that you use this option and script only in case the CP installer fails repeatedly while performing the pre-installation tasks.

	- /silent indicates the installation is run in a non-verbose mode; the installer does not prompt for any inputs during the installation.

	- /forcerestart indicates that the system is automatically restarted, if required, after the installation is complete.


Perform the following steps:

[1] From the command prompt, navigate to the directory where the CP executable file is located and then run the file to extract the contents to a default location on the system. The installer displays a list of hotfixes that are included in the CP.
	- On 32-bit systems, the hotfix executable files are extracted to:
	  "%commonprogramfiles%\Veritas Shared\WxRTPrivates\<CPName>"
	- On 64-bit systems, the hotfix executable files are extracted to:
	  "%commonprogramfiles(x86)%\Veritas Shared\WxRTPrivates\<CPName>"

The installer also lists the hotfixes that require a reboot of the system after the installation. If system reboot is not an option at this time, you can choose not to install these hotfixes. In such a case, launch the CP installer from the command line using the /exclude option.

[2] When the installer prompts whether you want to continue with the installation, type N to exit the installer.

[3] In the same command window, run the following command to begin the CP installation in the non-verbose mode:
vxhfbatchinstaller.exe /CP:<CPName> /silent

For example, to install an SFW HA 5.1 SP2 x64 CP for Windows Server 2008, the command is:
vxhfbatchinstaller.exe /CP:CP19_SFWHA_51SP2 /silent

The installer performs the following tasks:

	- Extracts all the individual hotfix executable files
	  On 32-bit systems the files are extracted at %commonprogramfiles%\Veritas Shared\WxRTPrivates\<HotfixName>
	  On 64-bit systems the files are extracted at %commonprogramfiles(x86)%\Veritas Shared\WxRTPrivates\<HotfixName>
	- Runs the pre-install tasks
	- Installs all the hotfixes sequentially
	- Runs the post-install tasks
The installation progress status is displayed in the command window.

[4] After all the hotfixes are installed, the installer displays a message for restarting the system.
You must restart the system for the changes to take effect.

Note that the installer prompts for a system restart only if hotfixes that require a reboot are included in the CP and are installed. The installer automatically restarts the system if you had specified the /forcerestart option in step 3 earlier.

VxHFBatchInstaller usage examples:
----------------------------------

[+] Install CP in silent mode, exclude hotfixes Hotfix_5_1_20014_87_2321015_w2k8_x64.exe and Hotfix_5_1_20018_87_2318276_w2k8_x64.exe:

vxhfbatchinstaller.exe /CP:CP19_SFWHA_51SP2 /Exclude:Hotfix_5_1_20014_87_2321015_w2k8_x64.exe,Hotfix_5_1_20018_87_2318276_w2k8_x64.exe /silent

[+] Install CP in silent mode, restart automatically:

vxhfbatchinstaller.exe /CP:CP19_SFWHA_51SP2 /silent /forcerestart


Post-install steps
==================|

The following section describes the steps that must be performed after installing the hotfixes included in this CP.
Note that these steps are applicable only to the hotfixes listed in this section.

[1] Hotfix_5_1_20002_2220352
Perform the following steps only if you have installed Hotfix_5_1_20002_2220352 as part of the CP installation.

Configuring the registry parameter to support non-scoped file shares
--------------------------------------------------------------------+
Perform the following steps to create a registry key that is used by the VCS FileShare and Lanman agents to support non-scoped file shares on Windows Server 2008 and 2008 R2 systems.

This key is created in the context of a Lanman resource; if you have multiple VCS file share service groups and wish to use the shares in a non-scoped mode, you must create a corresponding key for each Lanman resource that is configured in the file share service group.

Caution: Incorrectly editing the registry may severely damage your system. Before making changes to the registry, make a backup copy.

To create the registry key:
1. Ensure that you have installed this hotfix (Hotfix_5_1_20002_2220352) on all the cluster nodes.

2. To open the Registry Editor, click Start > Run, type regedit, and then click OK.

3. In the registry tree (on the left), navigate to HKLM\SOFTWARE\VERITAS\VCS\BundledAgents.
If the Lanman key exists, go to step 4 (the next step); otherwise, go to step 5.

4. Check whether the Lanman key contains a __Global__ subkey with a DisableServerNameScoping value.
If it exists, change the value of DisableServerNameScoping to 0.
Proceed to step 6.

5. Click Edit > New > Key and create a key by the name Lanman.

6. Select the Lanman key and click Edit > New > Key and create a key by the name <VirtualName>.
Here <VirtualName> should be the virtual computer name assigned to the file share server.
This is the VirtualName attribute of the Lanman resource in the file share service group.

The newly created registry key should look like this: HKLM\SOFTWARE\VERITAS\VCS\BundledAgents\Lanman\<VirtualName>.

7. Select the key that you created in step 6 (<VirtualName>) and add a DWORD type of value.
Value name should be DisableServerNameScoping and Value data should be 1.

The value 1 indicates that the FileShare and Lanman agents support non-scoped file shares on Windows Server 2008 and 2008 R2 systems.

8. If there are multiple file share service groups to be used in the non-scoped mode, repeat steps 6 and 7 for each Lanman resource that is configured in the file share service group.

Note: You must create this key only for Lanman resources that are part of VCS file share service groups.
Configuring this key for Lanman resources that are part of other VCS service groups may result in an unexpected behavior. 

9. Save and exit the Registry Editor.
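
As a convenience, the same key and value can typically also be created from an elevated command prompt on each cluster node. This is a sketch only; FILESHARE-VNAME is a placeholder for the VirtualName attribute of the Lanman resource:

   reg add "HKLM\SOFTWARE\VERITAS\VCS\BundledAgents\Lanman\FILESHARE-VNAME" /v DisableServerNameScoping /t REG_DWORD /d 1 /f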

-------------------------------------------------------+

[2] Hotfix_5_1_20041_2722108
Perform the following steps only if you have installed Hotfix_5_1_20041_2722108 as part of the CP installation.

Setting the DisableStrictVirtualNameCheck registry key
------------------------------------------------------+
This hotfix creates the following registry key:
HKLM\SOFTWARE\VERITAS\VCS\BundledAgents\Lanman\__Global__\DisableStrictVirtualNameCheck

Use this registry key to disable the FileShare agent's virtual name check. When DisableStrictVirtualNameCheck is set to 1 (the server name check is disabled), the FileShare agent makes the file shares accessible even if the virtual name is not accessible on the network.
If the virtual name is not resolvable, the file shares are accessible using the virtual IP.

Caution: Incorrectly editing the registry may severely damage your system.
Before making changes to the registry, make a backup copy.

To set DisableStrictVirtualNameCheck registry key value:
1. To open the Registry Editor, click Start > Run, type regedit, and then click OK.

2. In the registry tree (on the left), navigate to
HKLM\SOFTWARE\VERITAS\VCS\BundledAgents\Lanman\__Global__\.

3. Select the value DisableStrictVirtualNameCheck and change its value data to 1.
The value 1 indicates that the FileShare agent makes the non-scoped file shares accessible irrespective of whether or not the virtual name is accessible on the network.
If the virtual name is not resolvable, the file shares are accessible using the virtual IP.

4. Save and exit the Registry Editor.
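
As a sketch, the same value can typically be set from an elevated command prompt (the __Global__ key itself is created by the hotfix; the DWORD type below is assumed, matching the DisableServerNameScoping value described earlier in this readme):

   reg add "HKLM\SOFTWARE\VERITAS\VCS\BundledAgents\Lanman\__Global__" /v DisableStrictVirtualNameCheck /t REG_DWORD /d 1 /f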


Known issues
============|

The following section describes the issues related to the individual hotfixes that are included in this CP.

[1] Hotfix_5_1_20034_87_2400260

VDS Refresh fails to complete successfully after a storage disconnect/reconnect. All subsequent VDS Refresh operations fail (2346508).

Workaround:

To resolve this issue, run the following commands from the command line:

1. net stop vds
2. net stop vxvm
3. Taskkill /F /IM vxvds.exe
4. Taskkill /F /IM vxvdsdyn.exe (only on Windows Server 2008 and later)
5. net start vds
6. net start vxvm


[2] Hotfix_5_1_20005_87_2218963

The following issues may occur:

- Changing the drive letter of a volume when a reclaim task for that volume is in progress will abort the reclaim task. The reclaim task will appear to have completed successfully, but not all of the unused storage will be reclaimed.

Workaround:
If this happens, perform another reclaim operation on the volume to release the rest of the unused storage.

- Reclaim operations on a striped volume that resides on thin provisioned disks in HP XP arrays may not reclaim as much space as you expect. Reclaiming is done in contiguous allocation units inside each stripe unit. The allocation unit size for XP arrays is large compared to a volume's stripe unit size, so free allocation units are often split across stripe units. In that case they are not contiguous and cannot be reclaimed.

- If you use the SFW installer to change the enabled feature set after SFW is already installed, reclaiming free space from a thin provisioned disk no longer works. The installer incorrectly changes the Tag variable in the vxio service registry key from 8 to 12. That allows LDM to intercept and fail the reclaim requests SFW sends to the disks. This is a problem only on Windows Server 2008.

Workaround:
To work around this problem, manually change the value of HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\vxio\Tag back to 8 and reboot after changing the enabled SFW features on Windows Server 2008.
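
As a sketch, assuming the Tag value is a DWORD (as is typical for service load-order tags), the change can be made from an elevated command prompt before rebooting:

   reg add "HKLM\SYSTEM\CurrentControlSet\services\vxio" /v Tag /t REG_DWORD /d 8 /f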


[3] Hotfix_5_1_20024_87_2368399

This issue is applicable only if you uninstall this hotfix.
If you perform the volume shrink operation from the VEA Console after removing this hotfix, VEA still displays the warning message. This occurs because the stale files residing in the VEA cache are not removed during the uninstallation.

Workaround:
Perform the following steps after you have removed the hotfix.

1. Close the Veritas Enterprise Administrator (VEA) Console.
2. Stop the Veritas Storage Agent service.
Type the following at the command prompt:
net stop vxvm
3. Delete the Client extensions cache directories from the system:
    On Windows Server 2003, delete the following:
    - %allusersprofile%\Application Data\Veritas\VRTSbus\cedownloads
    - %allusersprofile%\Application Data\Veritas\VRTSbus\Temp\extensions

    On Windows Server 2008, delete the following:
    - %allusersprofile%\Veritas\VRTSbus\Temp\extensions
    - %allusersprofile%\Veritas\VRTSbus\cedownloads

4. Start the Veritas Storage Agent service.
Type the following at the command prompt:
    net start vxvm
5. Repeat steps 1 to 4 on all the systems where you have uninstalled this hotfix.
6. Launch VEA to perform the volume shrink operation.
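
For example, on Windows Server 2008 the two cache directories listed in step 3 can typically be removed from an elevated command prompt. This is a sketch only; verify that the paths exist on your systems first:

   rd /s /q "%allusersprofile%\Veritas\VRTSbus\Temp\extensions"
   rd /s /q "%allusersprofile%\Veritas\VRTSbus\cedownloads"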


[4] Hotfix_5_1_20059_87_2834385b

The SFW DSM for Huawei does not work as expected for Huawei S5600T Thin Provisioning (TP) LUNs.
If the active paths are disabled, the I/O fails over to the standby paths. When the active paths are restored, the I/O should fail back to the active paths. However, for Huawei S5600T TP LUNs, the I/O continues to run on both the active and the standby paths even after the active paths are restored. This issue occurs because the Huawei S5600T TP LUN does not support A/A-A explicit trespass.

The SFW DSM for Huawei functions properly for Huawei S5600T non-TP LUNs.

Workaround: 

To turn off the A/A-A explicit trespass, run the following commands from the command line:
   Vxdmpadm setdsmalua explicit=0 harddisk5
   Vxdmpadm setarrayalua explicit=0 harddisk5


[5] Hotfix_5_1_20051_3054619

If the binary path of any service that VCS installs contains an environment variable, run the commands mentioned in the Workaround below.

The ImagePath registry value is of type REG_EXPAND_SZ, which by design expands environment variables, whereas REG_SZ is a plain sequence of characters and does not support expansion of environment variables. With this hotfix, the REG_EXPAND_SZ data type has been changed to REG_SZ. Due to this change, environment variables are no longer expanded.

Workaround:

Run the following commands:
 - sc config vcscomm binPath= "<Value Of>"
replace <Value Of> with the data of the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\VCSComm\ImagePath registry value

 - sc config had binPath= "<Value Of>"
replace <Value Of> with the data of the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\Had\ImagePath registry value

 - sc config hadhelper binPath= "<Value Of>"
replace <Value Of> with the data of the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\HadHelper\ImagePath registry value

 - sc config cmdserver binPath= "<Value Of>"
replace <Value Of> with the data of the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\CmdServer\ImagePath registry value

The above-mentioned commands correct the registry data types.
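
For example, for the Had service, the current ImagePath data can typically be retrieved with reg query and then pasted into the corresponding sc config command. This is a sketch only; substitute the actual ImagePath data from your system:

   reg query "HKLM\SYSTEM\CurrentControlSet\services\Had" /v ImagePath
   sc config had binPath= "<paste the ImagePath data here>"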


Errors/Problems fixed
=====================|

The fixes and enhancements that are included in this cumulative patch (CP) are as follows:

[1] Hotfix name: Hotfix_5_1_20002_2220352

Symptom:
This hotfix includes the VCS FileShare and Lanman agents that are enhanced to address the limitation of file share scoping on Windows Server 2008 and Windows Server 2008 R2 systems.
These agents now support creation of non-scoped file shares configured with VCS.

Description:
File shares on Windows Server 2003 are accessible using the virtual name (Lanman) or the IP address (administrative IP or virtual IP).

On Windows Server 2008 systems, due to file share scoping, the file shares configured with VCS are accessible only using the virtual server name (Lanman).
These file shares are not accessible using the IP address.

Resolution:
The VCS agents (FileShare, Lanman) have been enhanced to support creation of non-scoped file shares on Windows.
Along with the virtual name, the file shares are now accessible using the IP address too.

To support non-scoped file shares, the agents read a specific registry key. You must create the registry key after installing this hotfix.
See the topic "Configuring the registry parameter to support non-scoped file shares" in this readme.

Notes: 
- After the hotfix installation is complete, offline the FileShare service group on the node where it was online and then bring it online again. The enhancement will take effect once the service group is online again. 

- This hotfix includes a design enhancement to the same non-scoped shares support hotfix (Hotfix_5_1_20002_2220352) released in the earlier CP.

Binary / Version:
Lanman.dll / 5.1.20002.441
Fileshare.dll / 5.1.20002.441

-------------------------------------------------------+ 

[2] Hotfix name: Hotfix_5_1_20003_2207404

Symptom:
This hotfix addresses an issue where the VCS SRDF agent resource fails to come online if 8.3 short file naming convention is disabled on the cluster node and the EMC SRDF SymHome path name contains spaces.

Description:
The VCS SRDF agent converts the EMC SRDF SymHome path in the 8.3 short file name convention before using it. On cluster nodes where the 8.3 short file naming is disabled and if the EMC SRDF SymHome path name contains spaces, the agent is unable to resolve the path.
As a result, the VCS SRDF resource fails to come online on the cluster node.

The VCS engine log contains the following errors:
VCS WARNING V-16-20017-1007 <nodename> <SRDFresourcename>:online:Command: '<SymHomepath\Symclicommand>' failed !

VCS INFO V-16-2-13001 (nodename) <SRDFresourcename>: Output of the completed operation (online)
'<SymHomepath\Symclicommand>' is not recognized as an internal or external command, operable program or batch file.

Resolution:
The issue has been fixed in the VCS SRDF agent. The agent is able to retrieve and use the SymHome path even if 8.3 short file naming is disabled and it contains spaces.

Binary / Version:
SRDFAgent.pm / NA

-------------------------------------------------------+

[3] Hotfix name: Hotfix_5_1_20005_2237219

Symptom:
This hotfix addresses the issue where the VCS IP resource comes online before the configured IP address becomes available.

Description:
Before making an IP address usable, Windows server performs a Duplicate Address Detection (DAD) test to check whether the IP address is unique on the network. During this check, the state of the IP address (ipconfig) on the system remains as "Tentative".

The VCS IP agent does not wait for the DAD check to complete and reports the status of the IP resource as online, even though the configured IP address is not yet usable.

The DAD check typically takes 2-3 seconds.
This delay may result in a problem in service groups where an application resource depends directly on the IP resource.
The application component fails to start as a result of an unusable IP address.

Resolution:
The VCS IP agent is enhanced to address this issue. The agent does not report the resource status as online until the DAD check is complete and the configured IP address state on the system changes from "Tentative" to "Preferred".

Binary / Version:
IP.dll / 5.1.20005.443

-------------------------------------------------------+
 
[4] Hotfix name: Hotfix_5_1_20006_2231216

Symptom:
This hotfix addresses an issue where the VCS PrintShare agent erroneously reports the resource as online on the passive nodes, resulting in a concurrency violation.

Description:
This issue occurs with PrintShare service groups created in a Global Cluster Configuration (GCO) cluster environment.
PrintShare resources come online on the nodes at the primary and secondary site at the same time.

Taking the resources offline at the secondary site results in a service group fault at the primary site.
Bringing the faulted resources online on the primary site also brings the corresponding resources online at the secondary site leading to a concurrency violation.

Resolution:
This issue has been fixed in the VCS PrintShare agent.

Binary / Version:
PrintShare.dll / 5.1.20006.445

-------------------------------------------------------+

[5] Hotfix name: Hotfix_5_1_20012_2271435

Symptom:
This hotfix addresses an issue where the VCS EMC MirrorView resource fails to come online in the cluster.

Description:
As part of its online function, the VCS MirrorView agent checks if naviseccli.exe is installed. If it does not find naviseccli, the agent checks the presence of navicli which in turn needs Java Runtime Environment (JRE).
If JRE is not installed on the node the agent fails to bring the resource online.

The agent log may contain the following messages:
V-16-20048-91 (nodename) MirrorView:<resourcename>-MirroView:clean:Can't open keys SOFTWARE\Wow6432Node\Veritas\VRTSjre

V-16-20048-92 (nodename) MirrorView:<resourcename>-MirroView:clean:Can't open keys SOFTWARE\Veritas\VRTSjre

VCS ERROR V-16-20048-94 (nodename) MirrorView:MIRRORVIEW:online:Java home is not found hence exiting

Resolution:
This issue is fixed in the VCS MirrorView agent.
With the secure naviseccli, JRE is no longer required. The agent does not check for JRE on the cluster node.

Binary / Version:
MirrorViewAgent.pm / NA
 
-------------------------------------------------------+

[6] Hotfix name: Hotfix_5_1_20014_2338437

Symptom:
This hotfix addresses an issue where the VCS VMDg agent reboots a cluster node if it is unable to start the Veritas Storage Foundation Service (vxvm). This occurs even if the agent is not configured to reboot the node.

Description:
The VMDg agent's VxVMFailAction attribute determines the agent behavior when the Veritas Storage Agent Service (vxvm) fails to start.

If the VxVMFailAction attribute value is set as RESTART_VXVM (default value), the agent attempts to restart the vxvm service in each agent monitor cycle.

If the VxVMFailAction attribute value is set as SHUTDOWN, the agent makes a predefined number of attempts (VxVMRestartAttempts attribute) to restart the vxvm service, and if that fails the agent reboots the cluster node.

In this case, even if the VxVMFailAction attribute value is set to RESTART_VXVM, the VMDg agent refers to the VxVMRestartAttempts attribute value and shuts down the node after the defined vxvm restart attempts fail.

The Windows Event log may contain the following event:
The process %VCS_HOME%\bin\VCSAgDriver.exe has initiated the restart of computer <nodename> on behalf of user <user> for the following reason: No Title for this reason could be found.

Where, the variable %VCS_HOME% is the default installation directory for VCS, typically C:\Program Files\VERITAS\cluster server.

The VMDg agent log may contain the following message:
VCS CRITICAL V-16-10051-9571 VMDg:data_dg:monitor:The Agent is shutting down the system because it is configured so or it failed to start VxVM Service.
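
For reference, these attributes can typically be inspected and changed from the VCS command line. This is a sketch only, assuming the attributes are set at the resource level; data_dg is the resource name taken from the log message above:

   hares -value data_dg VxVMFailAction
   haconf -makerw
   hares -modify data_dg VxVMFailAction RESTART_VXVM
   haconf -dump -makero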

Resolution:
This agent behavior has been rectified in the updated VMDg agent.

Binary / Version:
VMDg.dll / 5.1.20014.454

-------------------------------------------------------+

[7] Hotfix name: Hotfix_5_1_20017_2385965

Symptom:
This hotfix addresses an issue wherein the VCS IIS resource fails to probe if IIS site binding is configured for HTTPS only.

Description:
In an IIS service group, the VCS IIS resource fails to probe and may go into an unknown state if the website binding is configured for https. The IIS agent is unable to detect the configured port if the site binding protocol is set to https.

The IIS agent log displays the following message:
VCS ERROR V-16-10051-13612 IIS:<resourcename>:monitor:Site DEFAULT WEB SITE exists but the site port number doesn’t match with that configured under VCS.

Resolution:
This issue is fixed in the updated IIS agent. The agent is now able to detect the port if the site binding type is set to https.

Binary / Version:
IIS.dll / 5.1.20017.482

-------------------------------------------------------+

[8] Hotfix name: Hotfix_5_1_20018_2391453b

Symptom:
This hotfix includes a fix for the VCS agent for Hitachi TrueCopy. 
The VCS agent for Hitachi TrueCopy fails to detect the correct status of the HTC configuration and thus fails to invoke the "pairresync" command.

Description:
The VCS agent for Hitachi TrueCopy fails to detect the correct status of the HTC configuration and thus fails to invoke the required "pairresync" command.

On a faulted remote node, the horctakeover command returns an error with fence level "never". This causes the VCS agent to freeze the service group in the cluster, and a critical message is logged in the engine log file. The message indicates that a manual recovery operation is required on the array before the service group is failed over to another node in the cluster.

Resolution:
The VCS agent for Hitachi TrueCopy is now updated to bring the HTC resource online even when horctakeover returns an error 225 with fence level "never". The HTC agent now invokes the required "pairresync" command to bring the HTC resource online on the remote site.

Binary / Version:
HTCAgent.pm / NA

-------------------------------------------------------+

[9] Hotfix name: Hotfix_5_1_20019_2407816

Symptom:
This hotfix addresses the following issues:
Issue 1
The VCS IIS resource goes into an unknown state if the IIS site binding also includes the host name along with the IP and the Port.

Issue 2
The VCS IIS resource fails to probe if IIS site binding is configured for HTTPS only.

Description:
Issue 1
This issue occurs only if IIS 7 WMI Provider is used.
The IIS resource goes into an unknown state if the IIS site binding includes the host name along with the IP and the Port.

The IIS agent log contains the following error:
VCS ERROR V-16-10051-13620 IIS:IISResource:monitor:<sitename> exists but the site IP address or port number doesn't match with that configured under VCS

Issue 2
In an IIS service group, the VCS IIS resource fails to probe and may go into an unknown state if the website binding is configured for https. The IIS agent is unable to detect the configured port if the site binding protocol is set to https.

The IIS agent log displays the following message:
VCS ERROR V-16-10051-13612 IIS:<resourcename>:monitor:Site DEFAULT WEB SITE exists but the site port number doesn’t match with that configured under VCS.

Resolution:
Issue 1
VCS does not need the Host Name to make IIS highly available.
Therefore, the IIS agent now ignores the Host Name configured for the site.

Issue 2
This issue is fixed in the updated IIS agent. The agent is now able to detect the port if the site binding type is set to https.


Binary / Version:
IIS.dll / 5.1.20019.488

-------------------------------------------------------+

[10] Hotfix name: Hotfix_5_1_20021_2422907

Symptom:
This hotfix addresses the following issues:
Issue 1
The VCS MountV resources take a long time to go offline if there are applications accessing the clustered mount points.

The MountV agent log may contain the following message:
VCS WARNING V-16-10051-9023 MountV:<mountvresourcename>:offline:Failed to lock volume [2:5]

Issue 2
The VCS MountV resource faults or if the OnlineRetryLimit is set to a non-zero value, the MountV resource takes a long time to come online.

Issue 3
After an upgrade from SFW HA 5.1 SP1 to SP2, an error 9050 event from the Agent Framework is reported for each MountV resource during an online operation.

Description:
Issue 1
When the MountV resources are taken offline, the MountV agent performs certain actions depending on the ForceUnmount attribute value.

If ForceUnmount is set to READ_ONLY, the agent tries to enumerate the open handles to the configured mount points and then gracefully unmounts the mount points.

If ForceUnmount is set to ALL, the agent first tries to obtain exclusive access to the configured mount point. The agent makes this attempt 10 times with half a second delay between each unsuccessful attempt. If it fails to lock the volume even after 10 attempts, it proceeds with the unmount operation.

In case of multiple volume mount points, this wait causes a delay in the offlining of the MountV resources and as a result leads to longer failover times for service groups.

Issue 2
In previous releases, the VCS VMDg agent reports the resource as online only after the disk group and all the volumes have arrived on the node. The MountV agent begins the resource online function only after the VMDg agent has reported online.

For performance improvements, the VMDg agent was modified to report an online as soon as the disk group is imported on the node. The VMDg agent does not wait for all the volumes on the disk group to arrive on the node.
As soon as the VMDg agent reports online, the MountV agent begins the resource online process.
In some cases, the MountV resource may come online even before the configured volume has arrived on the node. As a result, the MountV agent reports a missing volume error and faults.

The MountV agent log may contain the following errors:
VCS WARNING V-16-10051-9031 MountV:<mountvresourcename>:online:Unable to open volume [2:2]
VCS ERROR V-16-10051-9024 MountV:<mountvresourcename>:online:Failed to mount volume <volumemountpath> at <driveletter> [2:87]

If the OnlineRetryLimit is set to a non-zero value, the resource takes some time to come online. The delay is equal to the value of MonitorInterval.

Issue 3
This error indicates that the online and offline IOCTLs failed to execute.

Resolution:
Issue 1
This issue has been fixed in the VCS MountV agent.

The agent includes the following enhancements:
A new value, CLOSE_FORCE, is added to the agent's ForceUnmount attribute.
When ForceUnmount is set to CLOSE_FORCE, the MountV agent does not try to lock the configured mount points and proceeds directly with the forceful unmount operation.

Note: Forceful unmount may potentially cause data corruption. When you use ForceUnmount with CLOSE_FORCE, verify that no applications are accessing the configured mount points before you switch or take the MountV resources offline.

The MountV agent is enhanced to achieve performance improvement and overall reduction in service group failover times.
The MountV agent now uses the native operating system APIs to obtain the status of the configured mount points. This results in faster detection.
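
For reference, the new attribute value can typically be set from the VCS command line. This is a sketch only; MountV_Data is a placeholder resource name:

   haconf -makerw
   hares -modify MountV_Data ForceUnmount CLOSE_FORCE
   haconf -dump -makero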

Issue 2
The MountV agent has been enhanced to address this issue.
If the MountV agent encounters the "Volume not arrived" state, the agent now attempts to retry the operation with some delay. This ensures that the same Online function successfully mounts the volume rather than the next online call.

Issue 3
Investigation revealed that the specific IOCTLs are not supported on Windows Server 2003. Hence, the resolution is to skip these IOCTLs on Windows Server 2003.

Binary / Version:
MountV.dll / 5.1.20021.491
MountV.xml / NA

-------------------------------------------------------+


[11] Hotfix name: Hotfix_5_1_20001_2203640

Symptom:
This hotfix addresses a Volume Manager plug-in issue due to which the DR wizard is unable to discover a cluster node during configuration.

Description:
While creating a disaster recovery configuration, the Disaster Recovery Configuration Wizard fails to discover the cluster node at the primary site where the service group is online.

You may see the following error on the System Selection page:
V-52410-49479-116
An unexpected exception: 'Attempted to read or write protected memory. This is often an indication that other memory is corrupt. 'Failed to discover 'Veritas Volume Manager' on node '<primarysitenodename>'.

This issue occurs because the Volume Manager plug-in attempts to free unallocated memory and fails, resulting in a memory crash.

Resolution:
The code fix in the Volume Manager plug-in addresses the memory crash.

Binary / Version:
VMPlugin.dll / 5.1.20001.105

-------------------------------------------------------+

[12] Hotfix name: Hotfix_5_1_20002_2378712

Symptom:
This hotfix addresses an issue where the Disaster Recovery Wizard displays an SRDF discovery error and blocks configuration.

Description:
This issue occurs while using the Disaster Recovery (DR) Wizard for configuring replication and global clustering in an EMC SRDF environment.

When the DR wizard is run by two different domain users one after the other, then the DR wizard blocks the second user with the SRDF discovery error.
This issue occurs even if the second user has local administrative privileges on the cluster node.

On the Replication Options panel when you select the EMC SRDF option and click Next, the wizard displays a pop-up about hardware array mismatch and prompts you whether you wish to continue with the configuration.
When you click Yes the wizard displays the following error:
V-52410-49479-116
Failed to perform the requested operation on <clusternodename>. Unexpected error occurred. Failed to discover 'SRDF' on node <clusternodename>.

The DR wizard stores the SRDF discovery information in temporary files created in the %programdata%\Veritas\winsolutions directory.

When another user runs the DR wizard, the wizard is not able to write to the temporary files because the user does not have write access to those files. As a result, the wizard is unable to proceed with the configuration.

Resolution:
This issue has been addressed in the DR wizard component.
The DR wizard now creates and stores the SRDF discovery data in new temporary files for each user session. When the wizard run is complete, the wizard also removes those files from the node.

Binary / Version:
SRDFPlugin.dll / 5.1.20002.106

-------------------------------------------------------+

[13] Hotfix name: Hotfix_5_1_20001_87_2207263

Symptom:
This hotfix addresses a deadlock issue where disk group deport hangs after taking a backup from NBU.

Description:
Disk group deport operation hangs due to deadlock situation in the storage agent. The VDS provider makes PRcall to other providers after acquiring the Veritas Enterprise Administrator (VEA) database lock.

Resolution:
Before making a PRcall to other providers, release the VEA database lock.

Binary / Version:
vdsprov.dll / 5.1.20001.87

-------------------------------------------------------+

[14] Hotfix name: Hotfix_5_1_20009_87_2245816

Symptom:
A volume turns RAW due to a write failure in the fsys.dll module.

Description:
This hotfix updates the fsys provider to handle an NTFS shrink bug. On Windows Server 2003, the Windows operating system by default fails any sector-level reads/writes greater than 32 MB. Hence, I/Os are split into multiple I/Os for the WriteSector function in the fsys provider.

Resolution:
The I/O is split into multiple I/Os.

Binary / Version:
fsys.dll / 5.1.20009.87

-------------------------------------------------------+

[15] Hotfix name: Hotfix_5_1_20012_88_2087139a

Symptom:
The hotfix installer intermittently fails to stop the vxvm service.

Description:
The hotfix installer intermittently fails to perform prerequisite operations. The failure occurs when the installer tries to stop the vxvm service and reports an error, even though querying the state of the service (QUERY VXVM) shows it as stopped.

While the vxvm service is being stopped, the iSCSI and Scheduler providers perform certain operations, which results in the failure.

Resolution:
The operations in the iSCSI and Scheduler providers are now aborted while the stop operation is performed for the vxvm service.

Binary / Version:
iscsi.dll / 5.1.20012.88
scheduler.dll / 5.1.20012.88

-------------------------------------------------------+

[16] Hotfix name: Hotfix_5_1_20012_88_2087139b

Symptom:
The hotfix installer intermittently fails to stop the vxvm service.

Description:
The hotfix installer intermittently fails to perform prerequisite operations. The failure occurs when the installer tries to stop the vxvm service and reports an error, even though querying the state of the service (QUERY VXVM) shows it as stopped.

While the vxvm service is being stopped, the iSCSI and Scheduler providers perform certain operations, which results in the failure.

Resolution:
The operations in the iSCSI and Scheduler providers are now aborted while the stop operation is performed for the vxvm service.

Binary / Version:
cluster.dll / 5.1.20012.88

-------------------------------------------------------+ 

[17] Hotfix name: Hotfix_5_1_20013_87_2290214

Symptom:
This hotfix addresses an issue in the SFW component, Veritas VxBridge Service (VxBridge.exe), which causes a memory corruption or a crash in a VxBridge client process in the clustering environment.

Description:
In a clustering environment, a VxBridge client process may either crash or there could be a memory corruption. This occurs because VxBridge.exe tries to read beyond the memory allocated to a [in, out, string] parameter by its client.

As a result, the cluster may become unresponsive. Users may not be able to access the clustered applications and cluster administrators may not be able to connect to the cluster using the Cluster Management console.

Resolution:
This issue is fixed in VxBridge.exe process.
Instead of allocating the maximum memory, VxBridge.exe now allocates only the minimum required amount of memory to the [in, out, string] parameters.
Because of the minimum memory requirement, any excess memory allocated by the VxBridge clients does not cause any issues.

Binary / Version:
VxBridge.exe / 5.1.20013.87

-------------------------------------------------------+

[18] Hotfix name: Hotfix_5_1_20014_87_2321015

Symptom:
This hotfix fixes bug check 0x3B in VXIO.

Description:
The bug check 0x3B may occur when removing a disk from a cluster dynamic disk group.

Resolution:
This hotfix fixes bug check 0x3B in VXIO.

Binary / Version:
vxio.sys / 5.1.20014.87
 
-------------------------------------------------------+

[19] Hotfix name: Hotfix_5_1_20018_87_2318276

Symptom:
This hotfix replaces a fix for an issue in vxio which breaks Network Address Translation (NAT) support in VVR. It also removes the limitation of using different private IP addresses on the primary and secondary host in a NAT environment.

Description:
VVR sends heartbeats to the remote node only if the local IP address mentioned on the RLINK is online on that node.

In a NAT environment, the primary host communicates with the NAT IP of the secondary. Since the NAT IP is never online on the secondary node, the VVR secondary does not send heartbeats to the primary host. As a result, the primary does not send a connection request to the secondary.

It is also observed that in a NAT environment, when the primary's private IP address is the same as the secondary's private IP, VVR incorrectly concludes that the primary and secondary nodes are one and the same.

Resolution:
Removed the check that made it mandatory for the local IP address to be online on a node. Also fixed the issue that prevented configuring VVR in a NAT environment when the private IP addresses of the primary and secondary hosts are the same.

Binary / Version:
vras.dll / 5.1.20018.87
vxio.sys / 5.1.20018.87

-------------------------------------------------------+ 

[20] Hotfix name: Hotfix_5_1_20020_87_2364591

Symptom:

This hotfix addresses the following issues:

Issue 1
This hotfix adds Thin Provisioning Reclaim support for EMC VMAX array.

Issue 2
This hotfix addresses an issue of Storage Agent crash when SCSI enquiries made to disks fail.

Issue 3
Mirror creation fails with automatic disk selection when track alignment is enabled, even though enough free space is available.

Description:
Issue 1
Added Thin Provisioning Reclaim support for EMC VMAX array on Storage Foundation and High Availability for Windows (SFW HA) Service Pack 2.

Issue 2
During startup after SFW 5.1 SP2 installation, Storage Agent uses SCSI enquiries to get information from disks. In some cases it is observed that Storage Agent crashes while releasing memory for the buffer passed to collect information.

Issue 3
There was a logic error in the way a free region of a disk is track-aligned. Small free regions may end up with a negative size. Even though a free region large enough for the desired allocation may exist, the negative sizes reduce the computed total free region size and thus the allocation fails.

Resolution:
Issue 1
Made changes in the DDL provider to support reclaim for the EMC VMAX array as a Thin Reclaim device.

Issue 2
Aligned the buffer records on 16 byte boundaries. This ensures that the data structures passed down between native drivers and providers are in sync. Additionally, it also saves the effort of relying on data translation done by WOW64 when code is running on 32-bit emulation mode.

Issue 3
Fixed the logic error in track-aligning the free space so that no region has a negative size.

Binary / Version:
ddlprov.dll / 5.1.20020.87
pnp5.dll / 5.1.20020.87

-------------------------------------------------------+

[21] Hotfix name: Hotfix_5_1_20024_87_2368399

Symptom:
This hotfix addresses an issue where any failure while the volume shrink operation is in progress may cause file system corruption and data loss.

Description:
The volume shrink operation allows you to decrease the size of dynamic volumes.
When you start the volume shrink operation, it begins to move used blocks so as to accommodate them within the specified target shrink size for the volume.

However, this operation is not transactional. If there are any issues encountered during block moves or if the operation is halted for any reason (for example, the host reboots or shuts down, the operating system goes in to a hung state, or a stop error occurs), it can result in a file system corruption and may cause data loss.
The state of the volume changes to 'Healthy, RAW'.

Resolution:
With this hotfix, a warning message is displayed each time you initiate a volume shrink operation. The message recommends that you make a backup copy of the data on the target volume (the volume that you wish to shrink) before you perform the volume shrink operation.

Depending on how you initiate the volume shrink operation (either VEA or command line), you have to perform an additional step, as described below:

If you initiate the volume shrink operation from the VEA console, click OK on the message prompt to proceed with the volume shrink operation.

If you initiate the volume shrink operation from the command prompt, the command fails with the warning message. Run the command again with the force (-f) option.

Note: 
Hotfix_5_1_20024_87_2368399 has been repackaged to address an installation issue that was present in an older version of this hotfix, which was released in an earlier CP.

Binary / Version:
vxassist.exe / 5.1.20024.87
climessages.dll / 5.1.20024.87
vxvmce.jar / NA
vmresourcebundle.en.jar / NA

-------------------------------------------------------+

[22] Hotfix name: Hotfix_5_1_20025_87_2397382

Symptom:
VVR primary hangs and a BSOD is seen on the secondary when stop replication and pause replication operations are performed on the configured RVGs.

Description:
When stopping or pausing VVR replication in TCP/IP mode, a BSOD is seen with STOP ERROR 0x12E "INVALID_MDL_RANGE".
The BSOD error occurs on the VVR secondary system due to a TCP receive bug caused by a mismatch between the Memory Descriptor List (MDL) and the underlying buffer describing it.

Resolution:
The issue related to the BSOD error has been fixed.

Binary / Version:
vxio.sys / 5.1.20025.87

-------------------------------------------------------+

[23] Hotfix name: Hotfix_5_1_20005_87_2218963

Symptom:
This hotfix adds thin provisioning support for HP XP arrays to SFW 5.1 SP2.
This hotfix also fixes a data corruption problem that can happen while moving sub disks of a volume when the Smart Move mirror resync feature is enabled.

Description:
Hotfix_5_1_20005_87_2218963 adds thin provisioning support for HP XP arrays to SFW 5.1 SP2.

When there are multiple sub disks on a disk and the sub disk that is being moved is not aligned to 8 bytes, then there is a possibility of missing some disk blocks while syncing with the new location. This may result in data corruption.

Hotfix_5_1_20005_87_2218963 fixes the starting Logical Cluster Numbers used while syncing sub disk clusters so that no block is left out.

Resolution:
This hotfix adds thin provisioning support for HP XP arrays and also fixes a data corruption issue.

Binary / Version:
vxvm.dll / 5.1.20005.87
ddlprov.dll / 5.1.20005.87
vxconfig.dll / 5.1.20005.87

-------------------------------------------------------+ 

[24] Hotfix name: Hotfix_5_1_20028_87_2426197

Symptom:
This hotfix addresses an issue where the disks fail to appear after the storage paths are reconnected, until either the vxvm service is restarted or the system is rebooted.

Description:
This issue occurs only when SFW and EMC PowerPath 5.5 are installed on a system.

When you disconnect and reconnect disks managed using EMC PowerPath multipathing solution, the disks fail to appear on the system and are inaccessible. The VEA GUI shows the disks arrival events but the disk type is displayed as unknown and the status is offline.

This occurs because the SFW vxpal component creates access handles on the gatekeeper devices that remain open forever. Therefore when the storage is reconnected, the disks remain unreadable as the previous stale handles are not closed.


Resolution:
The open handles issue is fixed in the PnP handler logic.

Binary / Version:
pnp5.dll / 5.1.20028.87

-------------------------------------------------------+

[25] Hotfix name: Hotfix_5_1_20029_87_2440099

Symptom:
If SFW fails to find the original product license, it now tries to find the license using a different API.

Description:
A licensing issue causes SFW to fail to perform basic operations because it cannot find the installed license on a system. This occurs even when a valid license key is installed on the system.

Resolution:
SFW now uses the registry and different APIs to find licenses if the older APIs fail.

Binary / Version:
sysprov.dll / 5.1.20029.87

-------------------------------------------------------+

[26] Hotfix name: Hotfix_5_1_20033_87_2512482

Symptom:
This hotfix addresses an issue where the cluster storage validation check fails with error 87 on a system that has SFW installed on it.
This happens because the system volume is offline.

Description:
SFW disables the automount feature on a system, which leaves the default system volume offline after each reboot. Cluster validation checks for access to all volumes and fails for the offline system volume with error 87.

Resolution:
The issue is resolved by bringing the system volume online during system boot so that the cluster validation check succeeds.

Binary / Version:
vxboot.sys / 5.1.20033.87

-------------------------------------------------------+

[27] Hotfix name: Hotfix_5_1_20021_87_2372049

Symptom:
Unable to create enclosures for SUN Storage Tek (STK) 6580/6780 array.

Description:
There was no support for SUN STK 6580/6780 array in the VDID library.

Resolution:
Added code to recognize SUN STK 6580/6780 array in VDID library.

Binary / Version:
sun.dll / 5.1.20021.87

-------------------------------------------------------+

[28] Hotfix name: Hotfix_5_1_20025_2497664

Symptom:
This hotfix addresses the issue where Exchange 2010 database resources fail to come online on the first attempt because Exchange Active Manager takes time to update.

Description:
The Microsoft Exchange 2010 database resource failover does not succeed on the first attempt. During the online operation, the agent makes some changes in Active Directory. However Exchange Active Manager takes time to get that information from Active Directory. Due to this, the database mount operation fails.

The agent log displays an error message similar to the following:
VCS DBG_21 V-16-50-0 Exch2010DB:Common-Users-Database-1-Exch2010DB:online:Error occurred while executing the command.

Error message is: Couldn't mount the database that you specified.
Specified database: c5e9fdbe-9a2c-4a35-855a-209801f9f412; Error code: An Active Manager operation failed. Error: The database action failed. Error: An error occurred while preparing to mount database 'Common Users Database 1' on server <ServerName>.

Please check that the database still exists and that it has a copy on server <ServerName> before retrying the operation. 
[Database: Common Users Database 1, Server:<ServerName> i.e. local].
LibExchBaseCommand.cpp:CExchBaseCommand::ExecuteEx[105]

Resolution:
The VCS Exchange 2010 agent is modified to handle the time required for Exchange Active Manager to update.

Before mounting a database, the agent executes a cmdlet to get the current owning server for the database.

If the owning server of the database is the same as the current node then the agent fires a command to mount the database.

If the owning server is not the current node, then the agent waits for 2 seconds and then queries for the owning server again. 

By default, the agent makes 6 attempts to query the owning server.

If Exchange Active Manager requires more time to update, you can modify the number of attempts the agent makes to query the owning server by manually creating the following registry value,
HKEY_LOCAL_MACHINE\SOFTWARE\Veritas\VCS\EnterpriseAgents\Exch2010DB\ActiveManagerWaitLimit, on the cluster node and setting its value to a reasonable number greater than 6.

Perform the following steps to create the registry value that the agent uses to determine the number of attempts it should make to query the owning server (a scripted sketch follows the note at the end of these steps):

Caution: Incorrectly editing the registry may severely damage your system. Before making changes to the registry, make a backup copy.

To create the registry key:

1. Ensure that you have installed this hotfix (Hotfix_5_1_20025_2497664) on all the cluster nodes.

2. To open the Registry Editor, click Start > Run, type regedit, and then click OK.

3. In the registry tree (on the left), navigate to HKEY_LOCAL_MACHINE\SOFTWARE\Veritas\VCS\EnterpriseAgents\.

4. Click Edit > New > Key and create a key by the name Exch2010DB.

5. Select the key that you created in step 4 and add a value of the DWORD type.
The value name should be ActiveManagerWaitLimit and the value data should be a reasonable number greater than 6.

6. Save and exit the Registry Editor.

Note: This registry setting is applicable locally. Create the registry value on all Exchange nodes in the service group. The agent uses the node-specific registry value to determine the number of attempts it needs to make to query the owning server.
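
In addition to the manual steps above, the value can be scripted. The following is a minimal sketch (not part of this hotfix) using Python's standard winreg module, assuming Python is available on the node; the key path and value name are taken from this readme, and the wait limit of 10 is only an example:

import winreg

KEY_PATH = r"SOFTWARE\Veritas\VCS\EnterpriseAgents\Exch2010DB"
WAIT_LIMIT = 10  # choose a reasonable number greater than 6

# Run from an elevated prompt on each Exchange node in the service group.
# On 64-bit nodes you may need to add winreg.KEY_WOW64_64KEY to the access
# mask, depending on which registry view VCS uses on your systems.
with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_WRITE) as key:
    winreg.SetValueEx(key, "ActiveManagerWaitLimit", 0,
                      winreg.REG_DWORD, WAIT_LIMIT)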


Binary / Version:
Exch2010DB.dll / 5.1.20025.496
ExchUtilDL.dll / 5.1.20025.496

-------------------------------------------------------+

[29] Hotfix name: Hotfix_5_1_20027_2522314

Symptom:
This hotfix addresses an issue related to the VCS LLT service causing system hangs.

Description:
An LLT spinlock causes a deadlock on multi-CPU servers because the spinlock is held while packets are sent out. This deadlock causes the system to hang.

Resolution:
This issue has been fixed. The change ensures that the spinlock is released before the packet is transmitted.

Binary / Version:
llt.sys / 5.1.20027.609

-------------------------------------------------------+

[30] Hotfix name: Hotfix_5_1_20036_87_2536009

Symptom:
This hotfix addresses the following issues:

Issue 1
Mirror creation fails when disks are selected automatically and track alignment is enabled, even though enough space is available.

Issue 2
When the GUI or CLI is used to create a new volume with a mirror and DRL logging included, the DRL log is track aligned but the data volume is no longer track aligned.

Issue 3
In a VVR-GCO configuration, when the primary site goes down, the application service group fails over to the secondary site. It is observed that the MountV resource probes online on a failed node after a successful auto takeover operation.

Issue 4
The following issues were fixed for Storage Foundation for Windows (SFW) with DMP DSMs and the SFW SCSI-3 setting enabled:
1. Reconnecting a previously detached storage array in a campus cluster causes all MSCS Volume Manager Disk Group (VMDg) resources to fail, and the rescan operation hangs at 28%.
2. Deporting a cluster disk group may take a long time.

Issue 5
Data corruption occurs while performing a subdisk move operation.

Issue 6
During Windows Failover Cluster Move Group operation, cluster disk group import fails with Error 3 (DG_FAIL_NO_MAJORITY 0x0003).

Issue 7
This hotfix addresses an issue in the Volume Manager (VM) component to support VCS hotfix Hotfix_5_1_20052_3187218.

Note:
Fixes for issues #1, #2, #3, #4, #5, and #6 were released earlier as Hotfix_5_1_20026_87_2406683. They are now a part of this hotfix.

Description:
Issue 1
There was a logic error in the way a free region of a disk is track-aligned. Small free regions may end up with a negative size. Even though a free region large enough for the desired allocation may exist, the negative sizes reduce the total free region size and thus fail the allocation.

Issue 2
During space allocation, the free space was aligned before being allocated to the data plex and the DRL log. When the DRL is placed first and its size is not a multiple of the track size, the data plex that follows is not track aligned.

Issue 3
After a node crash in a VVR-GCO configuration, the mount points under VCS control are not deleted, causing them to probe online on the disk group (which gets auto-imported after the reboot) and leading to a concurrency violation.

For global service groups, concurrency violations are not resolved by the VCS engine automatically. Hence the MountV offline is not initiated.

Issue 4
In an SFW DMP DSM environment with the SFW SCSI-3 setting enabled, reconnecting a detached disk of an imported cluster disk group can cause the SCSI-3 release reservation logic to get into a loop, or the operation may take a long time to complete.

Issue 5
When there are multiple subdisks on a disk and the subdisk that is being moved is not aligned to 8 bytes, then there is a possibility of missing some disk blocks while syncing with the new location. This may result in data corruption.

Issue 6
During a Windows Failover Cluster Move Group operation, the disk group deport succeeds; however, the subsequent import attempt fails with the error "DG_FAIL_NO_MAJORITY 0x0003."

If disks are added to an existing cluster disk group that is online on node A and the disk group is then moved to another node B, node B is unable to bring the cluster disk group online. This happens because the system view of the disks is not synchronized with the modified disk group information, and node B, where the cluster disk group is moved, does not reflect the correct information.

Issue 7
As a result of the modified MountV agent offline function, there is a possibility that the volumes can be mounted externally even after the MountV offline is complete.

Resolution:
Issue 1
Fixed the logic error in track-aligning the free space so that no region has a negative size.

Issue 2
The order of allocation has been reversed. Now, the free space is assigned to the data plex first and then to the DRL log.  Therefore, the data plex is always aligned, and the DRL log may or may not be aligned.

Issue 3
The stale mount points are now deleted when a cluster disk group is imported after a node reboot.

Issue 4
Corrected the logic in vxconfig.dll to perform the SCSI-3 release reservation effectively.

Issue 5
Fixed the starting Logical Cluster Numbers (LCNs) used while syncing subdisk clusters so that no block is left out.

Issue 6
The Windows NT disk cache is now updated for all the disks when an unexpected number of live disks is encountered.

Issue 7
The disk group deport operation is modified to support the fix provided in VCS hotfix Hotfix_5_1_20052_3187218.

VM now dismounts and flushes the volumes cleanly during disk group deport.


Binary / Version:
vxconfig.dll / 5.1.20036.87
vxconfig.dll / 5.1.20026.87

-------------------------------------------------------+

[31] Hotfix name: Hotfix_5_1_20035_87_2554039

Symptom:
This hotfix addresses the issue where an orphan task appears in VEA from MISCOP_RESCAN_TRACK_ALIGNMENT.

Description:
After importing a disk group, a task appears in the VEA task bar and never disappears. This appears to be triggered by a rescan fired from the ddlprov.dll when the VDID for a device changes.  Since it is an internal task, it probably should not appear in the VEA at all.

Resolution:
The orphan task object which was created for Rescan has been removed.


Binary / Version:
vxvm.dll / 5.1.20035.87

-------------------------------------------------------+

[32] Hotfix name: Hotfix_5_1_20037_87_2536342

Symptom:
This hotfix addresses the issue where incorrect WMI information is logged into the MSCluster_Disk WMI class for dynamic disks.

Description:
When a disk group containing all GPT disks is created, SFW creates and publishes a signature into WMI, affecting the Signature and ID fields of the MSCluster_Disk class. This ID/signature changes on every reboot. If two disk groups are created with all GPT disks, they have the same signature and ID.
This causes issues with Microsoft's SCVMM product, which uses the signature/ID to identify individual resources.

With this configuration, as soon as a Hyper-V machine is put on an all-GPT disk group, all the other GPT disk groups are marked as “in Use” and SCVMM is unable to use those disks for other Hyper-V machines.

Resolution:
A unique signature is now generated to fill in the diskinfo structure of GPT disks. This signature is used as the ID while populating WMI information into the MSCluster_Disk WMI class.


Binary / Version:
cluscmd.dll / 5.1.20037.87

-------------------------------------------------------+

[33] Hotfix name: Hotfix_5_1_20038_87_2564914

Symptom:
This hotfix addresses the issue where the VxVDS.exe process does not release handles after the CP2 update.

Description:
After the installation of 5.1 SP2 CP2, a high number of open handles is seen on the VxVDS process.

Resolution:
The handle leaks have been fixed.


Binary / Version:
vxvds.exe / 5.1.20038.87

-------------------------------------------------------+

[34] Hotfix name: Hotfix_3_3_1068b_2372164

Symptom:
This hotfix addresses the issue where multiple buffer overflows occur in SFW vxsvc.exe. This results in a vulnerability that allows remote attackers to execute arbitrary code on vulnerable installations of Symantec Veritas Storage Foundation. Authentication is not required to exploit this vulnerability. 

Description:
The specific flaw exists within the vxsvc.exe process. The problem affecting the part of the server running on TCP port 2148 is an integer overflow in the function vxveautil.value_binary_unpack where a 32-bit field holds a value that, through some calculation, can be used to create a smaller heap buffer than required to hold user-supplied data. This can be leveraged to cause an overflow of the heap buffer, allowing the attacker to execute arbitrary code under the context of SYSTEM.

Resolution:
The issue has been addressed in this hotfix.


Binary / Version:
vxveautil.dll / 3.3.1068.0
vxvea3.dll / 3.3.1068.0
vxpal3.dll / 3.3.1068.0
-------------------------------------------------------+

[35] Hotfix name: Hotfix_5_1_20030_2597410

Symptom:
This hotfix addresses the following issues:

Issue 1
The VCS FileShare agent fails to probe FileShare resources on passive nodes.

Issue 2
FileShare resource fails to come online and no specific error is logged.

Note:
- Fix for issue #1 was earlier released as Hotfix_5_1_20028_2513842. It is now replaced by this hotfix.
- For Windows 2008 and Windows 2008 R2, this hotfix has been replaced by Hotfix_5_1_20041_2722108.

Description:
Issue 1
The VCS FileShare agent is unable to probe FileShare resources on passive nodes if the file share is configured for a root of a volume that is mounted as a folder mount on a folder on a local system drive.

The FileShare resource is online on the active node and the share is accessible. However, the status of the resource on the passive nodes displays as unknown instead of offline.

The following error is displayed in the FileShare agent log:
VCS ERROR V-16-10051-10507 FileShare:<FileShareResourceName>:monitor:Failed to get properties for folder <foldermountpath>.

The behavior is reversed when the FileShare service group is switched over to a passive node.

Issue 2
If Hotfix_5_1_20028_2513842 is installed and NetBIOS is disabled, the FileShare resource fails to come online.

Resolution:
Issue 1
The issue is addressed in the updated FileShare agent.

Issue 2
The issue has been addressed in this hotfix.


Binary / Version:
FileShare.dll / 5.1.20030.497

-------------------------------------------------------+

[36] Hotfix name: Hotfix_5_1_20031_2528853

Symptom:
This hotfix addresses an issue related to the VCS VvrRvg agent where the Windows Event Viewer on a passive cluster node displays VvrRvg error messages each time the cluster node is rebooted.

Description:
This issue occurs when there are multiple VvrRvg resources configured in a cluster. 

After you reboot the passive cluster node where the VvrRvg resource is offline, the event viewer contains the following error messages in the application log:

ERROR 8(0x05f30008) AgentFramework <systemname> VvrRvg:<resourcename>:monitor:Monitor failed. 
Error code: (null)

The VvrRvg agent log contains the following message:
VCS ERROR V-16-20026-8 VvrRvg:<resourcename>:monitor:Monitor failed. Error code: 0x80070057

Resolution:
The timing issue encountered during the agent startup has been addressed.

Binary / Version:
VvrRvg.dll / 5.1.20031.497

-------------------------------------------------------+

[37] Hotfix name: Hotfix_5_1_20015_2370962

Symptom:
This hotfix includes an enhancement to support the usage of standard (tgt) devices for cloning during an SRDF fire drill.

Description:
During a fire drill of an SRDF-based replication setup, the fire drill agent (SRDFSnap) creates clones of the secondary volumes using the TimeFinder Clone feature. This feature allows both standard (tgt) devices and BCV devices to be used as the clone target.
The SRDFSnap agent, however, is able to use only BCVs as the clone target.

Resolution:
The SRDFSnap agent is enhanced to allow the use of tgt devices as clone targets.


Binary / Version:
SRDFSnap.pm / NA
SRDFSnap.xml / NA
SRDFSnapTypes.cf / NA
open.pl / NA
monitor.pl / NA
offline.pl / NA
online.pl / NA
clean.pl / NA

-------------------------------------------------------+

[38] Hotfix name: Hotfix_5_1_20041_87_2604814

Symptom:
This hotfix addresses an issue where the Volume Manager Disk Group (VMDg) resource in a Microsoft Cluster Server (MSCS) cluster may fault when you add or remove multiple empty disks (typically 3 or more) from a dynamic disk group.

Description:
This issue occurs on 64-bit systems where SFW is used to manage storage in a Microsoft Cluster Server (MSCS) environment.
When adding or removing disks from a dynamic disk group, the disk group resource (VMDg) faults and the cluster group fails over. The VEA console shows the disk group in a deported state.

The cluster log displays the following message:
ERR   [RES] Volume Manager Disk Group <resourcename>: LDM_RESLooksAlive: *** FAILED for <resourcename>, status =
0, res = 0, dg_state = 35

The system event log contains the following message:
ERROR 1069 (0x0000042d)	clussvc	<systemname>	Cluster resource <resourcename> in Resource Group <groupname> failed.

MSCS uses the Is Alive / Looks Alive polling intervals to check the availability of the storage resources. While disks are being added or removed, SFW holds a lock on the disk group until the operation is complete. However, if the Is Alive / Looks Alive query arrives at a time the disk add/remove operation is in progress, SFW ignores the lock it holds on the disk group and incorrectly communicates the loss of the disk group (where the disks are being added/removed) to MSCS. As a result, MSCS faults the storage resource and initiates a fail over of the group.


Resolution:
The issue is fixed in the SFW component that holds the lock on the disk group. As a result, SFW now responds to the MSCS Is Alive / Looks Alive query only after the disk add/remove operation is complete.


Binary / Version:
cluscmd64.dll / 5.1.20041.87

-------------------------------------------------------+

[39] Hotfix name: Hotfix_5_1_20045_87_2587638

Symptom:
This hotfix addresses the following issues:

Issue 1
If a disk group contains a large number of disks, the disk reservation is delayed, which can result in the defender node losing the disks to the challenger node.

Issue 2
The VVR Primary hangs while initiating a connection request to the Secondary. This is observed when the Secondary returns an error message that the Primary is unable to interpret.

Issue 3
A reboot of an EMC CLARiiON storage processor results in Storage Foundation for Windows dynamic disks being flagged as removed, resulting in I/O failures.

Issue 4
I/O operations hang in the SFW driver, vxio, where VVR is configured for replication.

Issue 5
VVR causes a fatal system error and results in a system crash (bug check).

Note:
Fixes for issues #1, #2, and #3 were released earlier as Hotfix_5_1_20040_87_2570602, and the fix for issue #5 was released earlier as Hotfix_5_1_20043_87_2535885. They are now a part of this hotfix.

Description:
Issue 1
In a split brain scenario, the active node (defender node) and the passive nodes (challenger nodes) try to gain control over the majority of the disks. 

If the disk group contains a large number of disks, then it takes a considerable amount of time for the defender node to reserve the disks.

The delay may result in the defender node losing the reservation to the challenger nodes.

This delay occurs because the SCSI-3 disk reservation algorithm performs the disk reservation operation in a serial order.

Issue 2
The VVR Primary initiates a connection request to the Secondary and waits for an acknowledgement. The Secondary may reply with an ENXIO error signaling that the Secondary Replicated Volume Group's (RVG's) SRL was not found. The Primary is unable to interpret this error message and continues to wait for an acknowledgement from the Secondary.

Any transaction from VOLD is blocked because the RLINK is still waiting for an acknowledgement from the Secondary. This leads to new I/Os piling up in vxio, as a transaction is already in progress.

Issue 3
In a multipath configuration, each system has one or more storage paths from multiple storage processors (SPs). If a storage processor is rebooted, vxio.sys, a Storage Foundation for Windows (SFW) component, may sometimes mark the disks as removed and fail the I/O transactions even though the disks are accessible from another SP. Because the I/O failed, the clustering solution fails over the disks to the passive node.

The following errors are reported in the Windows Event logs: 

INFORMATION Systemname vxio: <Hard Disk> read error at block 5539296 due to disk removal  

INFORMATION Systemname vxio: Disk driver returned error c00000a3 when vxio tried to read block 257352 on <Hard Disk>  

WARNING     Systemname vxio: Disk <Hard Disk> block 9359832 (mountpoint X:): Uncorrectable write error 

Issue 4
I/O operations on a system appear to hang due to the SFW driver, vxio.sys, in an environment where VVR is configured.

When an application performs an I/O operation on volumes configured for replication, VVR writes the I/O operation 
to the SRL log volume and completes the I/O request packet (IRP). The I/O is written asynchronously to the data volumes. 

If the application initiates another I/O operation whose extents overlap with any of the I/O operations that are queued 
to be written to the data volumes, the new I/O is kept pending until the queued I/O requests are complete.

Upon completion, the queued I/O signals the waiting I/O to proceed. However, in certain cases, due to a race condition,
the queued I/O fails to send the proceed signal to the waiting I/O. The waiting I/O therefore remains in the waiting state forever.

Issue 5
VVR may sometimes cause a bug check on a system.

The crash dump file contains the following information:
ATTEMPTED_SWITCH_FROM_DPC (b8)
A wait operation, attach process, or yield was attempted from a DPC routine.
This is an illegal operation and the stack track will lead to the offending code and original DPC routine.

The SFW component, vxio, processes internally generated I/O operations in the disk I/O completion routine itself.
However, the disk I/O completion routine runs at the DPC/dispatch level. Therefore, any function in vxio that requires a context switch does not get processed.
The broken function calls result in vxio causing a system crash.

Resolution:
Issue 1
The SCSI-3 reservation algorithm has been enhanced to address this issue.

The algorithm now tries to reserve the disks in parallel, thus reducing the total time required for SCSI reservation on the defender node.

Issue 2
If the Secondary is unable to find the SRL, it now returns a proper error message that the Primary understands. The Primary sets the SRL header error on the RLINK and pauses it.

Issue 3
The device removal handling logic has been enhanced to address this issue. The updated SFW component vxio.sys includes the fix.

Issue 4
The race condition in the vxio driver has been fixed.

Issue 5
The vxio component is enhanced to ensure that the internally generated I/O operations are not processed as part of the DPC level disk I/O completion routine.


Binary / Version:
vxio.sys / 5.1.20045.87

-------------------------------------------------------+

[40] Hotfix name: Hotfix_5_1_20042_87_2614448

Symptom:
This hotfix addresses an issue where Windows displays the format volume dialog box when you use SFW to create volumes.

Description:
While creating volumes using the New Volume Wizard from the Veritas Enterprise Administrator (VEA), if you choose to assign a drive letter or mount the volume as an NTFS folder, Windows displays the format volume dialog box.

This issue occurs because as part of volume creation, SFW creates raw volumes, assigns mount points and then proceeds with formatting the volumes.
Mounting raw volumes explicitly causes Windows to invoke the format dialog box.

Note that the volume creation is successful and you can cancel the Windows dialog box and access the volume.

Resolution:
SFW now assigns drive letters or mount paths only after the volume format task is completed.


Binary / Version:
vxvm.dll / 5.1.20042.87

-------------------------------------------------------+

[41] Hotfix name: Hotfix_5_1_20044_87_2610786

Symptom:
This hotfix addresses an issue where the disk group import operation fails to complete if one or more disks in the disk group are not readable.

Description:
As part of the disk group import operation, the SFW component, vxconfig, performs a disk scan to update the disk group configuration information.


It acquires a lock on the disks in order to read the disk properties. 
In case one or more disks in the disk group are not readable, the scan operation returns without releasing the lock on the disks. 
This lock blocks the disk group import operation.

Resolution:
This issue has been addressed in the vxconfig component. The disk scan operation now releases the lock on the disks even if one or more disks are not readable.


Binary / Version:
vxconfig.dll / 5.1.20044.87

-------------------------------------------------------+

[42] Hotfix name: Hotfix_5_1_20033_2650336

Symptom:
This hotfix addresses an issue related to the VCS PrintSpool agent where information about printers that were newly added in the virtual server context is lost in case of a system crash or unexpected failures that require a reboot.

Description:
The PrintSpool agent stores printer information to the configuration during its offline function. Therefore if printers are added in the virtual server context but the PrintSpool resource or the PrintShare service group is not gracefully taken offline or failed over, the new printer information does not get stored to the disk.

In such a case, if the node where the service group is online hangs, shuts down due to unexpected failures, or reboots abruptly, all the new printer information is lost.

The PrintShare service group may also fail to come online again on the node.

Resolution:
This issue has been fixed in the PrintSpool agent. A configurable parameter now enables the agent to save the printer information periodically.

A new attribute, 'SaveFrequency', is added to the PrintSpool agent.
SaveFrequency specifies the number of monitor cycles after which the PrintSpool agent explicitly saves the newly added printer information to the cluster configuration. The value 0 indicates that the agent does not explicitly save the information to disk. 
It will continue to save the printer information during its offline function.
The default value is 5.

Binary / Version:
PrintSpool.dll / 5.1.20033.497
PrintSpool.xml / NA

-------------------------------------------------------+

[43] Hotfix name: Hotfix_5_1_20047_87_2635097

Symptom:
This hotfix addresses an issue related to Veritas Volume Replicator (VVR) where replication hangs and the VEA Console becomes unresponsive if the replication is configured over the TCP protocol and VVR compression feature is enabled.

Description:
This issue may occur when replication is configured over TCP and VVR compression is enabled.

VVR stores the incoming write requests from the primary system in a dedicated memory pool, NMCOM, on the secondary system.

When the NMCOM memory is exhausted, VVR keeps trying to process an incoming I/O request until it gets the required memory from the NMCOM pool on the secondary.

Sometimes the NMCOM memory pool may get filled up with out of sequence I/O packets. As a result, the waiting I/O request fails to acquire the memory it needs and goes into an infinite loop.

VVR cannot process the out of sequence packets until the waiting I/O request is executed. The waiting I/O request cannot get the memory as it is occupied by the out of sequence I/O packets. This results in a logjam.

The primary may initiate a disconnect if it fails to receive an acknowledgement from the secondary. However, the waiting I/O request is in an infinite loop, and hence the disconnect also goes into a waiting state.

In such a case, if a transaction is initiated, it will not succeed and will also stall all the new incoming I/O threads, resulting in a server hang.

Resolution:
This issue is fixed in VVR.
The error condition has been resolved by exiting from the loop if an RLINK disconnect is initiated.

Binary / Version:
vxio.sys / 5.1.20047.87
vvr.dll / 5.1.20047.87

-------------------------------------------------------+

[44] Hotfix name: Hotfix_5_1_20046_87_2643293

Symptom:
This hotfix provides Thin Provisioning Reclaim support for Fujitsu ETERNUS DX80 S2/DX90 S2 arrays.

Description:
Added Thin Provisioning Reclaim support for Fujitsu ETERNUS DX80 S2/DX90 S2 array on SFW 5.1 SP2.


Resolution:
Made changes in DDL provider to claim Fujitsu ETERNUS DX80 S2/DX90 S2 array LUN as a Thin Reclaim device.


Binary / Version:
ddlprov.dll / 5.1.20046.87

-------------------------------------------------------+

[45] Hotfix name: Hotfix_5_1_20034_2692139

Symptom: 
This hotfix addresses an issue related to the VCS NIC agent wherein the NIC resource removes the static IP address assigned to the local network adapter and
causes the system to become unreachable on the network.


Description: 
UseConnectionStatus is an optional attribute of the NIC agent that defines whether or not the NIC maintains its connection status. If UseConnectionStatus
is disabled (set to False), then a value must be specified for PingHostList, another optional attribute of the NIC agent.

However, in case UseConnectionStatus is disabled and PingHostList attribute value is either invalid or left blank, then the NIC resource removes the static
IP address assigned to the network adapter. As a result the system becomes unreachable on the network.

The VCS high availability engine log contains the following message:
NIC is faulted (not initiated by VCS)


Resolution: 
The VCS NIC agent has been updated to address this issue.
The agent now does not remove the static IP (admin IP) address assigned to the network adapter, irrespective of the PingHostList attribute value.
If the IP address specified in the PingHostList attribute is not reachable, the agent now faults the resource and the following message is displayed in the NIC agent log:
VCS ERROR V-16-10051-3510 NIC:<NICResourceName>:monitor:UDP check failed


Binary / Version:
nic.dll / 5.1.20034.497

-------------------------------------------------------+

[46] Hotfix name: Hotfix_5_1_20037_2713126

Symptom:
This hotfix addresses the issue where the cluster fails to come online if the PATH variable is longer than 1976 characters.

Description:
The cluster is brought online by the VCS High Availability Daemon (HAD). HAD depends on the Veritas VCSComm Startup service, which fails to start when the PATH variable is longer than 1976 characters. As a result, HAD does not start and thus the cluster fails to come online.

The following errors are reported in the Windows Event logs:

INFORMATION     7036(0x40001b7c)        
Service Control Manager              <System Name>
The Veritas VCSComm Startup  service entered the stopped state.

INFORMATION     7035(0x40001b7b)       
Service Control Manager              <System Name>            
The Veritas VCSComm Startup  service was successfully sent a start control.

Resolution:
This issue has been fixed in this hotfix. 

NOTE:
A reboot may be required if the issue is not resolved after the successful installation of this hotfix.

Binary / Version:
vcscomm.exe / 5.1.20037.629

-------------------------------------------------------+

[47] Hotfix name: Hotfix_5_1_20038_2688194

Symptom:
This hotfix addresses the issue where clustered Microsoft Message Queuing (MSMQ) communication does not work as expected.

Description:
A clustered instance of MSMQ on a node is not able to acknowledge or send messages to another clustered instance of MSMQ on another node. 

Resolution:
This issue has been fixed in this hotfix. 

NOTE:
After installing or uninstalling this hotfix, perform the following steps for the changes to take effect:
1. Take the MSMQ service groups offline.
2. Stop the default MSMQ service.
3. Bring the MSMQ service groups online.

Binary / Version:
msmq.dll / 5.1.20038.504     
schelp.dll / 5.1.20038.504  

-------------------------------------------------------+

[48] Hotfix name: Hotfix_5_1_20041_2722108

Symptom:
This hotfix addresses an issue related to the VCS FileShare agent wherein non-scoped file shares are not accessible using virtual server name or IP address if NetBIOS and WINS, or DNS updates are disabled.

Description:
VCS FileShare and Lanman agents support non-scoped file shares on Windows Server 2008 and 2008 R2 systems if hotfix Hotfix_5_1_20002_2220352 is installed and the DisableServerNameScoping registry key value is set to 1.

The VCS FileShare agent depends on NetBIOS and WINS, or DNS to resolve the virtual name.
If NetBIOS and WINS are disabled or the DNS is not updated, the agent is unable to resolve the virtual name.

This may typically occur when the file share service groups are configured to use localized IP addresses. When the service group is switched or failed over,
the virtual name to IP address mapping changes. In such a case if WINS database and the DNS are not updated, the agent is unable to resolve the virtual name. As
a result the FileShare resources fault and the shares become inaccessible.

The following message is seen in the agent log:
VCS INFO V-16-10051-10530 FileShare:<servicegroupname>:online:Failed to access the network path (\\virtualname) 

Resolution:
The FileShare agent is enhanced to address this issue. A new registry key value determines how the FileShare agent behaves if the virtual name is not resolvable.

The following registry key is created automatically as part of this hotfix installation:
HKLM\SOFTWARE\VERITAS\VCS\BundledAgents\Lanman\__Global__\DisableStrictVirtualNameCheck

If DisableServerNameScoping (file share scoping registry key) and DisableStrictVirtualNameCheck registry key (new registry key introduced with this hotfix) values are set to 1, the agent makes the file shares accessible irrespective of whether or not the virtual name is resolvable. 
In case the virtual name is not resolvable, the file shares are accessible using the virtual IP.

The default value of the DisableStrictVirtualNameCheck registry key is 0.

Note that this agent behavior is applicable only for non-scoped file shares.

See "Setting the DisableStrictVirtualNameCheck registry key" topic in this readme for details on how to modify this registry key value.


Notes:
- The registry key DisableStrictVirtualNameCheck will take effect only if DisableServerNameScoping key value is set to 1.

- This FileShare agent behavior is applicable only for non-scoped file shares.

- This hotfix creates the DisableStrictVirtualNameCheck registry key at the
global level. The agent behavior applies to all the file share service groups in the cluster.
If there are multiple file share service groups that are to be used in the non-scoped mode, and you want to selectively apply this agent behavior to specific file shares, then you have to create this registry key manually for each virtual server and then set its value (a scripted sketch follows these notes).

You must create the DisableStrictVirtualNameCheck key parallel to the file share scoping key:
HKLM\SOFTWARE\VERITAS\VCS\BundledAgents\Lanman\<VirtualName>\DisableServerNameScoping.

Here <VirtualName> should be the virtual computer name assigned to the file share server. 
This is the VirtualName attribute of the Lanman resource in the file share service group.

- In case these registry parameters are configured at a global level and also configured for individual file share virtual servers, the registry settings for individual virtual servers take precedence.

- You must create this key only for Lanman resources that are part of VCS file share service groups. Configuring this key for Lanman resources that are part of
other VCS service groups may result in an unexpected behavior. 
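
The per-virtual-server key described in the notes above can also be created with a short script. The following is a minimal sketch (not part of this hotfix) using Python's standard winreg module; "FSVS01" is a hypothetical Lanman VirtualName, and it is assumed that DisableStrictVirtualNameCheck is a DWORD value under the Lanman\<VirtualName> key, mirroring the __Global__ layout created by this hotfix:

import winreg

VIRTUAL_NAME = "FSVS01"  # hypothetical; use the VirtualName attribute of the Lanman resource
KEY_PATH = "SOFTWARE\\VERITAS\\VCS\\BundledAgents\\Lanman\\" + VIRTUAL_NAME

# Run from an elevated prompt on each cluster node.
# Per the notes above, DisableServerNameScoping must also be set to 1 under
# this key for the setting to take effect.
with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_WRITE) as key:
    winreg.SetValueEx(key, "DisableStrictVirtualNameCheck", 0,
                      winreg.REG_DWORD, 1)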

Binary / Version:
Fileshare.dll / 5.1.20041.506

-------------------------------------------------------+

[49] Hotfix name: Hotfix_5_1_20052_87_2711856 

Symptom:
VVR replication between two sites fails if VxSAS service is configured with a local user account that is a member of the local administrators group.

Description:
While configuring VVR replication between the Primary and Secondary sites in Windows Server 2008, the replication fails with the following error:
Permission denied for executing this command. Please verify the VxSAS service is running in proper account on all hosts in RDS.

This happens if the user used for the VxSAS service is a member of the administrators group and the User Access Control (UAC) is enabled.


Resolution:
The vras.dll file has been modified to resolve the issue; it now checks the users in the administrators group to determine whether the specified user has administrative permissions.


Binary / Version:
vras.dll / 5.1.20052.87

-------------------------------------------------------+

[50] Hotfix name: Hotfix_5_1_20039_2722228

Symptom:

This hotfix addresses the following issues:

Issue 1
This hotfix addresses an authentication issue due to which the VCS Cluster Configuration Wizard (VCW) fails to connect to systems.

Issue 2
This hotfix addresses an issue where after configuring a cluster to use single sign-on authentication (secure cluster), the service group configuration wizards and the Cluster Manager (Java Console) are unable to connect to cluster nodes remotely. 

Note: 
Fix for issue #1 was earlier released as Hotfix_5_1_20026_2495109. This fix is now a part of this hotfix.

Description:
Issue 1
While configuring a cluster, VCW fails to discover the systems due to an access-denied error.

The following error is logged:
00005708-00000112: WMI Connection failed. Node='nodename', Error=80070005.
00005708-00000051: CNode::Init failed. Node='nodename', Error=80070005. 
00005708-00000072: OpenNamespace for <domain> namespace returned 0x80070005

Issue 2
The following issues occur after the VCS Cluster Configuration Wizard (VCW) is used to configure a cluster to use single sign-on authentication (secure cluster):
- The VCS service group configuration wizards fail to connect to the remote cluster nodes with the following error:
  CmdServer service not running on system: <nodename>

This error is displayed even though the VCS Command Server service is running on all the nodes.

- The Cluster Manager (Java Console) is able to connect to a cluster locally, but fails to connect to the cluster remotely.

This happens because VCW fails to explicitly configure the credentials used by HAD and CMDSERVER while configuring the cluster. VCW also fails to clear the
older single sign-on authentication credentials if the cluster is being reconfigured.

Resolution:
Issue 1
VCW uses WMI to connect to the systems and this error occurred due to a DCOM security issue.
This issue is addressed in this hotfix.

Issue 2
The following enhancements have been made to VCW to address the issues:
- VCW by default now deletes older/stale authentication credentials while reconfiguring secure clusters.
- VCW now performs all the authentication related steps as part of the reconfigure cluster option. VCW by default reconfigures the security credentials used by the HAD and the CMDSERVER services.
- VCW now includes additional logging capabilities for tracking all the AT commands run while configuring single sign-on clusters (secure clusters). VCW logs details about each authentication configuration related command (Setuptrust, DeleteCred, CreatePD, CreatePRPL, Authenticate) it runs on every cluster node.

Binary / Version:
VCWDlgs.dll / 5.1.20039.507 

-------------------------------------------------------+

[51] Hotfix name: Hotfix_5_1_20002_2327428

Symptom: 
This hotfix addresses an issue with the product installer component that causes a failure in the SFW Thin Provisioning space reclamation.

Description:
This issue occurs after you add or remove SFW features or repair the product installation from Windows Add/Remove Programs.

The SFW Thin Provisioning space reclamation begins to fail after rebooting the systems. This issue occurs because the product installation component erroneously modifies a vxio service registry key.

Resolution:
This issue has been fixed in the product installation component.
The updated component no longer modifies the registry key.


Note:

The hotfix installation steps vary depending on the following cases:
- Product installed but issue did not occur
- Product installed and issue occurred

Perform the steps depending on the case.

Case A: Product installed but issue did not occur
This case applies if you have installed SFW or SFW HA in your environment but have not yet encountered this issue. This could be because you have not added SFW features or run a product repair from Windows Add/Remove Programs.

Perform the following steps:
1. Install this hotfix on all systems. See "To install the hotfix using the GUI" or "To install the hotfix using the command line" section in this readme.

2. After replacing the file on all the systems, you can add or remove SFW features or run a product repair on the systems.


Case B: Product installed and issue occurred
This case applies if you have installed the product and the issue has occurred on the systems.

Perform the following steps:
1. Install this hotfix on all systems. See "To install the hotfix using the GUI" or "To install the hotfix using the command line" section in this readme. 

2. After installing the hotfix, perform one of the following:
   - From the Windows Add/Remove Programs, either add or remove an SFW feature or run a product Repair on the system. 
   - Set the registry value HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\vxio\Tag to 8 in decimal (a scripted sketch follows these steps).

3. Select the desired options on the product installer and complete the workflow.
Reboot the system when prompted.

4. Repeat these steps on all the systems where this issue has occurred.
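
The registry alternative in Case B, step 2 can also be applied with a short script. The following is a minimal sketch (not part of this hotfix) using Python's standard winreg module; the key path and the decimal value 8 are taken from the steps above:

import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Services\vxio"

# Run from an elevated prompt on each affected system, then continue with
# step 3 above.
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                    winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "Tag", 0, winreg.REG_DWORD, 8)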


Binary / Version:
VM.dll / 5.1.20002.267  

-------------------------------------------------------+

[52] Hotfix name: Hotfix_5_1_20056_87_2738430

Symptom:

This hotfix addresses the following issues:

Issue 1
An active cluster dynamic disk group faults after a clear SCSI reservation operation is performed.

Issue 2
VVR replication never resumes when the Replicator Log is full and DCM gets activated for the Secondary host resynchronization.

Issue 3
Snapback causes too many blocks to be resynchronized for the snapback volume.

Note:
Fixes for issues #1 and #2 were released earlier as Hotfix_5_1_20055_87_2740872. They are now a part of this hotfix.

Description:
Issue 1
This error occurs when a clear SCSI reservation operation, such as bus reset or SCSI-3 clear reservation, is performed on an 
active cluster dynamic disk group. During the operation, the cluster dynamic disk group faults.

An error message similar to the following is logged in the Event Viewer:
Cluster or private disk group has lost access to a majority of its disks. Its reservation thread has been stopped.

Issue 2
This issue occurs while performing a VVR replication. During replication, if the Storage Replicator Log (SRL) becomes full, then Data Change Map (DCM) gets activated for the Secondary host resynchronization. Because the resynchronization is a time-consuming process, VVR stops sending blocks and the replication never resumes even though it is active and shows the status as "connected autosync resync_started".

Issue 3
During a snapback operation, SFW determines the changed blocks to be synced using a per-plex bitmap and a global accumulator.
It updates the accumulator with the per-plex bitmap for the changed blocks and syncs all the changed blocks from the original volume to the snapback volume.

If another volume is then snapped back, SFW again updates the accumulator with that volume's per-plex bitmap.
However, the accumulator still holds the older entries from the previous snapback operation, so extra blocks are copied to the new snapback volume.

For example, consider the following scenario:
1. Create two snapshots of volume F: (G: and H:)
2. Make G: writable and copy files to G:
3. Snap back G: using data from F:
4. Snap back H: using data from F:

Step 4 should have nothing to resynchronize because there is no change to F: and H: above.
However, the vxio trace shows that the number of dirty regions to be synchronized is the same as in step 3.
This is because the global accumulator is updated at step 3, which causes extra blocks to be resynced.

Resolution:
Issue 1
To resolve this issue, SFW retries the SCSI reservation operation.

Issue 2
The vxio.sys file has been modified to resolve this issue.

Issue 3
During the snapback operation, the per-plex map is now updated to correctly reflect the changed blocks to be synced.

Binary / Version:
vxio.sys / 5.1.20055.87
vxconfig.dll / 5.1.20036.87

-------------------------------------------------------+

[53] Hotfix name: Hotfix_5_1_20042_2770008

Symptom:
This hotfix addresses an issue related to the VCS Cluster Manager (Java Console) where a service group switch or failover operation to the secondary site fails due to a user privilege error. 

Description:
This issue occurs in secure clusters set up in a disaster recovery (DR) environment. When you switch or failover a global service group to the
remote site, the operation fails with the following error:
Error when trying to failover GCO Service Group. V-16-1-50824 At least Group Operator privilege required on remote cluster.

This error occurs even if the user logged on to the Java Console is a local administrator, which by default has the Cluster Administrator privilege in the
local cluster.

This issue occurs because the local administrator is not explicitly added to the cluster admin group at the remote site. During a switch or a failover, the Java Console is unable to determine whether the logged-on user at the local cluster has any privileges on the remote cluster and hence fails the operation.

If you use the Java Console to grant the local administrator with operator or administrator privileges to the remote cluster, the Java Console only assigns
guest privileges to that user. 

Resolution: 
This issue is fixed in the Java Console. The Java Console now allows you to assign the local administrator at the local cluster with cluster admin or cluster operator privileges in the remote cluster.
This change is applicable only on Windows.

Note: 
This hotfix is applicable to server components only. A separate hotfix is available for client components. Contact Symantec Technical Support for more details.

Binary / Version:
VCSGui.jar / NA

-------------------------------------------------------+

[54] Hotfix name: Hotfix_5_1_20058_88_2766206 

Symptom:
Issue 1
Some VVR operations fail if the RVG contains a large number of volumes.

Issue 2
Storage Agent crashes and causes SFW operations to stop responding in a 
VVR configuration.

Issue 3
Some VVR operations fail if VxSAS is configured with a domain user account.

Description:
Issue 1
This issue occurs while performing certain VVR operations, such as creating or deleting a Secondary, in a Replicated Volume Group (RVG) with a large number of volumes (more than 16 volumes). In such cases, if the combined length of all volume names under the RVG is greater than 512 bytes, then some VVR operations fail.

Issue 2
On a Veritas Storage Foundation for Windows (SFW) computer configured with Veritas Volume Replicator (VVR), the Storage Agent (vxvm service) crashes and causes the SFW operations to stop responding. This happens because of memory corruption in Microsoft's Standard Template Library (STL), which is used by code in SFW.
For more information on STL memory corruption, see http://support.microsoft.com/kb/813810.



The following messages are logged in the Event Viewer:

The Service Control Manager tried to take a corrective action (Restart the 
service) after the unexpected termination of the Veritas Storage Agent service, 
but this action failed with the following error: %%1056
- The Veritas Storage Agent service terminated unexpectedly.  It has done this 1 
time(s).  

The following corrective action will be taken in 60000 milliseconds:

Restart the service. 

Issue 3
This issue occurs while performing certain VVR operations, such as adding a Secondary 
host to an already configured VVR RDS. If Veritas Volume Replicator Security Service (VxSAS) is 
configured with a domain user account, then some VVR operations fail because VVR cannot 
authenticate the user credentials.

Resolution:
Issue 1
This issue has been resolved by modifying the code to handle a larger number of volumes (up to 215 volumes).

Issue 2
To resolve this issue, the SFW code has been modified to not use STL in frequently-exercised VVR modules.

Issue 3
This issue has been resolved by using the fully qualified domain name for authenticating the user credentials.

Binary / Version:
vras.dll / 5.1.20058.88
vvr.dll / 5.1.20058.88

-------------------------------------------------------+

[55] Hotfix name: HotFix_5_1_20043_2817466  

Symptom:
This hotfix addresses an issue where the Microsoft Windows Driver Verifier tool detects a memory violation in the VCS GAB module and causes a system crash. 

Description:
When the GAB membership changes due to LLT network link transitions, the internal GAB data structure is released as it is no longer needed. However, the GAB driver tries to access a pointer that is already freed. This can occur in a different thread while another thread retains the stale pointer and tries to access the memory that was released. The problem is compounded in multi-CPU, multi-threaded systems. The special pool checking option of the Microsoft Windows Driver Verifier tool detects this memory violation issue and causes a system crash.

Resolution: 
The GAB driver has been updated to prevent the release of memory that is still in use. 

Binary / Version: 
gab.sys / 5.1.20043.642

-------------------------------------------------------+

[56] Hotfix name: Hotfix_5_1_20059_87_2834385b

Symptom:

This hotfix addresses the following issues:

Issue 1
Performing a disk group import/deport operation in a cluster environment fails due to the VxVDS refresh operation.

Issue 2
This hotfix blocks Thin Provisioning Reclaim support for a snapshot volume and for a volume that has one or more snapshots.

Issue 3
This hotfix provides Thin Provisioning Reclaim support for Huawei S5600T arrays.

Issue 4
The VDS Dynamic software provider causes an error in VDS.

Issue 5
The Dynamic Disk Group Split and Join operations take a long time to complete.

Note:
Fixes for issues #1 and #2 were released earlier as Hotfix_5_1_20034_87_2400260. Fix for issue #3 was released as Hotfix_5_1_20051_87_2683797. These fixes are now a part of this hotfix.

Description:
Issue 1
VxVDS refresh operation interferes with disk group import/deport operation resulting in timeout and delay. This happens as both refresh and import/deport disk group processes try to read disk information at the same time.

Issue 2
It is observed that if a volume or its snapshot is reclaimed, then performing the snapback operation on such a volume causes data corruption.

Issue 3
Added Thin Provisioning Reclaim support for Huawei S5600T array on SFW 5.1 SP2.

Issue 4
This issue occurs while performing the DG DEPORT operation. This happens because the DG DEPORT alerts were not handled successfully by VxVDS. The following error message is displayed: Unexpected provider failure. Restarting the service may fix the problem.

Issue 5
The hotfix addresses an issue where the VDS Refresh operation overlaps with the Dynamic Disk Group Split and Join (DGSJ) operations, which causes a latency. This issue is observed after upgrading SFW.

Resolution:
Issue 1
The refresh operation is now aborted if a disk group import/deport operation is in progress. Instead of doing a full refresh on all the disk groups, the refresh operation is now performed only on the disk groups that are being imported/deported.

Issue 2
Blocked the Thin Provisioning Reclaim operation on a snapshot volume and on a volume that has one or more snapshots.

Issue 3
Made changes in DDL provider to claim Huawei S5600T array LUN as a Thin Reclaim device.

Issue 4
The hotfix fixes the DG DEPORT component, which now handles all alerts successfully.

Issue 5
This hotfix fixes the binaries that cause the latency.


Binary / Version: 
vxvds.exe / 5.1.20059.87 
vxvm.dll / 5.1.20059.87 
vxvm_msgs.dll / 5.1.20059.87

-------------------------------------------------------+

[57] Hotfix name: Hotfix_5_1_20060_87_2851054

Symptom:
Memory leak in Veritas Storage Agent service (vxpal.exe)

Description:
This hotfix addresses a memory leak issue in Veritas Storage Agent service (vxpal.exe) when the mount point information is requested from either the MSCS or VCS cluster.

Resolution: 
This hotfix fixes a binary that causes a memory leak in the Veritas Storage Agent service (vxpal.exe). 

Binary / Version:
mount.dll / 5.1.20060.87

-------------------------------------------------------+

[58] Hotfix name: Hotfix_5_1_20046_2822519

Symptom:
This hotfix addresses the following issues identified with the VCS High Availability Engine (HAD) module:

Issue 1: 
The VCS High Availability Engine (HAD) fails to acknowledge resource state changes. As a result, service group resources do not go offline or fail over and remain in the waiting state forever. 
(2239785, 2392336)

Issue 2:
In an online global soft group dependency, if more than one parent service group is ONLINE in a cluster, then you cannot switch a child service group from one node to another.
(2239785, 2392336)

Issue 3:
The VCS High Availability Engine (HAD) rejects client connection requests after renewing its authentication credentials as per the CredRenewFrequency cluster attribute.
(2401163)

Issue 4:
The HA commands spike CPU usage to almost 100% for about 10 seconds or more. Also there is a delay in getting the results from the HA commands.

Issue 5:
The VCS engine log is flooded with information messages in case of network congestion.

Issue 6:
Parent service group in a dependency fails to come online after a node failure.

Note: 
- Fixes for issues #1, #2, #3, #4, and #5 were released earlier as Hotfix_5_1_20036_2695917. They are now replaced by this hotfix.

Description:
Issue 1:
The VCS engine (HAD) authenticates resource state changes and sends an acknowledgement in the form of an internal message.
In some cases, when a service group or a resource is taken offline or switched or failed over to another cluster node, it may happen that HAD fails to send the acknowledgement message for the resource state change. 
The affected resource continues to wait for the acknowledgement from HAD and does not proceed with the state change. As a result, the offline, switch, or failover operation does not succeed.

Issue 2:
When you run the 'hagrp -switch' command, the VCS engine checks the state of a parent service group before switching a child service group. If more than one parent service group is ONLINE, the VCS engine rejects the command. 
The following error message is displayed: 
hagrp -switch <childservicegroup> -to <clusternodename>
VCS WARNING V-16-1-10207 Group dependencies are violated for group <childservicegroup> on system <clusternodename>

Issue 3:
The CredRenewFrequency is a cluster attribute that determines the number of days after which VCS engine should renew credentials. By default this is set to 0. When the value is set to a non-zero value, the credential is renewed periodically. After this credential is renewed, all HA clients (command line, Java GUI, etc.) fail to connect to VCS engine securely. This situation is corrected only after the VCS engine is restarted.

Issue 4:
The HA commands spike CPU usage to almost 100% for about 10 seconds or more. Also there is a delay in getting the results from the HA commands.

Issue 5:
MOMUtil.exe is a VCSAPI client. It uses VCS APIs to get information from VCS and present that to SCOM Server. 
MOMUtil.exe is invoked every 5 minutes as part of VCS monitoring. It connects to the VCS cluster running on the system and tries to fetch the information to display the current status of service groups. In case of network congestion, the engine is unable to send the data to the client immediately. In this scenario, the following information message is logged every 5 minutes:

Could not send data to peer MOMUtil.exe at this point; received error 10035 (Unknown error), will resend again

The engine successfully sends data to the client (MOMUtil.exe) after some time. By this time, the engine log is flooded with the information messages.

Issue 6:
When a node fails, VCS fails over service groups to another node. During this, VCS brings the child service group online before attempting to bring the parent service group online. When the failed node joins the cluster, VCS probes all the resources on the node. If the resources in the parent service group are probed before the VCS initiates to bring it online, then VCS resets the "Start" attribute of the resources. Because of this, VCS fails to bring the resources and the parent service group online.

Resolution:
The VCS engine has been modified as follows:

Issue 1:
The communication logic in the VCS High Availability Engine (HAD) module has been enhanced to address this issue.

Issue 2:
The VCS engine now proceeds with the switch operation for a child service group even if the parent service groups are ONLINE. 

Issue 3:
The fix ensures that the HA clients can seamlessly connect to VCS engine after the credential renewals.

Issue 4:
This fix addresses the issue of high CPU usage by VCS High Availability Engine (HAD).

Issue 5:
The engine has been modified to print the information messages as debug messages. 
So the messages will appear in the engine log only if DBG_IPM debug level is set.
If you need to log the information messages in the engine log, run the following command to set the debug level DBG_IPM:
halog -addtags DBG_IPM

Issue 6:
This issue has been resolved by modifying VCS so that it does not reset the "Start" attribute of a resource if the service group is in the process of failing over. 

Binary / Version:
Had.exe / 5.1.20046.515 
Hacf.exe / 5.1.20046.515

-------------------------------------------------------+

[59] Hotfix name: Hotfix_5_1_20048_2898414

Symptom:
This hotfix addresses an issue where the MSMQ resource failed to bind to the correct port.

Description:
When the MSMQ agent was brought online, the Event Viewer reported an error stating that Message Queuing failed to bind to port 1801. This error could occur due to various reasons. Even though the binding failed, the agent reported the MSMQ resource as Online.

Resolution: 
The MSMQ agent has been enhanced to verify that the clustered MSMQ service is bound to the correct virtual IP and port. By default, the agent performs this check only once, during the Online operation. If the clustered MSMQ service is not bound to the correct virtual IP and port, the agent stops the service and the resource faults. You can configure the number of times this check is performed. To do so, create the DWORD tunable parameter 'VirtualIPPortCheckRetryCount' under the registry key 'HKLM\SOFTWARE\VERITAS\VCS\BundledAgents\MSMQ'. If this parameter is set to a value greater than 1, the agent starts the clustered MSMQ service again and verifies its virtual IP and port binding that many times, waiting 2 seconds between each verification attempt. If the clustered MSMQ service is bound to the correct virtual IP and port, the agent reports Online. 
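
For example, to allow three verification attempts, the tunable can be created with reg.exe; the value 3 below is only illustrative:
reg add "HKLM\SOFTWARE\VERITAS\VCS\BundledAgents\MSMQ" /v VirtualIPPortCheckRetryCount /t REG_DWORD /d 3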

Binary / Version:
MSMQ.dll / 5.1.20048.518 

-------------------------------------------------------+

[60] Hotfix name: Hotfix_5_1_20053_2192886

Symptom: 
Issue 1
This hotfix addresses an issue wherein VCS is unable to update BIND DNS servers. 

Issue 2
This hotfix addresses an issue where an application resolved a virtual name incorrectly due to DNS lag.

Issue 3
A Lanman resource faults with the error code: [2, 0x0000006F]

Note: 
Fix for issue #1 was released earlier as Hotfix_5_1_20023_2437434. 
Fix for issue #2 was released earlier as Hotfix_5_1_20049_2898414.
They are now replaced by this hotfix.

Description:
Issue 1
In a master-slave BIND DNS server configuration, if you configure the Lanman agent to update the slave BIND DNS server, the Lanman agent fails to do so because BIND DNS servers do not allow direct updates on slave servers. In this case, no error message is displayed and the Lanman resource does not fault, even if DNSCriticalForOnline is set to TRUE.

Issue 2
This issue occurred due to a delay in resolving the DNS name. For example, MSMQ resolved a virtual name incorrectly. Therefore, when the MSMQ agent was brought online, the Event Viewer reported an error stating that Message Queuing failed to bind to the appropriate port.

Issue 3
This occurs due to an internal issue with the Lanman agent.


Resolution:
Issue 1
The Lanman agent's behavior is enhanced to successfully update the slave BIND DNS server. The agent will now query the slave BIND DNS server to discover the corresponding master BIND DNS server and update the master BIND DNS server. 

Issue 2
The Lanman agent has been enhanced to verify the DNS lookup and flush the DNS resolver cache after bringing the Lanman resource online. You need to create the DWORD tunable parameter 'DNSLookupRetryCount' under the registry key 'HKLM\SOFTWARE\VERITAS\VCS\BundledAgents\Lanman'. If this parameter is set, the Lanman agent verifies the DNS lookup the number of times specified by the parameter value, waiting 5 seconds between each verification attempt. You can also create the DWORD tunable parameter 'SkipDNSCheckFailure' under the same Lanman registry key. The default value of 'SkipDNSCheckFailure' is 0 (zero), which indicates that the resource faults if the DNS lookup fails. If this parameter is set to 1, the resource does not fault even if the DNS lookup fails.
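
For example, both tunables can be created with reg.exe; the values 5 and 1 below are only illustrative:
reg add "HKLM\SOFTWARE\VERITAS\VCS\BundledAgents\Lanman" /v DNSLookupRetryCount /t REG_DWORD /d 5
reg add "HKLM\SOFTWARE\VERITAS\VCS\BundledAgents\Lanman" /v SkipDNSCheckFailure /t REG_DWORD /d 1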

Issue 3
The Lanman agent has been updated so that the Lanman resource does not fault due to this issue.

Binary / Version:
Lanman.dll / 5.1.20053.525 

-------------------------------------------------------+

[61] Hotfix name: Hotfix_5_1_20062_87_2864040 

Symptom:
The vxprint CLI crashes when used with the '-l' option

Description:
This hotfix addresses an issue where the vxprint CLI crashes when used with the '-l' option. The issue occurs when the read policy on a mirrored volume is set to a preferred plex.

Resolution: 
This hotfix updates the binary that caused the vxprint CLI to crash.

Binary / Version:
vxprint.exe / 5.1.20062.87 

-------------------------------------------------------+

[62] Hotfix name: Hotfix_5_1_20064_87_2905123  

Symptom:
Unable to create volumes; VEA is very slow for refresh and rescan operations

Description:
In SFW, this issue occurs while creating a volume or performing refresh or rescan operations from VEA. If the system has several disks with OEM partitions, volume creation fails and VEA takes a very long time to complete the refresh and rescan operations. Because the FtDisk provider locks the database for a long time while processing OEM partitions, lock contention occurs in other operations and providers, which eventually makes all the operations slow.

Resolution: 
This issue has been resolved by optimizing the way the FtDisk provider acquires and releases the lock.

Binary / Version:
ftdisk.dll / 5.1.20064.87

-------------------------------------------------------+

[63] Hotfix name: Hotfix_5_1_20066_87_2914038

Symptom:
Storage Agent crashes on startup

Description:
This hotfix addresses an issue where the Storage Agent crashes during startup because of an unhandled exception while accessing an invalid pointer.

Resolution: 
This issue has been resolved by handling the exception in the code.

Binary / Version:
vdsprov.dll / 5.1.20066.87

-------------------------------------------------------+

[64] Hotfix name: Hotfix_5_1_20067_87_2911830

Symptom:
Unable to create a volume with more than 256 disks

Description:
This issue occurs while creating a volume with more than 256 disks. It happens because the license provider check limits the maximum number of disks allowed per volume to 256, regardless of the volume layout. Note that this limit is imposed to control the maximum number of records (such as plexes, subdisks, disks, volumes, and RVGs) that get stored in the private region per disk group (the maximum is 2922 records).

Resolution: 
This issue has been resolved by increasing the maximum number of disks allowed per volume to 512. The new disks-per-volume limit is valid as long as the maximum number of records per private region does not exceed its limit.

Binary / Version:
sysprov.dll / 5.1.20067.87

-------------------------------------------------------+

[65] Hotfix name: Hotfix_5_1_20069_88_2928801

Symptom:
The vxtune rlink_rdbklimit command does not work as expected

Description:
This issue occurs when using the vxtune rlink_rdbklimit command to set a value for the RLINK_READBACK_LIMIT tunable. The command fails because vxtune.exe incorrectly stores an invalid value for RLINK_READBACK_LIMIT instead of the one provided by the user. This happens because the value is internally converted into kilobytes instead of bytes.

Resolution: 
This issue has been resolved by correcting the code that does the kilobyte to byte conversion.

Binary / Version:
vxtune.exe / 5.1.20069.88

-------------------------------------------------------+

[66] Hotfix name: Hotfix_3_3_1071_2860593 

Symptom:
The vxprint command may fail when it is run multiple times

Description:
The CORBA clients, such as vxprint, use a CSF (Common Services Framework) API called CsfRegisterEvent to register for events with the VxSVC service. This issue occurs when you run the vxprint command multiple times and the CsfRegisterEvent API fails. However, the issue is intermittent and may not always happen. The vxprint command fails with the following error: V-107-58644-930

Resolution: 
This issue has been resolved by retrying the underlying function that the CsfRegisterEvent API calls in csfsupport3.dll.

Binary / Version:
csfsupport3.dll / 3.3.1071.0

-------------------------------------------------------+

[67] Hotfix name: Hotfix_5_1_20068_87_2913240

Symptom:
This hotfix addresses the following issues:
Issue 1
The MountV resource faults because SFW removes a volume due to a delayed device removal request.

Issue 2
Expanding a volume using the "mirror across disks by Enclosure" option assigns disks to the wrong plexes.

Issue 3
The basic quorum resource (physical disk resource) faults while the Disk Group resource tries to get the DgID.

Description:
Issue 1
This issue occurs when SFW removes a volume in response to a delayed device removal request. Because of this, the VCS MountV resource faults.

Issue 2
This issue occurs when expanding a volume using the "mirror across disks by Enclosure" option. During this, the Expand Volume command assigns disks to the wrong plexes, splitting the plexes across enclosures. This happens if the arrays have long and similar names except for the last few numbers; for example, EMC000292602920 and EMC000292602853. The function to compare names does not handle such names in the strings correctly and, therefore, treats different arrays as the same.

Issue 3
When the Volume Manager Disk Group resource tries to get the dynamic disk group ID (DgID) information, SFW clears the reservation of all the disks, including those that are part of Microsoft Cluster Server (MSCS). However, Microsoft Failover Cluster (MSFC) disk resources do not try to re-reserve the disks and, therefore, they fault.

Resolution:
Issue 1
This issue has been resolved by not disabling the mount manager interface instance if it is active when the device removal request arrives.

Issue 2
This issue has been resolved by modifying the name comparison function. 

Issue 3
This issue has been resolved. Now, while clearing the disk reservations, SFW skips the offline and basic disks.

Binary / Version:
vxio.sys / 5.1.20068.87 
vxconfig.dll / 5.1.20068.87

-------------------------------------------------------+

[68] Hotfix name: Hotfix_5_1_20050_3053006 

Symptom:
The VCS SRDF agent resource fails to come online after a remote failover if the resource is configured to monitor an SRDF composite group.

Description:
This issue occurs if a VCS SRDF resource is configured to monitor an SRDF composite group and a remote failover occurs from the primary site array to the secondary site array. The VCS SRDF agent fails over the device groups that are configured for dynamic swap using the following command:
symrdf failover
To execute this command, the VCS SRDF agent provides the path of the device file to the command. The VCS SRDF agent was providing an incorrect device file path to the symrdf failover command. As a result, the command fails and the resource faults.

Resolution: 
The issue has been fixed in the VCS SRDF agent, which now retrieves and provides the correct path of the device file to the symrdf command.

Binary / Version:
SRDFAgent.pm / NA

-------------------------------------------------------+

[69] Hotfix name: Hotfix_5_1_20051_3054619 

Symptom:
This issue occurs on 64-bit systems, where Windows Server Backup fails to perform a system state backup with the following error: Error in backup of C:\program files\veritas\\cluster server\ during enumerate: Error [0x8007007b] The filename, directory name, or volume label syntax is incorrect.

Description:
The backup operation fails because the VCS service image path contains an extra backslash (\) character, which Windows Server Backup is unable to process.

Resolution: 
This issue has been fixed by removing the extra backslash character from the VCS service image path. This hotfix changes the following registry keys:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Had\ImagePath
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\HADHelper\ImagePath
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Cmdserver\ImagePath
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\VCSComm\ImagePath
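
To confirm the corrected image path after applying the hotfix, each of these values can be inspected with reg.exe, for example:
reg query HKLM\SYSTEM\CurrentControlSet\Services\Had /v ImagePath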

Note: This hotfix is applicable to Windows Server 2008 and Windows Server 2008 R2 only.

Binary / Version:
NA / NA

-------------------------------------------------------+

[70] Hotfix name: Hotfix_5_1_20070_87_2940962   

Symptom:
Provisioned size of disks is reflected incorrectly in striped volumes after thin provisioning reclamation

Description:
This hotfix addresses an issue where thin provisioning (TP) reclamation in striped volumes shows incorrect provisioned size for disks. This issue is observed on Hitachi and HP arrays.

Resolution: 
This issue has been fixed by increasing the map size for striped volumes.

Binary / Version:
vxconfig.dll / 6.0.00018.362

-------------------------------------------------------+

[71] Hotfix name: Hotfix_5_1_20071_87_2963812    

Symptom:
The VMDg resource in MSCS times out and faults because the disk group offline operation takes a long time to complete

Description:
In a Microsoft Cluster Server (MSCS) configuration, this issue occurs while bringing a disk group offline. During this operation, the lock volume step takes a long time to complete, which in turn delays the disk group offline operation. Because of this, the Volume Manager Disk Group (VMDg) resource times out and eventually faults.

Resolution: 
To resolve this issue, as per Microsoft's recommendation, the volume offline handling has been changed to skip the lock volume operation while bringing the cluster disk group offline.
Earlier, the following steps were used to take the data volume offline:
1. Open a volume handle.
2. Take a lock on the volume (FSCTL_LOCK_VOLUME).
3. Flush and dismount the volume (FSCTL_DISMOUNT_VOLUME).
4. Release the volume handle (this also unlocks the volume).
Now, the following steps are used in a clustered environment:
1. Open a volume handle.
2. Flush and dismount the volume (FSCTL_DISMOUNT_VOLUME).
3. Release the volume handle.

Binary / Version:
vxconfig.dll / 5.1.20071.87

-------------------------------------------------------+

[72] Hotfix name: Hotfix_5_1_20072_87_2975132   

Symptom:
The Primary node hangs if TCP and compression are enabled

Description:
During replication, this issue occurs if TCP and compression of data are enabled and resources are low on the Secondary node. Because of the low resources, decompression of data on the Secondary fails repeatedly, causing the TCP buffer to fill up. In such a case, if network I/Os are performed on the Primary and a transaction is initiated, the Primary node hangs.

Resolution: 
This issue has been resolved; VVR now disconnects the RLINK if decompression fails repeatedly.

Binary / Version:
vxio.sys / 5.1.20072.87

-------------------------------------------------------+

[73] Hotfix name: Hotfix_5_1_20073_88_3081465

Symptom:
In the FoC GUI, the VMDg resource is slow to change its status from Online Pending to Online after the resource comes online

Description:
In the Microsoft Failover Cluster (FoC) GUI, this issue occurs when, after a Volume Manager Disk Group (VMDg) resource is 
brought online, the resource takes some time to change the status from Online Pending to Online.
After the VMDg resource comes online, it receives the CLUSCTL_RESOURCE_STORAGE_GET_MOUNTPOINTS control code for each volume in it. 
Because the resource takes time to process this control code, there is a delay in changing the resource status in the FoC GUI.

Resolution:
This hotfix reduces the delay in status change by improving the processing of the CLUSCTL_RESOURCE_STORAGE_GET_MOUNTPOINTS 
control code.

Binary / Version:
cluscmd.dll / 5.1.20073.88
cluscmd64.dll / 5.1.20073.88
vxres.dll / 5.1.20073.88
vxbridge.exe / 5.1.20073.88

-------------------------------------------------------+

[74] Hotfix name: Hotfix_5_1_20052_3187218

Symptom:

This hotfix addresses the following issues:

Issue 1
This hotfix addresses an issue related to the VCS MountV agent where the MountV resources fail to come online in a Global Cluster Option (GCO) setup that uses VVR for replication.

Issue 2
This hotfix addresses the issue where the mount point folders are accessible after MountV takes the volumes offline.

Issue 3
This hotfix addresses an issue where RegRep resources periodically fail to come online. This issue is also observed with resources that are configured on MountV resources.

Issue 4
This hotfix addresses an issue where a FileShare resource failed to come online during a service group Online operation; the volume is not accessible even though the MountV resource is online.

Issue 5
After a service group is taken offline due to a VMDg fault, a local drive folder mount becomes inaccessible

Note 1:
Fixes for issues #1, #2, and #3 were released earlier as Hotfix_5_1_20044_2790100.
Fix for issue #1 on Windows Server 2003 was released earlier as Hotfix_5_1_20029_2536009.
Fix for issue #4 was released earlier as Hotfix_5_1_20047_2858362.
All these hotfixes are now part of this hotfix.


Description:

Issue 1
This hotfix depends on the hotfixes Hotfix_5_1_20036_87_2536009 and Hotfix_5_1_20056_87_2738430.

This issue occurs in a disaster recovery configuration set up using the VCS GCO option where VVR is used for replication.

If a global service group that contains multiple volumes is switched to another node either at the local site or to the remote site, some of the MountV resources in the service group fail to come online and may eventually fault.

The MountV agent log contains the following error: Filesystem at <driveletterormountpath> is not clean [2:87]

In case of a GCO setup that uses VVR, the cluster configuration is such that the application service group depends on the VVR replication service group.

The MountV agent's offline function tries to remove the mount point and that causes a conflict with replication.
This affects the subsequent MountV resource online operation.

Issue 2
The MountV agent can mount a volume as an NTFS folder. If this volume is unmounted, the folder remains accessible for writes by users and processes. This creates an open handle that can cause the MountV online operation to fail on the failover target node.

Issue 3
The RegRep resource performs file system operations (for example, createdir, write-to-file) on the mount point on which it is configured. During an Online operation of the service group, even though the MountV resource reports ONLINE, the RegRep resource fails to perform the file system operations with a "Device Not Ready" error. Analysis indicates that the file system was not fully usable even after a successful mount operation. 

Issue 4
The mount point that the MountV resource attempted to bring online was already present on the node. Therefore, the initial validation phase of MountV returned an error. However, the stale mount point remained on the node, which caused the subsequent monitor cycle to report the resource as online. Thus, the resource appeared online even though the mount point was not in a ReadWrite state, because the Online operation had exited prematurely. As a result, the dependent FileShare resource also failed to come online.

Issue 5
This issue occurs because the folder mount is incorrectly unmounted. When a volume device is lost, stale mount points are left behind on the faulted node as the mount paths are not cleaned up.

Resolution:

Issue 1
This issue has been fixed in the VCS MountV agent.
The MountV agent's offline function has been modified to handle the presence of the replication service group.

Issue 2
This hotfix adds a new attribute, BlockMountPointAccess, which defines whether the agent blocks access to the NTFS folder that is used as a folder mount point after the mount point is unmounted.

For example, if C:\temp is used as a folder mount for a volume, then after the mount point is unmounted, the agent blocks access to the folder C:\temp.

The value true indicates that the folder is not accessible. The default value false indicates that the folder is accessible.

Note: This hotfix is applicable only to non-VVR environments. If VVR is configured, then the folder will be accessible irrespective of the value of the BlockMountPointAccess attribute. 
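
As an illustration only, assuming BlockMountPointAccess is set on an individual MountV resource (the resource name MountV_Data below is hypothetical, and 1 is used for true), the attribute could be enabled from the command line as follows:
haconf -makerw
hares -modify MountV_Data BlockMountPointAccess 1
haconf -dump -makero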

Issue 3
An additional file system check has been implemented in the MountV agent. This check ensures that the file system is accessible and usable after the mount file system operation succeeds. 

Issue 4
During the Online operation of the MountV resource, the agent now checks whether the mount point is already mounted. If so, the agent deletes the mount point and mounts it again. 

Issue 5
The MountV agent has been updated to clean up the mount paths in all error or fault conditions, so that this issue no longer occurs.

Binary / Version:
MountV.dll / 5.1.20052.522

-------------------------------------------------------+

[75] Hotfix name: Hotfix_5_1_20074_88_3104954

Symptom:
The RLINK pause operation fails and the system hangs when an RLINK is paused

Description:
This issue occurs when you perform the RLINK pause operation, during which the VVR kernel tries to process the pause command for a specified length of time. In some cases, if VVR is not able to process the pause command, the command completes with an error. This process may leave behind some flags, which force VVR to stop reading back write requests from the Replicator Log. Eventually, the Replicator Log fills up and the DCM is activated. Because the SRL to DCM flush cannot proceed, new I/Os get throttled indefinitely, leading to a system hang.

Resolution:
VVR now ensures that any flags that were set while trying to pause the RLINK are cleaned up if the command fails.

Binary / Version:
vxio.sys / 5.1.20074.88
vxconfig.dll / 5.1.20074.88

-------------------------------------------------------+

[76] Hotfix name: Hotfix_5_1_20075_87_3105641

Symptom:
I/O errors may occur while using the vxdisk list, vxdisk diskinfo, or vxassist rescan command

Description:
This issue may occur while using the vxdisk list, vxdisk diskinfo, or vxassist rescan command if the cluster disk group has a GPT disk. In this case, the use of such commands causes disk rescan, which eventually causes un-reservation and re-reservation of all the disks in the disk group. If application I/Os or vxconfig task I/Os result in disk I/Os while its reservation is cleared, then such disk I/Os may fail.

Resolution:
This issue has been resolved by not initiating a disk rescan for a disk that has a GPT signature when the vxdisk list or vxdisk diskinfo command is run.

Binary / Version:
vxcmd.dll / 5.1.20075.87

-------------------------------------------------------+

[77] Hotfix name: Hotfix_5_1_20076_87_3146554

Symptom:
Unable to perform thin provisioning and storage reclamation operations on non-track aligned volumes created on arrays that support these operations

Description:
This issue occurs when you try to perform thin provisioning and storage reclamation operations on non-track aligned volumes created on arrays that support these operations. These operations are not allowed on Huawei Symantec's Oceanspace S5600T array because that array does not support them. However, because of a bug in the vxvm.dll provider, SFW throws an error for these operations even for other arrays, as it fails to identify that the non-track aligned volumes were created on arrays other than the Huawei S5600T.

Resolution:
This issue has been resolved by enhancing the vxvm.dll provider so that it correctly identifies non-track aligned volumes created on the Huawei S5600T 
array and blocks the thin provisioning and storage reclamation operations only on this array.

Binary / Version:
vxvm.dll / 5.1.20076.87

-------------------------------------------------------+

[78] Hotfix name: Hotfix_5_1_20077_87_3164349

Symptom:
During a disk group import or deport operation, VMDg resources may fault if a VDS Refresh operation is also in progress

Description:
This issue occurs if a VDS Refresh operation is in progress and, at the same time, a dynamic disk group import or deport operation is performed while the Volume Manager Disk Group (VMDg) resources in a cluster are in the process of coming online or going offline. In this case, because of the resource contention caused by the two simultaneous operations, the VMDg resources take a longer time and may eventually time out and fault.

Resolution:
This issue has been resolved by automatically aborting the VDS Refresh operation when a dynamic disk group import or deport operation is initiated.

Binary / Version:
vxcmd.dll / 5.1.20077.87
vxvds.exe / 5.1.20077.87
-------------------------------------------------------+

[79] Hotfix name: Hotfix_5_1_20078_87_3190483

Symptom:
Cluster disk group resource faults after adding or removing a disk from a disk group if not all of its disks are available for reservation.

Description:
This issue occurs when you add or remove a disk from a dynamic disk group and if the majority of its disks are available for reservation, but not all. After the disk is added or removed, SFW checks to see if all the disks are available for reservation. In this case, because not all the disks are available, the cluster disk group resource faults.

Resolution:
This issue has been resolved so that, after a disk is added or removed from a disk group, SFW now checks only for a majority of disks available for reservation instead of all.

Binary / Version:
vxconfig.dll / 5.1.20078.87

-------------------------------------------------------+

[80] Hotfix name: Hotfix_5_1_20079_87_3211093

Symptom:
VVR Primary server may crash while replicating data using TCP multi-connection.

Description:
This issue may occur when VVR replication runs in TCP multi-connection mode. During the replication, the Primary may receive a duplicate data acknowledgement (ACK) for a message it had sent to the Secondary. Because of an issue in VVR's multi-threaded receiver code, the message is processed twice on the Secondary, which then sends two data acknowledgements to the Primary for the same message. As a result, the Primary server crashes.

Resolution:
The issue in the multi-threaded receiver code in VVR has been fixed by modifying the vxio.sys binary.

Binary / Version:
vxio.sys / 5.1.20079.87
vxconfig.dll / 5.1.20079.87

-------------------------------------------------------+

[81] Hotfix name: Hotfix_5_1_20080_87_3226396

Symptom:
Error occurs while importing a dynamic disk group as a cluster disk group if a disk is missing.

Description:
This issue occurs while importing a dynamic disk group as a cluster disk group when one of the disks in the disk group is missing. Because the conversion from a dynamic disk group to a cluster disk group wrongly interprets the missing disk as not being connected to a shared bus, the operation fails with the following error message: disk is not on a shared bus.

Resolution:
This issue has been resolved so that, during the conversion to a cluster disk group, the shared bus validation is now skipped for missing disks.

Binary / Version:
vxvm.dll / 5.1.20080.87

-------------------------------------------------------+

[82] Hotfix name: Hotfix_5_1_20081_87_3265897

Symptom:
Moving a subdisk fails for a striped volume with a stripe unit size greater than 512 blocks.

Description:
This issue occurs while performing the Move Subdisk operation for a striped volume with a stripe unit size greater than 512 blocks (256 KB). Because the vxio driver limits admin I/Os to a maximum of 512 blocks per call, the operation to move the subdisk fails while trying to perform the admin I/Os.

Resolution:
This issue has been resolved by increasing the maximum size limit of admin I/Os per call to 16384 blocks.

Binary / Version:
vxio.sys / 5.1.20081.87 
vxconfig.dll / 5.1.20081.87

-------------------------------------------------------+


Additional notes
================|

[+] To confirm the list of cumulative patches installed on a system, run the following command from the directory where the CP files are extracted:
vxhfbatchinstaller.exe /list

The output of this command displays a list of cumulative patches and the hotfixes that are installed as part of a CP. This command also displays the hotfixes that are included in a CP but are not installed on the system.

[+] To confirm the installation of the hotfixes, run the following command:
vxhf.exe /list

The output of this command lists the hotfixes installed on the system.

[+] For details about a particular hotfix, run the following command:
vxhf.exe /display:<HotfixName>

Here, <HotfixName> is the name of the hotfix file without the platform and the .exe extension.
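
For example, using one of the hotfix names listed in this readme:
vxhf.exe /display:Hotfix_5_1_20080_87_3226396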

[+] The CP installer (vxhfbatchinstaller.exe) creates and stores logs at:
"%allusersprofile%\Application Data\Veritas\VxHF\VxHFBatchInstaller.txt"

[+] The hotfix installer (vxhf.exe) creates and stores logs at:
"%allusersprofile%\Application Data\Veritas\VxHF\VxHF.txt"

[+] For general information about the hotfix installer (vxhf.exe), please refer to the following technote:
http://www.symantec.com/docs/TECH73446

[+] To view a list of hotfixes already installed on a system, please refer to the steps mentioned in the following technote:
http://www.symantec.com/docs/TECH73438

[+] For information on uninstalling a hotfix, please refer to the steps mentioned in the following technote:
http://www.symantec.com/docs/TECH73443


Disclaimer
==========|

This fix is provided without warranty of any kind including the warranties of title or implied warranties of merchantability, fitness for a particular purpose and non infringement. Symantec disclaims all liability relating to or arising out of this fix. It is recommended that the fix be evaluated in a test environment before implementing it in your production environment. When the fix is incorporated into a Storage Foundation for Windows maintenance release, the resulting Hotfix or Service Pack must be installed as soon as possible. Symantec Technical Services will notify you when the maintenance release (Hotfix or Service Pack) is available if you sign up for notifications from the Symantec support site http://www.symantec.com/business/support and/or from Symantec Operations Readiness Tools (SORT) http://sort.symantec.com.