This page lists publicly released patches for Veritas Enterprise Products.
For the product GA build, see the Veritas Entitlement Management System (VEMS) by clicking the 'Licensing' option on the Veritas Support site.
For information on private patches, contact Veritas Technical Support.
Veritas is making it easier to find all software installers and updates for Veritas products with a completely redesigned experience. NetBackup HotFixes and NetBackup Appliance patches are now also available at the new Veritas Download Center.
Patches for your product can have a variety of names. These names are based on product, component, or package names. For more information on patch naming conventions and the relationship between products, components, and packages, see the SORT online help.
vcs-win_x64-CP12_SFWHA_60
Obsolete
The latest patch(es): sfha-win_x64-CP14_SFWHA_60

 Basic information
Release type: P-patch
Release date: 2014-04-30
OS update support: None
Technote: TECH191975-Storage Foundation for Windows High Availability, Storage Foundation for Windows and Veritas Cluster Server 6.0 Cumulative Patches
Documentation: None
Popularity: 430 viewed    28 downloaded
Download size: 58.92 MB
Checksum: 1842990150

 Applies to one or more of the following products:
Storage Foundation HA 6.0 On Windows x64

 Obsolete patches, incompatibilities, superseded patches, or other requirements:

This patch is obsolete. It is superseded by:
sfha-win_x64-CP13_SFWHA_60 (obsolete), released 2014-05-26

This patch supersedes the following patches:
vcs-win_x64-CP11_SFWHA_60 (obsolete), released 2014-02-24

 Fixes the following incidents:
2593039, 2650336, 2677127, 2695917, 2704183, 2705769, 2715104, 2722108, 2730657, 2740833, 2744349, 2752096, 2761898, 2762153, 2763375, 2783096, 2786159, 2791125, 2810516, 2845295, 2862335, 2869461, 2871478, 2877405, 2882535, 2885211, 2897781, 2898414, 2904593, 2912682, 2913240, 2919276, 3053280, 3062856, 3082932, 3105024, 3111155, 3122364, 3137880, 3160235, 3202548, 3210766, 3226691, 3252941, 3300654, 3300658, 3377535, 3378041, 3381786, 3421570, 3431608, 3462786

 Patch ID:
None.

 Readme file
                          * * * READ ME * * *
             * * * Veritas Storage Foundation HA 6.0 * * *
                         * * * P-patch 12 * * *
                         Patch Date: 2014-04-30


This document provides the following information:

   * PATCH NAME
   * OPERATING SYSTEMS SUPPORTED BY THE PATCH
   * PACKAGES AFFECTED BY THE PATCH
   * BASE PRODUCT VERSIONS FOR THE PATCH
   * SUMMARY OF INCIDENTS FIXED BY THE PATCH
   * DETAILS OF INCIDENTS FIXED BY THE PATCH
   * INSTALLATION PRE-REQUISITES
   * INSTALLING THE PATCH
   * REMOVING THE PATCH


PATCH NAME
----------
Veritas Storage Foundation HA 6.0 P-patch 12


OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
Windows 2008 X64
Windows Server 2008 R2 X64



BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
   * Veritas Storage Foundation HA 6.0


SUMMARY OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
Patch ID: SFWHA CP12
* 2715104 (2715104) VEA GUI prompts for login credentials when a logged-in user or domain administrator connects to VEA server.
* 2763375 (2763375) The Fire Drill, Disaster Recovery, and Quick Recovery wizards, and the Solutions Configuration Center did not support Microsoft SQL Server 2012.
* 2705769 (2705769) The Quick Recovery Configuration Wizard fails to create snapshot schedules on ApplicationHA-managed node.
* 2650336 (2650336) This hotfix addresses an issue related to the VCS PrintSpool agent where information about printers that were newly added in the virtual server context is lost in case of a system crash or unexpected failures that require a reboot.
* 2704183 (2704183) In some cases, Microsoft Failover Cluster disk resources fault when a disk group comes online.
* 2695917 (2695917) This hotfix addresses an issue where the VCS engine log is flooded with information messages in case of network congestion.
* 2722108 (2722108) This hotfix addresses an issue related to the VCS FileShare agent wherein non-scoped file shares are not accessible using virtual server name or IP address if NetBIOS and WINS are disabled.
* 2730657 (2730657) This hotfix addresses an issue where volumes created on LUNs exposed from a Hitachi Open-V array are not track aligned.
* 2761898 (2761898) VCS does not support locally attached storage in RDC/GCO DR single-node cluster environments.
* 2762153 (2762153) VCS does not support locally attached storage in RDC/GCO DR single-node cluster environments.
* 2740833 (2740833) VxSVC service fails to start because DDLProv provider does not load.
* 2752096 (2752096) Memory leak occurs in the SFW components vxvds.exe, vxvdsdyn.exe; the vxvdsdyn.exe component crashes.
* 2744349 (2744349) This hotfix addresses several issues where SFW snapshot and scheduling operations fail for Microsoft SQL Server 2012 databases.
* 2677127 (2677127) This hotfix addresses an issue related to the SFW component, mount.dll, that causes the Veritas Enterprise Administrator (VxSvc) Service to crash if pagefile is not configured.
* 2783096 (2783096) The Storage Migration Wizard pages appear truncated.
* 2810516 (2810516) Snapback operation causes too many blocks to resynchronize for the snapback volume.
* 2845295 (2845295) VxSVC service crashes during disk group import and during the subsequent restart.
* 2593039 (2593039) Provisioned size of disks is reflected incorrectly in striped volumes after thin provisioning reclamation.
* 2869461 (2869461) Veritas VDS software provider log file grows to a very large size.
* 2877405 (2877405) svchost.exe crashes because of a fault in shsvcs.dll.
* 2871478 (2871478) Volume shrink operation in SFW does not work correctly for the "New volume size" option.
* 2885211 (2885211) In some cases, Windows prompts to format the volume created using VEA.
* 2791125 (2791125) Memory leak occurs in the SFW component vxsvc.exe.
* 2786159 (2786159) This hotfix addresses an issue related to the VCS Cluster Manager (Java Console) where a service group switch or failover operation to the secondary site fails due to a user privilege error.
* 2862335 (2862335) After a reboot of the passive node, all the disk groups which are marked with fast failover, remain in deported none state.
* 2882535 (2882535) This hotfix includes the VCS FileShare agent that is enhanced to ensure that the file shares configured with VCS are accessible even if the LanmanResName attribute is not specified.
* 2897781 (2897781) Dynamic disk groups created on external disks are not supported in VMNSDg agent.
* 2898414 (2898414) This hotfix addresses an issue where the MSMQ resource failed to bind to the correct port.
* 2898414 (2898414) This hotfix addresses an issue where an application resolved a virtual name incorrectly due to DNS lag.
* 2904593 (2904593) SFW is unable to discover the EMC Symmetrix LUNs as thin reclaimable, having firmware version 5876.
* 2919276 (2919276) Dynamic disk groups created on external disks are not supported in VMNSDg agent.
* 2913240 (2913240) MountV resource faults because SFW removes a volume due to delayed device removal request.
* 3053280 (3053280) Windows Server Backup fails to perform a system state backup.
* 3062856 (3062856) If LLT is running alongside a utility like Network Monitor, a system crash (BSOD) occurs when shutting down Windows.
* 2912682 (2912682) In some cases, capacity monitoring email notification fails.
* 3105024 (3105024) The Disaster Recovery (DR) wizard and the Fire Drill (FD) wizard fail when you run them in an environment that contains the hardware replication setup.
* 3111155 (3111155) After every 49.7 days, VCS logs report that the Global Counter VCS attribute is not updated.
* 3122364 (3122364) MountV resources do not come online when performing a fire drill.
* 3082932 (3082932) Data corruption may occur if the volume goes offline while a resynchronization operation is in progress.
* 3137880 (3137880) Tagging of snapshot disks fails during the fire drill operation, causing disk import to fail.
* 3160235 (3160235) On an EMC SRDF setup, VEA VxSVC service may crash during a rescan if there are stale records on Secondary node.
* 3202548 (3202548) Missing disks cannot be removed from a disk group using the vxdg rmdisk command.
* 3210766 (3210766) On a system with multiple DRL-enabled volumes, a write I/O hang may occur on one of the volumes when write I/O operations are happening on all the volumes.
* 3226691 (3226691) VEA GUI may time out while connecting to a server.
* 3252941 (3252941) In a Microsoft Failover Cluster environment, creating a virtual machine fails if the volume has folder mount points.
* 3300658 (3300658) When NetBIOS is disabled, a FileShare service group fails to come online.
* 3300654 (3300654) If the cluster name string in the registry and the main.cf file is not an exact match, upgrading to SSO authentication or reconfiguring SSO fails. However, the operation is incorrectly reported to be successful.
* 3377535 (3377535) In a disaster recovery fire drill configuration, the disks containing the snapshot of the replicating data appear offline and the SRDFSnap resource from the fire drill service group fails.
* 3378041 (3378041) Server crashes during high write I/O operations on mirrored volumes.
* 3381786 (3381786) On an SRDF setup with multiple nodes at Secondary, the local failover at Secondary fails due to disk reservation errors on the target node.
* 3421570 (3421570) VMDg resource fails to come online during first RO-to-RW import at the time of a failover, resulting in RHS process restart.
* 3431608 (3431608) In a disk group with even number of disks of which only half of the disks go missing, you cannot remove the missing disks if any volumes are present on the non-missing disks.
* 3462786 (3462786) Memory leak occurs for the SFW VSS provider while taking a VSS snapshot.


DETAILS OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
This patch fixes the following Symantec incidents:

Patch ID: SFWHA CP12

* 2715104 (Tracking ID: 2715104)

SYMPTOM:
VEA GUI prompts for login credentials when a logged-in user or domain administrator connects to VEA server.
Note: If you have set up an SFW cluster environment for Microsoft Clustering or VCS, you must install the hotfix Hotfix_6_0_00005_362_2736679 before installing this hotfix.

DESCRIPTION:
In Veritas Storage Foundation for Windows 6.0, this issue occurs when a logged-in user or domain administrator connects to a Veritas Enterprise Administrator (VEA) server through the VEA GUI. The VEA GUI prompts for login credentials each time the user connects to a VEA server. 
This happens if the DCOM-based authentication in VEA GUI fails and the existing 32-bit midlman.dll for VEA server cannot be loaded into the 64-bit VEA VxSVC service.

RESOLUTION:
The DCOM-based authentication is enabled in VEA GUI and, in addition to the 32-bit midlman.dll, a 64-bit midlman64.dll is added for the server component so that it is loaded into VxSVC service.
Note: 
This hotfix is applicable to server components only. 
A separate hotfix is available for client components. Contact Symantec Technical Support for more details.
Binary / Version:
midlman64.dll / 3.4.531.0 
ci.jar	/ 3.4.29.0  
cicustom.jar / 3.4.29.0 
obCommon.jar / 3.4.29.0 
OBGUI.jar / 3.4.29.0

* 2763375 (Tracking ID: 2763375)

SYMPTOM:
The Fire Drill, Disaster Recovery, and Quick Recovery wizards, and the Solutions Configuration Center did not support Microsoft SQL Server 2012.

DESCRIPTION:
Need to provide support for SQL Server 2012 in the Fire Drill, Disaster Recovery, and Quick Recovery wizards, and the Solutions Configuration Center.

RESOLUTION:
Added support for SQL Server 2012 in the Fire Drill, Disaster Recovery, and Quick Recovery wizards, and the Solutions Configuration Center.

Note 1: For the Quick Recovery wizard to function properly with SQL Server 2012, you must apply Hotfix_6_0_00011_363_2744349 along with this hotfix.
Note 2: This hotfix is applicable to server components only. A separate hotfix is available for client components. Contact Symantec Technical Support for more details.

Binary / Version:
DRPluginProxy.dll / 6.0.1.709
FireDrillStep.dll / 6.0.1.709
qrpages.dll / 6.0.0.701 
QuickRecovery.dll / 6.0.1.709
CCFEngine.exe.config / NA

* 2705769 (Tracking ID: 2705769)

SYMPTOM:
The Quick Recovery Configuration Wizard fails to create snapshot schedules on ApplicationHA-managed node.

DESCRIPTION:
When Veritas Storage Foundation for Windows (SFW) 6.0 is installed on the Symantec ApplicationHA 6.0 managed node, the Quick Recovery Configuration Wizard fails to create snapshot schedules for the application.

The wizard displays the following error messages:
- Failed to Create, Modify or Delete volume settings for one or more schedules
- Failed while getting cluster related information for the application

RESOLUTION:
The VxSnapSchedule.dll file has been updated to address the issue.

Binary / Version:
VxSnapSchedule.dll / 6.0.00002.362

* 2650336 (Tracking ID: 2650336)

SYMPTOM:
This hotfix addresses an issue related to the VCS PrintSpool agent where information about printers that were newly added in the virtual server context is lost in case of a system crash or unexpected failures that require a reboot.

DESCRIPTION:
The PrintSpool agent stores printer information to the configuration during its offline function.
Therefore, if printers are added in the virtual server context but the PrintSpool resource or the printshare service group is not gracefully taken offline or failed over, the new printer information does not get stored to disk.
In such a case, if the node where the service group is online hangs, shuts down due to unexpected failures, or reboots abruptly, all the new printer information is lost.
The printshare service group may also fail to come online again on the node.

RESOLUTION:
This issue has been fixed in the PrintSpool agent.
A configurable parameter now enables the agent to save the printer information periodically.

A new attribute, 'SaveFrequency', is added to the PrintSpool agent.
SaveFrequency specifies the number of monitor cycles after which the PrintSpool agent explicitly saves the newly added printer information to the cluster configuration. The value 0 indicates that the agent does not explicitly save the information to disk.
It will continue to save the printer information during its offline function.
The default value is 5.
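As a sketch under standard VCS conventions, the new SaveFrequency attribute could be tuned with the usual resource-attribute commands. The resource name PrintSpool-Res below is a placeholder, not from this readme; substitute your own resource name.

```shell
REM Sketch only: adjust SaveFrequency on an existing PrintSpool resource.
REM "PrintSpool-Res" is a placeholder resource name.
haconf -makerw
hares -modify PrintSpool-Res SaveFrequency 10
haconf -dump -makero
```

Setting the value back to 0 would restore the previous behavior of saving printer information only during the offline function.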

Binary / Version:
PrintSpool.dll / 6.0.3.416
PrintSpool.xml / NA

* 2704183 (Tracking ID: 2704183)

SYMPTOM:
In some cases, Microsoft Failover Cluster disk resources fault when a disk group comes online.
Note:
If you have set up SFW cluster environment for Microsoft Clustering or VCS, then you must install the hotfix Hotfix_6_0_00005_362_2736679 before installing this hotfix.

DESCRIPTION:
In some cases, while importing a disk group, Storage Foundation for Windows (SFW) clears the reservation of all the disks except those belonging to the imported disk groups. It also clears the reservation of offline and basic disks. However, Microsoft Failover Cluster disk resources, which contain the basic disks, do not try to re-reserve the disks and, therefore, they fault.

The following are some of the scenarios in which this may happen:
1. A new disk group is created.
2. The Veritas Enterprise Administrator (VEA) VxSVC service on a node or the node itself crashes, and a failover is triggered.
3. The storage is disconnected from one of the nodes and a failover is triggered.

RESOLUTION:
For the first scenario above, no reservation clearing is required; this has been fixed accordingly.
For the other scenarios where reservations need to be cleared, SFW skips the disks that are offline or basic.

Binary / Version:
vxconfig.dll / 6.0.4.363

* 2695917 (Tracking ID: 2695917)

SYMPTOM:
This hotfix addresses an issue where the VCS engine log is flooded with information messages in case of network congestion.

DESCRIPTION:
MOMUtil.exe is a VCSAPI client. It uses VCS APIs to get information from VCS and present it to the SCOM Server. 
MOMUtil.exe is invoked every 5 minutes as part of VCS monitoring. 
It connects to the VCS cluster running on the system and tries to fetch the information to display the current status of service groups. 
In case of a network congestion, the engine is unable to send the data to the client immediately. 

In this scenario, the following information message is logged after every 5 minutes:
Could not send data to peer MOMUtil.exe at this point; received error 10035 (Unknown error), will resend again

The engine successfully sends data to the client (MOMUtil.exe) after some time. By this time, the engine log is flooded with the information messages.

RESOLUTION:
The engine has been modified to log the information messages as debug messages. 
So the messages will appear in the engine log only if the DBG_IPM debug level is set.

If you need to log the information messages in the engine log, run the following command to set the debug level DBG_IPM:
halog -addtags DBG_IPM

Binary / Version:
had.exe / 6.0.6.425
hacf.exe / 6.0.6.425

* 2722108 (Tracking ID: 2722108)

SYMPTOM:
This hotfix addresses an issue related to the VCS FileShare agent wherein non-scoped file shares are not accessible using virtual server name or IP address if NetBIOS and WINS are disabled.

DESCRIPTION:
VCS FileShare and Lanman agents can support non-scoped file shares on Windows Server 2008 and 2008 R2 systems if the DisableServerNameScoping registry is created and set to 1.

The VCS FileShare agent depends on NetBIOS and DNS to resolve the virtual name.
If NetBIOS and WINS are disabled and the DNS is not updated, the agent is unable to resolve the virtual name. 

This may typically occur when the file share service groups are configured to use localized IP addresses.
When the service group is switched or failed over, the virtual name to IP address mapping changes.
In such a case if WINS database and the DNS are not updated, the agent is unable to resolve the virtual name.
As a result the fileshare resources fault and the shares become inaccessible.

The following message is seen in the agent log:
VCS INFO V-16-10051-10530 FileShare:<servicegroupname>:online:Failed to access the network path (\\virtualname)

RESOLUTION:
The FileShare agent is enhanced to address this issue. 
The FileShare agent behavior can be controlled using the following registry keys:
- HKLM\SOFTWARE\VERITAS\VCS\BundledAgents\Lanman\__Global__\DisableServerNameScoping
Set the DisableServerNameScoping key to enable non-scoped file share support.

- HKLM\SOFTWARE\VERITAS\VCS\BundledAgents\Lanman\__Global__\DisableStrictVirtualNameCheck
Set the DisableStrictVirtualNameCheck key to have the FileShare agent make the file shares accessible irrespective of whether or not the virtual name is resolvable.

You must manually create these registry keys after installing this hotfix.
See the topic "Setting the DisableServerNameScoping and DisableStrictVirtualNameCheck registry keys" in this readme for more details.

Notes:
- The registry key DisableStrictVirtualNameCheck takes effect only if the DisableServerNameScoping key value is set to 1. 
- This FileShare agent behavior is applicable only for non-scoped file shares.
- The procedure above creates these registry keys at the global level.
If there are multiple file share service groups that are to be used in the non-scoped mode, you may have to create these registry keys manually for each virtual server and then set the values.
You must create these keys at the following location:
HKLM\SOFTWARE\VERITAS\VCS\BundledAgents\Lanman\<VirtualName>\
Here <VirtualName> should be the virtual computer name assigned to the file share server. This is the VirtualName attribute of the lanman resource in the file share service group.
- In case these registry parameters are configured both at the global level and for individual file share virtual servers, the registry settings for individual virtual servers take precedence.
- You must create this key only for lanman resources that are part of VCS file share service groups. Configuring this key for lanman resources that are part of other VCS service groups may result in unexpected behavior.
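As a sketch, the keys described above could be created with the built-in reg.exe tool. The REG_DWORD value type is an assumption (typical for such on/off flags) and the virtual server name VSERVER1 is a placeholder; verify both against the registry-key procedure referenced in this readme before use.

```shell
REM Sketch only: create the global-level keys (REG_DWORD type is an assumption).
reg add "HKLM\SOFTWARE\VERITAS\VCS\BundledAgents\Lanman\__Global__" /v DisableServerNameScoping /t REG_DWORD /d 1 /f
reg add "HKLM\SOFTWARE\VERITAS\VCS\BundledAgents\Lanman\__Global__" /v DisableStrictVirtualNameCheck /t REG_DWORD /d 1 /f

REM Per-virtual-server keys take precedence over __Global__.
REM "VSERVER1" is a placeholder for the Lanman resource's VirtualName attribute.
reg add "HKLM\SOFTWARE\VERITAS\VCS\BundledAgents\Lanman\VSERVER1" /v DisableServerNameScoping /t REG_DWORD /d 1 /f
```

Per the notes above, create the per-virtual-server keys only for lanman resources that belong to VCS file share service groups.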

Binary / Version:
Fileshare.dll / 6.0.7.433

* 2730657 (Tracking ID: 2730657)

SYMPTOM:
This hotfix addresses an issue where volumes created on LUNs exposed from a Hitachi Open-V array are not track aligned.

DESCRIPTION:
LUNs belonging to a Hitachi array are discovered by SFW with the ProductID as OPEN instead of OPEN-V.
This causes track alignment to ignore the arrays and set the Veritas Disk ID and Product ID (VDID/PID) as DEFAULT. 
The track alignment is set to the default offset of 64 instead of the recommended value of 128.
As a result, volumes created on this array are not track aligned.

RESOLUTION:
This issue has been addressed in the SFW binary Hitachi.dll that discovers Hitachi arrays.
SFW now correctly discovers and identifies Hitachi Open-V arrays.

Binary / Version:
Hitachi.dll / 6.0.7.362

* 2761898 (Tracking ID: 2761898)

SYMPTOM:
VCS does not support locally attached storage in RDC/GCO DR single-node cluster environments.

DESCRIPTION:
The VCS VMDg agent works only with clustered dynamic diskgroups, and is designed to use SCSI reservations. 
In some environments, SCSI support might not be available with certain local attached storage. 
There are a few common HA/DR configuration scenarios with non-SCSI storage, such as:
- Replicated Data Cluster (RDC) configuration
- Global Cluster Option (GCO) Disaster Recovery (DR) configuration with single node clusters at each site

Hence, the VMDg agent is not an acceptable solution in these environments.

In either of these common scenarios, this hotfix helps accommodate a local, non-SCSI storage configuration, based on the type of storage used, including:
- Local/DAS storage that does not support SCSI
- Internal PCI-based SSD drives that do not support SCSI

RESOLUTION:
The two hotfixes, Hotfix_6_0_00009_362_2761898 and HotFix_6_0_00008_2762153, contain a new VCS agent called VMNSDg (Volume Manager Non-Shared Diskgroup), which will help you in managing dynamic disk groups created on non-shared storage that work without reservation.
Hotfix_6_0_00009_362_2761898 is the SFW Hotfix. It provides the necessary Volume Manager interfaces that the new VMNSDg agent can use.

Note:
While installing the Hotfixes, you must install the SFW Hotfix (Hotfix_6_0_00009_362_2761898) first, and then the SFW HA Hotfix (HotFix_6_0_00008_2762153).

Binary / Version:
cluscmd.dll / 6.0.9.362
cluster.dll / 6.0.9.362
cluster_msgs.dll / 6.0.9.362
vras.dll / 6.0.9.362

* 2762153 (Tracking ID: 2762153)

SYMPTOM:
VCS does not support locally attached storage in RDC/GCO DR single-node cluster environments.

DESCRIPTION:
The VCS VMDg agent works only with clustered dynamic diskgroups, and is designed to use SCSI reservations. 
In some environments, SCSI support might not be available with certain local attached storage. 
There are a few common HA/DR configuration scenarios with non-SCSI storage, such as:
- Replicated Data Cluster (RDC) configuration
- Global Cluster Option (GCO) Disaster Recovery (DR) configuration with single node clusters at each site

Hence, the VMDg agent is not an acceptable solution in these environments.

In either of these common scenarios, this hotfix helps accommodate a local, non-SCSI storage configuration, based on the type of storage used, including:
- Local/DAS storage that does not support SCSI
- Internal PCI-based SSD drives that do not support SCSI

RESOLUTION:
The two hotfixes, Hotfix_6_0_00009_362_2761898 and HotFix_6_0_00008_2762153, contain a new VCS agent called VMNSDg (Volume Manager Non-Shared Diskgroup), which will help you in managing dynamic disk groups created on non-shared storage that work without reservation.
HotFix_6_0_00008_2762153 is the SFW HA Hotfix. It packages the VMNSDg agent and its supporting VCS files.

Binary / Version:
vcs_w2k3bagents_msgs_en.dll / 6.0.8.478
VCSConfig.dll / 6.0.8.478
VMNSDg.dll / 6.0.8.478

* 2740833 (Tracking ID: 2740833)

SYMPTOM:
VxSVC service fails to start because DDLProv provider does not load.

DESCRIPTION:
This issue occurs when the VEA VxSVC service is loading the SFW providers during startup. During this, the DDLProv provider fails to load while copying the SCSI mode page 0x31 response for the connected Fujitsu DX400 disks.
This happens because of the provider's incorrect buffer size.

RESOLUTION:
This issue has been resolved by modifying the buffer size of the DDLProv provider. 

Binary / Version:
ddlprov.dll / 6.0.8.362

* 2752096 (Tracking ID: 2752096)

SYMPTOM:
Memory leak occurs in the SFW components vxvds.exe, vxvdsdyn.exe; the vxvdsdyn.exe component crashes.

DESCRIPTION:
This issue occurs if you repeatedly create and delete volumes. 
Because of this, a memory leak occurs in the VDS Dynamic software provider (vxvds.exe) and the Super VDS provider (vxvdsdyn.exe) components. 
Also, the Super VDS provider (vxvdsdyn.exe) component crashes.

RESOLUTION:
Most of the memory leak issues in the vxvds.exe and vxvdsdyn.exe components have been fixed. 
Also, the vxvdsdyn.exe component has been updated so that it no longer crashes.

Binary / Version:
vxcmd.dll / 6.0.00010.363   
vxvds.exe / 6.0.00010.363
vxvdsdyn.exe / 6.0.00010.363 
mount.dll / 6.0.00010.363
vxvm.dll / 6.0.00010.363

* 2744349 (Tracking ID: 2744349)

SYMPTOM:
This hotfix addresses several issues where SFW snapshot and scheduling operations fail for Microsoft SQL Server 2012 databases.

DESCRIPTION:
The following SFW issues may occur for SQL Server 2012 databases: 
- While running the restore operation with the automatic log replay, the vxsnap command fails to bind to the SQL Server instance; as a result, the restore operation is unable to complete and the SQL database remains in the restoring state. 
The following error is logged:
Failed to complete the operation V-76-58657-1059: Error trying to bind to the SQL Server Instance. 
This issue occurs because SFW is unable to discover a change in a SQL registry value created during SQL Server installation.
- Schedules created using the Quick Recovery (QR) Configuration Wizard fail to execute. 
- Schedules created using the Quick Recovery (QR) Configuration Wizard or the Veritas Enterprise Administrator (VEA) are not replicated across cluster nodes.

RESOLUTION:
These issues have been fixed in the updated SFW binaries. 
For the changes in this hotfix to take effect, ensure that the [NT AUTHORITY\SYSTEM] account is granted "sysadmin" server role (from SQL Management Studio Console).
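The sysadmin grant mentioned above can also be done from a command prompt with sqlcmd instead of SQL Management Studio; a hedged sketch follows. The instance name .\SQL2012 is a placeholder, and this assumes you are running as a SQL login that can alter server roles.

```shell
REM Sketch only: grant the sysadmin server role to the LocalSystem account.
REM ".\SQL2012" is a placeholder instance name; substitute your own.
sqlcmd -S .\SQL2012 -E -Q "ALTER SERVER ROLE [sysadmin] ADD MEMBER [NT AUTHORITY\SYSTEM];"
```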

Binary / version:
vssprov.dll / 6.0.00011.363
vxsnapschedule.dll / 6.0.00011.363 
vxschedservice.exe / 6.0.00011.363

* 2677127 (Tracking ID: 2677127)

SYMPTOM:
This hotfix addresses an issue related to the SFW component, mount.dll, that causes the Veritas Enterprise Administrator (VxSvc) Service to crash if pagefile is not configured.

DESCRIPTION:
This issue occurs when the VxSVC service starts without a configured pagefile.
The vxsvc.exe service crashes because of an exception in mount.dll.

RESOLUTION:
The VxSvc service crashed because an empty string variable was passed to a free function. 
The component has been updated to address the issue.

Binary / Version:
mount.dll / 6.0.00001.362

* 2783096 (Tracking ID: 2783096)

SYMPTOM:
The Storage Migration Wizard pages appear truncated.

DESCRIPTION:
This hotfix addresses an issue where the Storage Migration Wizard pages appear truncated; as a result, it is not possible to perform the storage migration tasks using the wizard.
This issue is observed on operating systems where the locale is set to Japanese.

RESOLUTION:
The hotfix fixes the Storage Migration Wizard binaries. 

Binary / Version
VxVmCE.jar / NA

* 2810516 (Tracking ID: 2810516)

SYMPTOM:
Snapback operation causes too many blocks to resynchronize for the snapback volume.

DESCRIPTION:
During a snapback operation, SFW determines the blocks to be synced using the Data Change Object (DCO) logs, which maintain information about the changed blocks. It syncs all the changed blocks from the original volume to the snapback volume and, because of a bug, also updates the DCO logs so that the resynced blocks appear modified on the original volume.
Now, if there is another volume on which a snapback operation needs to be performed, then it copies extra blocks on the new snapback volume.

For example, consider the following scenario:
1. Create two snapshots of volume "F" (snapshots "G" and "H").
2. Make "G" writable, and copy files to "G".
3. Snapback "G" using data from "F".
4. Snapback "H" using data from "F".

In the above scenario, in Step 4, there should be nothing to resynchronize as no changes are made to "F" and "H".
However, it is observed that the number of blocks to be synchronized is the same as in Step 3.
This is because the blocks resynced in Step 3 are treated as modified. This causes the extra blocks to be resynced.

RESOLUTION:
For a snapback operation, corrected the DCO log handling to not treat the resynced blocks as modified.

Binary / Version
vxio.sys / 6.0.16.362
vvxconfig.dll / 6.0.16.362

* 2845295 (Tracking ID: 2845295)

SYMPTOM:
VxSVC service crashes during disk group import and during the subsequent restart.

DESCRIPTION:
This issue occurs while trying to import a disk group. The maximum buffer size of the "LastUpdatedDgList" key is set to 1024 at the following registry path:
HKLM\SOFTWARE\VERITAS\VxSvc\CurrentVersion\VolumeManager\CBR
If you have several disk groups with lengthy names and the combined length of all the disk group names is greater than 1024, a buffer overrun occurs in the Volume Manager for Windows (vxvm) provider, specific to the VxCBR code. Because of this, the Veritas Storage Foundation for Windows (VxSVC) service crashes. The service crashes again when you try to restart it.

RESOLUTION:
This issue has been resolved by allocating buffer dynamically.

Binary / Version:
vxvm.dll / 6.0.00017.362

* 2593039 (Tracking ID: 2593039)

SYMPTOM:
Provisioned size of disks is reflected incorrectly in striped volumes after thin provisioning reclamation.

DESCRIPTION:
This hotfix addresses an issue where thin provisioning (TP) reclamation in striped volumes shows incorrect provisioned size for disks. This issue is observed on Hitachi and HP arrays.

RESOLUTION:
This issue has been fixed by increasing the map size for striped volumes.

Binary / Version:
vxconfig.dll / 6.0.00018.362

* 2869461 (Tracking ID: 2869461)

SYMPTOM:
Veritas VDS software provider log file grows to a very large size.

DESCRIPTION:
This hotfix addresses an issue where the Veritas Virtual Disk Service (VDS) software provider keeps writing to the log file without checking its size or recycling it. This results in a very large log file.

RESOLUTION:
The hotfix fixes the issue with the help of the following two registry keys that tune the VDS software provider logging:

1. MAXSIZE specifies the maximum size of an individual log file, in KB. A backup log file is created when the log file exceeds this size. The default value of MAXSIZE is 16384 KB. To customize it, open the registry editor (Regedit) and locate the MAXSIZE value of the VDS software provider under the following key:
SOFTWARE\Veritas\VxSvc\CurrentVersion\Tracing\vds\MaxSize

2. MAXFILES specifies the maximum number of log files that can be present at a time. The oldest log files are deleted once this limit is exceeded. The default value of MAXFILES is 5. To customize it, open the registry editor (Regedit) and locate the MAXFILES value of the VDS software provider under the following key:
SOFTWARE\Veritas\VxSvc\CurrentVersion\Tracing\vds\MaxFiles
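As an illustration, both values can be set from an elevated command prompt. The HKLM hive and REG_DWORD type are assumed here; the value names and key path are as listed above, and the sizes chosen are arbitrary examples:

```shell
:: Cap each VDS provider log file at 32 MB (value is in KB) and keep up to 10 files.
:: Assumes the keys live under HKLM; adjust the values to suit your environment.
reg add "HKLM\SOFTWARE\Veritas\VxSvc\CurrentVersion\Tracing\vds" /v MaxSize /t REG_DWORD /d 32768 /f
reg add "HKLM\SOFTWARE\Veritas\VxSvc\CurrentVersion\Tracing\vds" /v MaxFiles /t REG_DWORD /d 10 /f
```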

The Veritas VDS software provider logs are found at:
Location: %vmpath%/log
Filenames: vxvds.log and vxvdsdyn.log

Binary / Version:
vdsprovutil.dll / 6.0.00019.362

* 2877405 (Tracking ID: 2877405)

SYMPTOM:
svchost.exe crashes because of a fault in shsvcs.dll.

DESCRIPTION:
This hotfix addresses the issue where svchost.exe crashes because of a fault in shsvcs.dll. This issue occurs when shsvcs.dll calls a COM interface to query disk extents, causing the WMI and Shell Hardware Detection services to terminate unexpectedly.

RESOLUTION:
The hotfix fixes the binaries that caused svchost.exe to crash.

Binary / Version:
vxvds.exe / 6.0.00020.362 
vxvdsdyn.exe / 6.0.00020.362

* 2871478 (Tracking ID: 2871478)

SYMPTOM:
Volume shrink operation in SFW does not work correctly for the "New volume size" option.

DESCRIPTION:
In Storage Foundation for Windows (SFW), this issue occurs if you are using the "New volume size" option of the online volume shrink feature. When you provide the new size for the volume in the "New volume size" box, the volume shrink operation incorrectly shrinks the volume by the new volume size instead of shrinking it by the difference of the current and the specified new size of the volume.

RESOLUTION:
This issue has been resolved by correcting the online volume shrink functionality.

Binary / Version:
VxVmCE.jar / NA

* 2885211 (Tracking ID: 2885211)

SYMPTOM:
In some cases, Windows prompts to format the volume created using VEA.

DESCRIPTION:
In SFW, while creating a volume using the New Volume Wizard from the Veritas Enterprise Administrator (VEA), if you choose to assign a drive letter or mount the volume as an empty NTFS folder, then Windows wrongly prompts to format the volume. This happens because, as part of volume creation, SFW creates RAW volumes, assigns mount points, and then proceeds with the formatting of volumes. Therefore, mounting RAW volumes explicitly causes Windows to display the volume format dialog box. However, the volume is successfully created, and you can close the dialog box displayed by Windows and access the volume.

RESOLUTION:
This issue has been resolved. SFW now assigns drive letters or mount paths only after the volume formatting task is completed.

Binary / Version:
vxvm.dll / 6.0.00023.362

* 2791125 (Tracking ID: 2791125)

SYMPTOM:
Memory leak occurs in the SFW component vxsvc.exe.

DESCRIPTION:
This issue occurs in SFW when you perform an action repeatedly. Each repetition leaks memory in actionprovider.dll and, therefore, in vxsvc.exe.

RESOLUTION:
This issue has been resolved by fixing the memory leak in actionprovider.dll and deleting the event notification jobs immediately on completion. 

Binary / Version:
vxvea3.dll / 3.4.545.0 
actionprovider.dll / 3.4.545.0

* 2786159 (Tracking ID: 2786159)

SYMPTOM:
Issue 1:
This hotfix addresses an issue related to the VCS Cluster Manager (Java Console) where only the first user in the Cluster Administrators group is treated as the cluster administrator. All other users in the group do not get cluster administrator privileges and are treated as Guest users.

Issue 2:
This hotfix addresses an issue related to the VCS Cluster Manager (Java Console) where a service group switch or failover operation to the secondary site fails due to a user privilege error. 

Fix for issue #1 was earlier released as Hotfix_6_0_00002_2649700. It is now included in this hotfix.

DESCRIPTION:
Issue 1:
If you add multiple users as Cluster Administrators, the VCS Java Console only treats the first user in the list as an administrator. All the other users are treated as Guest users even though the users are listed as Cluster Administrators in the cluster configuration file.

Issue 2:
This issue occurs in secure clusters set up in a disaster recovery (DR) environment. When you switch or failover a global service group to the remote site, the operation fails with the following error:
Error when trying to failover GCO Service Group. V-16-1-50824 At least Group Operator privilege required on remote cluster.

This error occurs even if the user logged on to the Java Console is a local administrator, which by default has the Cluster Administrator privilege in the local cluster.

This issue occurs because the local administrator is not explicitly added to the cluster admin group at the remote site. During a switch or a failover, the Java Console is unable to determine whether the logged-on user at the local cluster has any privileges on the remote cluster and hence fails the operation.

If you use the Java Console to grant the local administrator with operator or administrator privileges to the remote cluster, the Java Console only assigns guest privileges to that user.

RESOLUTION:
Issue 1:
This issue has been fixed in the Java Console.

Issue 2:
This issue is fixed in the Java Console. The Java Console now allows you to assign the local administrator at the local cluster with cluster admin or cluster operator privileges in the remote cluster.
This change is applicable only on Windows.

Binary Name / Version
VCSGui.jar / NA

* 2862335 (Tracking ID: 2862335)

SYMPTOM:
After a reboot of the passive node, all the disk groups that are marked for fast failover remain in the Deport None state.

DESCRIPTION:
Fast failover improves the failover time for the storage stack configured in service groups in a clustered environment.
During normal failover, SFW performs a complete disk group deport operation on the active node followed by a Read/Write import operation on a passive node. 
With fast failover, instead of performing deport and import operations, SFW now performs only a mode change for the disk group. The disk group state on the passive node is changed from Read-Only to Read/Write.

A mode change (Read-Only to Read/Write) is a much faster operation compared to a full deport and import (Deport None to Import Read/Write) and thus results in faster disk group failovers.

However, after a reboot of the passive node, the disk groups that are marked for fast failover may sometimes remain in the Deport None state. This causes the failover to take longer than a fast failover.

RESOLUTION:
This issue has been fixed in this hotfix. Even after the reboot of the passive node, the disk groups marked for fast failover will now be in the Read-Only state.

Binary / Version:
VMDg.dll / 6.0.00012.503

* 2882535 (Tracking ID: 2882535)

SYMPTOM:
This hotfix includes the VCS FileShare agent that is enhanced to ensure that the file shares configured with VCS are accessible even if the LanmanResName attribute is not specified.

DESCRIPTION:
File shares configured with VCS come online only if the LanmanResName attribute of the FileShare resource is specified. If this attribute is not specified, the FileShare resource goes into an UNKNOWN state and does not come online. The FileShare resource should be able to share the folder under the physical server context even if the LanmanResName attribute is not specified.

RESOLUTION:
The FileShare agent is enhanced to address this issue. 

Binary / Version: 
FileShare.dll / 6.0.00013.509

* 2897781 (Tracking ID: 2897781)

SYMPTOM:
Dynamic disk groups created on external disks are not supported in VMNSDg agent.

DESCRIPTION:
Currently, the VMNSDg (Volume Manager Non-Shared Diskgroup) agent does not support the dynamic disk groups created on external disks.

RESOLUTION:
To support dynamic disk groups configured on external disks, the hotfix installer adds the SkipStorageValidation attribute to the VMNSDg agent. By default, this attribute blocks the configuration of disk groups containing external disks (shared or non-shared). In the case of SCSI controllers, disks are considered internal (non-shared) if they are attached to the same controller as the boot/OS disk, while all other disks are considered external. In the case of IDE controllers, all disks are considered internal.

To allow configuration of VMNSDg resource on the dynamic disk groups with external disks, set the SkipStorageValidation attribute to True. Once the attribute is set, the agent shall not differentiate between internal or external disks. 

Note 1: Please use this attribute with caution. If not configured properly, this attribute can lead to data corruption in certain scenarios, such as split-brain.
Note 2: This hotfix depends on Hotfix_6_0_00025_362_2919276.
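As a sketch, the attribute can be set with the standard VCS command line. The resource name VMNSDg_res is hypothetical, and the boolean is assumed to be expressed as 1 for True:

```shell
:: Open the cluster configuration, enable SkipStorageValidation on a hypothetical
:: VMNSDg resource, then save and close the configuration. Use with caution (see Note 1).
haconf -makerw
hares -modify VMNSDg_res SkipStorageValidation 1
haconf -dump -makero
```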

Binary / Version:
VMNSDg.dll / 6.0.00015.524
VMNSDg.xml / 6.0.00015.524
IMFCommon.dll / 6.0.00015.524
vcs_w2k3bagents_msgs_en.dll / 6.0.00015.524

* 2898414 (Tracking ID: 2898414)

SYMPTOM:
This hotfix addresses an issue where the MSMQ resource failed to bind to the correct port.

DESCRIPTION:
When the MSMQ agent was brought online, the Event Viewer reported an error stating that Message Queuing failed to bind to port 1801. This error could occur due to various reasons. Even though the binding failed, the agent reported the MSMQ resource as Online.

RESOLUTION:
The MSMQ agent has been enhanced to verify that the clustered MSMQ service is bound to the correct virtual IP and port. By default, the agent performs this check only once during the Online operation. If the clustered MSMQ service is not bound to the correct virtual IP and port, the agent stops the service and the resource faults. You can configure the number of times that this check is performed. To do so, create the DWORD tunable parameter 'VirtualIPPortCheckRetryCount' under the registry key 'HKLM\SOFTWARE\VERITAS\VCS\BundledAgents\MSMQ'. If this parameter is set to a value greater than 1, the agent starts the clustered MSMQ service again and verifies its virtual IP and port binding as many times. It waits 2 seconds between each verification attempt. If the clustered MSMQ service is bound to the correct virtual IP and port, the agent reports Online.
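For illustration, the tunable can be created as follows. The retry count of 3 is an arbitrary example; the key path and value name are as given above:

```shell
:: Make the MSMQ agent verify the virtual IP/port binding up to 3 times during Online.
reg add "HKLM\SOFTWARE\VERITAS\VCS\BundledAgents\MSMQ" /v VirtualIPPortCheckRetryCount /t REG_DWORD /d 3 /f
```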

Binary / Version:
MSMQ.dll / 6.0.00016.534

* 2898414 (Tracking ID: 2898414)

SYMPTOM:
This hotfix addresses an issue where an application resolved a virtual name incorrectly due to DNS lag.

DESCRIPTION:
This issue occurred due to a delay in resolving the DNS name. For example, MSMQ resolved a virtual name incorrectly. Therefore, when the MSMQ agent was brought online, the Event Viewer reported an error stating that Message Queuing failed to bind to the appropriate port.

RESOLUTION:
The Lanman agent has been enhanced to verify the DNS lookup and flush the DNS resolver cache after bringing the Lanman resource online. You need to create the DWORD tunable parameter 'DNSLookupRetryCount' under the registry key 'HKLM\SOFTWARE\VERITAS\VCS\BundledAgents\Lanman'. If this parameter is set, the Lanman agent verifies the DNS lookup as per the parameter value. It waits 5 seconds between each verification attempt. You can also create the DWORD tunable parameter 'SkipDNSCheckFailure' under the Lanman registry key. The default value of 'SkipDNSCheckFailure' is 0 (zero), which indicates that the resources should fault if the DNS lookup fails. If this parameter value is set to 1, the resources should not fault even if the DNS lookup fails. 
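For illustration, both tunables can be created as follows. The retry count of 5 is an arbitrary example; the key path and value names are as given above:

```shell
:: Verify the DNS lookup up to 5 times after bringing the Lanman resource online.
reg add "HKLM\SOFTWARE\VERITAS\VCS\BundledAgents\Lanman" /v DNSLookupRetryCount /t REG_DWORD /d 5 /f
:: 0 = fault the resource if the DNS lookup fails (the default); 1 = do not fault.
reg add "HKLM\SOFTWARE\VERITAS\VCS\BundledAgents\Lanman" /v SkipDNSCheckFailure /t REG_DWORD /d 0 /f
```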

Binary / Version:
Lanman.dll / 6.0.00017.534

* 2904593 (Tracking ID: 2904593)

SYMPTOM:
SFW is unable to discover EMC Symmetrix LUNs with firmware version 5876 as thin reclaimable.

DESCRIPTION:
This hotfix addresses an issue where, after an EMC Symmetrix array firmware upgrade to version 5876, SFW is unable to identify the thin provisioning reclaimable LUNs.

RESOLUTION:
The hotfix enhances an SFW library to identify the EMC Symmetrix thin reclaimable LUNs having firmware version 5876.

Binary / Version:
ddlprov.dll / 6.0.00024.362

* 2919276 (Tracking ID: 2919276)

SYMPTOM:
Dynamic disk groups created on external disks are not supported in VMNSDg agent.

DESCRIPTION:
Currently, the VMNSDg (Volume Manager Non-Shared Diskgroup) agent does not support the dynamic disk groups created on external disks.

RESOLUTION:
This issue has been resolved by removing the cluster disk group check from SFW for the VMNSDg agent requests.

Binary / Version:
cluscmd.dll / 6.0.00025.362

* 2913240 (Tracking ID: 2913240)

SYMPTOM:
MountV resource faults because SFW removes a volume due to delayed device removal request.

DESCRIPTION:
This issue occurs when SFW removes a volume in response to a delayed device removal request. Because of this, the VCS MountV resource faults.

RESOLUTION:
This issue has been resolved by not disabling the mount manager interface instance if it is active when the device removal request arrives.

Binary / Version:
vxio.sys / 6.0.00026.362 
vxconfig.dll / 6.0.00026.362

* 3053280 (Tracking ID: 3053280)

SYMPTOM:
Windows Server Backup fails to perform a system state backup with the following error: 
Error in backup of C:\program files\veritas\\cluster server\ during enumerate: Error [0x8007007b] The filename, directory name, or volume label syntax is incorrect.

DESCRIPTION:
The backup operation fails because the VCS service image path contains an extra backslash (\) character, which Windows Server Backup is unable to process.

RESOLUTION:
This issue has been fixed by removing the extra backslash character from the VCS service image path. 
This hotfix changes the following registry keys: 
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Had\ImagePath
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\HADHelper\ImagePath 
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Cmdserver\ImagePath
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\VCSComm\ImagePath
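After applying the hotfix, the corrected image paths can be inspected with reg query; for example, for the VCS engine service:

```shell
:: Display the Had service image path; it should no longer contain a doubled backslash.
reg query "HKLM\SYSTEM\CurrentControlSet\Services\Had" /v ImagePath
```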

Binary / Version:
NA / NA

NOTE: This hotfix supersedes HotFix_6_0_00019_3053280a, which was released earlier.

* 3062856 (Tracking ID: 3062856)

SYMPTOM:
If LLT is running alongside a utility like Network Monitor, a system crash (BSOD) occurs when shutting down Windows.

DESCRIPTION:
The system crashes due to a time out that occurs while waiting for LLT to unbind from the adapters during a shutdown operation. This issue occurs when LLT is configured over Ethernet.

RESOLUTION:
The LLT driver has been updated to properly handle unbind calls so that Windows can shut down gracefully.

Binary / Version:
llt.sys / 6.0.00020.625

* 2912682 (Tracking ID: 2912682)

SYMPTOM:
In some cases, capacity monitoring email notification fails.

DESCRIPTION:
This issue occurs when the capacity monitoring feature is enabled so that an email alert message is sent when the used disk space on a volume reaches a user-specified threshold. However, with some SMTP servers, the email notification for capacity monitoring fails with the following error message even if the provided email addresses of the recipients are valid:
No valid recipients.
Because Action Provider does not enclose recipient email addresses in angle brackets in the RCPT command, the SMTP server rejects the email addresses even though they are valid.

RESOLUTION:
The issue has been resolved by modifying Action Provider so that it now encloses recipient email addresses in angle brackets.

Binary / Version:
actionprovider.dll / 3.4.564.0

* 3105024 (Tracking ID: 3105024)

SYMPTOM:
The Disaster Recovery (DR) wizard and the Fire Drill (FD) wizard fail when you run them in an environment that contains the hardware replication setup.

DESCRIPTION:
When a large number of LUNs is discovered in a hardware replication setup, a buffer overflow occurs that causes the Plugin Host service used by the DR and FD wizards to crash.

RESOLUTION:
This issue has been fixed and the DR and FD wizards can now successfully complete their respective configurations.

Binary / Version:
CommonUtility.dll / 6.0.00021.733
clusffconfigrator.exe.config /

NOTE: 
#1. This hotfix depends on HotFix_6_0_00001_2763375. 
#2. This hotfix supersedes Hotfix_6_0_00021_3105024, Hotfix_6_0_00021_3105024a, and Hotfix_6_0_00021_3105024b which were released earlier.

* 3111155 (Tracking ID: 3111155)

SYMPTOM:
After every 49.7 days, VCS logs report that the GlobalCounter VCS attribute is not updated. This error is reported on all nodes in the cluster except the node with the lowest node ID value. The 'GlobalCounter not updated' error also appears in the Event Viewer.

DESCRIPTION:
The error indicates that there could be a communication issue between the cluster nodes. However, this error is reported incorrectly; no communication issue actually occurs. (Incidentally, 49.7 days is the wraparound period of a 32-bit millisecond counter.)

RESOLUTION:
This issue has been fixed, and the spurious error is no longer reported.

Binary / Version:
had.exe / 6.0.22.628 
hacf.exe / 6.0.22.628

* 3122364 (Tracking ID: 3122364)

SYMPTOM:
Issue 1:
This hotfix addresses issues related to the VCS MountV agent where the RegRep resources configured on MountV resources fail to come online, and mount point folders remain accessible even after MountV takes the volumes offline.

Issue 2:
MountV resources do not come online when performing a fire drill.

Fix for issue #1 was earlier released as HotFix_6_0_00010_2856969. It is now replaced by this hotfix.

DESCRIPTION:
Issue 1:
The following issues are observed with the VCS MountV agent:

a) The RegRep resource performs file system operations (for example, creating directories and writing to files) on the mount point on which it is configured. During an Online operation of the service group, even though the MountV resource reports ONLINE, the RegRep resource fails to perform the file system operations with a "Device Not Ready" error.

Analysis indicates that the file system was not fully usable even after a successful mount operation.

b) The MountV agent can mount a volume as an NTFS folder. If this volume is unmounted, the folder remains accessible for writes by users and processes. This creates an open handle that can cause the MountV online operation to fail on the failover target node.


Issue 2:
This issue occurs as an indirect result of volume GUID caching. The MountV agent caches volume GUIDs for subsequent monitor operations. Each time a fire drill is performed, new volumes are created and new volume GUIDs are assigned to them. The MountV agent is unable to identify these new volumes, because it does not find their GUIDs in the cache. Therefore, even though the volumes (or mount paths) resources are available, the status of the corresponding MountV resources is not reported as ONLINE.

RESOLUTION:
Issue 1:
For a) - An additional file system check has been implemented in the MountV agent. This check ensures that the file system is accessible and usable after the mount file system operation succeeds.

For b) - This hotfix adds a new attribute, BlockMountPointAccess, which defines whether the agent blocks access to the NTFS folder that is used as a folder mount point after the mount point is unmounted.

For example, if C:\temp is used as a folder mount for a volume, then after the mount point is unmounted, the agent blocks access to the folder C:\temp.

The value True indicates that the folder is not accessible. The default value False indicates that the folder is accessible.


Issue 2:
A new MountV resource attribute, ForFireDrill, has been introduced to overcome this issue. The value of this attribute indicates whether the resource is created for a fire drill operation. If the value of this attribute is set to 1, the MountV agent does not cache the volume GUID for that particular resource. The default value of this attribute is 0, which indicates that MountV should cache the volume GUID for that resource. NOTE: After installing the hotfix and before performing a fire drill, make sure that you set the ForFireDrill attribute of your existing MountV resources to the appropriate value.
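As a sketch, the two new attributes can be set with the standard VCS command line. The resource names below are hypothetical:

```shell
:: Block access to the folder mount point after unmount on a hypothetical MountV resource,
:: and mark a hypothetical fire drill MountV resource so its volume GUID is not cached.
haconf -makerw
hares -modify MountV_res BlockMountPointAccess True
hares -modify FD_MountV_res ForFireDrill 1
haconf -dump -makero
```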

Binary / Version:
MountV.dll / 6.0.23.629 
MountV.xml / 6.0.23.629

* 3082932 (Tracking ID: 3082932)

SYMPTOM:
Data corruption may occur if the volume goes offline while a resynchronization operation is in progress.

DESCRIPTION:
This issue occurs when SFW SmartMove is enabled and data resynchronization operations, such as a subdisk move, mirror resync, or mirror attach, are being performed on a volume. If the volume goes offline (for example, because the MountV resource of the volume went offline) while a resync operation is in progress, the resync task completes abnormally and the volume may report data corruption. The task progress on the GUI may suddenly jump to 100%, but the actual task does not complete. This issue occurs due to improper error handling by SFW.

RESOLUTION:
This issue has been fixed by adding correct handling of error conditions.

Binary / Version:
vxconfig.dll / 6.0.00027.362

* 3137880 (Tracking ID: 3137880)

SYMPTOM:
Tagging of snapshot disks fails during the fire drill operation, causing disk import to fail.

DESCRIPTION:
This issue occurs while performing fire drill operations with hardware replication agents, which involve tagging the snapshot disks so that they can be imported separately from the original disks. Because of an issue in SFW, the tags are not written to the disks, and the operation proceeds without reporting any error. The import operation on the snapshot disks then fails because no disks are present with the specified tag.

RESOLUTION:
This was an existing issue where SFW did not write to disks that are marked as read-only. The issue has been resolved by allowing the fire drill tag to be written to a disk even if the disk is marked as read-only.

Binary / Version:
vxconfig.dll / 6.0.10004.308

* 3160235 (Tracking ID: 3160235)

SYMPTOM:
On an EMC SRDF setup, the VxSVC service may crash during a rescan if there are stale records on the Secondary node.

DESCRIPTION:
In an SFW HA environment with the EMC SRDF agent, this issue may occur when a rescan (automatic or manual) is performed on the Secondary node and the node holds some stale records. During the rescan, the Veritas Enterprise Administrator service (VxSVC) tries to access the stale records on the Secondary. As the records contain invalid data, VxSVC crashes.

RESOLUTION:
This issue has been resolved by modifying the affected binary so that the VxSVC service now ignores the stale records.

Binary / Version:
vxconfig.dll / 6.0.00029.362

* 3202548 (Tracking ID: 3202548)

SYMPTOM:
Missing disks cannot be removed from a disk group using the vxdg rmdisk command.

DESCRIPTION:
This issue occurs when you try to remove a missing disk from a disk group using the vxdg rmdisk command. In the command, you must provide the name of the disk group from which the missing disk is to be removed. Even when the correct disk group name is provided, the command fails because of a bug in the internal check performed on the disk group name.

RESOLUTION:
This issue has been resolved by modifying the way a missing disk can be removed from a disk group. With this hotfix, while using the vxdg rmdisk command, you can remove a missing disk from a disk group either by specifying only its display name (for example, "Missing Disk (disk#)") or by specifying both its internal name and name of the disk group to which it belongs.
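For illustration, the two forms described above look like this. The disk group name DG1 and the disk names are hypothetical, and SFW's vxdg is assumed to take the disk group name appended to the -g option:

```shell
:: Remove a missing disk by its display name alone...
vxdg rmdisk "Missing Disk (disk2)"
:: ...or by its internal name together with the disk group it belongs to.
vxdg -gDG1 rmdisk Harddisk5
```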

Binary / Version:
vxdg.exe / 6.0.00030.362

* 3210766 (Tracking ID: 3210766)

SYMPTOM:
On a system with multiple DRL-enabled volumes, a write I/O hang may occur on one of the volumes when write I/O operations are happening on all the volumes.

DESCRIPTION:
This issue may occur on a system that has multiple volumes with dirty region logging (DRL) added. Heavy write I/O operations on DRL-enabled volumes are processed first by the DRL module and then written to disk. If the I/O writes exceed the maximum number of I/Os that the DRL module can process, the next write for a DRL-enabled volume gets stuck in the volume's retry queue and subsequently hangs on the system.

RESOLUTION:
This issue has been fixed by maintaining a global retry queue and restarting the write I/O operations when any write I/Os complete in the DRL module.

Binary / Version:
vxio.sys / 6.0.31.363
vxconfig.dll / 6.0.31.363

* 3226691 (Tracking ID: 3226691)

SYMPTOM:
VEA GUI may time out while connecting to a server.

DESCRIPTION:
In some cases, when you try to log on to the VEA GUI, the connection times out while it is trying to connect to the VxSVC service on an SFW server. This happens if, during the connection, the process of SSL initialization takes a long time and exceeds the connection timeout value of 30 seconds.

RESOLUTION:
This issue has been resolved by optimizing the SSL initialization process so that now it happens when the VxSVC service is starting up rather than during the VEA GUI connection.

Binary / Version:
vxvea3.dll / 3.4.554.0

* 3252941 (Tracking ID: 3252941)

SYMPTOM:
In a Microsoft Failover Cluster environment, creating a virtual machine fails if the volume has folder mount points.

DESCRIPTION:
In a Microsoft Failover Cluster environment, when you try to create a virtual machine where the volume has folder mount points, Microsoft Failover Cluster sends the IOCTL_DISK_GET_DRIVE_LAYOUT_EX control code. Because of improper handling of this IOCTL control code, the virtual machine creation fails and incorrect error messages are returned for the code requests. For boot devices, when checking the buffer size, VKE_EINVAL is incorrectly returned instead of STATUS_BUFFER_TOO_SMALL. For non-boot devices, the request is not processed and VKE_ENXIO is incorrectly returned. The error message values are as follows:
VKE_EINVAL: An invalid parameter was passed to a service or function.
STATUS_BUFFER_TOO_SMALL: The buffer is too small to contain the entry. No information has been written to the buffer.
VKE_ENXIO: The specified request is not a valid operation for the target device.

RESOLUTION:
This issue has been fixed by correcting the handling of the IOCTL control code. Now, for the boot devices, when checking for the buffer size, the STATUS_BUFFER_TOO_SMALL message is sent if the supplied buffer is small. And for the non-boot devices, the request is processed successfully by passing the call to the next layer of drivers in the stack.

Binary / Version:
vxio.sys / 6.0.00032.362

* 3300658 (Tracking ID: 3300658)

SYMPTOM:
When NetBIOS is disabled, a FileShare service group fails to come online.

DESCRIPTION:
This issue occurs if the value of the VirtualName attribute of the Lanman resource is specified in lowercase characters and NetBIOS is disabled. The Lanman resource faults, and therefore the FileShare service group fails to come online.

RESOLUTION:
The hotfix addresses this issue, and now, the FileShare service group comes online successfully.

Binary / Version:
FileShare.dll / 6.0.24.694

* 3300654 (Tracking ID: 3300654)

SYMPTOM:
If the cluster name string in the registry and the main.cf file is not an exact match, upgrading to SSO authentication or reconfiguring SSO fails. However, the operation is incorrectly reported to be successful.

DESCRIPTION:
The upgrade or reconfiguration fails because, even though the other configuration settings are updated correctly, the SecureClus cluster attribute is not set to 1 (true).

RESOLUTION:
The hotfix addresses this issue, and now you can successfully upgrade to SSO authentication or reconfigure SSO even if there is a case mismatch in the cluster name string recorded in the registry and the main.cf file.

Binary / Version:
VCWDlgs.dll / 6.0.25.694

* 3377535 (Tracking ID: 3377535)

SYMPTOM:
In a disaster recovery fire drill configuration, the disks containing the snapshot of the replicating data appear offline and the SRDFSnap resource from the fire drill service group fails.

DESCRIPTION:
In a disaster recovery fire drill configuration, the disks containing the snapshot of the replicating application data are offline. This issue typically occurs in the following environment:
- The disks are shared between cluster nodes on the secondary site.
- The cluster systems run the Windows Server 2008 or Windows Server 2008 R2 operating system.
As per the Windows SAN policy for shared disks, the shared disks are initialized in Offline Shared mode. Since the disks are offline, SFW fails to tag them. As a result, the SRDFSnap resource from the fire drill service group fails and the service group does not come online on the secondary site.

RESOLUTION:
SFW behavior is now updated so that the disks are brought online before they are tagged. This enables the SRDFSnap resource to come online.

Binary / Version:
vxconfig.dll / 6.0.33.362

* 3378041 (Tracking ID: 3378041)

SYMPTOM:
Server crashes during high write I/O operations on mirrored volumes.

DESCRIPTION:
This issue occurs when heavy write I/O operations are performed on mirrored volumes. During such high I/O operations, the server crashes due to a problem managing the memory for data buffers.

RESOLUTION:
This issue has been resolved by appropriately mapping the system-address-space described by MDL for the write I/Os on mirrored volumes.

Binary / Version:
vxio.sys / 6.0.00034.362

* 3381786 (Tracking ID: 3381786)

SYMPTOM:
On an SRDF setup with multiple nodes at the Secondary site, the local failover at the Secondary fails due to disk reservation errors on the target node.

DESCRIPTION:
On an EMC SRDF setup with multiple nodes at the Secondary site, this issue occurs while trying to perform a local failover of a fire drill service group. A stale SCSI reservation thread is created on the node when the SRDF disk group is imported for the first time. When the failover is attempted, this thread is not killed, and it keeps defending the disks when the target node challenges for the reservation. Because of this, the disk group import on the target node fails and, therefore, the failover also fails.

RESOLUTION:
This issue has been resolved so that the stale reservation threads are removed during an SRDF disk group import.

Binary / Version:
vxconfig.dll / 6.0.35.362

* 3421570 (Tracking ID: 3421570)

SYMPTOM:
VMDg resource fails to come online during first RO-to-RW import at the time of a failover, resulting in RHS process restart.

DESCRIPTION:
When the FastFailover property of a disk group is set to TRUE on the active node, the disk group is imported as Read-Only on the passive node, and all volumes from the disk group arrive on the passive node as part of the import. Sometimes, due to successive transactions within a short time, the PnP device interface property of a volume is not updated properly. Therefore, during failover to the passive node, the VxVM provider's query for the device name using the device interface property fails. Because of this, the operation to bring the VMDg resource online times out and, as a result, the RHS process restarts.

RESOLUTION:
This issue has been resolved so that the VxVM provider queries device name using the volume name property in case PnP device interface property is NULL.

Binary / Version:
vxvm.dll / 6.0.00036.362

* 3431608 (Tracking ID: 3431608)

SYMPTOM:
In a disk group with an even number of disks, exactly half of which are missing, you cannot remove the missing disks if any volumes are present on the non-missing disks.

DESCRIPTION:
This issue occurs when a disk group has an even number of disks and half of them are missing. In this case, if there are any volumes on the non-missing disks, removing the missing disks is not allowed; the attempt fails with the "Cannot remove last disk in dynamic disk group" error. This happens because the remove operation incorrectly compares the number of disks to be removed with the number of non-missing disks. If the numbers are equal, the operation tries to remove the complete disk group. However, the presence of volume resources prevents the removal of the disk group, and therefore of the intended missing disks as well.

RESOLUTION:
This issue has been resolved so that the operation to remove disks now compares the number of disks to be removed with the total number of disks in the disk group and not with the number of non-missing disks.

Binary / Version:
vxconfig.dll / 6.0.00037.362

* 3462786 (Tracking ID: 3462786)

SYMPTOM:
A memory leak occurs in the SFW VSS provider while taking a VSS snapshot.

DESCRIPTION:
This issue occurs during a VSS snapshot operation, when VSS loads and unloads providers. The SFW VSS provider connects to the VEA database when providers are loaded and disconnects when they are unloaded. Because of an issue in the VEA database cleanup during unloading, a memory leak occurs.

RESOLUTION:
This issue has been resolved so that the SFW VSS provider no longer connects to and disconnects from the VEA database during every load and unload operation. Instead, it creates a connection at the beginning and disconnects only when the Veritas VSS Provider Service (vxvssprovider.exe) is stopped.

Binary / Version:
vxvssprovider.exe / 6.0.38.362



INSTALLING THE PATCH
--------------------
What's new in this CP
=====================|

The following hotfix has been added in this CP:
 - Hotfix_6_0_00038_362_3462786
 
For more information about this hotfix, see the "Errors/Problems Fixed" section in this readme.


Install instructions
====================|

Download the appropriate cumulative public patch (CP) executable file to a temporary location on your system.
You can install the CP in a verbose mode or in a non-verbose mode. Instructions for both options are provided below.

Each cumulative public patch includes the individual hotfixes that contain enhancements and fixes related to reported issues.
See "Errors/Problems Fixed" section for details.

Before you begin
----------------:
[1] Ensure that the logged-on user has privileges to install the CP on the systems.

[2] One or more hotfixes that are included with this CP may require a reboot.
Before proceeding with the installation, ensure that the system can be rebooted.

[3] Symantec recommends that you close the Cluster Manager (Java Console) and the Veritas Enterprise Administrator (VEA) Console before installing this CP.

[4] Ensure that you close the Windows Event Viewer before proceeding with the installation.

[5] Before installing the CP, ensure that the Startup type of the VCS Authentication Service is set to Automatic. This is applicable only if you have configured a secure cluster and installed CP or Agent Pack 2Q2012 on your system.

[6] Before installing CP on Windows Server Core systems, ensure that Visual Studio 2005 x86 redistributable is installed on the systems.


To install in the verbose mode
------------------------------:

In the verbose mode, the cumulative patch (CP) installer prompts you for inputs and displays the installation progress status in the command window.

Perform the following steps:

[1] Double-click the CP executable file to extract the contents to a default location on the system.
The installer displays a list of hotfixes that are included in the CP.
    - On 64-bit systems, the hotfix executable files are extracted to:
      "%commonprogramfiles(x86)%\Veritas Shared\WxRTPrivates\<CPName>"

The installer also lists the hotfixes that require a reboot of the system after the installation. 
If system reboot is not an option at this time, you can choose not to install these hotfixes. 
In such a case, exit the installation and then launch the CP installer again from the command line using the /exclude option.
See "To install in a non-verbose (silent) mode" section for the syntax.

[2] When the installer prompts whether you want to continue with the installation, type Y to begin the hotfix installation.
The installer performs the following tasks:
    - Extracts all the individual hotfix executable files
      On 64-bit systems, the files are extracted to %commonprogramfiles(x86)%\Veritas Shared\WxRTPrivates\<HotfixName>
    - Runs the pre-install tasks
    - Installs all the hotfixes sequentially
    - Runs the post-install tasks
The installation progress status is displayed in the command window.

[3] After all the hotfixes are installed, the installer prompts you to restart the system.
Type Y to restart the system immediately, or type N to restart the system later. 
You must restart the system for the changes to take effect.

Note that the installer prompts for a system restart only if hotfixes that require a reboot are included in the CP and are installed.

To install in the non-verbose (silent) mode
-------------------------------------------:

In the non-verbose (silent) mode, the cumulative patch (CP) installer does not prompt you for inputs and directly proceeds with the installation tasks. 
The installer displays the installation progress status in the command window.

Use the VxHFBatchInstaller.exe utility to install a CP from the command line.
The syntax options for this utility are as follows:

vxhfbatchinstaller.exe /CP:<CPName> [/Exclude:<HF1.exe>,<HF2.exe>...] [/PreInstallScript:<PreInstallScript.pl>] [/silent [/forcerestart]]

where,
    - CPName is the cumulative patch executable file name without the platform, architecture, and .exe extension.
For example, if the CP executable file name is CP12_SFWHA_60_W2K8_x64.exe, specify it as CP12_SFWHA_60.

    - HF1.exe, HF2.exe,... represent the executable file names of the hotfixes that you wish to exclude from the installation. Note that the file names are separated by commas, with no space after a comma. The CP installer skips the mentioned hotfixes during the installation.

    - PreInstallScript.pl is the Perl script that includes the pre-installation steps. These steps forcefully kill the required services and processes in case a graceful stop request does not succeed.
    Symantec recommends that you use this option and script only in case the CP installer fails repeatedly while performing the pre-installation tasks.

    - /silent indicates the installation is run in a non-verbose mode; the installer does not prompt for any inputs during the installation.

    - /forcerestart indicates that the system is automatically restarted, if required, after the installation is complete.


Perform the following steps:

[1] From the command prompt, navigate to the directory where the CP executable file is located and then run the file to extract the contents to a default location on the system. 
The installer displays a list of hotfixes that are included in the CP.
    - On 64-bit systems, the hotfix executable files are extracted to:
      "%commonprogramfiles(x86)%\Veritas Shared\WxRTPrivates\<CPName>"

The installer also lists the hotfixes that require a reboot of the system after the installation. If system reboot is not an option at this time, you can choose not to install these hotfixes. In such a case, launch the CP installer from the command line using the /exclude option.

[2] When the installer prompts whether you want to continue with the installation, type N to exit the installer.

[3] In the same command window, run the following command to begin the CP installation in the non-verbose mode:
vxhfbatchinstaller.exe /CP:<CPName> /silent

For example, to install an SFW HA 6.0 x64 CP for Windows Server 2008, the command is:
vxhfbatchinstaller.exe /CP:CP12_SFWHA_60 /silent

The installer performs the following tasks:

    - Extracts all the individual hotfix executable files
      On 64-bit systems, the files are extracted to %commonprogramfiles(x86)%\Veritas Shared\WxRTPrivates\<HotfixName>
    - Runs the pre-install tasks
    - Installs all the hotfixes sequentially
    - Runs the post-install tasks
The installation progress status is displayed in the command window.

[4] After all the hotfixes are installed, the installer displays a message for restarting the system.
You must restart the system for the changes to take effect.

Note that the installer prompts for a system restart only if hotfixes that require a reboot are included in the CP and are installed. The installer automatically restarts the system if you specified the /forcerestart option in step 3.

VxHFBatchInstaller usage examples
---------------------------------:

[+] Install CP in silent mode, exclude hotfixes HotFix_6_0_00001_2763375_w2k8_x64.exe and Hotfix_6_0_00002_362_2705769_w2k8_x64.exe:

vxhfbatchinstaller.exe /CP:CP12_SFWHA_60 /Exclude:HotFix_6_0_00001_2763375_w2k8_x64.exe,Hotfix_6_0_00002_362_2705769_w2k8_x64.exe /silent

[+] Install CP in silent mode, restart automatically:
vxhfbatchinstaller.exe /CP:CP12_SFWHA_60 /silent /forcerestart

Post-install steps
==================|
The following section describes the steps that must be performed after installing the hotfixes included in this CP.
Note that these steps are applicable only to the hotfixes listed in this section.

[1] HotFix_6_0_00007_2722108 
Perform the following steps only if you have installed HotFix_6_0_00007_2722108 as part of the CP installation.

Setting the DisableServerNameScoping and DisableStrictVirtualNameCheck registry keys
Perform the following steps to create and set the DisableServerNameScoping and DisableStrictVirtualNameCheck registry values. The FileShare agent uses DisableServerNameScoping to support non-scoped file shares, and DisableStrictVirtualNameCheck to make file shares available on the network even when the virtual name is not available.

Caution: Incorrectly editing the registry may severely damage your system. Make a backup copy before making changes to the registry.

To configure DisableServerNameScoping and DisableStrictVirtualNameCheck registry parameters:
1. To open the Registry Editor, click Start > Run, type regedit, and then click OK.
2. In the registry tree (on the left), navigate to HKLM\SOFTWARE\VERITAS\VCS\BundledAgents.
3. Click Edit > New > Key and create a key named Lanman, if it does not already exist.
4. Select Lanman, and then click Edit > New > Key and create a key named __GLOBAL__ (underscore-underscore-GLOBAL-underscore-underscore).
5. Select __GLOBAL__ and add a DWORD value with the Value name DisableServerNameScoping and the Value data 1.
   The value 1 indicates that the FileShare agent supports non-scoped file shares on Windows Server 2008 and 2008 R2 systems.
6. Select __GLOBAL__ and add a DWORD value with the Value name DisableStrictVirtualNameCheck and the Value data 1.
   Setting this value to 1 allows the FileShare agent to make the file shares available even if the virtual name is not available on the network.
   You must create the DisableStrictVirtualNameCheck value parallel to the file share scoping value DisableServerNameScoping.
7. Save your changes and exit the Registry Editor.
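
The same two values can also be applied by importing a Registration Entries (.reg) file. The following is a sketch assembled from the key path and value names in the steps above, not taken from any Symantec-supplied file; review it, and back up the registry, before importing it with regedit:

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\VERITAS\VCS\BundledAgents\Lanman\__GLOBAL__]
; 1 = FileShare agent supports non-scoped file shares
"DisableServerNameScoping"=dword:00000001
; 1 = file shares stay available even if the virtual name is not on the network
"DisableStrictVirtualNameCheck"=dword:00000001
```

Importing this file creates the Lanman and __GLOBAL__ keys if they do not already exist, which matches the effect of steps 3 through 6.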

-------------------------------------------------------+

[2] Hotfix_6_0_00003_2650336, HotFix_6_0_00008_2762153, HotFix_6_0_00015_2897781, Hotfix_6_0_00013_2882535 and Hotfix_6_0_00023_3122364

If the cluster is not configured before installing these hotfixes, the post-install task of modifying the resource type fails.
In that case, manually run the batch files for the hotfixes after the cluster is configured to update the type.
You can find the batch file names in the location where each hotfix is extracted.

-------------------------------------------------------+

Known issues
============|

The following section describes the issues related to the individual hotfixes that are included in this CP:

[1] Hotfix_6_0_00009_362_2761898 and HotFix_6_0_00008_2762153

- These Hotfixes are not qualified for the Microsoft Hyper-V guest operating systems.
- Fast failover and Intelligent Monitoring Framework (IMF) are not supported.
- To install or uninstall a Hotfix using the Hotfix installer, the corresponding VxHF.exe or VxHFW.exe must be invoked from the location where the Hotfix files are extracted.
- If a split-brain occurs, the application service group may not be available on any of the cluster nodes.
- If there are one or more resources of the VMNSDg type configured, then the resource type definitions for VMNSDg do not get deleted during the SFW HA Hotfix (2762153) uninstallation.

-------------------------------------------------------+

[2] The post-installation task of starting the Plugin-Host service fails on Windows Server Core systems.

The Plugin-Host service requires that the .NET Framework be present on the system where the service is to be started. Because the .NET Framework is not supported on Windows Server Core systems, the service cannot be started on these systems. Hence, for the hotfixes that have the post-installation task of starting the Plugin-Host service, the task fails. This is expected behavior and can be safely ignored.

-------------------------------------------------------+


REMOVING THE PATCH
------------------
NO


SPECIAL INSTRUCTIONS
--------------------
This fix is provided without warranty of any kind including the warranties of title or implied warranties of merchantability, 
fitness for a particular purpose and non-infringement. Symantec disclaims all liability relating to or arising out of this fix. 
It is recommended that the fix be evaluated in a test environment before implementing it in your production environment. 
When the fix is incorporated into a Storage Foundation for Windows maintenance release, the resulting Hotfix or Service Pack 
must be installed as soon as possible. Symantec Technical Services will notify you when the maintenance release (Hotfix or Service Pack) 
is available if you sign up for notifications from the Symantec support site http://www.symantec.com/business/support and/or 
from Symantec Operations Readiness Tools (SORT) http://sort.symantec.com.

Additional notes
================|

[+] To confirm the list of cumulative patches installed on a system, run the following command from the directory where the CP files are extracted:
vxhfbatchinstaller.exe /list

The output of this command displays a list of cumulative patches and the hotfixes that are installed as part of a CP. 
This command also displays the hotfixes that are included in a CP but are not installed on the system.

[+] To confirm the installation of the hotfixes, perform one of the following:
    - Run the following command:
      vxhf.exe /list
      The output of this command lists the hotfixes installed on the system.
    - In the Windows Add/Remove program, click "View installed updates" to view the list of the hotfixes installed on the system.

[+] The CP installer (vxhfbatchinstaller.exe) creates and stores logs at:
"%allusersprofile%\Application Data\Veritas\VxHF\VxHFBatchInstaller.txt"

[+] The hotfix installer (vxhf.exe) creates and stores logs at:
"%allusersprofile%\Application Data\Veritas\VxHF\VxHF.txt"

[+] For general information about the hotfix installer (vxhf.exe), please refer to the following technote:
http://www.symantec.com/docs/TECH73446

[+] To view a list of hotfixes already installed on a system, please refer to the steps mentioned in the following technote:
http://www.symantec.com/docs/TECH73438

[+] For information on uninstalling a hotfix, please refer to the steps mentioned in the following technote:
http://www.symantec.com/docs/TECH73443


OTHERS
------
NONE




