sfha-win_x64-CP6_SFWHA_61

 Basic information
Release type: Patch
Release date: 2017-02-28
OS update support: None
Technote: None
Documentation: None
Popularity: 2852 viewed
Download size: 67.76 MB
Checksum: 1938708614

 Applies to one or more of the following products:
Storage Foundation HA 6.1 On Windows x64

 Obsolete patches, incompatibilities, superseded patches, or other requirements:

This patch supersedes the following patches:
win_solutions-win_x64-Hotfix_6_1_00007_3553667 (obsolete), released 2014-10-24

 Fixes the following incidents:
3483164, 3500210, 3525531, 3547461, 3553667, 3558216, 3568040, 3590862, 3622272, 3627063, 3659693, 3684124, 3684343, 3691649, 3745619, 3752571, 3755803, 3767113, 3768192, 3771885, 3773252, 3773919, 3782628, 3820750, 3826326, 3836471, 3838760, 3839102, 3853418, 3853963, 3859624, 3861094, 3862936, 3863759, 3872010, 3874444, 3874745, 3880433, 3889526, 3889886, 3898318, 3901194, 3905599, 3910006

 Patch ID:
None.

Readme file
                          * * * READ ME * * *
             * * * Symantec Storage Foundation HA 6.1 * * *
                      * * * Patch 6.1.0.600 * * *
                         Patch Date: 2016-06-28


This document provides the following information:

   * PATCH NAME
   * OPERATING SYSTEMS SUPPORTED BY THE PATCH
   * PACKAGES AFFECTED BY THE PATCH
   * BASE PRODUCT VERSIONS FOR THE PATCH
   * SUMMARY OF INCIDENTS FIXED BY THE PATCH
   * DETAILS OF INCIDENTS FIXED BY THE PATCH
   * INSTALLATION PRE-REQUISITES
   * INSTALLING THE PATCH
   * REMOVING THE PATCH


PATCH NAME
----------
Symantec Storage Foundation HA 6.1 Patch 6.1.0.600


OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
Windows Server 2008 R2 X64
Windows Server 2012 X64
Windows Server 2012 R2 X64



BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
   * Symantec Storage Foundation HA 6.1


SUMMARY OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
Patch ID: SFW HA CP6
* 3910006 (3910307) Issue 1: The status of a Symantec Cluster Server (VCS) cluster is displayed incorrectly on the Veritas Operations Manager interface.
Issue 2: High Availability (HA) commands take a long time to execute in a secure cluster.
Issue 3: CP4 and CP5 post-install tasks take a long time to complete or fail.
* 3905599 (3905019) WAC resource stops sending IAmAlive messages after a certain duration of time and reports the TCP/IP connection as unresponsive.
* 3500210 (3497449) Cluster disk group loses access to the majority of its disks due to a SCSI error.
* 3525531 (3525528) VxSVC crashes with heap corruption if the paging file is disabled.
* 3547461 (3547460) The Add Disk to Dynamic Disk Group wizard stops responding on the second panel if the wizard is launched from a disk without sufficient free space.
* 3558216 (3456746) The VxSvc service crashes with heap corruption in VRAS.dll.
* 3568040 (3568039) The VDID module fails to generate a Unique Disk ID for the Fujitsu ETERNUS array LUNs.
* 3622272 (3622271) When you import a cluster disk group with a large number of LUNs and volumes, the server stops responding.
* 3483164 (3483164) In a cluster configured to use single sign-on authentication (secure cluster), the application service group configuration wizards fail with a 
"CmdServer service not running on system:nodename" error.
* 3590862 (3572840) The VxExplorer and the hagetcf utility fails to collect the GAB/LLT logs.
* 3627063 (3589195) For SQL server 2014, in the Quick Recovery wizard, only the schedule for VSS Snapshots gets created. The vxsnap prepare and vxsnap create commands are not executed.
* 3684124 (3684123) When you convert a basic disk with partition to a Cluster Disk Group, the Volume created on the basic group may be marked as Missing.
* 3691649 (3691647) The system crashes when you perform a read/write operation on a mirrored volume with Dirty region logging (DRL).
* 3745619 (3745616) An enclosure is not formed for LUNs created on the EMC Invista (VPLEX) array.
* 3755803 (3755802) The system stops responding when replication is active.
* 3771885 (3771877) Storage Foundation for Windows does not provide array-specific support for Infinidat arrays other than DSM.
* 3659693 (3659697) Some of the cluster nodes may crash (BSOD) when all the network links are unconfigured from all the cluster nodes.
* 3684343 (3684342) A system crash occurred when VxExplorer gathered the GAB/LLT logs.
* 3773919 (3773915) Lanman resource fails to come online.
* 3773252 (3521503) The Lanman resources fault when checking whether the virtual server name is alive.
* 3553667 (3553664) The Fire Drill (FD), Disaster Recovery (DR), and Quick Recovery (QR) wizards, and the Solutions Configuration Center (SCC) do not support Microsoft SQL Server 2014.
* 3752571 (3752570) Volume becomes inaccessible after you perform multiple snapshot operations followed by multiple snapback (resync from replica) operations.
* 3767113 (3767110) Mirrored volumes with DCO log volumes and DRL logs are not resynchronized using DRL, after a system crash.
* 3768192 (3819634) The storage reclamation operation may hang on an array that supports the UNMAP command.
* 3820750 (3820747) A VMwareDisks resource takes a long time to come online or to go offline.
* 3782628 (3615728) VMwareDisks agent supports only 'persistent' virtual disk mode.
* 3839102 (3839096) Storage Foundation for Windows (SFW) broadcasts the license keys over the network on the UDP port number 2164.
* 3826326 (3826318) When you add a large number of disks and perform a Rescan, the Veritas Enterprise Administrator (VEA) GUI hangs on the 'Disks View'.
* 3836471 (3831005) DR wizard does not automatically populate AutoStartList for RVG service groups.
* 3859624 (3859623) The VxSVC service crashes during the snapshot operation.
* 3861094 (3861093) When you reattach a mirror plex to a volume, the system crashes (BSOD).
* 3838760 (3838759) A system crashes when an excessive number of large write I/O requests are received on volumes mirrored using Dirty Region Logging (DRL).
* 3853963 (3854164) When the logged-on user is a nested domain user with administrative privileges, the user is not able to perform Storage Foundation for Windows (SFW) administrative tasks on the SFW 6.1 server.
* 3853418 (3853417) Issue 1: A disk group created on EMC Invista devices is automatically deported during another disk group import.
Issue 2: The value of the Vendor Disk ID (VDID) attribute for the EMC Invista devices is not consistent.
Issue 3: The failure of the SCSI INQUIRY command in the ddlprov.dll file causes incorrect generation of VDID.
* 3862936 (3862932) Rapid memory consumption occurs when Fire Drill is configured.
* 3863759 (3862346) After reclaiming storage on disks that support thin provisioning and storage reclamation, SQL Server data might get corrupted.
* 3872010 (3854164) When the logged-on user is a nested domain user with administrative privileges, the user is not able to perform Storage Foundation for Windows (SFW) administrative tasks on the SFW 6.1 server.
* 3874444 (3874443) After upgrading from SFW 6.1 CP3 to CP4, the disk queue length keeps increasing.
* 3880433 (3880457) Generic VDID is displayed for LUNs belonging to Dell PowerVault MD3000 array series.
* 3874745 (3867326) VMDg resource fails to get online with error "OnlineDg failed. Error: 1".
* 3889526 (3880490) Issue 1: Users cannot increase the value of the LLT tunable parameter 'peerinact' beyond 3200 on Windows.
Issue 2: Heartbeat issues affecting virtual and physical servers.
* 3889886 (3889885) VVR replication switches between "Active" and "Activating" state.
* 3898318 (3898317) In Microsoft Failover Cluster Manager, volumes created on a Volume Manager disk group are not auto-mounted after an offline/online/failover operation of the VMDg resource.
* 3901194 (3898322) A system may crash when a disk having volume snapshot is removed.


DETAILS OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
This patch fixes the following incidents:

Patch ID: SFW HA CP6

* 3910006 (Tracking ID: 3910307)

SYMPTOM:
Issue 1: Even though a cluster is online and active, its information is displayed incorrectly on the Veritas Operations Manager interface.
Issue 2: High Availability (HA) commands take a long time to execute in a secure cluster.
Issue 3: After any one of the following patches is installed, the VCS engine fails to start on a local node:
- Hotfix_6_1_00018_3856596
- CP4  
- CP5

DESCRIPTION:
Issue 1: Veritas Operations Manager runs the HA commands to discover a cluster and its details. The HA commands from Veritas Operations Manager work with the default %VCS_HOME% path, so they fail if VCS is installed at a custom location. Thus, when VCS is installed at a custom location, Veritas Operations Manager is unable to identify its correct status.
Issue 2: Executing HA commands may take approximately 5 to 7 seconds when the primary domain controller (PDC) is located in a different geographical location. This happens because the domain controller related APIs take a long time to complete when they are invoked from AT to create AT credentials.
Issue 3: An error message in the engine_A.log file indicates that the local node has left the cluster due to a difference in the VCS engine version. This issue occurs due to a version mismatch between the VCS engine on the upgraded node and the non-upgraded nodes. The engine version on the upgraded nodes is incorrectly set to 7.0.

RESOLUTION:
Issue 1: This patch addresses the issue by updating the HA executables to identify the home directory location from the appropriate registry key instead of the default %VCS_HOME% path.
Issue 2: This issue is fixed so that the time-consuming AT APIs are invoked only the first time an HA command is executed. Subsequent executions of HA commands use the cached credentials.
Issue 3: This patch corrects the VCS engine version that was shipped in the hotfix 'Hotfix_6_1_00018_3856596' and the cumulative patches CP4 and CP5 from 7.0 to 6.1.
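The credential-caching fix described under Issue 2 amounts to memoizing an expensive lookup. A minimal, illustrative sketch (not VCS code; the function and token names are hypothetical):

```python
# Illustrative sketch: only the first call pays the slow AT/domain-controller
# cost; subsequent HA command executions reuse the cached credentials.
import functools

calls = {"count": 0}  # tracks how many times the slow path runs

@functools.lru_cache(maxsize=None)
def get_credentials(user):
    calls["count"] += 1  # stands in for the multi-second AT/DC round trip
    return f"token-for-{user}"

get_credentials("admin")  # first HA command: slow path, credentials created
get_credentials("admin")  # subsequent HA commands: served from the cache
print(calls["count"])     # 1
```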

NOTE: You do not need to restart the system after installing this patch.

NOTE: This hotfix supersedes Hotfix_6_1_00018_3856596, which was released earlier.

FILE / VERSION:
haagent.exe/6.1.28.315
haalert.exe/6.1.28.315
haattr.exe/6.1.28.315
hacf.exe/6.1.28.315
hacli.exe/6.1.28.315
haclus.exe/6.1.28.315
haconf.exe/6.1.28.315
had.exe/6.1.28.315
hadebug.exe/6.1.28.315
hagrp.exe/6.1.28.315
hadiscover.exe/6.1.28.315
hahb/6.1.28.315
halog/6.1.28.315
hamsg/6.1.28.315
hareg/6.1.28.315
hares/6.1.28.315
hashadow/6.1.28.315
hasite/6.1.28.315
hastart/6.1.28.315
hastatus/6.1.28.315
hastop/6.1.28.315
hasys/6.1.28.315
hatype/6.1.28.315
hauser/6.1.28.315
hanotify/6.1.28.315

* 3905599 (Tracking ID: 3905019)

SYMPTOM:
WAC resource stops sending IAmAlive messages after a certain duration of time and reports the TCP/IP connection as hung.

DESCRIPTION:
VCS incorrectly calculates the current time for a WAC resource due to a limitation of a Microsoft API (GetTickCount()). This current time is further used to control the sending of IAmAlive messages between the WAC resources. Therefore, after a certain duration of time, the WAC resource stops sending IAmAlive messages to its peer and causes the connection to become unresponsive.
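The GetTickCount() limitation is its 32-bit range: the counter wraps to zero after about 49.7 days of uptime, which breaks naive elapsed-time arithmetic. A small sketch (illustrative, not the agent's code) of the failure mode and why a 64-bit counter avoids it:

```python
# Illustrative sketch of why a 32-bit millisecond tick counter misbehaves.
# GetTickCount() wraps at 2**32 ms (~49.7 days), so plain subtraction can go
# wrong after the wrap; GetTickCount64() never wraps in practice.

WRAP = 2 ** 32  # the 32-bit counter returns to 0 after ~49.7 days of uptime

def elapsed_naive(start_ms, now_ms):
    """Naive subtraction: a negative result after the counter wraps."""
    return now_ms - start_ms

def elapsed_64(start_ms, now_ms):
    """With a 64-bit counter there is no wrap, so subtraction stays valid."""
    return now_ms - start_ms

# The resource last sent IAmAlive 1000 ms before the wrap, and "now" is
# 6000 ms later, i.e. 5000 ms after the 32-bit counter wrapped to zero.
start = WRAP - 1000
now_32 = (start + 6000) % WRAP  # 32-bit counter has wrapped: now reads 5000
now_64 = start + 6000           # 64-bit counter keeps counting

print(elapsed_naive(start, now_32))  # large negative value: timer logic breaks
print(elapsed_64(start, now_64))     # 6000, as expected
```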

RESOLUTION:
This hotfix addresses the issue by correctly calculating the current time for a WAC resource using a different API (GetTickCount64()).

FILE / VERSION:
wac.exe / 6.1.27.315

* 3500210 (Tracking ID: 3497449)

SYMPTOM:
Cluster disk group loses access to the majority of its disks due to a SCSI error.

DESCRIPTION:
This issue occurs while processing a request to renew or query the SCSI reservation on a disk belonging to a cluster disk group. The operation fails because of the following SCSI command error:
Unit Attention - inquiry parameters changed (6/3F/03)
As a result, the cluster disk group loses access to the majority of its disks.

RESOLUTION:
This issue has been resolved by retrying the SCSI reservation renew or query request.
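The retry approach can be pictured with a generic sketch (illustrative only; the function names are hypothetical, and the error string is taken from the description above):

```python
# Generic retry sketch: re-issue a request that can fail with a transient
# Unit Attention condition, as the fix does for SCSI reservation
# renew/query requests.
def with_retry(op, retries=3):
    last = None
    for _ in range(retries):
        ok, result = op()
        if ok:
            return result
        last = result  # remember the transient error for reporting
    raise RuntimeError(f"failed after {retries} attempts: {last}")

attempts = {"n": 0}

def flaky_scsi_request():
    attempts["n"] += 1
    if attempts["n"] < 2:
        # the first attempt fails with the transient SCSI error
        return False, "Unit Attention - inquiry parameters changed (6/3F/03)"
    return True, "reservation renewed"

print(with_retry(flaky_scsi_request))  # reservation renewed
```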

File Name / Version:
vxio.sys / 6.1.00001.445

* 3525531 (Tracking ID: 3525528)

SYMPTOM:
VxSVC crashes with heap corruption if the paging file is disabled.

DESCRIPTION:
This issue occurs if the Windows paging file is disabled. In that case, the Veritas Enterprise Administrator (VxSVC) service crashes due to heap corruption while loading the mount provider, because the provider variable was not initialized properly.

RESOLUTION:
This issue has been resolved by initializing the provider variable properly.
NOTE: Additional information has been added to the hotfix installation instructions. Follow them to resolve the issue completely.

File Name / Version:
mount.dll / 6.1.00004.445

* 3547461 (Tracking ID: 3547460)

SYMPTOM:
The Add Disk to Dynamic Disk Group wizard stops responding on the second panel if the wizard is launched from a disk without sufficient free space

DESCRIPTION:
This issue occurs when you launch the Add Disk to Dynamic Disk Group wizard from a disk that does not have sufficient free space. When you click "Next" on the second panel of the wizard, it gives a null pointer exception. Therefore, you cannot proceed to the next panel and the wizard needs to be closed.

RESOLUTION:
This issue has been fixed by modifying the code to handle null check so that now the wizard gives a proper error message instead of the null pointer exception.

File Name / Version:
VxVmCE.jar / N/A

* 3558216 (Tracking ID: 3456746)

SYMPTOM:
VxSvc service crashes with heap corruption in VRAS.dll

DESCRIPTION:
VRAS discards a malformed packet it receives when the size of the packet is too large. While freeing the IpmHandle pointer, it encounters an issue and eventually crashes.

RESOLUTION:
This hotfix resolves the crash that occurred during the handling of the malformed packet.

File Name / Version:
vras.dll / 6.1.00007.445

* 3568040 (Tracking ID: 3568039)

SYMPTOM:
VDID module fails to generate Unique Disk ID for the Fujitsu ETERNUS array LUNs

DESCRIPTION:
This issue occurs during generic Veritas Disk ID (VDID) formation for the Fujitsu ETERNUS array LUNs. The SFWVDID module claims the wrong descriptor ID and fails to generate a Unique Disk ID for the array LUNs.

RESOLUTION:
This issue has been fixed as part of the SFWVDID binary veritas.dll, which discovers the VDID generically for the given LUNs. SFW now correctly discovers the VDID generically for the Fujitsu ETERNUS array LUNs.

File Name / Version:
veritas.dll / 6.1.00008.445

* 3622272 (Tracking ID: 3622271)

SYMPTOM:
When you import a cluster disk group with a large number of LUNs and volumes, the server stops responding.

DESCRIPTION:
When you import a cluster disk group with a large number of LUNs and volumes, it causes a deadlock between the Mount Manager and the vxio driver, due to which the Windows server on which SFW or SFW HA is installed, stops responding.

RESOLUTION:
This issue has been resolved by modifying the vxio driver.

FILE / VERSION:
vxio.sys / 6.1.00011.445

* 3483164 (Tracking ID: 3483164)

SYMPTOM:
In a cluster configured to use single sign-on authentication (secure cluster), the application service group configuration wizards fail with a 
"CmdServer service not running on system:nodename" error.

DESCRIPTION:
If a cluster is configured to use single sign-on authentication (secure cluster), then the application service group configuration wizards fail to connect to the remote cluster nodes with the following error:
"CmdServer service not running on system:nodename"
This error is displayed even though the VCS Command Server service is running on all the nodes. 
The wizard displays the error on the System Selection panel.
The issue occurs because the remote node IP address used for its authentication is not reachable.

RESOLUTION:
This hotfix enhances the wizard behaviour.
If the IP address of the remote cluster node is not reachable, then the wizard uses the hostname to connect to the nodes.

To resolve the issue:
1) Exit the service group configuration wizard.
2) Install the hotfix.
3) Run the service group configuration wizard again.

Binary File / Version:
WizCmdClient.dll / 6.1.1.283

* 3590862 (Tracking ID: 3572840)

SYMPTOM:
The VxExplorer and the hagetcf utility fail to collect the GAB/LLT logs in the following two scenarios:
- The product is installed at a location other than the C: drive.
- The product is installed on the C: drive, but the Windows option to generate file names in the 8dot3 format is disabled.

DESCRIPTION:
During the product installation, by default, the installer installs the product package at the following location:
C:\program files\Veritas
Instead of this default location if you choose to install the package at a custom location (other than the c: drive) or if you disable (on the C: drive) the Windows option to generate 8dot3 format file names,
then the VxExplorer and the hagetcf utility fails to collect the GAB/LLT logs.
This issue occurs because the VxExplorer and the hagetcf utility uses the 8dot3 format files names (instead of the actual installation path) to collect the logs.
When the product is installed at a location on C: drive, the VxExplorer or the hagetcf utility can access the files using the 8dot3 format file names. 
However, if the product is installed at a location other than the C: drive or if the Windows option to generate the file names in 8dot3 format is disabled on C: drive, 
then the VxExplorer and the hagetcf utility fails to access the files and to collect the logs.

RESOLUTION:
The behavior of the VxExplorer and the hagetcf utility is modified through this hotfix.
The VxExplorer and the hagetcf utility now use the complete installation path to collect the logs.
Note: The hotfix installer replaces the getcomms.pl file at the following location:
"%vcs_home%\bin\"
You must copy this file and replace its instance in the VxExplorer folder.  

FILE / VERSION:
getcomms.pl

* 3627063 (Tracking ID: 3589195)

SYMPTOM:
For SQL server 2014, in the Quick Recovery wizard, only the schedule for VSS Snapshots gets created. The vxsnap prepare and vxsnap create commands are not executed.

DESCRIPTION:
When you run the Quick Recovery wizard for SQL Server 2014, only the schedule for VSS Snapshots gets created. The vxsnap prepare and vxsnap create commands are not executed. Due to this, the scheduled snapshot does not occur.

RESOLUTION:
This issue has been resolved by adding support for SQL Server 2014.

FILE / VERSION:
vxsnapschedule.dll / 6.1.00012.445

* 3684124 (Tracking ID: 3684123)

SYMPTOM:
When you convert a basic disk with a partition to a Cluster Disk Group, the volume created on the basic disk may be marked as Missing.

DESCRIPTION:
When you create a volume on a basic disk and upgrade the basic disk to an SFW dynamic disk by creating a cluster disk group on it, the volume may be marked as Missing instead of healthy. To view the correct state of the volume, you need to deport and then import the cluster disk group.

Note: To upgrade a basic disk to an SFW dynamic disk, you must have a minimum of 16 MB of free space on the basic disk.

RESOLUTION:
This issue has been resolved by ensuring that the device name and other attributes for the VxVM volumes do not get overwritten.

FILE / VERSION:
ftdisk.dll / 6.1.00013.445

* 3691649 (Tracking ID: 3691647)

SYMPTOM:
When you create a mirrored volume with DRL and then perform a read/write operation on the volume, the system crashes (BSOD).

DESCRIPTION:
When you create a mirrored volume with DRL and then perform a read/write operation on the volume, the system crashes (BSOD) and the following error message is displayed: 
STOP: 0x000000B8
ATTEMPTED_SWITCH_FROM_DPC.

RESOLUTION:
This issue was because of a fault in the SFW vxio driver. It has been resolved by modifying the vxio driver.

FILE / VERSION:
vxio.sys / 6.1.00014.445

* 3745619 (Tracking ID: 3745616)

SYMPTOM:
SFW does not form an enclosure for the LUNs created on EMC Invista (VPLEX) arrays.

DESCRIPTION:
SFW currently does not provide support for advanced reporting on EMC Invista (VPLEX) arrays.
As a result, enclosures are not formed and a VDID is not generated for the LUNs created on these arrays.

RESOLUTION:
This hotfix addresses the issue by providing support for advanced reporting on EMC Invista (VPLEX) arrays. 
SFW now generates a VDID and forms enclosures for the LUNs created on EMC Invista (VPLEX) arrays.

FILE / VERSION:
emc.dll / 6.1.00016.445

* 3755803 (Tracking ID: 3755802)

SYMPTOM:
The system stops responding when replication is active.

DESCRIPTION:
When replication is active, the application I/O may hang when Volume Replicator replicates from the replicator log. This happens because at times, Volume Replicator may incorrectly allocate memory twice for data that is read back from the log, freeing it only once. If this happens, the memory module runs out of memory and causes the application I/Os to hang.

RESOLUTION:
The issue has been fixed by allocating memory to the READBACK memory pool only once during the read-back operation.

FILE / VERSION:
vxio.sys / 6.1.00017.445

* 3771885 (Tracking ID: 3771877)

SYMPTOM:
Storage Foundation does not provide array-specific support for Infinidat arrays other than DSM.

DESCRIPTION:
Storage Foundation does not provide any array-specific support for Infinidat arrays, except DSM. As a result, Storage Foundation is unable to perform any operations related to enclosures, thin provisioning reclamation, and track alignment on the LUNs created on Infinidat arrays.

RESOLUTION:
This hotfix addresses the issue by providing support for enclosures, thin provisioning reclamation and track alignment for Infinidat arrays.

Known issue: When the reclaim operation for the disk group is in progress and you disconnect a disk path, the reclaim operation fails for the last disk in the disk group.
Workaround: Retry the disk group reclaim operation.

Note: When you install this hotfix, a registry key is created to add track alignment support for the Infinidat array. This registry key is not deleted when you uninstall this hotfix.

FILE / VERSION:
NFINIDAT.dll / 6.1.00020.445
ddlprov.dll / 6.1.00020.445

* 3659693 (Tracking ID: 3659697)

SYMPTOM:
Systems may crash (BSOD) when all the network links are unconfigured from all the cluster nodes.

DESCRIPTION:
This issue occurs if all the network links are unconfigured from all the cluster nodes.
The network links from all the cluster nodes can be unconfigured in any of the following scenarios:
- User executes the following command:
  lltconfig -u
- Network adapter is disabled on all the machines
- Hyper-V live migration occurs
Once all the links are unconfigured, LLT reports to GAB that the network links are unconfigured from the local node as well as from the other nodes in the cluster.
This resets the value of the GAB membership to "Empty", due to which some of the nodes may crash.

RESOLUTION:
This hotfix addresses the issue by updating the LLT behaviour. 
LLT now does not report to GAB that the network links are disconnected from the local node.
This prevents the GAB membership from being reset to "Empty", and the cluster nodes do not crash. 

FILE / VERSION:
llt.sys / 6.1.10.298

* 3684343 (Tracking ID: 3684342)

SYMPTOM:
When VxExplorer gathers the GAB/LLT logs, the program crashes or causes a system crash.

DESCRIPTION:
This issue occurs when the lltshow utility attempts to fetch information about the LLT nodes or links using the following options:
lltshow -n -1
lltshow -l -1
VxExplorer executes these commands when gathering the logs. Therefore, you might encounter the issue when you run VxExplorer with the GAB/LLT option.

RESOLUTION:
This hotfix addresses the issue by updating the lltshow utility to identify the correct sizes of the LLT nodes and links.

FILE / VERSION:
lltshow.exe / 6.1.11.301

* 3773919 (Tracking ID: 3773915)

SYMPTOM:
The Lanman resource fails to come online due to certain DNS settings.

DESCRIPTION:
This issue occurs if your setup uses CNAME records instead of HOST (A) records for the virtual name. The Lanman agent faults when it performs a duplicate name check, because it incorrectly looks at non-HOST records.

RESOLUTION:
This hotfix addresses the issue by updating the Lanman agent to look at only the HOST records from a DNS query.

FILE / VERSION:
Lanman.dll / 6.1.12.308

* 3773252 (Tracking ID: 3521503)

SYMPTOM:
When bringing the Lanman resource online, the Lanman agent checks whether the virtual server name is alive. If the virtual server name is alive, the resource faults.

DESCRIPTION:
This issue occurs if your setup contains DNS entries that point to a virtual IP address that is online. The Lanman agent faults when it performs a duplicate name check.

RESOLUTION:
This hotfix addresses the issue by updating the Lanman agent to skip the duplicate name check.

ADDITIONAL TASKS: After you apply this hotfix, perform the following tasks on all the nodes in the cluster.
1. Open the registry editor.
2. Navigate to the registry key:
'HKLM\SOFTWARE\VERITAS\VCS\BundledAgents\Lanman\<Lanman_Resource_Name>'
or
'HKLM\SOFTWARE\VERITAS\VCS\BundledAgents\Lanman\__GLOBAL__' 
3. Create the DWORD key 'DisablePingCheckForDND' with the value '1'.
Note: If you create this tunable parameter under the '__GLOBAL__' key, it applies to all the Lanman resources configured in the cluster.
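The manual registry steps above can also be performed from an elevated command prompt. A sketch using the built-in reg.exe tool (the key path and value name are taken from this document; shown here for the cluster-wide '__GLOBAL__' key):

```shell
REM Disable the Lanman duplicate-name ping check for all Lanman resources
reg add "HKLM\SOFTWARE\VERITAS\VCS\BundledAgents\Lanman\__GLOBAL__" /v DisablePingCheckForDND /t REG_DWORD /d 1 /f
```

As with the manual steps, run this on every node in the cluster.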

FILE / VERSION:
Lanman.dll / 6.1.12.308

* 3553667 (Tracking ID: 3553664)

SYMPTOM:
Unable to configure SQL Server 2014 applications using the FD, DR, or QR wizards.

DESCRIPTION:
The FD, DR, and QR wizards, and the SCC, did not support SQL Server 2014.

RESOLUTION:
This hotfix provides support for SQL Server 2014 in the FD, DR, and QR wizards, and the SCC.

FILE / VERSION:
DRPluginProxy.dll / 6.1.00007.351
QuickRecovery.dll / 6.1.00007.351
CCFEngine.exe.config / -
00_HA_Solutions.adv-xml / - 

Note: For the QR wizard to function properly with SQL Server 2014, you must apply Hotfix_6_1_00012_445_3627063 along with the current hotfix.

* 3752571 (Tracking ID: 3752570)

SYMPTOM:
Volume becomes inaccessible after you perform multiple snapshot operations followed by multiple snapback (resync from replica) operations.

DESCRIPTION:
When you perform multiple snapshot operations followed by multiple snapback (resync from replica) operations, the Data Change Object (DCO) maps for the snapshot volumes are not updated correctly. This results in corruption of the main volume during the subsequent restore operation, and the volume becomes inaccessible.

RESOLUTION:
This issue has been resolved by modifying the code to correctly update the DCO map during the snapback operation.

File Name / Version:
vxconfig.dll / 6.1.00018.445

* 3767113 (Tracking ID: 3767110)

SYMPTOM:
Mirrored volumes with DCO log volumes and DRL logs are not resynchronized using DRL, after a system crash.

DESCRIPTION:
This issue occurs for mirrored volumes with Data Change Object (DCO) log volumes and Dirty Region Logging (DRL) logs in case of a system failure. Here, instead of using DRL to quickly resynchronize all the copies of a mirrored volume, the system restores all mirrors of the volume by copying the full contents of the volume between its mirrors. This is a lengthy and I/O-intensive process.

RESOLUTION:
This hotfix resolves the issue by modifying SFW behaviour.

Note: If your existing mirrored volumes have DCO log volumes and DRL logs, you must delete and then recreate the DRL logs.

FILE / VERSION:
vxio.sys / 6.1.22.445
vxboot.sys / 6.1.22.445
vxconfig.dll / 6.1.22.445

* 3768192 (Tracking ID: 3819634)

SYMPTOM:
The storage reclamation operation may hang, and the system may become unresponsive, on an array that supports the UNMAP command.

DESCRIPTION:
SFW wizards and CLI commands provide support for reclaiming the unused storage from the disks that support reclamation.
If these disks belong to an array that supports the UNMAP command, the SFW operations that are initiated for reclamation may hang.
As a result, the system becomes unresponsive. 
This issue occurs because of a deadlock in vxio.sys.

RESOLUTION:
The SFW behaviour to reclaim the unused storage has been modified as part of this hotfix.
SFW now overcomes the deadlock in vxio.sys during the reclamation of the unused storage.

Note: This issue is seen only on Windows Server 2012/R2.

FILE / VERSION:
vxio.sys / 6.1.00023.445

* 3820750 (Tracking ID: 3820747)

SYMPTOM:
The VMwareDisks agent takes longer than expected to bring a VMwareDisks resource online or to take it offline.

DESCRIPTION:
In large ESX or vCenter configurations with a large number of datastores, the VMwareDisks agent does not work efficiently.

RESOLUTION:
This hotfix addresses the issue by enhancing the VMwareDisks agent to work more efficiently in large ESX or vCenter configurations.

FILE / VERSION:
VMwareDisks.dll / 6.1.14.313
VMwarelib.dll / 6.1.14.313

NOTE: Veritas recommends that you set the NumThreads attribute of the VMwareDisks agent to 1; otherwise, the agent may fail intermittently.

* 3782628 (Tracking ID: 3615728)

SYMPTOM:
VMwareDisks agent supports only 'persistent' virtual disk mode.

DESCRIPTION:
The VMwareDisks agent currently supports only 'persistent' virtual disk mode.
If you have configured the disk in 'independent' mode, in the event of a failover, the VMwareDisks agent sets the mode to 'persistent'.

RESOLUTION:
The behaviour of the VMwareDisks agent is modified through this hotfix. 
The VMwareDisks agent now supports the 'independent' disk mode.
An attribute, 'VirtualDiskMode', is added to the VMwareDisks agent's attributes. 
You can set the value of this attribute to 'persistent', 'independent_persistent', or 'independent_nonpersistent'.
By default, the value is set to 'persistent'. You must modify the value after you configure application monitoring.
Note: The VMwareDisks agent does not detect the mode in which the disk is configured. After a failover, the disk is attached in the mode defined in the attribute value.
For more details about the disk modes, refer to the VMware documentation.

To modify the value, use the VCS Java Console or perform the following steps through the command line:
1. If you have configured the disk in 'persistent' mode, bring the service group offline.
   hagrp -offline service_group -sys system_name
2. Change the cluster configuration to read-write mode.
   haconf -makerw
3. Modify the attribute value.
   hares -modify resource_name VirtualDiskMode independent_persistent
4. Save the cluster configuration and change the mode back to read-only.
   haconf -dump -makero

FILE / VERSION:
VMwareDisks.dll / 6.1.13.311
VMwarelib.dll / 6.1.13.311
VMwareDisks.xml

* 3839102 (Tracking ID: 3839096)

SYMPTOM:
SFW broadcasts the license keys over the network on the UDP port number 2164.

DESCRIPTION:
Even though the network duplication check is disabled, SFW broadcasts the license keys on UDP port 2164 every 30 minutes. SFW broadcasts these license keys with LIC_CHECK_SEND in
the header.

RESOLUTION:
This hotfix resolves the issue by disabling the broadcast if the network duplication check is disabled.

FILE / VERSION:
sysprov.dll / 6.1.26.445

* 3826326 (Tracking ID: 3826318)

SYMPTOM:
When you add a large number of disks and perform a Rescan, the VEA GUI hangs on the 'Disks View'.

DESCRIPTION:
This issue occurs when you add a large number of Logical Units (LUNs) or disks to the host and perform a Rescan. When you select the 'Disks View' on the VEA GUI, the GUI stops responding.

RESOLUTION:
This issue has been resolved by adding two VEA Refresh Timeout tunables. You can use the following tunables to control the intervals in which the view is refreshed:
- Quick Refresh Timeout - It is the minimum time interval after which any change will reflect in GUI. The Quick Refresh Timeout value can range in between 20 to 2500 milliseconds.

- Delayed Refresh Timeout - Used to optimize refreshes when a large number of events or notifications arrive. If a large number of notifications arrive within the 'quick refresh timeout' interval, then instead of refreshing the view at the 'quick refresh timeout' interval, the 'delayed refresh timeout' interval is used. 
The Delayed Refresh Timeout value can range from 100 to 25000 milliseconds.

For example, suppose the Quick Refresh Timeout is set to 20 ms, and the Delayed Refresh Timeout is set to 100 ms. 
When the first update notification arrives, the view is refreshed after 20 ms. But if more than one update notification arrives within that 20 ms interval, then instead of refreshing the view after 20 ms, the view is refreshed after 100 ms.

To access these tunables, select Preferences from the VEA Tools menu. In the dialog box that appears, select the Advanced tab.

Note: The value for the VEA Refresh Timeout tunables will vary depending on your environment.  

Prerequisite: Before you proceed with installing the current hotfix, make sure that Hotfix_6_1_00011_445_3622272 is installed.

FILE / VERSION:
VxVmCE.jar
OBGUI.jar
ci.jar
obCommon.jar

* 3836471 (Tracking ID: 3831005)

SYMPTOM:
The Disaster Recovery Configuration Wizard does not set the AutoStartList attribute when creating an RVG service group.

DESCRIPTION:
The Disaster Recovery Configuration Wizard creates an RVG service group as part of configuring disaster recovery (DR) for a cluster. However, when creating the RVG service group, the wizard does not automatically populate its AutoStartList attribute with the names of the nodes in that cluster.
If, for some reason, all the cluster nodes restart at the same time, the cluster does not know which node to bring the RVG online on, because AutoStartList is empty. Since the RVG is not online, the replication does not start automatically. In such a scenario, the AutoStartList attribute needs to be updated manually so that the replication can start automatically.

RESOLUTION:
The Disaster Recovery Configuration Wizard is now enhanced to set the value of the AutoStartList attribute automatically while creating the RVG service group.

FILE / VERSION:
DRPluginProxy.dll / 6.1.00016.370

NOTE: This hotfix depends on Hotfix_6_1_00007_3553667.

* 3859624 (Tracking ID: 3859623)

SYMPTOM:
The VxSVC service crashes during the snapshot operation.

DESCRIPTION:
During the snapshot operation, a new volume record is created in the vold for the snapshot volume.  This record is incomplete until the transaction commits. During a parallel activity (like Refresh), this incomplete record may get accessed by the provider, which causes the VxSVC service to crash.

RESOLUTION:
This issue has been resolved by ensuring that incomplete records are not exposed.

FILE / VERSION:
vxconfig.dll / 6.1.00031.445

* 3861094 (Tracking ID: 3861093)

SYMPTOM:
When you reattach a mirror plex to a volume, the system crashes (BSOD).

DESCRIPTION:
This issue occurs when you enable FastResync (FR). When the re-synchronization task is performed with multiple threads, the system crashes due to memory allocation issues.

RESOLUTION:
This hotfix resolves the issue by modifying the multi-thread re-synchronization behavior.

FILE / VERSION:
vxconfig.dll / 6.1.00032.445

* 3838760 (Tracking ID: 3838759)

SYMPTOM:
A system crashes when an excessive number of large write I/O requests are received on volumes mirrored using DRL.

DESCRIPTION:
A system crashes when an excessive number of I/O requests that involve more than 1 MB of data write operations are received on a volume that is mirrored using DRL.
The issue occurs because the excessive number of large write I/O requests depletes the available system Page Table Entries (PTEs).
When system PTEs run low, the SFW driver fails to map the application I/O buffer to the system address space.

RESOLUTION:
The SFW behavior to map the application I/O buffer to the system address space has been modified as part of this hotfix.
SFW now breaks the larger I/Os into smaller I/Os and then tries to map the application buffer into system address space.

FILE / VERSION:
vxio.sys / 6.1.00027.446

* 3853963 (Tracking ID: 3854164)

SYMPTOM:
When the logged-on user is a nested domain user with administrative privileges, the user is not able to perform SFW administrative tasks on the SFW 6.1 server.

DESCRIPTION:
When you connect to the SFW server through the logged-on user option as a nested domain user (with administrative rights on the server), you are able to connect to the VEA server but are unable to perform any SFW-related administrative operation on the SFW server. This is due to a limitation in the existing Windows API for the nested domain user.

RESOLUTION:
This issue has been resolved by using the LDAP provider to fetch the user information from the active directory. The LDAP provider checks the nested user privileges and reports if the user has admin privileges.

FILE / VERSION:
vxvea3.dll / 3.4.730.0
OBGUI.jar / NA
ci.jar / NA
obCommon.jar / NA

* 3853418 (Tracking ID: 3853417)

SYMPTOM:
Issue 1: Disk groups created on EMC Invista devices are automatically deported during another disk group import.
Issue 2: The value of the VDID attribute for the EMC Invista devices is not consistent.
Issue 3: The failure of the SCSI INQUIRY command in the ddlprov.dll file causes incorrect generation of the VDID.

DESCRIPTION:
VDID is used to uniquely identify a disk in the vold. 
In some cases, the device discovery module (EMC.DLL) incorrectly categorizes the disks as Count Key Data (CKD) devices, leading to a change in the VDID. Due to this change, the vold is unable to find the disk with the original VDID. 
A failure of the SCSI INQUIRY command in the ddlprov.dll file on a disk can also cause an inconsistent VDID.
Subsequently, when you perform a refresh/rescan or import a disk group, the previously imported disk groups which are associated with disks with inconsistent VDID may get deported.

RESOLUTION:
This issue has been resolved by ensuring that the EMC Invista devices are not discovered as CKD devices and by fixing the logic in the ddlprov.dll file. 

FILE / VERSION:
emc.dll / 7.0.05301.1
ddlprov.dll / 7.0.05300.2

* 3862936 (Tracking ID: 3862932)

SYMPTOM:
Memory consumption increases rapidly in a FireDrill setup for IMF-enabled agents.

DESCRIPTION:
When a fire drill (FD) service group (SG) is brought online, the offline monitoring function of Intelligent Monitoring Framework (IMF) detects that the resources are up. IMF then notifies the corresponding application SG resources that the FD resources are online. Since the FireDrill attribute is set to True, this notification is ignored and the state of the resources remains unchanged in the application service group.
IMF once again registers for offline monitoring and notifies the application SG again that the FD resources are up. This continues in a loop.
The same behavior is observed if you configure an application for fire drill and then bring the application SG online.

RESOLUTION:
This hotfix addresses the issue by disabling offline IMF monitoring whenever the FireDrill attribute is set to true.
The function that initiates offline monitoring now skips the IMF registration if FireDrill is set to True. IMF now attempts to register for offline monitoring only a fixed number of times and then stops.

NOTE:
- If the Intelligent Monitoring Framework (IMF) Mode value is set to 3, IMF is applied to both offline and online monitoring. However, if the FireDrill attribute is set to True, offline 'intelligent' monitoring is disabled for as long as FireDrill is enabled.
- If the affected VCSAgDriver process continues to consume memory even after you have applied this hotfix, the process may not have been killed during the installation. If you encounter this situation, kill the process manually. Then, restart HAD by using the following commands:
1. hastop -local -force
2. hastart

FILE / VERSION:
vcsagfw.dll / 6.1.17.315
IAL.dll / 6.1.17.315

* 3863759 (Tracking ID: 3862346)

SYMPTOM:
After reclaiming storage on disks that support thin provisioning and storage reclamation, SQL Server data might get corrupted.

DESCRIPTION:
A file system bitmap is acquired during a reclaim storage operation. The reclaim region and the region that is represented by the file system bitmap may not be exactly aligned. Therefore, some region beyond the reclaim boundary may get reclaimed, and this region may be in active use. In such a scenario, the data in the region that is in active use can get corrupted.

RESOLUTION:
This hotfix addresses the issue by updating the boundary condition check to consider that the file system map may not completely match the reclaim region.

FILE / VERSION:
vxconfig.dll / 6.1.00033.445
vxio.sys / 6.1.00033.445

* 3872010 (Tracking ID: 3854164)

SYMPTOM:
When the logged-on user is a nested domain user with administrative privileges, the user is not able to perform SFW administrative tasks on the SFW 6.1 server.

DESCRIPTION:
When you connect to the SFW server through the logged-on user option as a nested domain user (with administrative rights on the server), you are able to connect to the VEA server but are unable to perform any SFW-related administrative operation on the SFW server.
This is due to a limitation in the existing Windows API for the nested domain user.

RESOLUTION:
This issue has been resolved by using Windows AuthZ APIs to fetch the user information from the active directory.
These APIs check the nested user privileges and report if the user has admin privileges.

File Name / Version:
vxvea3.dll / 3.4.731.0

* 3874444 (Tracking ID: 3874443)

SYMPTOM:
After upgrading from SFW 6.1 CP3 to CP4, the disk queue length keeps increasing.

DESCRIPTION:
After upgrading from SFW 6.1 CP3 to CP4, the disk queue length steadily increases. When you perform an I/O operation on the drives, the disk queue length increases but does not reduce after the operation is complete.
After some time, the SCOM server sends out alerts regarding the high disk queue length on the server.

RESOLUTION:
This hotfix addresses an issue where the value of the disk queue length performance counter does not reduce after the I/O operation is complete.

File Name / Version:
vxio.sys / 6.1.5305.445

* 3880433 (Tracking ID: 3880457)

SYMPTOM:
Although SFW creates separate enclosures for the Dell PowerVault MD3000 array series, a generic VDID appears for LUNs that belong to these arrays.

DESCRIPTION:
When LUNs from any of the following Dell PowerVault MD3000 array series are exposed to a system, SFW generates array-specific enclosures, but fails to generate proper VDIDs.
- MD3400
- MD3600
- MD3800
A generic VDID is displayed in VEA for the LUNs that belong to these arrays.
This issue occurs due to an internal error.

RESOLUTION:
The SFW behavior to generate VDIDs for Dell PowerVault MD3000 array series has been modified as part of this hotfix.
SFW now uses an updated Dell VDID library 'dell.dll' to populate proper VDIDs for LUNs that are exposed from Dell PowerVault MD3000 array series.

File Name / Version:
dell.dll / 7.0.16101.4

* 3874745 (Tracking ID: 3867326)

SYMPTOM:
The VMDg resource fails to come online with the error "OnlineDg failed. Error: 1".

DESCRIPTION:
If the VMDg is in the Disabled state, VCS cannot find the disk group and so it fails to bring the VMDg resource online. 
This situation may occur if the disk group gets disabled at the array level before VCS sends a 'disable' call to SFW. 
The updated state of the disk group is not available when VCS attempts to bring the VMDg resource online, so the import fails.

RESOLUTION:
This hotfix addresses the issue by updating the VMDg agent to introduce a rescan operation. 
If VCS encounters the error code 1 (one) while importing a disk group, it performs a rescan, and waits for the operation to complete. 
When a rescan is successful and the state of the disks is updated to Enabled, VCS attempts to bring the disk group online. 
The rescanning of disk groups depends on a newly introduced registry setting. Perform the following steps to add the 'RescanEnabled' registry key.
ADDITIONAL TASKS: After you apply this hotfix, perform the following tasks on all the nodes in the cluster. 
1. Open the registry editor. 
2. Navigate to the registry key: HKEY_LOCAL_MACHINE\SOFTWARE\VERITAS\VCS\BundledAgents\VMDg\ 
3. Create the DWORD key 'RescanEnabled' with the value '1'. 
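The steps above can also be performed from an elevated command prompt on each node. The following is a sketch that uses the standard Windows reg.exe utility; the key path and value are taken from the steps above:

```shell
rem Create the RescanEnabled DWORD value for the VMDg agent (run on all cluster nodes)
reg add "HKEY_LOCAL_MACHINE\SOFTWARE\VERITAS\VCS\BundledAgents\VMDg" /v RescanEnabled /t REG_DWORD /d 1 /f
```

The /f switch overwrites the value without prompting if it already exists.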
NOTE: After installing this hotfix on a secure cluster, the Startup Type of the VCS Authentication Service may change to Manual from Automatic. 
If this happens, make sure to change the Startup Type back to Automatic. 

FILE / VERSION:
VMDg.dll / 6.1.00019.315

* 3889526 (Tracking ID: 3880490)

SYMPTOM:
Issue 1: Intermittent network issues lead to a split-brain scenario in a VCS cluster.
Issue 2: LLT links are intermittently lost on VCS cluster nodes that are configured with LLT over ethernet.

DESCRIPTION:
Issue 1: Increasing the value of the LLT peerinact parameter on Windows to more than 3200 (32 seconds) does not take effect. 
For example, you may set the value as follows: 
lltconfig -T peerinact:3400 
However, it still remains capped at 3200.
Issue 2: This issue occurs when LLT honors an operating system (OS) call to unbind all the adapters (NICs) that are registered with NDIS. 
When the adapters are unbound, the corresponding LLT links are also closed.

RESOLUTION:
Issue 1: This hotfix provides an update that lets you set the peerinact value to a maximum of 214748300 (2147483 seconds). 
Issue 2: This hotfix resolves the issue by updating LLT so that after honoring a call from the OS to unbind an adapter, it binds that adapter back. 
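For example, after this hotfix is applied, a peerinact value above the former cap of 3200 takes effect; the value below is illustrative only, so choose one appropriate for your network:

```shell
lltconfig -T peerinact:6400
```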

FILE / VERSION: 
llt.sys / 6.1.23.315

* 3889886 (Tracking ID: 3889885)

SYMPTOM:
The VVR replication switches between the "Active" and "Activating" states, and then switches to DCM mode.

DESCRIPTION:
Volume Replicator enables you to perform a takeover operation to transfer a Primary role to a Secondary node. 
This operation is typically performed when an original Primary node fails or is brought down for maintenance.

After a takeover operation with fast-failback enabled, if you fail over the Volume Replicator disk group (vvr_dg) from the old Primary to a passive node in a cluster, then both nodes (the old Primary active node and the passive node) continue to send heartbeats to the new Primary node.
As a result, the VVR replication switches between "Active" and "Activating" state.
This issue occurs due to an internal error.

RESOLUTION:
VVR behavior is modified with this hotfix. It now de-references a node when it transitions from PRIMARY to an ACTING SECONDARY role.
As a result, only an active node can send heartbeats to a secondary node.

FILE / VERSION:
vxio.sys, 6.1.06102.445

* 3898318 (Tracking ID: 3898317)

SYMPTOM:
In Microsoft Failover Cluster Manager, volumes created on a Volume Manager disk group are not auto-mounted after an online/offline/failover operation of the VMDG resource.

DESCRIPTION:
In Microsoft Failover Cluster Manager, the volumes that are created on a Volume Manager disk group may fail to appear or may not be auto-mounted after the VMDG resource is failed over, brought online, or taken offline.
As a result, the volumes are inaccessible.
This issue occurs due to an internal error.

RESOLUTION:
The VMDG resource behavior is modified as part of this hotfix. 
The volumes are now mounted correctly after a VMDG resource offline/online/failover operation.

FILE / VERSION:
vxres.dll / 6.1.6103.445
cluscmd.dll / 6.1.6103.445

* 3901194 (Tracking ID: 3898322)

SYMPTOM:
A system may crash when a disk having volume snapshot is removed.

DESCRIPTION:
When you enable FastResync or take a volume snapshot, the disk having a mirrored volume is detached and a Data Change Object (DCO) is activated. The DCO keeps a track of any new I/Os served and the updates that take place when mirrored volume is detached.
When the updates are logged in the DCO, at any given point in time, the vxio driver processes 256 updates from a single StagedIO (SIO) and cleans up the underlying memory buffer in that SIO. If there are more than 256 updates in the SIO, the additional updates are processed in the next batch. 
During the processing of additional updates, if the vxio driver accesses the cleaned up memory, then a system crash may occur.

RESOLUTION:
The vxio behavior to clean up the underlying memory buffer from an SIO is modified with this hotfix. The memory is now cleaned up only after all the updates are processed.

FILE / VERSION:
vxio.sys / 6.1.06104.445



INSTALLING THE PATCH
--------------------
What's new in this CP
=====================

The following hotfixes have been added in this CP:
 - Hotfix_6_1_06102_445_3889886 
 - Hotfix_6_1_06103_445_3898318 
 - Hotfix_6_1_06104_445_3901194 
 - Hotfix_6_1_00027_3905599
 - Patch_6_1_00028_3910006
 
For more information about these hotfixes, see the "FIXED_INCIDENTS" section in this Readme.

Note: Create the cluster before you install this CP. If you have already installed the CP before the cluster was created, then create the cluster and execute the C:\Program Files (x86)\Common Files\Veritas Shared\WxRTPrivates\Hotfix_6_1_00014_3820750\InstallVMwareDisksType.bat batch file.

Install instructions
====================|

Download the appropriate cumulative public patch (CP) executable file to a temporary location on your system.

Each cumulative public patch includes the individual hotfixes that contain enhancements and fixes related to reported issues.
See the "FIXED_INCIDENTS" section for details.

Before you begin
----------------:
[1] Ensure that the system has all the latest Microsoft Windows updates installed before proceeding with the CP installation.

[2] For Windows Server 2008 R2, ensure that the following Microsoft Knowledge Base (KB) article is installed before proceeding with the CP installation: 
    - 3033929: Availability of SHA-2 code signing support for Windows 7 and Windows Server 2008 R2
    
[3] Ensure that the logged-on user has privileges to install the CP on the systems.

[4] One or more hotfixes that are included with this CP may require a reboot.
Before proceeding with the installation, ensure that the system can be rebooted.

[5] Veritas recommends that you close the Cluster Manager (Java Console) and the Veritas Enterprise Administrator (VEA) Console before installing this CP.

[6] Ensure that you close the Windows Event Viewer before proceeding with the installation. 

[7] Before installing the CP, ensure that the Startup type of the VCS Authentication Service is set to Automatic. This is applicable only if you have configured a secure cluster.

To install the CP in the silent mode
-----------------------------------:

Perform the following steps:

[1] Double-click the CP executable file to start the CP installation. 

The installer performs the following tasks:
    - Extracts all the individual hotfix executable files
      The files are extracted at %commonprogramfiles(x86)%\Veritas Shared\WxRTPrivates\<HotfixName>
    - Runs the pre-install tasks
    - Installs all the hotfixes sequentially
    - Runs the post-install tasks
The installation progress status is displayed in the command window.

[2] After all the hotfixes are installed, the installer prompts you to restart the system.
Type Y to restart the system immediately, or type N to restart the system later. 
You must restart the system for the changes to take effect.

Note that the installer prompts for a system restart only if hotfixes that require a reboot are included in the CP and are installed.

To install the CP using the command line
----------------------------------------:

Use the VxHFBatchInstaller.exe utility to install a CP from the command line.
The syntax options for this utility are as follows:

vxhfbatchinstaller.exe /CP:<CPName> [/PreInstallScript:<PreInstallScript.pl>] [/silent [/forcerestart]]

where,
    - CPName is the cumulative patch executable file name without the platform, architecture, and .exe extension.
      For example, if the CP executable name is CP6_SFWHA_61_W2K12_x64.exe, specify it as CP6_SFWHA_61.

    - PreInstallScript.pl is the Perl script that includes the pre-installation steps. These steps forcefully kill the required services and processes in case a graceful stop request does not succeed.
    Veritas recommends that you use this option and script only in case the CP installer fails repeatedly while performing the pre-installation tasks.

    - /silent indicates the installation is run in a non-verbose mode; the installer does not prompt for any inputs during the installation.

    - /forcerestart indicates that the system is automatically restarted, if required, after the installation is complete.


Perform the following steps:

[1] From the command prompt, navigate to the directory where the CP executable file is located and then run the file to extract the contents to a default location on the system. 
The installer displays a list of hotfixes that are included in the CP.
    - On 64-bit systems, the hotfixes executable files are extracted to:
      "%commonprogramfiles(x86)%\Veritas Shared\WxRTPrivates\<CPName>"
      
[2] In the same command window, run the following command to begin the CP installation in the silent mode:
vxhfbatchinstaller.exe /CP:<CPName> /silent

For example, to install this SFW HA 6.1 CP for Windows Server 2012, the command is:
vxhfbatchinstaller.exe /CP:CP6_SFWHA_61 /silent

The installer performs the following tasks:

    - Extracts all the individual hotfix executable files
      The files are extracted at %commonprogramfiles(x86)%\Veritas Shared\WxRTPrivates\<HotfixName>
    - Runs the pre-install tasks
    - Installs all the hotfixes sequentially
    - Runs the post-install tasks
The installation progress status is displayed in the command window.

[3] After all the hotfixes are installed, the installer displays a message for restarting the system.
You must restart the system for the changes to take effect.

Note that the installer prompts for a system restart only if hotfixes that require a reboot are included in the CP and are installed. The installer automatically restarts the system if you had specified the /forcerestart option in step 2 earlier.

VxHFBatchInstaller usage examples
---------------------------------:

[+] Install CP in silent mode, restart automatically:
vxhfbatchinstaller.exe /CP:CP6_SFWHA_61 /silent /forcerestart


Post-install steps
==================|
The following section describes the steps that must be performed after installing the hotfixes included in this CP.

Ensure that the VIP_PATH environment variable is set to "C:\Program Files\Veritas\Veritas Object Bus\bin" 
and NOT to "C:\<INSTALLDIR_BASE>\Veritas Object Bus\bin", assuming that C:\ is the default installation drive.
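One way to check the variable from a command prompt (the expected value assumes the default installation drive C:\):

```shell
rem Display the current value of VIP_PATH
echo %VIP_PATH%

rem If the value is incorrect, it can be set machine-wide with setx
rem (takes effect in new command sessions only):
rem setx VIP_PATH "C:\Program Files\Veritas\Veritas Object Bus\bin" /m
```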

Known issues
============|
NONE


REMOVING THE PATCH
------------------
NO


SPECIAL INSTRUCTIONS
--------------------
DISCLAIMER: This fix is provided without warranty of any kind including the warranties of title or implied warranties of merchantability, fitness for a particular purpose and non-infringement. Veritas disclaims all liability relating to or arising out of this fix. It is recommended that the fix be evaluated in a test environment before implementing it in your production environment. When the fix is incorporated into an InfoScale for Windows maintenance release, the resulting Hotfix or Service Pack must be installed as soon as possible. Veritas Technical Services will notify you when the maintenance release (Hotfix or Service Pack) is available if you sign up for notifications from the Veritas support site http://www.veritas.com/support and/or from Services Operations Readiness Tools (SORT) http://sort.veritas.com.

Additional notes
================|

[+] To confirm the list of cumulative patches installed on a system, run the following command from the directory where the CP files are extracted:
vxhfbatchinstaller.exe /list

The output of this command displays a list of cumulative patches and the hotfixes that are installed as part of a CP. 
This command also displays the hotfixes that are included in a CP but are not installed on the system.

[+] To confirm the installation of the hotfixes, perform one of the following:
    - Run the following command:
      vxhf.exe /list
      The output of this command lists the hotfixes installed on the system.
    - In the Windows Add/Remove program, click "View installed updates" to view the list of the hotfixes installed on the system.

[+] To confirm the latest cumulative patch installed on a system, run the following command from the directory where the CP files are extracted:
vxhfbatchinstaller.exe /cplevel

The output of this command displays the latest CP that is installed, the CP status, and a list of all hotfixes that were a part of the CP but not installed on the system.

[+] The CP installer (vxhfbatchinstaller.exe) creates and stores logs at:
"%allusersprofile%\Veritas\VxHF\VxHFBatchInstaller.txt"

[+] The hotfix installer (vxhf.exe) creates and stores logs at:
"%allusersprofile%\Veritas\VxHF\VxHF.txt"

[+] For general information about the hotfix installer (vxhf.exe), please refer to the following technote:
http://www.veritas.com/docs/000039694

[+] To view a list of hotfixes already installed on a system, please refer to the steps mentioned in the following technote:
http://www.veritas.com/docs/000039691

[+] For information on uninstalling a hotfix, please refer to the following technotes:
http://www.veritas.com/docs/000023795
http://www.veritas.com/docs/000039693

[+] For general information about the CP, please refer to the following technote:
http://www.veritas.com/docs/000023789


OTHERS
------
NONE