sf-win_x64-CP4_SFW_601
Obsolete
The latest patch(es): sf-win_x64-CP8_SFW_601

 Basic information
Release type: Patch
Release date: 2014-04-29
OS update support: None
Technote: TECH209086-Storage Foundation for Windows High Availability, Storage Foundation for Windows and Veritas Cluster Server 6.0.1 Cumulative Patches
Documentation: None
Popularity: 4077 viewed
Download size: 9.15 MB
Checksum: 595029671

 Applies to one or more of the following products:
Storage Foundation 6.0.1 On Windows x64

 Obsolete patches, incompatibilities, superseded patches, or other requirements:

This patch is obsolete. It is superseded by: Release date
sf-win_x64-CP8_SFW_601 2016-12-12
sf-win_x64-CP7_SFW_601 (obsolete) 2015-06-29
sf-win_x64-CP6_SFW_601 (obsolete) 2014-11-24
sf-win_x64-CP5_SFW_601 (obsolete) 2014-07-28

This patch supersedes the following patches: Release date
sfw-win_x64-CP3_SFW_601 (obsolete) 2014-01-28
sfw-win_x64-CP2_SFW_601 (obsolete) 2013-10-29
sfw-win_x64-CP1_SFW_601A (obsolete) 2013-07-29

 Fixes the following incidents:
3061942, 3086900, 3099805, 3124269, 3146196, 3231600, 3322266, 3345684, 3347495, 3352705, 3360992, 3435678, 3447110, 3450291, 3456751, 3458775, 3460423

 Patch ID:
None.

Readme file
                          * * * READ ME * * *
              * * * Veritas Storage Foundation 6.0.1 * * *
                      * * * Patch 6.0.1.400 * * *
                         Patch Date: 2014-04-30


This document provides the following information:

   * PATCH NAME
   * OPERATING SYSTEMS SUPPORTED BY THE PATCH
   * PACKAGES AFFECTED BY THE PATCH
   * BASE PRODUCT VERSIONS FOR THE PATCH
   * SUMMARY OF INCIDENTS FIXED BY THE PATCH
   * DETAILS OF INCIDENTS FIXED BY THE PATCH
   * INSTALLATION PRE-REQUISITES
   * INSTALLING THE PATCH
   * REMOVING THE PATCH


PATCH NAME
----------
Veritas Storage Foundation 6.0.1 Patch 6.0.1.400


OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
Windows 2008 X64
Windows Server 2008 R2 X64



BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
   * Veritas Storage Foundation 6.0.1


SUMMARY OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
Patch ID: SFW CP4
* 3061942 (3061942) Two issues related to storage migration of a volume with SmartMove enabled.
* 3086900 (3086900) Mount points configured under Failover Cluster are deleted after upgrading to SFW 6.0.1
* 3099805 (3099805) SFW and SFWHA 6.0.1 unable to identify Hitachi HUS VM LUNs as thin reclaimable.
* 3124269 (3124269) Tagging of snapshot disks fails during the fire drill operation, because of which disk import also fails.
* 3146196 (3146196) In some cases, mount points configured under FOC are lost during a failover.
* 3231600 (3231600) Memory leak occurs for SFW VSS provider while taking a VSS snapshot.
* 3322266 (3322266) In Failover Cluster Manager, refreshing a virtual machine configuration results in storage-related errors.
* 3345684 (3345684) Cluster disk group resource faults after adding or removing a disk from a disk group if not all of its disks are available for reservation.
* 3352705 (3352705) The "vxdmpadm disk list" command may display a disk name multiple times and may sometimes crash.
* 3360992 (3360992) Server crashes during high write I/O operations on mirrored volumes.
* 3347495 (3347495) After a failover, VEA sometimes does not show the drive letter or mounted folder paths of a successfully-mounted volume.
* 3435678 (3435678) Rhs.exe may crash while performing an operation on the VMDg resource
* 3450291 (3450291) SFW cannot form the correct Enclosure for Hitachi Unified Storage 150 arrays
* 3458775 (3458775) After a VxSVC restart on a fast failover configuration, VMDg/MountV resources may fault and failover to other cluster nodes may also fail
* 3456751 (3456751) The VxSvc service crashes with heap corruption in VRAS.dll
* 3460423 (3460423) The Primary node hangs if TCP and compression are enabled.
* 3447110 (3447110) Two scenarios where missing disks cannot be removed from a disk group


DETAILS OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
This patch fixes the following Symantec incidents:

Patch ID: SFW CP4

* 3061942 (Tracking ID: 3061942)

SYMPTOM:
Two issues related to storage migration of a volume with SmartMove enabled.

DESCRIPTION:
In some cases, the following two issues are observed when storage migration is performed for a volume (with operations such as subdisk move, mirror resync, mirror attach, and so on) and SmartMove is enabled:
1. If the VCS MountV resource for the volume is brought offline, the migration task completes abnormally and the volume may report data corruption.
2. If the data migration is performed for a volume of size 2 TB or greater, the task never reaches 100% completion because of an integer overflow while handling large offsets.

RESOLUTION:
Both issues mentioned above have been fixed. The first issue has been fixed by adding correct handling of error conditions so that the task is aborted if the volume is taken offline. The second issue has been fixed by using a 64-bit variable to handle large offsets.


Binary / Version:
vxconfig.dll / 6.0.20002.76

* 3086900 (Tracking ID: 3086900)

SYMPTOM:
Mount points configured under Failover Cluster are deleted after upgrading to SFW 6.0.1

DESCRIPTION:
This issue occurs in a Microsoft Failover Cluster environment after upgrading to SFW 6.0.1. SFW 6.0.1 stores mount point information in the cluster database under the "VolMountInfo" key. By design, if mount points are not present in the cluster database when the VMDg resource comes online, SFW deletes them from the mount database, assuming that they were deleted from the other cluster node. Because the "VolMountInfo" key was not part of earlier versions of SFW, after an upgrade to SFW 6.0.1, SFW deletes the mount points that were configured using a previous version of SFW when the VMDg resource comes online.

RESOLUTION:
The VMDg resource now checks whether the "VolMountInfo" key is already present in the cluster registry database. If it is not present, the VMDg resource updates the database with the valid mount points. NOTE: After upgrading the node, you must install this hotfix before rebooting or bringing the service group online.

Binary / Version:
vxres.dll / 6.0.10002.308

* 3099805 (Tracking ID: 3099805)

SYMPTOM:
SFW and SFWHA 6.0.1 unable to identify Hitachi HUS VM LUNs as thin reclaimable.

DESCRIPTION:
SFW and SFW HA 6.0.1 cannot identify Hitachi Unified Storage VM (HUS VM) LUNs as thin reclaimable LUNs because SFW does not support thin provisioning and storage reclamation for the Hitachi HUS VM array.

RESOLUTION:
The issue has been resolved by enhancing ddlprov.dll to add thin provisioning and storage reclamation support for the Hitachi HUS VM array.

Binary / Version:
ddlprov.dll / 6.0.10003.308

* 3124269 (Tracking ID: 3124269)

SYMPTOM:
Tagging of snapshot disks fails during the fire drill operation, because of which disk import also fails.

DESCRIPTION:
This issue occurs while performing fire drill operations with hardware replication agents, which involve tagging the snapshot disks so that they can be imported separately from the original disks. Because of an issue in SFW, the tags are not written to the disks, and the operation proceeds without reporting an error. The subsequent import operation on the snapshot disks then fails because no disks are present with the specified tag.

RESOLUTION:
This was an existing issue where SFW did not write to disks that are marked as read-only. The issue has been resolved by allowing the fire drill tag to be written to a disk even if the disk is marked as read-only.

Binary / Version:
vxconfig.dll / 6.0.10004.308

* 3146196 (Tracking ID: 3146196)

SYMPTOM:
In some cases, mount points configured under FOC are lost during a failover.

DESCRIPTION:
This issue may occur while performing a failover in a Microsoft Failover Cluster (FOC) environment. During the failover, Microsoft's "GetVolumePathNamesForVolumeName" function, which is used by the Volume Manager Disk Group (VMDg) resource mount handling, fails to return mount point information even though the mount points exist on the system. This happens because of an issue with the "GetVolumePathNamesForVolumeName" function. As a result of this behavior, the VMDg resource removes the mount points from the cluster database during the volume arrival notification and then from the system during a failover.

RESOLUTION:
This issue has been resolved by modifying the present handling of mount points during volume arrival. Because of this change, you need to perform the following workaround in case of a dynamic disk group join operation where the target disk group is under the VMDg resource: After performing the dynamic disk group join operation, reset the VMDg resource property.

Binary / Version:
vxres.dll / 6.0.10005.308

* 3231600 (Tracking ID: 3231600)

SYMPTOM:
Memory leak occurs for SFW VSS provider while taking a VSS snapshot.

DESCRIPTION:
This issue occurs during a VSS snapshot operation, when VSS loads and unloads providers. The SFW VSS provider connects to the VEA database when it is loaded and disconnects when it is unloaded. Because of an issue in the VEA database cleanup during unloading, a memory leak occurs.

RESOLUTION:
This issue has been resolved so that the SFW VSS provider no longer connects to and disconnects from the VEA database on every load and unload operation. Instead, it creates a connection at the beginning and disconnects only when the Veritas VSS Provider Service (vxvssprovider.exe) is stopped.

Binary / Version:
vxvssprovider.exe / 6.0.10006.308

* 3322266 (Tracking ID: 3322266)

SYMPTOM:
In Failover Cluster Manager, refreshing a virtual machine configuration results in storage-related errors.

DESCRIPTION:
In Failover Cluster Manager, this issue occurs while trying to refresh a virtual machine configuration whose underlying volumes are managed by the SFW VMDg resource. The operation fails with storage-related errors because the Microsoft code for this operation is limited to the physical disk resource type and the VMDg resource type does not handle some of the required control codes.

RESOLUTION:
This issue has been fixed by implementing the required control codes for the VMDg resource type. However, because Microsoft supports only a single disk for this operation, note that this fix works only for virtual machines whose disk group (VMDg resource) has one data volume residing on MBR disks.

Binary / Version:
vxres.dll / 6.0.10008.308
cluscmd.dll / 6.0.10008.308

* 3345684 (Tracking ID: 3345684)

SYMPTOM:
Cluster disk group resource faults after adding or removing a disk from a disk group if not all of its disks are available for reservation.

DESCRIPTION:
This issue occurs when you add or remove a disk from a dynamic disk group and a majority of its disks, but not all, are available for reservation. After the disk is added or removed, SFW checks whether all the disks are available for reservation. Because not all the disks are available in this case, the cluster disk group resource faults.

RESOLUTION:
This issue has been resolved so that, after a disk is added to or removed from a disk group, SFW now checks only whether a majority of the disks are available for reservation instead of all of them.

Binary / Version:
vxconfig.dll / 6.0.10009.308

* 3352705 (Tracking ID: 3352705)

SYMPTOM:
"vxdmpadm disk list" may display the disk name multiple times and it may crash by itself.

DESCRIPTION:
On some systems, the "vxdmpadm disk list" command may display a disk name multiple times and may sometimes crash. This happens because of incorrect logic in vxcmd.dll, which causes vxdmpadm to access invalid memory.

RESOLUTION:
This issue has been resolved by correcting the logic in the vxcmd library.

Binary / Version:
vxcmd.dll / 6.0.10010.308

* 3360992 (Tracking ID: 3360992)

SYMPTOM:
Server crashes during high write I/O operations on mirrored volumes.

DESCRIPTION:
This issue occurs when heavy write I/O operations are performed on mirrored volumes. During such high I/O operations, the server crashes due to a problem managing the memory for data buffers.

RESOLUTION:
This issue has been resolved by appropriately mapping the system address space described by the MDL (memory descriptor list) for write I/Os on mirrored volumes.

Binary / Version:
vxio.sys / 6.0.10011.308

* 3347495 (Tracking ID: 3347495)

SYMPTOM:
After a failover, VEA sometimes does not show the drive letter or mounted folder paths of a successfully-mounted volume.

DESCRIPTION:
This issue may occur after a failover, when VEA sometimes does not show the drive letter or mounted folder paths of a volume even though the volume is successfully mounted with the expected drive letter or folder paths. During a failover, when a disk group gets imported, SFW mounts all volumes of the disk group by querying the mount points using the Microsoft API GetVolumePathNamesForVolumeName(). Sometimes, this API fails to return the correct drive letter or mounted folder paths, and as a result, VEA fails to display them.

RESOLUTION:
To resolve this issue, retry logic has been added around the Microsoft API GetVolumePathNamesForVolumeName() so that the operation is retried if the mount path returned is empty. The operation is retried every 100 milliseconds for "n" attempts (5 by default), which can be configured using the registry. This retry logic is disabled by default.

NOTE: Using the following workaround has a performance impact on the service group offline and failover operations. During a service group offline or failover operation, the performance of the disk group deport operation is impacted by a maximum of "n/2" seconds, where "n" is the number of volumes in the disk group.

To use the workaround, do the following (a sample command for each step is shown after these steps):
1. Enable the retry logic by changing the value of the registry entry "RetryEnumMountPoint" from 0 to 1 under the registry key
- HKEY_LOCAL_MACHINE\SOFTWARE\VERITAS\VxSvc\CurrentVersion\VolumeManager
2. Configure the number of retry attempts by changing the value of the registry entry "RetryEnumMPAttempts" under the same registry key.
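
For reference, the two registry entries can be set from an elevated command prompt as sketched below. This is a minimal example, assuming that both entries are REG_DWORD values and that the default of 5 retry attempts is wanted; verify the value type and path in your environment before applying it.

rem Assumption: RetryEnumMountPoint and RetryEnumMPAttempts are REG_DWORD values; the attempt count 5 is the documented default.
reg add "HKLM\SOFTWARE\VERITAS\VxSvc\CurrentVersion\VolumeManager" /v RetryEnumMountPoint /t REG_DWORD /d 1 /f
reg add "HKLM\SOFTWARE\VERITAS\VxSvc\CurrentVersion\VolumeManager" /v RetryEnumMPAttempts /t REG_DWORD /d 5 /f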


Binary / Version:
mount.dll / 6.0.10012.308

* 3435678 (Tracking ID: 3435678)

SYMPTOM:
Rhs.exe may crash while performing an operation on the VMDg resource

DESCRIPTION:
When the VMDg resource is configured in a Microsoft Failover Cluster, any operation on this resource that requires disk group or disk information may crash the Resource Hosting Subsystem (RHS.exe) process. This happens because an invalid memory location is deallocated.

RESOLUTION:
This issue has been fixed by correcting the deallocation of memory.

Binary / Version:
vxres.dll / 6.0.10013.308

* 3450291 (Tracking ID: 3450291)

SYMPTOM:
SFW cannot form the correct Enclosure for Hitachi Unified Storage 150 arrays

DESCRIPTION:
This issue occurs with Hitachi Unified Storage 150 disk arrays. Because SFW does not provide Enclosure support for these arrays, it cannot form the correct Enclosure for them. As a result, the VEA GUI incorrectly shows disks of two different Enclosures under one Enclosure, and mirroring across disks by Enclosure cannot be performed.

RESOLUTION:
This issue has been resolved by enhancing SFW with Enclosure support for Hitachi Unified Storage 150 arrays.

Binary / Version:
Hitachi.dll / 6.0.10014.308

* 3458775 (Tracking ID: 3458775)

SYMPTOM:
After a VxSVC restart on a fast failover configuration, VMDg/MountV resources may fault and failover to other cluster nodes may also fail

DESCRIPTION:
This issue occurs when the VxSVC service is restarted after a read-only to read-write (RO to RW) conversion. After the service restarts, the re-import of fast-failover-enabled disk groups may fail, which causes the VMDg resources or their dependent MountV resources to fault. Because the reservation thread is not stopped in this scenario, failover of the cluster disk groups to other cluster nodes fails with reservation errors.

RESOLUTION:
This issue has been resolved by ignoring the Host ID check for fast-failover-enabled disk groups so that the disk group re-import succeeds. If the re-import fails for any other reason, the reservation thread is now stopped so that failover to other nodes succeeds.

Binary / Version:
vxconfig.dll / 6.0.10016.308

* 3456751 (Tracking ID: 3456751)

SYMPTOM:
The VxSvc service crashes with heap corruption in VRAS.dll

DESCRIPTION:
VRAS discards a malformed packet that it receives because the size of the packet is too large. While freeing the IpmHandle pointer, it encounters an issue and eventually crashes.

RESOLUTION:
This hotfix resolves the crash that occurred while handling the malformed packet.

Binary / Version:
vras.dll / 6.0.10017.308

* 3460423 (Tracking ID: 3460423)

SYMPTOM:
The Primary node hangs if TCP and compression are enabled.

DESCRIPTION:
During replication, this issue occurs if TCP and compression of data are enabled and resources are low on the Secondary node. Because of the low resources, decompression of data on the Secondary fails repeatedly, causing the TCP buffer to fill up. In such a case, if network I/Os are performed on the Primary and a transaction is initiated, the Primary node hangs.

RESOLUTION:
The issue of system hang caused by VVR is resolved in this hotfix.

Binary / Version:
vxio.sys / 6.0.10018.308

* 3447110 (Tracking ID: 3447110)

SYMPTOM:
Two scenarios where missing disks cannot be removed from a disk group

DESCRIPTION:
The issue where missing disks cannot be removed from a disk group occurs in the following two scenarios:
1. When you try to remove a missing disk from a disk group using the vxdg rmdisk command. In the command, you must provide the name of the disk group from which the missing disk needs to be removed. Despite providing the correct disk group name, the command fails because of a bug in the internal check performed for the disk group name.
2. When there is an even number of disks in a disk group and half of the disks are missing. In this case, if there are any volumes on the non-missing disks, removing the missing disks is not allowed. If you try to remove them, the operation fails with the "Cannot remove last disk in dynamic disk group" error. This happens because the operation to remove disks incorrectly compares the number of disks to be removed with the number of non-missing disks. If the numbers are equal, the operation tries to remove the complete disk group. However, the presence of volume resources prevents the removal of the disk group and therefore of the intended missing disks as well.

RESOLUTION:
The issue in both scenarios has been resolved as follows:
1. The first scenario has been resolved by modifying the way a missing disk can be removed from a disk group. With the vxdg rmdisk command, you can remove a missing disk from a disk group either by specifying only its display name (for example, "Missing Disk (disk#)") or by specifying both its internal name and the name of the disk group to which it belongs. A sample command is shown after this list.
2. The second scenario has been resolved so that the operation to remove disks now compares the number of disks to be removed with the total number of disks in the disk group rather than with the number of non-missing disks.
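
For illustration only, a sample invocation of the first form is sketched below, assuming that the missing disk's display name is "Missing Disk (disk2)"; the disk number is a placeholder, so confirm the exact name and the vxdg rmdisk usage on your system before running the command.

rem Hypothetical example; "Missing Disk (disk2)" is a placeholder display name following the format described above.
vxdg rmdisk "Missing Disk (disk2)"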

Binary / Version:
vxconfig.dll / 6.0.10019.308
vxdg.exe / 6.0.10019.308



INSTALLING THE PATCH
--------------------
What's new in this CP
=====================|

The following hotfixes have been added in this CP:
 - Hotfix_6_0_10013_308_3435678
 - Hotfix_6_0_10014_308_3450291
 - Hotfix_6_0_10016_308_3458775
 - Hotfix_6_0_10017_308_3456751
 - Hotfix_6_0_10018_308_3460423
 - Hotfix_6_0_10019_308_3447110

For more information about these hotfixes, see the "Errors/Problems Fixed" section in this readme.


Install instructions
====================|

Download the appropriate cumulative public patch (CP) executable file to a temporary location on your system.
You can install the CP in a verbose mode or in a non-verbose mode. Instructions for both options are provided below.

Each cumulative public patch includes the individual hotfixes that contain enhancements and fixes related to reported issues.
See "Errors/Problems Fixed" section for details.

Before you begin
----------------:
[1] Ensure that the logged-on user has privileges to install the CP on the systems.

[2] One or more hotfixes that are included with this CP may require a reboot.
Before proceeding with the installation ensure that the system can be rebooted.

[3] Symantec recommends that you close the Cluster Manager (Java Console) and the Veritas Enterprise Administrator (VEA) Console before installing this CP.

[4] Ensure that you close the Windows Event Viewer before proceeding with the installation.

[5] Before installing the CP on Windows Server Core systems, ensure that the Visual Studio 2005 x86 redistributable is installed on the systems.



To install in the verbose mode
------------------------------:

In the verbose mode, the cumulative patch (CP) installer prompts you for inputs and displays the installation progress status in the command window.

Perform the following steps:

[1] Double-click the CP executable file to extract the contents to a default location on the system.
The installer displays a list of hotfixes that are included in the CP.
    - On 64-bit systems, the hotfix executable files are extracted to:
      "%commonprogramfiles(x86)%\Veritas Shared\WxRTPrivates\<CPName>"

The installer also lists the hotfixes that require a reboot of the system after the installation. 
If system reboot is not an option at this time, you can choose not to install these hotfixes. 
In such a case, exit the installation and then launch the CP installer again from the command line using the /exclude option.
See "To install in a non-verbose (silent) mode" section for the syntax.

[2] When the installer prompts whether you want to continue with the installation, type Y to begin the hotfix installation.
The installer performs the following tasks:
    - Extracts all the individual hotfix executable files
      On 64-bit systems the files are extracted at %commonprogramfiles(x86)%\Veritas Shared\WxRTPrivates\<HotfixName>
    - Runs the pre-install tasks
    - Installs all the hotfixes sequentially
    - Runs the post-install tasks
The installation progress status is displayed in the command window.

[3] After all the hotfixes are installed, the installer prompts you to restart the system.
Type Y to restart the system immediately, or type N to restart the system later. 
You must restart the system for the changes to take effect.

Note that the installer prompts for a system restart only if hotfixes that require a reboot are included in the CP and are installed.

To install in the non-verbose (silent) mode
-------------------------------------------:

In the non-verbose (silent) mode, the cumulative patch (CP) installer does not prompt you for inputs and directly proceeds with the installation tasks. 
The installer displays the installation progress status in the command window.

Use the VxHFBatchInstaller.exe utility to install a CP from the command line.
The syntax options for this utility are as follows:

vxhfbatchinstaller.exe /CP:<CPName> [/Exclude:<HF1.exe>,<HF2.exe>...] [/PreInstallScript:<PreInstallScript.pl>] [/silent [/forcerestart]]

where,
    - CPName is the cumulative patch executable file name without the platform, architecture, and .exe extension.
For example, if the CP executable name is CP4_SFW_601_W2K8_x64.exe, specify it as CP4_SFW_601.

    - HF1.exe, HF2.exe,... represent the executable file names of the hotfixes that you wish to exclude from the installation. Note that the file names are separated by commas, with no space after a comma. The CP installer skips the mentioned hotfixes during the installation.

    - PreInstallScript.pl is the Perl script that includes the pre-installation steps. These steps forcefully kill the required services and processes in case a graceful stop request does not succeed.
    Symantec recommends that you use this option and script only in case the CP installer fails repeatedly while performing the pre-installation tasks.

    - /silent indicates the installation is run in a non-verbose mode; the installer does not prompt for any inputs during the installation.

    - /forcerestart indicates that the system is automatically restarted, if required, after the installation is complete.


Perform the following steps:

[1] From the command prompt, navigate to the directory where the CP executable file is located and then run the file to extract the contents to a default location on the system. 
The installer displays a list of hotfixes that are included in the CP.
    - On 64-bit systems, the hotfix executable files are extracted to:
      "%commonprogramfiles(x86)%\Veritas Shared\WxRTPrivates\<CPName>"

The installer also lists the hotfixes that require a reboot of the system after the installation. If system reboot is not an option at this time, you can choose not to install these hotfixes. In such a case, launch the CP installer from the command line using the /exclude option.

[2] When the installer prompts whether you want to continue with the installation, type N to exit the installer.

[3] In the same command window, run the following command to begin the CP installation in the non-verbose mode:
vxhfbatchinstaller.exe /CP:<CPName> /silent

For example, to install a SFW 6.0.1 x64 CP for Windows Server 2008, the command is:
vxhfbatchinstaller.exe /CP:CP4_SFW_601 /silent

The installer performs the following tasks:

    - Extracts all the individual hotfix executable files
      On 64-bit systems the files are extracted at %commonprogramfiles(x86)%\Veritas Shared\WxRTPrivates\<HotfixName>
    - Runs the pre-install tasks
    - Installs all the hotfixes sequentially
    - Runs the post-install tasks
The installation progress status is displayed in the command window.

[4] After all the hotfixes are installed, the installer displays a message for restarting the system.
You must restart the system for the changes to take effect.

Note that the installer prompts for a system restart only if hotfixes that require a reboot are included in the CP and are installed. The installer automatically restarts the system if you had specified the /forcerestart option in step 3 earlier.

VxHFBatchInstaller usage examples
---------------------------------:

[+] Install the CP in silent mode and exclude the hotfix HotFix_6_0_00001_2763375_w2k8_x64.exe:

vxhfbatchinstaller.exe /CP:CP4_SFW_601 /Exclude:HotFix_6_0_00001_2763375_w2k8_x64.exe /silent

[+] Install CP in silent mode, restart automatically:

vxhfbatchinstaller.exe /CP:CP4_SFW_601 /silent /forcerestart

Post-install steps
==================|
The following section describes the steps that must be performed after installing the hotfixes included in this CP.

Ensure that the VIP_PATH environment variable is set to "C:\Program Files\Veritas\Veritas Object Bus\bin" 
and NOT to "C:\<INSTALLDIR_BASE>\Veritas Object Bus\bin" (assuming that C:\ is the default installation drive).
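
For reference, one way to check the variable and, if needed, correct it from an elevated command prompt is sketched below. The use of setx with the /M option assumes that VIP_PATH is defined as a machine-level environment variable; confirm how it is defined in your environment before applying the change.

rem Assumption: VIP_PATH is a machine-level (system) environment variable.
echo %VIP_PATH%
setx VIP_PATH "C:\Program Files\Veritas\Veritas Object Bus\bin" /M

Note that a value written with setx takes effect only in newly opened command windows and processes.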

Known issues
============|
The following section describes the issues related to the individual hotfixes that were included in the previous 6.0.1 CP1:

[1] Hotfix_6_0_10004_308_3124269, Hotfix_6_0_10003_308_3099805, and Hotfix_6_0_10001_308_3061942

 - These hotfixes were initially part of CP1 and were re-archived in CP2 to ensure that VIP_PATH is set to the correct value.

 - In CP1, after installing or uninstalling these hotfixes, the VIP_PATH environment variable was incorrectly set to "C:\<INSTALLDIR_BASE>\Veritas Object Bus\Bin" 
 instead of "C:\Program Files\Veritas\Veritas Object Bus\Bin".

-------------------------------------------------------+


REMOVING THE PATCH
------------------
NO


SPECIAL INSTRUCTIONS
--------------------
This fix is provided without warranty of any kind including the warranties of title or implied warranties of merchantability, 
fitness for a particular purpose and non-infringement. Symantec disclaims all liability relating to or arising out of this fix. 
It is recommended that the fix be evaluated in a test environment before implementing it in your production environment. 
When the fix is incorporated into a Storage Foundation for Windows maintenance release, the resulting Hotfix or Service Pack 
must be installed as soon as possible. Symantec Technical Services will notify you when the maintenance release (Hotfix or Service Pack) 
is available if you sign up for notifications from the Symantec support site http://www.symantec.com/business/support and/or 
from Symantec Operations Readiness Tools (SORT) http://sort.symantec.com.

Additional notes
================|

[+] To confirm the list of cumulative patches installed on a system, run the following command from the directory where the CP files are extracted:
vxhfbatchinstaller.exe /list

The output of this command displays a list of cumulative patches and the hotfixes that are installed as part of a CP. 
This command also displays the hotfixes that are included in a CP but are not installed on the system.

[+] To confirm the installation of the hotfixes, perform one of the following:
    - Run the following command:
      vxhf.exe /list
      The output of this command lists the hotfixes installed on the system.
    - In the Windows Add/Remove program, click "View installed updates" to view the list of the hotfixes installed on the system.

[+] The CP installer (vxhfbatchinstaller.exe) creates and stores logs at:
"%allusersprofile%\Application Data\Veritas\VxHF\VxHFBatchInstaller.txt"

[+] The hotfix installer (vxhf.exe) creates and stores logs at:
"%allusersprofile%\Application Data\Veritas\VxHF\VxHF.txt"

[+] For general information about the hotfix installer (vxhf.exe), please refer to the following technote:
http://www.symantec.com/docs/TECH73446

[+] To view a list of hotfixes already installed on a system, please refer to the steps mentioned in the following technote:
http://www.symantec.com/docs/TECH73438

[+] For information on uninstalling a hotfix, please refer to the steps mentioned in the following technote:
http://www.symantec.com/docs/TECH73443


OTHERS
------
NONE