sfha-win_x64-CP7_SFWHA_601
Obsolete
The latest patch(es) : sfha-win_x64-CP8_SFWHA_601 

 Basic information
Release type: Patch
Release date: 2015-08-10
OS update support: None
Technote: None
Documentation: None
Popularity: 4107 viewed
Download size: 35.93 MB
Checksum: 2788585695

 Applies to one or more of the following products:
Storage Foundation HA 6.0.1 On Windows x64

 Obsolete patches, incompatibilities, superseded patches, or other requirements:

This patch is obsolete. It is superseded by:          Release date
sfha-win_x64-CP8_SFWHA_601                            2016-12-12

This patch supersedes the following patches:          Release date
sfha-win_x64-CP6_SFWHA_601 (obsolete)                 2014-11-24
sfha-win_x64-CP5_SFWHA_601 (obsolete)                 2014-07-28
vcs-win_x64-CP4_SFWHA_601 (obsolete)                  2014-04-28
sfw-win_x64-Hotfix_6_0_10004_308_3124269 (obsolete)   2013-05-15

 Fixes the following incidents:
2938704, 3054751, 3061942, 3062860, 3065921, 3099727, 3099805, 3113887, 3124269, 3164649, 3231486, 3231600, 3298600, 3300849, 3314700, 3314705, 3347495, 3352705, 3360992, 3369234, 3386077, 3447110, 3450291, 3456751, 3458775, 3460423, 3499335, 3521509, 3524590, 3579881, 3610931, 3615729, 3623654, 3644019, 3694213, 3736357, 3746359, 3771896

 Patch ID:
None.

Readme file
                          * * * READ ME * * *
            * * * Veritas Storage Foundation HA 6.0.1 * * *
                      * * * Patch 6.0.1.700 * * *
                         Patch Date: 2015-07-22


This document provides the following information:

   * PATCH NAME
   * OPERATING SYSTEMS SUPPORTED BY THE PATCH
   * PACKAGES AFFECTED BY THE PATCH
   * BASE PRODUCT VERSIONS FOR THE PATCH
   * SUMMARY OF INCIDENTS FIXED BY THE PATCH
   * DETAILS OF INCIDENTS FIXED BY THE PATCH
   * INSTALLATION PRE-REQUISITES
   * INSTALLING THE PATCH
   * REMOVING THE PATCH


PATCH NAME
----------
Veritas Storage Foundation HA 6.0.1 Patch 6.0.1.700


OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
Windows Server 2008 x64
Windows Server 2008 R2 x64



BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
   * Veritas Storage Foundation HA 6.0.1


SUMMARY OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
Patch ID: SFWHA CP7
* 3065921 (3065929) VMwareDisks agent does not work if ESX is not on the same network as VM.
* 2938704 (2907846) This hotfix addresses an issue where the MSMQ resource failed to bind to the correct port.
* 3099727 (3098019) An online Lanman resource takes 1 minute or longer to go offline.
* 3054751 (3053279) Windows Server Backup fails to perform a system state backup with the following error: Error in backup of C:\program files\veritas\\cluster server\ during enumerate: Error [0x8007007b] The filename, directory name, or volume label syntax is incorrect.
* 3164649 (3164604) Users are unable to add or modify custom settings on printers clustered under VCS.
* 3113887 (3111144) After every 49.7 days, VCS logs report that the Global Counter VCS attribute is not updated.
* 3061942 (3061941) Two issues related to storage migration of a volume with SmartMove enabled.
* 3099805 (3111073) SFW and SFW HA 6.0.1 are unable to identify Hitachi HUS VM LUNs as thin reclaimable.
* 3124269 (3155620) Tagging of snapshot disks fails during the fire drill operation, because of which disk import also fails.
* 3231600 (3231593) Memory leak occurs for SFW VSS provider while taking a VSS snapshot.
* 3298600 (3298597) The MountV resource faults when bringing a fire drill service group online at the DR site.
* 3300849 (3300843) A fire drill service group fails to come online, because a required MountV resource is in the UNKNOWN state.
* 3231486 (3231484) The SQL Server 2008 Agent Configuration Wizard crashes with a "VCS component - SQL server 2008 Wizard has stopped working" error after you click Next on the User Databases List panel.
* 3369234 (3369209) When adding the relevant services after the initial configuration, the VCS SQL Server 2008 Agent Configuration Wizard does not update the paths to the shared storage.
* 3386077 (3386071) VMDg resource faults and goes offline unexpectedly in a fire drill configuration.
* 3352705 (3352702) The "vxdmpadm disk list" command may display a disk name multiple times and may crash.
* 3360992 (3360987) Server crashes during high write I/O operations on mirrored volumes.
* 3347495 (3347491) After a failover, VEA sometimes does not show the drive letter or mounted folder paths of a successfully-mounted volume.
* 3450291 (3450059) SFW cannot form the correct Enclosure for Hitachi Unified Storage 150 arrays.
* 3458775 (3458773) After a VxSVC restart in a fast failover configuration, VMDg/MountV resources may fault, and failover to other cluster nodes may also fail.
* 3456751 (3456746) The VxSvc service crashes with heap corruption in VRAS.dll.
* 3460423 (3460421) The Primary node hangs if TCP and compression are enabled.
* 3447110 (3424478) Two scenarios where missing disks cannot be removed from a disk group.
* 3314700 (3279413) In a network using a tagged VLAN configuration, the VCS IP agent may assign the virtual IP address to an incorrect network interface if the interfaces have the same MAC address.
* 3314705 (3279413) In a network using a tagged VLAN configuration, the VCS NIC agent may monitor an incorrect network interface if the interfaces have the same MAC address.
* 3062860 (3062849) If LLT is running alongside a utility like Network Monitor, a system crash (BSOD) occurs when shutting down Windows.
* 3499335 (3497449) Cluster disk group loses access to the majority of its disks due to a SCSI error.
* 3524590 (3524586) VEA hangs and eventually crashes when a user tries to access "View Historic Bandwidth Usage" graph.
* 3521509 (3521503) This hotfix addresses two issues: 1. a Lanman resource faults with the error code: [2, 0x0000006F], and 2. the Lanman resources fault after upgrading VCS or SFW HA from 5.1 SP2 to 6.0.1.
* 3644019 (3644017) A system crash occurred when VxExplorer gathered the GAB/LLT logs.
* 3615729 (3615728) VMwareDisks agent supports only 'persistent' virtual disk mode.
* 3623654 (3623653) Disk group import fails with the following error: "Unexpected kernel error in configuration update"
* 3610931 (3575793) When the Primary server disconnects the replication link (RLINK), Volume Replicator causes I/O delays.
* 3579881 (3579878) When you take the replication service group offline on the Secondary host, the Secondary host stops responding.
* 3736357 (3735641) A failover cluster deports the faulted cluster quorum disk group.
* 3746359 (3746355) Disk group import fails after a Disk group with Fast Mirror Resync volumes is split.
* 3771896 (3771877) Storage Foundation for Windows does not provide array-specific support for Infinidat arrays beyond DSM support.
* 3694213 (3773915) Lanman resource fails to come online.


DETAILS OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
This patch fixes the following Symantec incidents:

Patch ID: SFWHA CP7

* 3065921 (Tracking ID: 3065929)

SYMPTOM:
VMwareDisks agent does not work if ESX is not on the same network as VM.

DESCRIPTION:
VMwareDisks agent accepts an ESXDetails attribute, which contains the details of the ESX host to be used to perform online/offline operations. If the ESX host is not accessible from the VM network, the agent does not work.

RESOLUTION:
VMwareDisks agent has been modified to work with vCenter details as well. The ESXDetails attribute now accepts either ESX details or vCenter details. No attribute change is involved; the same attribute is used.
Limitation: If vCenter details are given in the ESXDetails attribute and VMHA is not enabled, then when the ESX host of the node where the application is running faults, the agent cannot detach disks from the faulted ESX host's VM. Application failover is not impacted. However, the subsequent power-on of that VM requires an extra manual step to detach the disks from the VM; otherwise, the VM power-on does not go through.

File Name / Version:
VMwareDisks.dll / 6.0.10002.264
VMwarelib.dll /  6.0.10002.264

* 2938704 (Tracking ID: 2907846)

SYMPTOM:
This hotfix addresses an issue where the MSMQ resource failed to bind to the correct port.

DESCRIPTION:
When the MSMQ agent was brought online, the Event Viewer reported an error stating that Message Queuing failed to bind to port 1801. This error could occur due to various reasons. Even though the binding failed, the agent reported the MSMQ resource as Online.

RESOLUTION:
The MSMQ agent has been enhanced to verify that the clustered MSMQ service is bound to the correct virtual IP and port. By default, the agent performs this check only once during the Online operation. If the clustered MSMQ service is not bound to the correct virtual IP and port, the agent stops the service and the resource faults. You can configure the number of times that this check is performed. To do so, create the DWORD tunable parameter 'VirtualIPPortCheckRetryCount' under the registry key 'HKLM\SOFTWARE\VERITAS\VCS\BundledAgents\MSMQ'. If this parameter is set to a value greater than 1, the agent starts the clustered MSMQ service again and verifies its virtual IP and port binding as many times. It waits 2 seconds between each verification attempt. If the clustered MSMQ service is bound to the correct virtual IP and port, the agent reports Online.
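As an illustration, the tunable described above could be created with a registry import file such as the following. The key path and value name are taken from the text above; the retry count of 3 is only an example value:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\VERITAS\VCS\BundledAgents\MSMQ]
; Example only: retry the virtual IP/port check up to 3 times
"VirtualIPPortCheckRetryCount"=dword:00000003
```

Alternatively, the same DWORD value can be created interactively with Registry Editor (regedit.exe).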

File Name / Version:
MSMQ.dll / 6.0.10004.268

* 3099727 (Tracking ID: 3098019)

SYMPTOM:
[Issue 1] An online Lanman resource takes 1 minute or longer to go offline. [Issue 2] A newly created Lanman resource appears ONLINE even though the service group is not brought online.

DESCRIPTION:
The Lanman agent does not correctly identify the virtual name (configured under the Lanman resource) that is being monitored or taken offline on a cluster node. This happens if the following conditions are satisfied on the node where the operation is being performed:
(a) NetBIOS over TCP/IP (NetBT) is disabled on the network adapter.
(b) The first few characters of the virtual name (for example, Site10_Svr_ExchApp) are identical to another virtual name (configured under another Lanman resource) or a physical computer name registered on the node (for example, Site10_Svr).
No issues occur if the first few characters of the virtual name configured under the Lanman resource (for example, Site10_Svr_ExchApp) are not identical to any other computer name registered on the node (for example, Site10_Svr_EV or Site11_Svr).

RESOLUTION:
The Lanman agent has been updated to correctly identify the Lanman resource on which an operation is being performed; it now reports the accurate resource status.

File Name / Version:
Lanman.dll / 6.0.10005.269

* 3054751 (Tracking ID: 3053279)

SYMPTOM:
Windows Server Backup fails to perform a system state backup with the following error: Error in backup of C:\program files\veritas\\cluster server\ during enumerate: Error [0x8007007b] The filename, directory name, or volume label syntax is incorrect.

DESCRIPTION:
The backup operation fails because the VCS service image path contains an extra backslash (\) character, which Windows Server Backup is unable to process.

RESOLUTION:
This issue has been fixed by removing the extra backslash character from the VCS service image path. 
This hotfix changes the following registry keys: 
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Had\ImagePath 
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\HADHelper\ImagePath 
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Cmdserver\ImagePath 
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\VCSComm\ImagePath

NOTE:
This hotfix supersedes Hotfix_6_0_10006_3054751a, which was released earlier.

* 3164649 (Tracking ID: 3164604)

SYMPTOM:
Issue 1:
Users are unable to add or modify custom settings on printers clustered under VCS.

Issue 2:
The memory usage of the PrintSpool agent increases continuously. This might result in a low virtual memory condition, following which the system reboots and a message similar to the following appears in the Event Viewer log:
Log Name: System
Source: Microsoft-Windows-Resource-Exhaustion-Detector
Event ID: 2004
Task Category: Resource Exhaustion Diagnosis Events
Level: Warning
Keywords: Events related to exhaustion of system commit limit (virtual memory).
User: SYSTEM
Computer: <system_name>
Description: Windows successfully diagnosed a low virtual memory condition. The following programs consumed the most virtual memory: VCSAgDriver.exe (<process_ID>) consumed <number_of_bytes> bytes

DESCRIPTION:
Issue 1:
When a printer with custom settings (like printing shortcuts) is added to VCS clustered virtual name, the printer settings become unavailable. This issue occurs because any customizations to printer settings are not recognized after a printer is clustered.

Issue 2:
The memory leak occurs only while adding or deleting printers, or while modifying printer settings.

RESOLUTION:
Issue 1:
This hotfix addresses the issue. However, for the hotfix to work, you need to perform the following tasks manually.
Create the following DWORD registry key and set its value to 1:
- For global configurations, create: HKEY_LOCAL_MACHINE\SOFTWARE\VERITAS\VCS\BundledAgents\PrintSpool\__GLOBAL__\PrinterDriverSupport
- For per-resource configurations, create: HKEY_LOCAL_MACHINE\SOFTWARE\VERITAS\VCS\BundledAgents\PrintSpool\<resource_name>\PrinterDriverSupport
After creating this key, do one of the following:
- If a PrintSpool resource is offline, bring it online.
- If a PrintSpool resource is already online, probe it.
Upon installing the hotfix, the PrintSpool agent creates the 'HKEY_LOCAL_MACHINE\Cluster' key. Make sure that the key exists, and then add printers to the virtual name.
NOTE: Existing printers will continue to have this issue. Symantec recommends deleting and re-creating such printer resources.
CAUTION: This hotfix may cause unexpected effects on other applications that can be clustered (or are already clustered) on the same node.
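As a sketch, the global-configuration case could be expressed as a registry import file like the following; this interprets PrinterDriverSupport as a DWORD value under the __GLOBAL__ key, and the per-resource variant follows the same pattern with the resource name in place of __GLOBAL__:

```
Windows Registry Editor Version 5.00

; Global configuration; for per-resource configurations, replace
; __GLOBAL__ with the PrintSpool resource name
[HKEY_LOCAL_MACHINE\SOFTWARE\VERITAS\VCS\BundledAgents\PrintSpool\__GLOBAL__]
"PrinterDriverSupport"=dword:00000001
```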

Issue 2:
This hotfix fixes the memory leak issue.

File Name / Version:
PrintSpool.dll / 6.0.10019.286

* 3113887 (Tracking ID: 3111144)

SYMPTOM:
After every 49.7 days, VCS logs report that the GlobalCounter VCS attribute is not updated. This error is reported on all nodes in the cluster, except the node with the lowest node ID value. The 'GlobalCounter not updated' error also appears in the Event Viewer.

DESCRIPTION:
The error indicates that there could be communication issues between the cluster nodes. However, this error is incorrectly reported, and no communication issues actually occur. (The 49.7-day interval corresponds to the rollover period of a 32-bit millisecond counter: 2^32 milliseconds is approximately 49.7 days.)

RESOLUTION:
This issue has been fixed, and the spurious error is no longer reported at this interval.

File Name / Version:
had.exe / 6.0.10010.277
hacf.exe / 6.0.10010.277

* 3061942 (Tracking ID: 3061941)

SYMPTOM:
Two issues related to storage migration of a volume with SmartMove enabled.

DESCRIPTION:
In some cases, the following two issues are observed when storage migration is performed for a volume (with operations such as subdisk move, mirror resync, mirror attach, and so on) and SmartMove is enabled: 1. If the VCS MountV resource for the volume is brought offline, then the migration task completes abnormally and the volume may report data corruption. 2. If the data migration is performed for a volume of size 2 TB or greater, then the task never reaches 100% completion because of an integer overflow while handling large offsets.

RESOLUTION:
The two issues mentioned above have been fixed. The first has been fixed by adding correct handling of error conditions so that the task is aborted if the volume is taken offline. The second has been fixed by using a 64-bit variable for handling offsets.

File Name / Version:
vxconfig.dll / 6.0.10001.308

* 3099805 (Tracking ID: 3111073)

SYMPTOM:
SFW and SFWHA 6.0.1 unable to identify Hitachi HUS VM LUNs as thin reclaimable.

DESCRIPTION:
This issue occurs because SFW and SFW HA 6.0.1 did not support thin provisioning and storage reclamation on Hitachi Unified Storage VM (HUS VM) LUNs, and therefore could not identify them as thin reclaimable LUNs.

RESOLUTION:
The issue has been resolved by enhancing ddlprov.dll to add thin provisioning and storage reclamation support for the Hitachi HUS VM array.

File Name / Version:
ddlprov.dll / 6.0.10003.308

* 3124269 (Tracking ID: 3155620)

SYMPTOM:
Tagging of snapshot disks fails during the fire drill operation, because of which disk import also fails.

DESCRIPTION:
This issue occurs while performing the fire drill operations in case of hardware replication agents, which involve tagging of snapshot disks so that they can be imported separately from the original disks. Because of an issue with SFW, it does not write tags to the disks, and also proceeds without giving any error. Then, the import operation on the snapshot disks also fails because there are no disks present with the specified tag.

RESOLUTION:
This was an existing issue where SFW did not write to disks that are marked as read-only. The issue has been resolved by allowing the fire drill tag to be written to a disk even if the disk is marked as read-only.

File Name / Version:
vxconfig.dll / 6.0.10004.308

* 3231600 (Tracking ID: 3231593)

SYMPTOM:
Memory leak occurs for SFW VSS provider while taking a VSS snapshot.

DESCRIPTION:
This issue occurs during a VSS snapshot operation when VSS is loading and unloading providers. The SFW VSS provider connects to VEA database during the loading and disconnects during the unloading of providers. Because of an issue in the VEA database cleanup during the unloading, the memory leak occurs.

RESOLUTION:
This issue has been resolved; the SFW VSS provider no longer connects to and disconnects from the VEA database during every load and unload operation. Instead, it creates a connection at the beginning and disconnects when the Veritas VSS Provider Service (vxvssprovider.exe) is stopped.

File Name / Version:
vxvssprovider.exe / 6.0.10006.308

* 3298600 (Tracking ID: 3298597)

SYMPTOM:
The MountV resource faults when bringing a fire drill service group online at the DR site.

DESCRIPTION:
This issue occurs in case of folder mounts whose base mount is also configured under VCS and replicated. Even though the PurgeStaleMountPoints attribute for a MountV resource is set, the stale mount points are not cleared and the resource faults.

RESOLUTION:
The hotfix addresses this issue, and a MountV resource no longer fails due to stale mount points.

File Name / Version:
MountV.dll / 6.0.10013.281

* 3300849 (Tracking ID: 3300843)

SYMPTOM:
A fire drill service group fails to come online, because a required MountV resource is in the UNKNOWN state.

DESCRIPTION:
This issue occurs because the Fire Drill Wizard does not set a required MountV attribute correctly when using mount points. The VMDGResName attribute is set to the MountV resource name that this MountV resource depends on, instead of the appropriate VVRSnap resource name.

RESOLUTION:
The Fire Drill Wizard has been updated to set the correct value for the VMDGResName attribute of the MountV resource.

File Name / Version:
FireDrillVVRStep.dll / 6.0.10014.366
CCFEngine.exe.config

* 3231486 (Tracking ID: 3231484)

SYMPTOM:
The SQL Server 2008 Agent Configuration Wizard crashes with a "VCS component - SQL server 2008 Wizard has stopped working" error after you click Next on the User Databases List panel.

DESCRIPTION:
This issue occurs while configuring a SQL Server service group (SQL Server 2008, 2008 R2, and 2012). The SQL Server 2008 Agent Configuration Wizard displays the instances of the selected SQL Server version and the corresponding installed services on the SQL Server Instance Selection panel. If you select only the SQL Server instances and click Next, the wizard proceeds to the User Databases List panel. However, due to an internal error, the wizard crashes after you click Next on the User Databases List panel.

RESOLUTION:
The wizard behaviour is now modified to address this issue. Even if you select only the SQL Server instances on the SQL Server Instance Selection panel, the wizard now proceeds with the configuration without an error.

File Name / Version:
SQL2008Wizard.exe / 6.0.10009.280

* 3369234 (Tracking ID: 3369209)

SYMPTOM:
When adding the relevant services after the initial configuration, the VCS SQL Server 2008 Agent Configuration Wizard does not update the paths to the shared storage.

DESCRIPTION:
The initial SQL Server agent configuration was done, and the Analysis service was added. When the VCS SQL Server 2008 Agent Configuration Wizard was run again, although the resources were created, the paths for those resources were not in sync.

RESOLUTION:
This hotfix resolves the issue by updating the OLAP path after running the initial configuration.

File Name / Version:
SQL2008Wizard.exe / 6.0.10020.289

* 3386077 (Tracking ID: 3386071)

SYMPTOM:
VMDg resource faults and goes offline unexpectedly in a fire drill configuration.

DESCRIPTION:
This issue occurs if multiple fire drill service groups are configured and kept online. In that case, the VMDg resource may fault and go offline unexpectedly.

RESOLUTION:
A new VMDg attribute, ForFireDrill, has been introduced in this hotfix that resolves this issue. The ForFireDrill attribute defines whether the disk group being monitored by the agent is a fire drill disk group. The value 1 indicates that the disk group being monitored is a fire drill disk group. Default is 0, which means that the disk group being monitored is not a fire drill disk group. Note: After installing the hotfix and before performing a fire drill, manually set the ForFireDrill attribute as true for the existing VMDg resources that belong to the fire drill service group.
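The manual step in the note above could be performed with the standard VCS command line; a minimal sketch, in which the resource name FD_VMDg is hypothetical:

```
REM Make the cluster configuration writable, set the new ForFireDrill
REM attribute on an existing fire drill VMDg resource, then save and
REM close the configuration.
haconf -makerw
hares -modify FD_VMDg ForFireDrill 1
haconf -dump -makero
```

Repeat the hares -modify command for each VMDg resource that belongs to a fire drill service group.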

File Name / Version:
VMDg.dll / 6.0.10022.291
VMDg.xml / NA

* 3352705 (Tracking ID: 3352702)

SYMPTOM:
"vxdmpadm disk list" may display the disk name multiple times and it may crash by itself.

DESCRIPTION:
On some systems, the "vxdmpadm disk list" command may display a disk name multiple times, and sometimes it may crash. This happens due to incorrect logic in vxcmd.dll, where vxdmpadm tries to access invalid memory.

RESOLUTION:
This issue has been resolved by implementing the correct logic in the vxcmd library.

File Name / Version:
vxcmd.dll / 6.0.10010.308

* 3360992 (Tracking ID: 3360987)

SYMPTOM:
Server crashes during high write I/O operations on mirrored volumes.

DESCRIPTION:
This issue occurs when heavy write I/O operations are performed on mirrored volumes. During such high I/O operations, the server crashes due to a problem managing the memory for data buffers.

RESOLUTION:
This issue has been resolved by appropriately mapping the system-address-space described by MDL for the write I/Os on mirrored volumes.

File Name / Version:
vxio.sys / 6.0.10011.308

* 3347495 (Tracking ID: 3347491)

SYMPTOM:
After a failover, VEA sometimes does not show the drive letter or mounted folder paths of a successfully-mounted volume.

DESCRIPTION:
This issue may occur after a failover when VEA sometimes does not show the drive letter or mounted folder paths of a volume even though the volume is successfully mounted with the expected drive letter or folder paths. During a failover, when a disk group gets imported, SFW mounts all volumes of the disk group by querying the mount points using Microsoft API GetVolumePathNamesForVolumeName(). Sometimes, this API fails to return the correct drive letter or mounted folder paths because of which VEA fails to update the same.

RESOLUTION:
To resolve this issue, retry logic has been added around the Microsoft API GetVolumePathNamesForVolumeName() so that, if the mount path returned is empty, the operation is retried every 100 milliseconds for "n" attempts (5 by default), where "n" can be configured using the registry. This retry logic is disabled by default.

NOTE: Using this workaround has a performance impact on the service group offline and failover operations. During a service group offline or failover operation, the disk group deport operation may be delayed by up to "n/2" seconds maximum, where "n" is the number of volumes in the disk group.

To use the workaround, do the following:
1. Enable the retry logic by changing the value of the registry entry "RetryEnumMountPoint" from 0 to 1 under the registry key 
- HKEY_LOCAL_MACHINE\SOFTWARE\VERITAS\VxSvc\CurrentVersion\VolumeManager
2. Configure the number of retry attempts by changing the value of the registry entry "RetryEnumMPAttempts" under the registry key
- HKEY_LOCAL_MACHINE\SOFTWARE\VERITAS\VxSvc\CurrentVersion\VolumeManager
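The two steps above could be combined into a single registry import file, for example; the value names and key path come from the steps above, and 5 is the stated default attempt count:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\VERITAS\VxSvc\CurrentVersion\VolumeManager]
; Step 1: enable the retry logic (disabled by default)
"RetryEnumMountPoint"=dword:00000001
; Step 2: number of retry attempts (5 is the default)
"RetryEnumMPAttempts"=dword:00000005
```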


File Name / Version:
mount.dll / 6.0.10012.308

* 3450291 (Tracking ID: 3450059)

SYMPTOM:
SFW cannot form correct Enclosure for Hitachi Unified Storage 150 arrays

DESCRIPTION:
This issue occurs for Hitachi Unified Storage 150 disk arrays. Because SFW does not provide Enclosure support for Hitachi Unified Storage 150 arrays, it cannot form the correct Enclosure for this array. As a result, the VEA GUI incorrectly shows disks from two different Enclosures under one Enclosure. Moreover, mirroring across disks by Enclosure cannot be performed.

RESOLUTION:
This issue has been resolved by enhancing SFW with Enclosure support for Hitachi Unified Storage 150 arrays.

File Name / Version:
Hitachi.dll / 6.0.10014.308

* 3458775 (Tracking ID: 3458773)

SYMPTOM:
After a VxSVC restart on a fast failover configuration, VMDg/MountV resources may fault and failover to other cluster nodes may also fail

DESCRIPTION:
This issue occurs when the VxSVC service is restarted after an RO to RW conversion. After the service restarts, re-import of fast-failover-enabled disk groups may fail, which causes the VMDg resources or their dependent MountV resources to fault. Because the reservation thread is not stopped in this scenario, failover of cluster disk groups to other cluster nodes fails with reservation errors.

RESOLUTION:
This issue has been resolved by ignoring the Host ID check for fast failover enabled disk groups. Therefore, the disk group re-import is successful. If re-import fails for any other reason, then the reservation thread is stopped so that failover to other nodes is successful.

File Name / Version:
vxconfig.dll / 6.0.10016.308

* 3456751 (Tracking ID: 3456746)

SYMPTOM:
VxSvc services crashes with heap corruption in VRAS.dll

DESCRIPTION:
VRAS discarded a malformed packet it received because the size of the packet was too large. While freeing the IpmHandle pointer, it encountered an issue and eventually crashed.

RESOLUTION:
This hotfix resolves the crash which occurred during the handling of malformed packet.

File Name / Version:
vras.dll / 6.0.10017.308

* 3460423 (Tracking ID: 3460421)

SYMPTOM:
The Primary node hangs if TCP and compression are enabled.

DESCRIPTION:
During replication, this issue occurs if TCP and compression of data are enabled and resources are low on the Secondary node. Because of the low resources, decompression of data on the Secondary fails repeatedly, causing the TCP buffer to fill up. In such a case, if network I/Os are performed on the Primary and a transaction is initiated, the Primary node hangs.

RESOLUTION:
The issue of system hang caused by VVR is resolved in this hotfix.

File Name / Version:
vxio.sys / 6.0.10018.308

* 3447110 (Tracking ID: 3424478)

SYMPTOM:
Two scenarios where missing disks cannot be removed from a disk group

DESCRIPTION:
The issue where missing disks cannot be removed from a disk group occurs in the following two scenarios:
1. When you try to remove a missing disk from a disk group using the vxdg rmdisk command. In the command, you must provide the name of the disk group from which the missing disk needs to be removed. Despite providing the correct disk group name, the command fails because of a bug in the internal check performed for the disk group name.
2. When there is an even number of disks in a disk group with half of the disks missing. In this case, if there are any volumes on the non-missing disks, then removing the missing disks is not allowed. If you try to remove them, the operation fails with the "Cannot remove last disk in dynamic disk group" error. This happens because the operation to remove disks incorrectly compares the number of disks to be removed with the number of non-missing disks. If the numbers are equal, the operation tries to remove the complete disk group. However, the presence of volume resources prevents the removal of the disk group, and of the intended missing disks as well.

RESOLUTION:
The issue in both scenarios has been resolved as follows:
1. The first scenario has been resolved by modifying the way a missing disk can be removed from a disk group. While using the vxdg rmdisk command, you can remove a missing disk from a disk group either by specifying only its display name (for example, "Missing Disk (disk#)") or by specifying both its internal name and the name of the disk group to which it belongs.
2. The second scenario has been resolved so that the operation to remove disks now compares the number of disks to be removed with the total number of disks in the disk group, not with the number of non-missing disks.
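For illustration, the two invocation forms described above might look like the following transcript; the disk names and the disk group name DG1 are hypothetical, and the exact option syntax should be checked against the SFW command-line reference:

```
REM Remove a missing disk by its display name only:
vxdg rmdisk "Missing Disk (disk2)"

REM Or by internal name together with the name of its disk group:
vxdg -gDG1 rmdisk Harddisk5
```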

File Name / Version:
vxconfig.dll / 6.0.10019.308
vxdg.exe / 6.0.10019.308

* 3314700 (Tracking ID: 3279413)

SYMPTOM:
In a network using tagged VLAN configuration, the VCS IP agent may assign the virtual IP address to an incorrect network interface, if the interfaces have same MAC address.

DESCRIPTION:
In tagged VLAN network configurations, there are multiple independent logical network interfaces that are created within a physical network interface or a teamed network interface. Each of these logical network interfaces may have the same MAC address.
During the application service group configuration, the VCS application configuration wizard enables you to provide the virtual IP address for the application to be configured. The wizard also enables you to select an interface to which the specified virtual IP address should be assigned.
After you select the required interface, the wizard internally sets the MAC address of the selected interface as the value of the 'MACAddress' attribute of the IP agent. While selecting the interface, if you choose any of the logical interfaces that share the MAC address with other logical interfaces, then the IP agent may assign the specified virtual IP address to an interface other than the one selected.
This issue occurs because the IP agent uses the MAC address of the interface to identify and assign the IP address. Since the MAC address of all the logical interfaces is identical, the agent may fail to identify the selected interface and as a result assign the IP address to an incorrect interface.

RESOLUTION:
The IP agent currently uses the MAC address specified in the 'MACAddress' attribute to identify the interface to which the IP address should be assigned. The agent is now enhanced to use either the MAC address or the interface name as the parameters to identify the interface.
To identify the interface based on its name, you must edit the 'MACAddress' attribute of the IP agent, after you configure the application service group. 
Use the VCS Java Console to edit the 'MACAddress' attribute and enter the interface name instead of the MAC address. 
For example, 
MACAddress = 'InterfaceName' (without quotes)

Notes:
- After you specify the interface name as the 'MACAddress' attribute value, if you want to use the VCS wizards to modify any settings, then you must first reset the value of the 'MACAddress' attribute to use the MAC address of the interface.
Failing this, the VCS wizard may fail to identify and populate the selected interface. 
Use the VCS Java Console to edit the attribute values.

- If you change the interface name, you must update the 'MACAddress' attribute value to specify the new name. 
Failing this, the IP resource will go into an UNKNOWN state.

- While editing the 'MACAddress' attribute to specify the interface name, you must specify the name of only one interface.

- If you manually edit the attribute value in main.cf, you must enter the interface name in double quotes.
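For reference, a main.cf fragment with the interface name specified as the 'MACAddress' value might look like the following. This is a hypothetical sketch; the resource name, addresses, and the interface name 'VLAN_Interface_1' are examples only, not values from this hotfix:

```
IP App_IP (
    Address = "10.10.10.10"
    SubNetMask = "255.255.255.0"
    MACAddress = "VLAN_Interface_1"
    )
```

Note the double quotes around the interface name, which are required when you edit main.cf manually.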

File Name / Version:
IP.dll / 6.0.10016.292

* 3314705 (Tracking ID: 3279413)

SYMPTOM:
In a network using a tagged VLAN configuration, the VCS NIC agent may monitor an incorrect network interface if the interfaces have the same MAC address.

DESCRIPTION:
In tagged VLAN network configurations, multiple independent logical network interfaces are created within a physical network interface or a teamed network interface. Each of these logical network interfaces may have the same MAC address.
During the application service group configuration, the VCS application configuration wizard enables you to select an interface for each VCS cluster system and internally sets the MAC address of the selected interface as the value of the 'MACAddress' attribute of the NIC agent. While selecting the interface, if you choose any of the logical interfaces that share the MAC address with other logical interfaces, then the NIC agent may begin to monitor an interface other than the selected one.
This issue occurs because the NIC agent uses the MAC address of the interface to monitor and determine the status. Since the MAC address of all the logical interfaces is identical, the agent may fail to identify the selected interface.

RESOLUTION:
The NIC agent currently uses the MAC address specified in the 'MACAddress' attribute to monitor and determine the interface status. The agent is now enhanced to use either the MAC address or the interface name to monitor and determine the status.
To monitor the interface based on its name, you must edit the 'MACAddress' attribute of the NIC agent after you configure the application service group.
Use the VCS Java Console to edit the 'MACAddress' attribute and enter the interface name instead of the MAC address. 
For example, 
MACAddress = 'InterfaceName' (without quotes)

Notes: 

- After you specify the interface name as the 'MACAddress' attribute value, if you want to use the VCS wizards to modify any settings, then you must first reset the value of the 'MACAddress' attribute to the MAC address of the interface.
Failing this, the VCS wizard may fail to identify and populate the selected interface. 
Use the VCS Java Console to edit the attribute values.

- If you change the interface name, you must update the 'MACAddress' attribute value to specify the new name. 
Failing this, the NIC resource will go into an UNKNOWN state.

- While editing the 'MACAddress' attribute to specify the interface name, you must specify the name of only one interface.  

- If you manually edit the attribute value in main.cf, you must enter the interface name in double quotes.

File Name / Version:
NIC.dll / 6.0.10017.292

* 3062860 (Tracking ID: 3062849)

SYMPTOM:
If LLT is running alongside a utility like Network Monitor, a system crash (BSOD) occurs when shutting down Windows.

DESCRIPTION:
The system crashes due to a time out that occurs while waiting for LLT to unbind from the adapters during a shutdown operation. This issue occurs when LLT is configured over Ethernet.

RESOLUTION:
The LLT driver has been updated to properly handle unbind calls so that Windows can shut down gracefully.

File Name / Version:
llt.sys / 6.0.10003.267

* 3499335 (Tracking ID: 3497449)

SYMPTOM:
A cluster disk group loses access to a majority of its disks due to a SCSI error.

DESCRIPTION:
This issue occurs while processing a request to renew or query the SCSI reservation on a disk belonging to a cluster disk group. The operation fails because of the following SCSI error: Unit Attention - inquiry parameters changed (6/3F/03). As a result, the cluster disk group loses access to a majority of its disks.

RESOLUTION:
This issue has been resolved by retrying the SCSI reservation renew or query request.

File Name / Version:
vxio.sys / 6.0.10020.308

* 3524590 (Tracking ID: 3524586)

SYMPTOM:
VEA hangs and eventually crashes when a user tries to access the "View Historic Bandwidth Usage" graph.

DESCRIPTION:
In the VVR GUI, if the "View Historic Bandwidth Usage" graph is launched from an RDS and the historic dataset is large, the VEA hangs and crashes after some time.

RESOLUTION:
This issue is fixed by improving the handling of the third-party objects that consume a large amount of memory. Certain enhancements are also implemented for the graph refresh operation.

File Name / Version:
vvr.jar / N/A

* 3521509 (Tracking ID: 3521503)

SYMPTOM:
Issue 1:
A Lanman resource faults with the error code: [2, 0x0000006F].

Issue 2:
After you upgrade VCS or SFW HA from 5.1 SP2 to 6.0.1, the Lanman resources fault when checking whether the virtual server name is alive. 

NOTE: Fix for issue #1 was earlier released as Hotfix_6_0_10023_3492551. It is now included in this hotfix (Hotfix_6_0_10025_3521509).

DESCRIPTION:
Issue 1:
This occurs due to an internal issue with the Lanman agent.

Issue 2:
This issue occurs if your setup contains DNS entries that point to a virtual IP address that is online. The Lanman agent faults when it performs a duplicate name check.

RESOLUTION:
Issue 1:
The Lanman agent has been updated so that the Lanman resource does not fault due to this issue.

Issue 2:
This hotfix addresses the issue by updating the Lanman agent to skip the duplicate name check.

ADDITIONAL TASKS: After you apply this hotfix, perform the following tasks on all the nodes in the cluster:
1. Open the registry editor.
2. Navigate to the registry key: 'HKLM\SOFTWARE\VERITAS\VCS\BundledAgents\Lanman\<Lanman_Resource_Name>' or 'HKLM\SOFTWARE\VERITAS\VCS\BundledAgents\Lanman\__GLOBAL__'
3. Create the DWORD value 'DisablePingCheckForDND' and set it to '1'.

Note: If you create this tunable parameter under the '__GLOBAL__' key, it applies to all the Lanman resources configured in the cluster.
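The registry value described above can also be created from an elevated command prompt using reg.exe. This is a sketch for the '__GLOBAL__' key; substitute the resource-specific key path if the setting should apply to a single Lanman resource only:

```
reg add "HKLM\SOFTWARE\VERITAS\VCS\BundledAgents\Lanman\__GLOBAL__" /v DisablePingCheckForDND /t REG_DWORD /d 1 /f
```

Run the command on every node in the cluster.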

File Name / Version:
Lanman.dll / 6.0.10025.297

* 3644019 (Tracking ID: 3644017)

SYMPTOM:
When VxExplorer gathers the GAB/LLT logs, the program crashes or causes a system crash.

DESCRIPTION:
This issue occurs when the lltshow utility attempts to fetch information about the LLT nodes or links using the following options:
lltshow -n -1
lltshow -l -1
VxExplorer executes these commands when gathering the logs. Therefore, you might encounter the issue when you run VxExplorer with the GAB/LLT option.

RESOLUTION:
This hotfix addresses the issue by updating the lltshow utility to identify the correct sizes of the LLT nodes and links.

FILE / VERSION:
lltshow.exe / 6.0.10028.300

* 3615729 (Tracking ID: 3615728)

SYMPTOM:
VMwareDisks agent supports only 'persistent' virtual disk mode.

DESCRIPTION:
The VMwareDisks agent currently supports only 'persistent' virtual disk mode.
If you have configured the disk in 'independent' mode, in the event of a failover, the VMwareDisks agent sets the mode to 'persistent'.

RESOLUTION:
The behaviour of the VMwareDisks agent is modified through this hotfix.
The VMwareDisks agent now supports 'independent' disk mode.
An attribute 'VirtualDiskMode' is now added to the VMwareDisks agent's attributes. 
You can set the value of this attribute to 'persistent', 'independent_persistent' or 'independent_nonpersistent'.
By default, the value is set to 'persistent'. You must modify the value after you configure application monitoring.
Note: The VMwareDisks agent does not detect the mode in which the disk is configured. After a failover, the disk is attached in the mode defined in the attribute value.
For more details about the disk modes, refer to the VMware documentation.

To modify the value, use the VCS Java Console or perform the following steps from the command line:
1. If you have configured the disk in 'persistent' mode, bring the service group offline:
   hagrp -offline service_group -sys system_name
2. Change the cluster configuration to read-write mode:
   haconf -makerw
3. Modify the attribute value:
   hares -modify resource_name VirtualDiskMode independent_persistent
4. Save the configuration:
   haconf -dump
5. Change the cluster configuration to read-only mode:
   haconf -makero

FILE / VERSION:
VMwareDisks.dll / 6.0.10027.299
VMwarelib.dll / 6.0.10027.299
VMwareDisks.xml

* 3623654 (Tracking ID: 3623653)

SYMPTOM:
Disk group import fails with the following error: "Unexpected kernel error in configuration update"

DESCRIPTION:
When the preferred plex for a volume is not set and a Data Change Map (DCM) plex already resides on a Solid State Device (SSD), the disk group import fails with the following error: "Unexpected kernel error in configuration update". This happens because the DCM plex gets set as the preferred plex.

RESOLUTION:
The issue has been resolved by modifying the code so that the DCM plex is not considered as the preferred plex, even when no preferred plex is set.

File Name / Version:
vxconfig.dll / 6.0.10025.308

* 3610931 (Tracking ID: 3575793)

SYMPTOM:
When the Primary server disconnects the RLINK, Volume Replicator causes I/O delays.

DESCRIPTION:
While sending the data acknowledgement to the Primary server, if a timeout occurs, the Primary disconnects the RLINK and initiates the error handler. While the error handler is active, Volume Replicator causes I/O delays on the Primary volume.

RESOLUTION:
The error-handling code has been updated to resolve the issue.

File Name / Version:
vxio.sys / 6.0.10023.308

* 3579881 (Tracking ID: 3579878)

SYMPTOM:
When you take the replication service group offline on the Secondary host, the Secondary host stops responding.

DESCRIPTION:
After Volume Replicator sends the writes to the data volumes on the Secondary, it sends a data acknowledgement to the Primary. If this data acknowledgement remains stuck in the queue, and another write is executed simultaneously, the Secondary stops responding.

RESOLUTION:
This issue has been resolved by ensuring that the data acknowledgement messages do not wait in the queue.

FILE / VERSION:
vxio.sys/6.0.10022.308

* 3736357 (Tracking ID: 3735641)

SYMPTOM:
If half of the disks that are available in the cluster quorum disk group are disconnected, the quorum disk group faults and the failover cluster deports it.

DESCRIPTION:
In the event of a change in the cluster configuration, the failover cluster updates the cluster configuration data on the disks that form the quorum disk group.
During this update, if half of the disks that are available in the cluster quorum disk group are disconnected, the quorum disk group faults and the failover cluster deports it.

RESOLUTION:
This hotfix resolves the issue by modifying the SFW behaviour.

File Name / Version:
vxio.sys / 6.0.10026.308

* 3746359 (Tracking ID: 3746355)

SYMPTOM:
When you split a disk group that has Fast Mirror Resync (FMR) volumes, the disk group import fails.

DESCRIPTION:
When you split a disk group with FMR volumes, the vxio driver tries to log errors with stale information on the disk. Normally, the logging does not succeed when the disk group is split. However, in some cases, the logging might be successful, causing the disk group import to fail.

RESOLUTION:
This hotfix resolves the issue by ignoring the disabled DCO volumes.

File Name / Version:
vxio.sys / 6.0.10027.308

* 3771896 (Tracking ID: 3771877)

SYMPTOM:
Storage Foundation does not provide array-specific support for Infinidat arrays other than DSM.

DESCRIPTION:
Storage Foundation does not provide any array-specific support for Infinidat arrays, except DSM. As a result, Storage Foundation is unable to perform any operations related to enclosures, thin provisioning reclamation, and track alignment on the LUNs created on Infinidat arrays.

RESOLUTION:
This hotfix addresses the issue by providing support for enclosures, thin provisioning reclamation, and track alignment for Infinidat arrays.

Known issue: When a reclaim operation for the disk group is in progress and you disconnect a disk path, the reclaim operation fails for the last disk in the disk group.
Workaround: Retry the disk group reclaim operation.

FILE / VERSION:
NFINIDAT.dll / 6.0.10029.308
ddlprov.dll / 6.0.10029.308

* 3694213 (Tracking ID: 3773915)

SYMPTOM:
The Lanman resource fails to come online due to certain DNS settings.

DESCRIPTION:
This issue occurs if your setup uses CNAME records instead of HOST (A) records for the virtual name. The Lanman agent faults when it performs a duplicate name check, because it incorrectly looks at non-HOST records.

RESOLUTION:
This hotfix addresses the issue by updating the Lanman agent to look at only the HOST records from a DNS query.

FILE / VERSION:
Lanman.dll / 6.0.10029.301



INSTALLING THE PATCH
--------------------
What's new in this CP
=====================|

The following hotfixes have been added in this CP:
 - Hotfix_6_0_10026_308_3736357
 - Hotfix_6_0_10027_308_3746359
 - Hotfix_6_0_10029_308_3771896
 - Hotfix_6_0_10029_3694213

For more information about these hotfixes, see the "FIXED_INCIDENTS" section in this Readme.


Install instructions
====================|

Download the appropriate cumulative public patch (CP) executable file to a temporary location on your system.

Each cumulative public patch includes the individual hotfixes that contain enhancements and fixes related to reported issues.
See the "FIXED_INCIDENTS" section for details.

Before you begin
----------------:
[1] Ensure that the logged-on user has privileges to install the CP on the systems.

[2] One or more hotfixes that are included with this CP may require a reboot.
Before proceeding with the installation, ensure that the system can be rebooted.

[3] Symantec recommends that you close the Cluster Manager (Java Console) and the Veritas Enterprise Administrator (VEA) Console before installing this CP.

[4] Ensure that you close the Windows Event Viewer before proceeding with the installation.

[5] Before installing the CP, ensure that the Startup type of the VCS Authentication Service is set to Automatic. This is applicable only if you have configured a secure cluster.

[6] Before installing CP on Windows Server Core systems, ensure that Visual Studio 2005 x86 redistributable is installed on the systems.



To install the CP in the silent mode
-----------------------------------:

Perform the following steps:

[1] Double-click the CP executable file to start the CP installation. 

The installer performs the following tasks:
    - Extracts all the individual hotfix executable files
      The files are extracted at %commonprogramfiles(x86)%\Veritas Shared\WxRTPrivates\<HotfixName>
    - Runs the pre-install tasks
    - Installs all the hotfixes sequentially
    - Runs the post-install tasks
The installation progress status is displayed in the command window.

[2] After all the hotfixes are installed, the installer prompts you to restart the system.
Type Y to restart the system immediately, or type N to restart the system later. 
You must restart the system for the changes to take effect.

Note that the installer prompts for a system restart only if hotfixes that require a reboot are included in the CP and are installed.

To install the CP using the command line
----------------------------------------:

Use the VxHFBatchInstaller.exe utility to install a CP from the command line.
The syntax options for this utility are as follows:

vxhfbatchinstaller.exe /CP:<CPName> [/PreInstallScript:<PreInstallScript.pl>] [/silent [/forcerestart]]

where,
    - CPName is the cumulative patch executable file name without the platform, architecture, and .exe extension.
For example, if the CP executable name is CP7_SFWHA_601_W2K8_x64.exe, specify it as CP7_SFWHA_601.

    - PreInstallScript.pl is the Perl script that includes the pre-installation steps. These steps forcefully kill the required services and processes in case a graceful stop request does not succeed.
    Symantec recommends that you use this option and script only in case the CP installer fails repeatedly while performing the pre-installation tasks.

    - /silent indicates the installation is run in a non-verbose mode; the installer does not prompt for any inputs during the installation.

    - /forcerestart indicates that the system is automatically restarted, if required, after the installation is complete.


Perform the following steps:
[1] From the command prompt, navigate to the directory where the CP executable file is located and then run the file to extract the contents to a default location on the system. 
The installer displays a list of hotfixes that are included in the CP.
    - On 64-bit systems, the hotfix executable files are extracted to:
      "%commonprogramfiles(x86)%\Veritas Shared\WxRTPrivates\<CPName>"
      
[2] In the same command window, run the following command to begin the CP installation in the silent mode:
vxhfbatchinstaller.exe /CP:<CPName> /silent

For example, to install an SFW 6.0.1 CP for Windows Server 2008, the command is:
vxhfbatchinstaller.exe /CP:CP7_SFW_601 /silent

The installer performs the following tasks:

    - Extracts all the individual hotfix executable files
      The files are extracted at %commonprogramfiles(x86)%\Veritas Shared\WxRTPrivates\<HotfixName>
    - Runs the pre-install tasks
    - Installs all the hotfixes sequentially
    - Runs the post-install tasks
The installation progress status is displayed in the command window.

[3] After all the hotfixes are installed, the installer displays a message for restarting the system.
You must restart the system for the changes to take effect.

Note that the installer prompts for a system restart only if hotfixes that require a reboot are included in the CP and are installed. The installer automatically restarts the system if you had specified the /forcerestart option in step 2 earlier.

VxHFBatchInstaller usage examples
---------------------------------:

[+] Install CP in silent mode, restart automatically:
vxhfbatchinstaller.exe /CP:CP7_SFWHA_601 /silent /forcerestart

Post-install steps
==================|
The following section describes the steps that must be performed after installing the hotfixes included in this CP.

Ensure that the VIP_PATH environment variable is set to "C:\Program Files\Veritas\Veritas Object Bus\bin" 
and NOT to "C:\<INSTALLDIR_BASE>\Veritas Object Bus\bin". This assumes that C:\ is the default installation drive.
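A quick way to check the current value on a node is from a command prompt (a simple sketch; the expected path assumes the default installation drive C:\):

```
echo %VIP_PATH%
```

If the output shows the <INSTALLDIR_BASE> path instead of "C:\Program Files\Veritas\Veritas Object Bus\bin", correct the variable before proceeding.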

Known issues
============|
The following section describes the issue related to the 6.0.1 CP7:

[1] After uninstalling this CP, the Startup type of the VCS Authentication Service resets to manual mode. As a result, the service stops after a system restart, and you might encounter issues with the configuration wizards.
Workaround: After uninstalling the CP, change the Startup type of the VCS Authentication Service to Automatic.

The following section describes the issues related to the individual hotfixes that were included in the previous 6.0.1 CP1:

[1] Hotfix_6_0_10004_308_3124269, Hotfix_6_0_10003_308_3099805, and Hotfix_6_0_10001_308_3061942

 - These hotfixes were initially part of CP1 and were re-archived in CP2 to ensure that VIP_PATH is set to the correct value.

 - In CP1, after installing or uninstalling these hotfixes, the VIP_PATH environment variable was incorrectly set to "C:\<INSTALLDIR_BASE>\Veritas Object Bus\Bin" 
 instead of "C:\Program Files\Veritas\Veritas Object Bus\Bin".

-------------------------------------------------------+


REMOVING THE PATCH
------------------
NO


SPECIAL INSTRUCTIONS
--------------------
This fix is provided without warranty of any kind including the warranties of title or implied warranties of merchantability, 
fitness for a particular purpose and non-infringement. Symantec disclaims all liability relating to or arising out of this fix. 
It is recommended that the fix be evaluated in a test environment before implementing it in your production environment. 
When the fix is incorporated into a Storage Foundation for Windows maintenance release, the resulting Hotfix or Service Pack 
must be installed as soon as possible. Symantec Technical Services will notify you when the maintenance release (Hotfix or Service Pack) 
is available if you sign up for notifications from the Symantec support site http://www.symantec.com/business/support and/or 
from Symantec Operations Readiness Tools (SORT) http://sort.symantec.com.

Additional notes
================|

[+] To confirm the list of cumulative patches installed on a system, run the following command from the directory where the CP files are extracted:
vxhfbatchinstaller.exe /list

The output of this command displays a list of cumulative patches and the hotfixes that are installed as part of a CP. 
This command also displays the hotfixes that are included in a CP but are not installed on the system.

[+] To confirm the installation of the hotfixes, perform one of the following:
    - Run the following command:
      vxhf.exe /list
      The output of this command lists the hotfixes installed on the system.
    - In the Windows Add/Remove program, click "View installed updates" to view the list of the hotfixes installed on the system.

[+] To confirm the latest cumulative patch installed on a system, run the following command from the directory where the CP files are extracted:
vxhfbatchinstaller.exe /cplevel

The output of this command displays the latest CP that is installed, the CP status, and a list of all hotfixes that were a part of the CP but not installed on the system.

[+] The CP installer (vxhfbatchinstaller.exe) creates and stores logs at:
"%allusersprofile%\Veritas\VxHF\VxHFBatchInstaller.txt"

[+] The hotfix installer (vxhf.exe) creates and stores logs at:
"%allusersprofile%\Veritas\VxHF\VxHFBatchInstaller.txt"

[+] For general information about the hotfix installer (vxhf.exe), please refer to the following technote:
http://www.symantec.com/docs/TECH73446

[+] To view a list of hotfixes already installed on a system, please refer to the steps mentioned in the following technote:
http://www.symantec.com/docs/TECH73438

[+] For information on uninstalling a hotfix, please refer to the following technotes:
http://www.symantec.com/docs/TECH225604
http://www.symantec.com/docs/TECH73443

[+] For general information about the CP, please refer to the following technote:
http://www.symantec.com/docs/TECH209086


OTHERS
------
NONE