This readme has the following sections: =====================| * Date * OS/Version * Packages * Etrack Incidents * Fixes Applied for Products * What's new in this CP * Install instructions * - Before you begin * - To install in the verbose mode * - To install in the non-verbose (silent) mode * Post-install steps * Known issues * Errors/Problems fixed * Additional notes * Disclaimer =====================| Date: 2013-12-20 OS: Windows OS Version: 2008, 2008 R2 Packages: =============================================== Architecture/OS Windows Server 2008 / 2008 R2 =============================================== x64 CP10_SFWHA_60_W2k8_x64.exe ----------------------------------------------- Etrack Incidents: 2715104, 2763375, 2705769, 2650336, 2704183, 2695917, 2722108, 2730657, 2762153, 2740833, 2761898, 2752096, 2744349, 2677127, 2783096, 2810516, 2755133, 2845295, 2593039, 2869461, 2877405, 2871478, 2885211, 2791125, 2856969, 2786159, 2862335, 2882535, 2864298, 2897781, 2898414, 2904593, 2919276, 2913240, 3053280, 3062856, 2912682, 3105024, 3111155, 3122364, 3082932, 3137880, 3160235, 3202548, 3210766, 3226691, 2649700, 3252941, 3300658, 3300654, 3377535, 3378041 Fixes Applied for Products ==========================| Storage Foundation and High Availability Solutions (SFW HA) 6.0 for Windows What's new in this CP =====================| The following hotfixes have been added in this CP: - Hotfix_6_0_00033_362_3377535 - Hotfix_6_0_00034_362_3378041 The following hotfix has been removed from this CP: - Hotfix_6_0_00014_2864298 For more information about these hotfixes, see the "Errors/Problems fixed" section in this readme. Install instructions ====================| Download the appropriate cumulative public patch (CP) executable file to a temporary location on your system. You can install the CP in a verbose mode or in a non-verbose mode. Instructions for both options are provided below. Each cumulative public patch includes the individual hotfixes that contain enhancements and fixes related to reported issues. See "Errors/Problems Fixed" section for details. Before you begin ----------------: [1] Ensure that the logged-on user has privileges to install the CP on the systems. [2] One or more hotfixes that are included with this CP may require a reboot. Before proceeding with the installation ensure that the system can be rebooted. [3] Symantec recommends that you close the Cluster Manager (Java Console) and the Veritas Enterprise Administrator (VEA) Console before installing this CP. [4] Ensure that you close the Windows Event Viewer before proceeding with the installation. [5] Before installing CP, ensure that the Startup type of the VCS Authentication Service is set to Automatic. This is applicable only if you have configured a secure cluster and installed CP or Agent Pack 2Q2012 on your system. [6] Before installing CP on Windows Server Core systems, ensure that Visual Studio 2005 x86 redistributable is installed on the systems. To install in the verbose mode ------------------------------: In the verbose mode, the cumulative public patch (CP) installer prompts you for inputs and displays the installation progress status in the command window. Perform the following steps: [1] Double-click the CP executable file to extract the contents to a default location on the system. The installer displays a list of hotfixes that are included in the CP. 
On 64-bit systems, the hotfix executable files are extracted to:
"%commonprogramfiles(x86)%\Veritas Shared\WxRTPrivates\"
The installer also lists the hotfixes that require a reboot of the system after the installation. If a system reboot is not an option at this time, you can choose not to install these hotfixes. In such a case, exit the installation and then launch the CP installer again from the command line using the /exclude option. See the "To install in the non-verbose (silent) mode" section for the syntax.
[2] When the installer prompts whether you want to continue with the installation, type Y to begin the hotfix installation.
The installer performs the following tasks:
- Extracts all the individual hotfix executable files
  On 64-bit systems, the files are extracted at %commonprogramfiles(x86)%\Veritas Shared\WxRTPrivates\
- Runs the pre-install tasks
- Installs all the hotfixes sequentially
- Runs the post-install tasks
The installation progress status is displayed in the command window.
[3] After all the hotfixes are installed, the installer prompts you to restart the system. Type Y to restart the system immediately, or type N to restart the system later. You must restart the system for the changes to take effect. Note that the installer prompts for a system restart only if hotfixes that require a reboot are included in the CP and are installed.

To install in the non-verbose (silent) mode
-------------------------------------------:
In the non-verbose (silent) mode, the cumulative public patch (CP) installer does not prompt you for inputs and directly proceeds with the installation tasks. The installer displays the installation progress status in the command window.
Use the VxHFBatchInstaller.exe utility to install a CP from the command line. The syntax options for this utility are as follows:
vxhfbatchinstaller.exe /CP:<CPName> [/Exclude:<HF1.exe>,<HF2.exe>,...] [/PreInstallScript:<PreInstallScript.pl>] [/silent [/forcerestart]]
where,
- CPName is the cumulative public patch executable file name without the platform, architecture, and .exe extension. For example, if the CP executable file name is CP10_SFWHA_60_W2K8_x64.exe, specify it as CP10_SFWHA_60.
- HF1.exe, HF2.exe,... represent the executable file names of the hotfixes that you wish to exclude from the installation. Note that the file names are separated by commas, with no space after a comma. The CP installer skips the mentioned hotfixes during the installation.
- PreInstallScript.pl is the Perl script that includes the pre-installation steps. These steps forcefully kill the required services and processes in case a graceful stop request does not succeed. Symantec recommends that you use this option and script only if the CP installer fails repeatedly while performing the pre-installation tasks.
- /silent indicates that the installation is run in the non-verbose mode; the installer does not prompt for any inputs during the installation.
- /forcerestart indicates that the system is automatically restarted, if required, after the installation is complete.
Perform the following steps:
[1] From the command prompt, navigate to the directory where the CP executable file is located and then run the file to extract the contents to a default location on the system. The installer displays a list of hotfixes that are included in the CP.
- On 64-bit systems, the hotfix executable files are extracted to:
"%commonprogramfiles(x86)%\Veritas Shared\WxRTPrivates\"
The installer also lists the hotfixes that require a reboot of the system after the installation.
If a system reboot is not an option at this time, you can choose not to install these hotfixes. In such a case, launch the CP installer from the command line using the /exclude option.
[2] When the installer prompts whether you want to continue with the installation, type N to exit the installer.
[3] In the same command window, run the following command to begin the CP installation in the non-verbose mode:
vxhfbatchinstaller.exe /CP:<CPName> /silent
For example, to install an SFW HA 6.0 x64 CP for Windows Server 2008, the command is:
vxhfbatchinstaller.exe /CP:CP10_SFWHA_60 /silent
The installer performs the following tasks:
- Extracts all the individual hotfix executable files
  On 64-bit systems, the files are extracted at %commonprogramfiles(x86)%\Veritas Shared\WxRTPrivates\
- Runs the pre-install tasks
- Installs all the hotfixes sequentially
- Runs the post-install tasks
The installation progress status is displayed in the command window.
[4] After all the hotfixes are installed, the installer displays a message for restarting the system. You must restart the system for the changes to take effect. Note that the installer prompts for a system restart only if hotfixes that require a reboot are included in the CP and are installed. The installer automatically restarts the system if you specified the /forcerestart option in step 3 earlier.

VxHFBatchInstaller usage examples
----------------------------------:
[+] Install CP in silent mode, exclude hotfixes HotFix_6_0_00001_2763375_w2k8_x64.exe and Hotfix_6_0_00002_362_2705769_w2k8_x64.exe:
vxhfbatchinstaller.exe /CP:CP10_SFWHA_60 /Exclude:HotFix_6_0_00001_2763375_w2k8_x64.exe,Hotfix_6_0_00002_362_2705769_w2k8_x64.exe /silent
[+] Install CP in silent mode, restart automatically:
vxhfbatchinstaller.exe /CP:CP10_SFWHA_60 /silent /forcerestart

Post-install steps
==================|
The following section describes the steps that must be performed after installing the hotfixes included in this CP. Note that these steps are applicable only to the hotfixes listed in this section.
[1] HotFix_6_0_00007_2722108
Perform the following steps only if you have installed HotFix_6_0_00007_2722108 as part of the CP installation.
Setting the DisableServerNameScoping and DisableStrictVirtualNameCheck registry keys
Perform the following steps to create and set the DisableServerNameScoping and DisableStrictVirtualNameCheck registry keys. These keys are used by the FileShare agent to support non-scoped file shares and to remove virtual name availability as a requirement for file shares to be available on the network.
Caution: Incorrectly editing the registry may severely damage your system. Make a backup copy before making changes to the registry.
To configure the DisableServerNameScoping and DisableStrictVirtualNameCheck registry parameters:
1. To open the Registry Editor, click Start > Run, type regedit, and then click OK.
2. In the registry tree (on the left), navigate to HKLM\SOFTWARE\VERITAS\VCS\BundledAgents.
3. Click Edit > New > Key and create a key by the name Lanman, if it does not exist already.
4. Select Lanman and then click Edit > New > Key and create a key by the name __GLOBAL__. (underscore-underscore-GLOBAL-underscore-underscore)
5. Select __GLOBAL__ and add a DWORD type and specify Value name as DisableServerNameScoping and Value data as 1. The value 1 indicates that the FileShare agent supports non-scoped file shares on Windows Server 2008 and 2008 R2 systems.
6.
Select __GLOBAL__ and add a DWORD type and specify Value name as DisableStrictVirtualNameCheck and Value data as 1. Setting this registry value to 1 allows the FileShare agent to make the file shares available even if the virtual name is not available on the network. You must create the DisableStrictVirtualNameCheck key parallel to the file share scoping key DisableServerNameScoping. 7. Save and exit the Registry Editor. -------------------------------------------------------+ [2] Hotfix_6_0_00003_2650336, HotFix_6_0_00008_2762153, HotFix_6_0_00015_2897781, Hotfix_6_0_00013_2882535 and Hotfix_6_0_00023_3122364 If the cluster is not configured before installing this hotfix, the post-install task of modifying the type will fail. In that case, manually run the batch files for the hotfixes after cluster is configured to update the type. You can find the batch file name from the location where the hotfix is extracted. -------------------------------------------------------+ Known issues ============| The following section describes the issues related to the individual hotfixes that are included in this CP: [1] Hotfix_6_0_00009_362_2761898 and HotFix_6_0_00008_2762153 - These Hotfixes are not qualified for the Microsoft Hyper-V guest operating systems. - Fast failover and Intelligent Monitoring Framework (IMF) are not supported. - To install or uninstall a Hotfix using the Hotfix installer, the corresponding VxHF.exe or VxHFW.exe must be invoked from the location where the Hotfix files are extracted. - If a split-brain occurs, the application service group may not be available on any of the cluster nodes. - If there are one or more resources of the VMNSDg type configured, then the resource type definitions for VMNSDg do not get deleted during the SFW HA Hotfix (2762153) uninstallation. -------------------------------------------------------+ [2] The post-installation task of starting the Plugin-Host service fails on Windows Server Core system. The Plugin-Host service requires that the .NET Framework is present on the system where the service is to be started. As the .NET Framework is not supported on the Windows Server Core systems, the service cannot be started on these systems. Hence for the hotfixes that have the post-installation task of starting the Plugin-Host service, the task fails. This is expected behavior and can be safely ignored. -------------------------------------------------------+ Errors/Problems fixed =====================| The fixes and enhancements that are included in this cumulative public patch (CP) are as follows: [1] Hotfix name: Hotfix_3_4_531_0_2715104 Symptom: VEA GUI prompts for login credentials when a logged-in user or domain administrator connects to VEA server. Description: In Veritas Storage Foundation for Windows 6.0, this issue occurs when a logged-in user or domain administrator connects to a Veritas Enterprise Administrator (VEA) server through the VEA GUI. The VEA GUI prompts for login credentials each time the user connects to a VEA server. This happens if the DCOM-based authentication in VEA GUI fails and the existing 32-bit midlman.dll for VEA server cannot be loaded into the 64-bit VEA VxSVC service. Resolution: The DCOM-based authentication is enabled in VEA GUI and, in addition to the 32-bit midlman.dll, a 64-bit midlman64.dll is added for the server component so that it is loaded into VxSVC service. Note: This hotfix is applicable to server components only. A separate hotfix is available for client components. 
Contact Symantec Technical Support for more details. Binary / Version: midlman64.dll / 3.4.531.0 ci.jar / 3.4.29.0 cicustom.jar / 3.4.29.0 obCommon.jar / 3.4.29.0 OBGUI.jar / 3.4.29.0 -------------------------------------------------------+ [2] Hotfix name: HotFix_6_0_00001_2763375 Symptom: The Fire Drill, Disaster Recovery, and Quick Recovery wizards, and the Solutions Configuration Center did not support Microsoft SQL Server 2012. Description: Need to provide support for SQL Server 2012 in the Fire Drill, Disaster Recovery, and Quick Recovery wizards, and the Solutions Configuration Center. Resolution: Added support for SQL Server 2012 in the Fire Drill, Disaster Recovery, and Quick Recovery wizards, and the Solutions Configuration Center. Note 1: For the Quick Recovery wizard to function properly with SQL Server 2012, you must apply Hotfix_6_0_00011_363_2744349 along with this hotfix. Note 2: This hotfix is applicable to server components only. A separate hotfix is available for client components. Contact Symantec Technical Support for more details. Binary / Version: DRPluginProxy.dll / 6.0.1.709 FireDrillStep.dll / 6.0.1.709 qrpages.dll / 6.0.0.701 QuickRecovery.dll / 6.0.1.709 CCFEngine.exe.config / NA -------------------------------------------------------+ [3] Hotfix name: Hotfix_6_0_00002_362_2705769 Symptom: The Quick Recovery Configuration Wizard fails to create snapshot schedules on ApplicationHA-managed node. Description: When Veritas Storage Foundation for Windows (SFW) 6.0 is installed on the Symantec ApplicationHA 6.0 managed node, the Quick Recovery Configuration Wizard fails to create snapshot schedules for the application. The wizard displays the following error messages: - Failed to Create, Modify or Delete volume settings for one or more schedules - Failed while getting cluster related information for the application Resolution: The VxSnapSchedule.dll file has been updated to address the issue. Binary / Version: VxSnapSchedule.dll / 6.0.00002.362 -------------------------------------------------------+ [4] Hotfix name: Hotfix_6_0_00003_2650336 Symptom: This hotfix addresses an issue related to the VCS PrintSpool agent where information about printers that were newly added in the virtual server context is lost in case of a system crash or unexpected failures that require a reboot. Description: The PrintSpool agent stores printer information to the configuration during its offline function. Therefore if printers are added in the virtual server context, but the printspool resource or the printshare service group is not gracefully taken offline or failed over, the new printer information does not get stored to the disk. In such a case, if the node where the service group is online hangs, shuts down due to unexpected failures, or reboots abruptly, all the new printer information is lost. The printshare service group may also fail to come online again on the node. Resolution: This issue has been fixed in the PrintSpool agent. A configurable parameter now enables the agent to save the printer information periodically. A new attribute, 'SaveFrequency', is added to the PrintSpool agent. SaveFrequency specifies the number of monitor cycles after which the PrintSpool agent explicitly saves the newly added printer information to the cluster configuration. The value 0 indicates that the agent does not explicitly save the information to disk. It will continue to save the printer information during its offline function. The default value is 5. 
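For example, the SaveFrequency value can be changed from the command prompt with the standard VCS configuration commands. The following is only a sketch: PrintSpool_Res is a placeholder for the name of the PrintSpool resource in your own configuration, and it assumes SaveFrequency is set on a per-resource basis as described above:
haconf -makerw
hares -modify PrintSpool_Res SaveFrequency 10
haconf -dump -makero
The first command makes the cluster configuration writable, the second sets the attribute so that printer information is saved every 10 monitor cycles, and the last command saves and closes the configuration.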
Binary / Version:
PrintSpool.dll / 6.0.3.416
PrintSpool.xml / NA
-------------------------------------------------------+
[5] Hotfix name: Hotfix_6_0_00004_364_2704183
Symptom: In some cases, Microsoft Failover Cluster disk resources fault when a disk group comes online.
Description: In some cases, while importing a disk group, Storage Foundation for Windows (SFW) clears the reservation of all the disks except those belonging to the imported disk groups. It also clears the reservation of offline and basic disks. However, the Microsoft Failover Cluster disk resources, which contain the basic disks, do not try to re-reserve the disks and, therefore, they fault. The following are some of the scenarios in which this may happen:
1. A new disk group is created.
2. The Veritas Enterprise Administrator (VEA) VxSVC service on a node or the node itself crashes, and a failover is triggered.
3. The storage is disconnected from one of the nodes and a failover is triggered.
Resolution: For the first scenario above, no reservation clearing is required, and this has been fixed. For the other scenarios where reservations need to be cleared, SFW skips the disks that are offline or basic.
Binary / Version:
vxconfig.dll / 6.0.4.363
-------------------------------------------------------+
[6] Hotfix name: HotFix_6_0_00006_2695917
Symptom: This hotfix addresses an issue where the VCS engine log is flooded with information messages in case of network congestion.
Description: MOMUtil.exe is a VCSAPI client. It uses VCS APIs to get information from VCS and present it to the SCOM Server. MOMUtil.exe is invoked every 5 minutes as part of VCS monitoring. It connects to the VCS cluster running on the system and tries to fetch the information to display the current status of service groups. In case of network congestion, the engine is unable to send the data to the client immediately. In this scenario, the following information message is logged every 5 minutes:
Could not send data to peer MOMUtil.exe at this point; received error 10035 (Unknown error), will resend again
The engine successfully sends data to the client (MOMUtil.exe) after some time. By this time, the engine log is flooded with the information messages.
Resolution: The engine has been modified to print the information messages as debug messages. So the messages will appear in the engine log only if the DBG_IPM debug level is set. If you need to log the information messages in the engine log, run the following command to set the DBG_IPM debug level:
halog -addtags DBG_IPM
Binary / Version:
had.exe / 6.0.6.425
hacf.exe / 6.0.6.425
-------------------------------------------------------+
[7] Hotfix name: HotFix_6_0_00007_2722108
Symptom: This hotfix addresses an issue related to the VCS FileShare agent wherein non-scoped file shares are not accessible using the virtual server name or IP address if NetBIOS and WINS are disabled.
Description: The VCS FileShare and Lanman agents can support non-scoped file shares on Windows Server 2008 and 2008 R2 systems if the DisableServerNameScoping registry key is created and set to 1. The VCS FileShare agent depends on NetBIOS and DNS to resolve the virtual name. If NetBIOS and WINS are disabled and the DNS is not updated, the agent is unable to resolve the virtual name. This may typically occur when the file share service groups are configured to use localized IP addresses. When the service group is switched or failed over, the virtual name to IP address mapping changes.
In such a case, if the WINS database and the DNS are not updated, the agent is unable to resolve the virtual name. As a result, the file share resources fault and the shares become inaccessible. The following message is seen in the agent log:
VCS INFO V-16-10051-10530 FileShare::online:Failed to access the network path (\\virtualname)
Resolution: The FileShare agent is enhanced to address this issue. The FileShare agent behavior can be controlled using the following registry keys:
- HKLM\SOFTWARE\VERITAS\VCS\BundledAgents\Lanman\__Global__\DisableServerNameScoping
  Set the DisableServerNameScoping key to enable non-scoped file share support.
- HKLM\SOFTWARE\VERITAS\VCS\BundledAgents\Lanman\__Global__\DisableStrictVirtualNameCheck
  Set the DisableStrictVirtualNameCheck key to have the FileShare agent make the file shares accessible irrespective of whether or not the virtual name is resolvable.
You must manually create these registry keys after installing this hotfix. See the topic "Setting the DisableServerNameScoping and DisableStrictVirtualNameCheck registry keys" in this readme for more details.
Notes:
- The DisableStrictVirtualNameCheck registry key takes effect only if the DisableServerNameScoping key value is set to 1.
- This FileShare agent behavior is applicable only for non-scoped file shares.
- In the earlier procedure you created these registry keys at the global level. If there are multiple file share service groups that are to be used in the non-scoped mode, you may have to create these registry keys manually for each virtual server and then set the values. You must create these keys at the following location:
  HKLM\SOFTWARE\VERITAS\VCS\BundledAgents\Lanman\<VirtualName>\
  Here <VirtualName> should be the virtual computer name assigned to the file share server. This is the VirtualName attribute of the Lanman resource in the file share service group.
- In case these registry parameters are configured at a global level and also configured for individual file share virtual servers, the registry settings for the individual virtual servers take precedence.
- You must create this key only for Lanman resources that are part of VCS file share service groups. Configuring this key for Lanman resources that are part of other VCS service groups may result in an unexpected behavior.
Binary / Version:
Fileshare.dll / 6.0.7.433
-------------------------------------------------------+
[8] Hotfix name: Hotfix_6_0_00007_362_2730657a
Symptom: This hotfix addresses an issue where volumes created on LUNs exposed from a Hitachi Open-V array are not track aligned.
Description: LUNs belonging to a Hitachi array are discovered by SFW with the ProductID as OPEN instead of OPEN-V. This causes track alignment to ignore the arrays and set the Veritas Disk ID and Product ID (VDID/PID) as DEFAULT. The track alignment is set to the default offset of 64 instead of the recommended value of 128. As a result, volumes created on this array are not track aligned.
Resolution: This issue has been addressed in the SFW binary Hitachi.dll that discovers Hitachi arrays. SFW now correctly discovers and identifies Hitachi Open-V arrays.
Binary / Version:
Hitachi.dll / 6.0.7.362
-------------------------------------------------------+
[9] Hotfix name: Hotfix_6_0_00009_362_2761898
Symptom: VCS does not support local attached storage in RDC / GCO DR with single node cluster environments.
Description: The VCS VMDg agent works only with clustered dynamic disk groups, and is designed to use SCSI reservations.
In some environments, SCSI support might not be available with certain local attached storage. There are a few common HA/DR configuration scenarios with non-SCSI storage, such as:
- Replicated Data Cluster (RDC) configuration
- Global Cluster Option (GCO) Disaster Recovery (DR) configuration with single node clusters at each site
Hence, the VMDg agent is not an acceptable solution in these scenarios. In either of these common scenarios, the hotfix helps accommodate a local, non-SCSI storage configuration based on the type of storage used, including:
- Local/DAS storage that does not support SCSI
- Internal PCI-based SSD drives that do not support SCSI
Resolution: The two hotfixes, Hotfix_6_0_00009_362_2761898 and HotFix_6_0_00008_2762153, contain a new VCS agent called VMNSDg (Volume Manager Non-Shared Diskgroup), which helps you manage dynamic disk groups that are created on non-shared storage and work without reservations.
Hotfix_6_0_00009_362_2761898 is the SFW Hotfix. It provides the necessary Volume Manager interfaces that the new VMNSDg agent can use.
Note: While installing the Hotfixes, you must install the SFW Hotfix (Hotfix_6_0_00009_362_2761898) first, and then the SFW HA Hotfix (HotFix_6_0_00008_2762153).
Binary / Version:
cluscmd.dll / 6.0.9.362
cluster.dll / 6.0.9.362
cluster_msgs.dll / 6.0.9.362
vras.dll / 6.0.9.362
-------------------------------------------------------+
[10] Hotfix name: HotFix_6_0_00008_2762153
Symptom: VCS does not support local attached storage in RDC / GCO DR with single node cluster environments.
Description: The VCS VMDg agent works only with clustered dynamic disk groups, and is designed to use SCSI reservations. In some environments, SCSI support might not be available with certain local attached storage. There are a few common HA/DR configuration scenarios with non-SCSI storage, such as:
- Replicated Data Cluster (RDC) configuration
- Global Cluster Option (GCO) Disaster Recovery (DR) configuration with single node clusters at each site
Hence, the VMDg agent is not an acceptable solution in these scenarios. In either of these common scenarios, the hotfix helps accommodate a local, non-SCSI storage configuration based on the type of storage used, including:
- Local/DAS storage that does not support SCSI
- Internal PCI-based SSD drives that do not support SCSI
Resolution: The two hotfixes, Hotfix_6_0_00009_362_2761898 and HotFix_6_0_00008_2762153, contain a new VCS agent called VMNSDg (Volume Manager Non-Shared Diskgroup), which helps you manage dynamic disk groups that are created on non-shared storage and work without reservations.
HotFix_6_0_00008_2762153 is the SFW HA Hotfix. It packages the VMNSDg agent and its supporting VCS files.
Binary / Version:
vcs_w2k3bagents_msgs_en.dll / 6.0.8.478
VCSConfig.dll / 6.0.8.478
VMNSDg.dll / 6.0.8.478
-------------------------------------------------------+
[11] Hotfix name: Hotfix_6_0_00008_362_2740833
Symptom: The VxSVC service fails to start because the DDLProv provider does not load.
Description: This issue occurs when the VEA VxSVC service is loading the SFW providers during startup. During this, the DDLProv provider fails to load while copying the SCSI mode page 0x31 response for the connected Fujitsu DX400 disks. This happens because of the provider's incorrect buffer size.
Resolution: This issue has been resolved by modifying the buffer size of the DDLProv provider.
Binary / Version: ddlprov.dll / 6.0.8.362 -------------------------------------------------------+ [12] Hotfix name: Hotfix_6_0_00010_363_2752096 Symptom: Memory leak occurs in the SFW components vxvds.exe, vxvdsdyn.exe; the vxvdsdyn.exe component crashes. Description: This issue occurs if you repeatedly perform the create and delete volume operations multiple times. Because of this, memory leak occurs in the VDS Dynamic software provider (vxvds.exe) and Super VDS provider (vxvdsdyn.exe) components. Also, the Super VDS provider (vxvdsdyn.exe) component crashes. Resolution: Most of the memory leak issues have been fixed in the vxvds.exe and vxvdsdyn.exe components. The components have been updated to address the memory leak issue. Also, the vxvdsdyn.exe component has been updated so that it does not crash. Binary / Version: vxcmd.dll / 6.0.00010.363 vxvds.exe / 6.0.00010.363 vxvdsdyn.exe / 6.0.00010.363 mount.dll / 6.0.00010.363 vxvm.dll / 6.0.00010.363 -------------------------------------------------------+ [13] Hotfix name: Hotfix_6_0_00011_363_2744349 Symptom: This hotfix addresses several issues where SFW snapshot and scheduling operations fail for Microsoft SQL Server 2012 databases. Description: The following SFW issues may occur for SQL Server 2012 databases: - While running the restore operation with the automatic log replay, the vxsnap command fails to bind to the SQL Server instance and as a result the restore operation is unable to complete and the SQL database remains in the restoring state. The following error is logged: Failed to complete the operation V-76-58657-1059: Error trying to bind to the SQL Server Instance. This issue occurs because SFW is unable to discover a change in a SQL registry value created during SQL Server installation. - Schedules created using the Quick Recovery (QR) Configuration Wizard fail to execute. - Schedules created using the Quick Recovery (QR) Configuration Wizard or the Veritas Enterprise Administrator (VEA) are not replicated across cluster nodes. Resolution: These issues have been fixed in the updated SFW binaries. For the changes in this hotfix to take effect, ensure that the [NT AUTHORITY\SYSTEM] account is granted "sysadmin" server role (from SQL Management Studio Console). Binary / version: vssprov.dll / 6.0.00011.363 vxsnapschedule.dll / 6.0.00011.363 vxschedservice.exe / 6.0.00011.363 -------------------------------------------------------+ [14] Hotfix name: Hotfix_6_0_00001_362_2677127 Symptom: This hotfix addresses an issue related to the SFW component, mount.dll, that causes the Veritas Enterprise Administrator (VxSvc) Service to crash if pagefile is not configured. Description: This issue occurs when the VxSVC service starts without a configured pagefile. The vxsvc.exe service crashes because of an exception in mount.dll. Resolution: VxSvc service used to crash because an empty string variable was passed to free function. The component has been updated to address the issue. Binary / Version: mount.dll / 6.0.00001.362 -------------------------------------------------------+ [15] Hotfix name: Hotfix_6_0_00015_362_2783096 Symptom: The Storage Migration Wizard pages appear truncated. Description: This hotfix addresses an issue where the Storage Migration Wizard pages appear truncated, as a result it is not possible to perform the storage migration tasks using the wizard. This issue is observed on operating systems where the locale is set to Japanese. Resolution: The hotfix fixes the Storage Migration Wizard binaries. 
Binary / Version VxVmCE.jar / NA -------------------------------------------------------+ [16] Hotfix name: Hotfix_6_0_00016_362_2810516 Symptom: Snapback operation causes too many blocks to resynchronize for the snapback volume. Description: During a snapback operation, SFW determines the blocks to be synced using the Disk Change Object (DCO) logs, which maintain information about the changed blocks. It syncs all the changed blocks from the original volume to the snapback volume and, because of a bug, also updates the DCO logs so that the resynced blocks appear modified on the original volume. Now, if there is another volume on which a snapback operation needs to be performed, then it copies extra blocks on the new snapback volume. For example, consider the following scenario: 1. Create two snapshots of volume "F" (snapshots "G" and "H"). 2. Make "G" writable, and copy files to "G". 3. Snapback "G" using data from "F". 4. Snapback "H" using data from "F". In the above scenario, in Step 4, there should be nothing to resynchronize as no changes are made to "F" and "H". However, it is observed that the number of blocks to be synchronized is the same as in Step 3. This is because the blocks resynced in Step 3 are treated as modified. This causes the extra blocks to be resynced. Resolution: For a snapback operation, corrected the DCO log handling to not treat the resynced blocks as modified. Binary / Version vxio.sys / 6.0.16.362 vvxconfig.dll / 6.0.16.362 -------------------------------------------------------+ [17] Hotfix name: Hotfix_6_0_00017_362_2845295 Symptom: VxSVC service crashes during disk group import and during the subsequent restart. Description: This issue occurs while trying to import a disk group. The maximum buffer size value of the "LastUpdatedDgList" key is set to 1024 at the following registry path: HKLM\SOFTWARE\VERITAS\VxSvc\CurrentVersion\VolumeManager\CBR If you have several disk groups with lengthy names and the combined length of all the disk group names is greater than 1024, then it results in a buffer overrun in the Volume Manager for Windows (vxvm) provider specific to the VxCBR code. Because of this, the Veritas Storage Foundation for Windows (VxSVC) service crashes. The service crashes again when you try to restart it. Resolution: This issue has been resolved by allocating buffer dynamically. Binary / Version: vxvm.dll / 6.0.00017.362 -------------------------------------------------------+ [18] Hotfix name: Hotfix_6_0_00018_362_2593039 Symptom: Provisioned size of disks is reflected incorrectly in striped volumes after thin provisioning reclamation. Description: This hotfix addresses an issue where thin provisioning (TP) reclamation in striped volumes shows incorrect provisioned size for disks. This issue is observed on Hitachi and HP arrays. Resolution: This issue has been fixed by increasing the map size for striped volumes. Binary / Version: vxconfig.dll / 6.0.00018.362 -------------------------------------------------------+ [19] Hotfix name: Hotfix_6_0_00019_362_2869461 Symptom: Veritas VDS software provider log file grows to a very large size. Description: This hotfix addresses an issue where the Veritas Virtual Disk Service (VDS) software provider keeps logging in the log file without checking its size or recycling it. This results in a very large log file. Resolution: The hotfix fixes the issue with the help of the following two registry keys that tune the VDS software provider logging: 1. 
MAXSIZE gives the maximum size of an individual log file in KB. A backup log file is created when the log file exceeds this size. The default value of MAXSIZE is 16384 KB. However, you can customize the default value: run regedit to open the Registry Editor, and locate the MAXSIZE value of the VDS software provider under the following key:
SOFTWARE\Veritas\VxSvc\CurrentVersion\Tracing\vds\MaxSize
2. MAXFILES gives the maximum number of log files that can be present at a time. The oldest log files are deleted once the number of files exceeds this limit. The default value of MAXFILES is 5. However, you can customize the default value: run regedit to open the Registry Editor, and locate the MAXFILES value of the VDS software provider under the following key:
SOFTWARE\Veritas\VxSvc\CurrentVersion\Tracing\vds\MaxFiles
The Veritas VDS software provider logs are found at:
Location: %vmpath%/log
Filenames: vxvds.log and vxvdsdyn.log
Binary / Version:
vdsprovutil.dll / 6.0.00019.362
-------------------------------------------------------+
[20] Hotfix name: Hotfix_6_0_00020_362_2877405
Symptom: svchost.exe crashes because of a fault in shsvcs.dll.
Description: This hotfix addresses the issue where svchost.exe crashes because of a fault in shsvcs.dll. This issue occurs when shsvcs calls the COM interface to query disk extents, causing the WMI and Shell Hardware Detection services to terminate unexpectedly.
Resolution: The hotfix fixes the binaries that caused svchost.exe to crash.
Binary / Version:
vxvds.exe / 6.0.00020.362
vxvdsdyn.exe / 6.0.00020.362
-------------------------------------------------------+
[21] Hotfix name: Hotfix_6_0_00022_362_2871478
Symptom: The volume shrink operation in SFW does not work correctly for the "New volume size" option.
Description: In Storage Foundation for Windows (SFW), this issue occurs if you are using the "New volume size" option of the online volume shrink feature. When you provide the new size for the volume in the "New volume size" box, the volume shrink operation incorrectly shrinks the volume by the new volume size instead of shrinking it by the difference between the current size and the specified new size of the volume.
Resolution: This issue has been resolved by correcting the online volume shrink functionality.
Binary / Version:
VxVmCE.jar / NA
-------------------------------------------------------+
[22] Hotfix name: Hotfix_6_0_00023_362_2885211
Symptom: In some cases, Windows prompts to format a volume created using VEA.
Description: In SFW, while creating a volume using the New Volume Wizard from the Veritas Enterprise Administrator (VEA), if you choose to assign a drive letter or mount the volume as an empty NTFS folder, then Windows wrongly prompts to format the volume. This happens because, as part of volume creation, SFW creates RAW volumes, assigns mount points, and then proceeds with the formatting of volumes. Therefore, mounting RAW volumes explicitly causes Windows to display the volume format dialog box. However, the volume is successfully created, and you can close the dialog box displayed by Windows and access the volume.
Resolution: This issue has been resolved. SFW now assigns drive letters or mount paths only after the volume formatting task is completed.
Binary / Version:
vxvm.dll / 6.0.00023.362
-------------------------------------------------------+
[23] Hotfix name: Hotfix_3_4_545_0_2791125
Symptom: Memory leak occurs in the SFW component vxsvc.exe.
Description: This issue occurs in SFW when you perform an action repeatedly, multiple times. Because of this, a memory leak occurs in actionprovider.dll and, therefore, in vxsvc.exe.
Resolution: This issue has been resolved by fixing the memory leak in actionprovider.dll and deleting the event notification jobs immediately on completion.
Binary / Version:
vxvea3.dll / 3.4.545.0
actionprovider.dll / 3.4.545.0
-------------------------------------------------------+
[24] Hotfix name: HotFix_6_0_00011_2786159
Symptom:
Issue 1: This hotfix addresses an issue related to the VCS Cluster Manager (Java Console) where only the first user in the Cluster Administrator's group is treated as the cluster administrator. All other users in the group do not get cluster administrator privileges and are treated as Guest users.
Issue 2: This hotfix addresses an issue related to the VCS Cluster Manager (Java Console) where a service group switch or failover operation to the secondary site fails due to a user privilege error.
The fix for issue #1 was earlier released as Hotfix_6_0_00002_2649700. It is now included in this hotfix.
Description:
Issue 1: If you add multiple users as Cluster Administrators, the VCS Java Console only treats the first user in the list as an administrator. All the other users are treated as Guest users even though the users are listed as Cluster Administrators in the cluster configuration file.
Issue 2: This issue occurs in secure clusters set up in a disaster recovery (DR) environment. When you switch or fail over a global service group to the remote site, the operation fails with the following error:
Error when trying to failover GCO Service Group. V-16-1-50824 At least Group Operator privilege required on remote cluster.
This error occurs even if the user logged on to the Java Console is a local administrator, which by default has the Cluster Administrator privilege in the local cluster. This issue occurs because the local administrator is not explicitly added to the cluster admin group at the remote site. During a switch or a failover, the Java Console is unable to determine whether the logged-on user at the local cluster has any privileges on the remote cluster and hence fails the operation. If you use the Java Console to grant the local administrator operator or administrator privileges on the remote cluster, the Java Console only assigns guest privileges to that user.
Resolution:
Issue 1: This issue has been fixed in the Java Console.
Issue 2: This issue is fixed in the Java Console. The Java Console now allows you to assign the local administrator at the local cluster with cluster admin or cluster operator privileges in the remote cluster. This change is applicable only on Windows.
Binary / Version:
VCSGui.jar / NA
-------------------------------------------------------+
[25] Hotfix name: Hotfix_6_0_00012_2862335
Symptom: After a reboot of the passive node, all the disk groups that are marked for fast failover remain in the Deport None state.
Description: Fast failover improves the failover time for the storage stack configured in service groups in a clustered environment. During a normal failover, SFW performs a complete disk group deport operation on the active node followed by a Read/Write import operation on a passive node. With fast failover, instead of performing deport and import operations, SFW now performs only a mode change for the disk group. The disk group state on the passive node is changed from Read-Only to Read/Write.
A mode change (Read-Only to Read/Write) is a much faster operation compared to a full deport and import (Deport None to Import Read/Write) and thus results in faster disk group failovers. But after a reboot of the passive node, sometimes all of the disk groups which are marked with fast failover, may remain in Deport None state. This causes the failover to take more time than a fast failover. Resolution: This issue has been fixed in this hotfix. Even after the reboot of the passive node, the disk groups marked for fast failover will now be in the Read-Only state. Binary / Version: VMDg.dll / 6.0.00012.503 -------------------------------------------------------+ [26] Hotfix name: Hotfix_6_0_00013_2882535 Symptom: This hotfix includes the VCS FileShare agent that is enhanced to ensure that the file shares configured with VCS are accessible even if the LanmanResName attribute is not specified. Description: File shares configured with VCS come online if the LanmanResName attribute of the Fileshare resource is specified. In case this attribute is not specified, the Fileshare resource would go into an unknown state and would not come online. The Fileshare resource should have the ability to share the folder under physical server context even if LanmanResName attribute is not specified. Resolution: The FileShare agent is enhanced to address this issue. Binary / Version: FileShare.dll / 6.0.00013.509 -------------------------------------------------------+ [27] Hotfix name: HotFix_6_0_00015_2897781 Symptom: Dynamic disk groups created on external disks are not supported in VMNSDg agent. Description: Currently, the VMNSDg (Volume Manager Non-Shared Diskgroup) agent does not support the dynamic disk groups created on external disks. Resolution: To support the dynamic disk groups configured on external disks, the hotfix installer adds the SkipStorageValidation attribute to the VMNSDg agent. By default, this attribute blocks the configuration of disk groups containing external disk (shared or non-shared). In case of SCSI controllers, the disks are considered as internal (non-shared) if they are attached to the same controller as the boot/OS disk, while all the other disks are considered as external. In case of IDE controllers, all the disks are considered as internal. To allow configuration of VMNSDg resource on the dynamic disk groups with external disks, set the SkipStorageValidation attribute to True. Once the attribute is set, the agent shall not differentiate between internal or external disks. Note 1: Please use this attribute with caution. If not configured properly, this attribute can lead to data corruption in certain scenarios, such as split-brain. Note 2: This hotfix depends on Hotfix_6_0_00025_362_2919276. Binary / Version: VMNSDg.dll / 6.0.00015.524 VMNSDg.xml / 6.0.00015.524 IMFCommon.dll / 6.0.00015.524 vcs_w2k3bagents_msgs_en.dll / 6.0.00015.524 -------------------------------------------------------+ [28] Hotfix name: HotFix_6_0_00016_2898414 Symptom: This hotfix addresses an issue where the MSMQ resource failed to bind to the correct port. Description: When the MSMQ agent was brought online, the Event Viewer reported an error stating that Message Queuing failed to bind to port 1801. This error could occur due to various reasons. Even though the binding failed, the agent reported the MSMQ resource as Online. Resolution: The MSMQ agent has been enhanced to verify that the clustered MSMQ service is bound to the correct virtual IP and port. 
By default, the agent performs this check only once during the Online operation. If the clustered MSMQ service is not bound to the correct virtual IP and port, the agent stops the service and the resource faults. You can configure the number of times that this check is performed. To do so, create the DWORD tunable parameter 'VirtualIPPortCheckRetryCount' under the registry key 'HKLM\SOFTWARE\VERITAS\VCS\BundledAgents\MSMQ'. If this parameter is set to a value greater than 1, the agent starts the clustered MSMQ service again and verifies its virtual IP and port binding as many times. It waits 2 seconds between each verification attempt. If the clustered MSMQ service is bound to the correct virtual IP and port, the agent reports Online. Binary / Version: MSMQ.dll / 6.0.00016.534 -------------------------------------------------------+ [29] Hotfix name: Hotfix_6_0_00017_2898414 Symptom: This hotfix addresses an issue where an application resolved a virtual name incorrectly due to DNS lag. Description: This issue occurred due to a delay in resolving the DNS name. For example, MSMQ resolved a virtual name incorrectly. Therefore, when the MSMQ agent was brought online, the Event Viewer reported an error stating that Message Queuing failed to bind to the appropriate port. Resolution: The Lanman agent has been enhanced to verify the DNS lookup and flush the DNS resolver cache after bringing the Lanman resource online. You need to create the DWORD tunable parameter 'DNSLookupRetryCount' under the registry key 'HKLM\SOFTWARE\VERITAS\VCS\BundledAgents\Lanman'. If this parameter is set, the Lanman agent verifies the DNS lookup as per the parameter value. It waits 5 seconds between each verification attempt. You can also create the DWORD tunable parameter 'SkipDNSCheckFailure' under the Lanman registry key. The default value of 'SkipDNSCheckFailure' is 0 (zero), which indicates that the resources should fault if the DNS lookup fails. If this parameter value is set to 1, the resources should not fault even if the DNS lookup fails. Binary / Version: Lanman.dll / 6.0.00017.534 -------------------------------------------------------+ [30] Hotfix name: Hotfix_6_0_00024_362_2904593 Symptom: SFW is unable to discover the EMC Symmetrix LUNs as thin reclaimable, having firmware version 5876. Description: This hotfix addresses an issue where EMC Symmetrix array, after a firmware upgrade to version 5876, is unable to identify the thin provisioning reclaimable LUNs. Resolution: The hotfix enhances an SFW library to identify the EMC Symmetrix thin reclaimable LUNs having firmware version 5876. Binary / Version: ddlprov.dll / 6.0.00024.362 -------------------------------------------------------+ [31] Hotfix name: Hotfix_6_0_00025_362_2919276 Symptom: Dynamic disk groups created on external disks are not supported in VMNSDg agent. Description: Currently, the VMNSDg (Volume Manager Non-Shared Diskgroup) agent does not support the dynamic disk groups created on external disks. Resolution: This issue has been resolved by removing the cluster disk group check from SFW for the VMNSDg agent requests. Binary / Version: cluscmd.dll / 6.0.00025.362 -------------------------------------------------------+ [32] Hotfix name: Hotfix_6_0_00026_362_2913240 Symptom: MountV resource faults because SFW removes a volume due to delayed device removal request. Description: This issue occurs when SFW removes a volume in response to a delayed device removal request. Because of this, the VCS MountV resource faults. 
Resolution: This issue has been resolved by not disabling the mount manager interface instance if it is active when the device removal request arrives. Binary / Version: vxio.sys / 6.0.00026.362 vxconfig.dll / 6.0.00026.362 -------------------------------------------------------+ [33] Hotfix name: HotFix_6_0_00019_3053280b Symptom: Windows Server Backup fails to perform a system state backup with the following error: Error in backup of C:\program files\veritas\\cluster server\ during enumerate: Error [0x8007007b] The filename, directory name, or volume label syntax is incorrect. Description: The backup operation fails because the VCS service image path contains an extra backslash (\) character, which Windows Server Backup is unable to process. Resolution: This issue has been fixed by removing the extra backslash character from the VCS service image path. This hotfix changes the following registry keys: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Had\ImagePath HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\HADHelper\ImagePath HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Cmdserver\ImagePath HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\VCSComm\ImagePath Binary / Version: NA / NA NOTE: This hotfix supersedes HotFix_6_0_00019_3053280a, which was released earlier. -------------------------------------------------------+ [34] Hotfix name: Hotfix_6_0_00020_3062856 Symptom: If LLT is running alongside a utility like Network Monitor, a system crash (BSOD) occurs when shutting down Windows. Description: The system crashes due to a time out that occurs while waiting for LLT to unbind from the adapters during a shutdown operation. This issue occurs when LLT is configured over Ethernet. Resolution: The LLT driver has been updated to properly handle unbind calls so that Windows can shut down gracefully. Binary / Version: llt.sys / 6.0.00020.625 -------------------------------------------------------+ [35] Hotfix name: Hotfix_3_4_564_0_2912682 Symptom: In some cases, capacity monitoring email notification fails Description: This issue occurs when the capacity monitoring feature is enabled so that when the used disk space on a volume reaches a user-specified threshold, an email alert message is sent. However, for some SMTP servers, the email notification for capacity monitoring fails with the following error message even if the provided email addresses of the recipients were valid: No valid recipients. Because Action Provider does not enclose recipient email addresses in angle brackets (< >) in the RCPT command, the SMTP server rejects the email addresses though they are valid. Resolution: The issue has been resolved by modifying Action Provider so that it now encloses recipient email addresses in angle brackets. Binary / Version: actionprovider.dll / 3.4.564.0 -------------------------------------------------------+ [36] Hotfix name: Hotfix_6_0_00021_3105024c Symptom: The Disaster Recovery (DR) wizard and the Fire Drill (FD) wizard fail when you run them in an environment that contains the hardware replication setup. Description: When large number of LUNs are discovered in a hardware replication setup, buffer overflow occurs that causes the Plugin Host service used by the DR and FD wizards to crash. Resolution: This issue has been fixed and the DR and FD wizards can now successfully complete their respective configurations. Binary / Version: CommonUtility.dll / 6.0.00021.733 clusffconfigrator.exe.config / NOTE: #1. This hotfix depends on HotFix_6_0_00001_2763375. #2. 
This hotfix supersedes Hotfix_6_0_00021_3105024, Hotfix_6_0_00021_3105024a, and Hotfix_6_0_00021_3105024b, which were released earlier.
-------------------------------------------------------+
[37] Hotfix name: Hotfix_6_0_00022_3111155
Symptom: After every 49.7 days, the VCS logs report that the GlobalCounter VCS attribute is not updated. This error is reported on all nodes in the cluster, except the node with the lowest node ID value. The 'GlobalCounter not updated' error also appears in the Event Viewer.
Description: The error indicates that there could be a communication issue between the cluster nodes. However, this error is incorrectly reported, and no communication issue actually occurs.
Resolution: This issue has been fixed and the error is no longer incorrectly reported at this interval.
Binary / Version:
had.exe / 6.0.22.628
hacf.exe / 6.0.22.628
-------------------------------------------------------+
[38] Hotfix name: Hotfix_6_0_00023_3122364
Symptom:
Issue 1: This hotfix addresses issues related to the VCS MountV agent where the RegRep resources configured on MountV resources fail to come online, and mount point folders remain accessible even after MountV takes the volumes offline.
Issue 2: MountV resources do not come online when performing a fire drill.
The fix for issue #1 was earlier released as HotFix_6_0_00010_2856969. It is now replaced by this hotfix.
Description:
Issue 1: The following issues are observed with the VCS MountV agent:
a) The RegRep resource performs file system operations (for example, createdir, write-to-file) on the mount point on which it is configured. During an Online operation of the service group, though the MountV resource reports ONLINE, the RegRep resource fails to perform the file system operations with an error (Device Not Ready). Analysis indicates that the file system was not fully usable even after a successful mount operation.
b) The MountV agent can mount a volume as an NTFS folder. If this volume is unmounted, the folder remains accessible for writes by users and processes. This creates an open handle that can cause the MountV online operation to fail on the failover target node.
Issue 2: This issue occurs as an indirect result of volume GUID caching. The MountV agent caches volume GUIDs for subsequent monitor operations. Each time a fire drill is performed, new volumes are created and new volume GUIDs are assigned to them. The MountV agent is unable to identify these new volumes, because it does not find their GUIDs in the cache. Therefore, even though the volumes (or mount paths) are available, the status of the corresponding MountV resources is not reported as ONLINE.
Resolution:
Issue 1:
For a) - An additional file system check has been implemented in the MountV agent. This check ensures that the file system is accessible and usable after the mount file system operation succeeds.
For b) - This hotfix adds a new attribute, BlockMountPointAccess, which defines whether the agent blocks access to the NTFS folder that is used as a folder mount point after the mount point is unmounted. For example, if C:\temp is used as a folder mount for a volume, then after the mount point is unmounted, the agent blocks access to the folder C:\temp. The value True indicates that the folder is not accessible. The default value False indicates that the folder is accessible.
Issue 2: A new MountV resource attribute, ForFireDrill, has been introduced to overcome this issue.
The value of this attribute indicates whether the resource is created for a fire drill operation. If the value of this attribute is set to 1, the MountV agent does not cache the volume GUID for that particular resource. The default value of this attribute is 0, which indicates that MountV should cache the volume GUID for that resource. NOTE: After installing the hotfix and before performing a fire drill, make sure that you set the ForFireDrill attribute of your existing MountV resources to the appropriate value. Binary / Version: MountV.dll / 6.0.23.629 MountV.xml / 6.0.23.629 -------------------------------------------------------+ [39] Hotfix name: Hotfix_6_0_00027_362_3082932 Symptom: Data corruption may occur if the volume goes offline while a resynchronization operation is in progress. Description: This issue occurs where SFW SmartMove is enabled and data resynchronization operations such as subdisk move, mirror resync, or mirror attach are being performed on a volume. If the volume goes offline (for example, because the MountV resource of the volume went offline) while a resync operation is in progress, the resync task completes abnormally and the volume may report data corruption. The task progress on the GUI may suddenly jump to 100%, but the actual task does not complete. This issue occurs due to improper error handling by SFW. Resolution: This issue has been fixed by adding correct handling of error conditions. Binary / Version: vxconfig.dll / 6.0.00027.362 -------------------------------------------------------+ [40] Hotfix name: Hotfix_6_0_00028_362_3137880 Symptom: Tagging of snapshot disks fails during the fire drill operation, causing disk import to fail. Description: This issue occurs while performing the fire drill operations in case of hardware replication agents, which involve tagging of snapshot disks so that they can be imported separately from the original disks. Because of an issue with SFW, it does not write tags to the disks, and also proceeds without giving any error. The import operation on the snapshot disks also fails because there are no disks present with the specified tag. Resolution: This was an existing issue where SFW did not write to disks that are marked as read-only. The issue has been resolved by allowing the fire drill tag to be written to a disk even if the disk is marked as read-only. Binary / Version: vxconfig.dll / 6.0.10004.308 -------------------------------------------------------+ [41] Hotfix name: Hotfix_6_0_00029_362_3160235 Symptom: On an EMC SRDF setup, VEA VxSVC service may crash during a rescan if there are stale records on Secondary node. Description: In an SFW HA environment with the EMC SRDF agent, this issue may occur when a rescan (automatic or manual) is performed on the Secondary node and the node holds some stale records. During the rescan, the Veritas Enterprise Administrator service (VxSVC) tries to access the stale records on the Secondary. As the records contain invalid data, VxSVC crashes. Resolution: This issue has been resolved by modifying the affected binary so that the VxSVC service now ignores the stale records. Binary / Version: vxconfig.dll / 6.0.00029.362 -------------------------------------------------------+ [42] Hotfix name: Hotfix_6_0_00030_362_3202548 Symptom: Missing disks cannot be removed from a disk group using the vxdg rmdisk command. Description: This issue occurs when you try to remove a missing disk from a disk group using the vxdg rmdisk command. 
-------------------------------------------------------+

[43] Hotfix name: Hotfix_6_0_00031_363_3210766

Symptom: On a system with multiple DRL-enabled volumes, a write I/O hang may occur on one of the volumes when write I/O operations are happening on all the volumes.

Description: This issue may occur on a system that has multiple volumes with dirty region logging (DRL) added. When heavy write I/O operations are happening on the DRL-enabled volumes, the writes are processed first by the DRL module and then written to disk. If the write I/Os exceed the maximum number of I/Os that the DRL module can process, the next write I/O for a DRL-enabled volume gets stuck in the volume's retry queue and subsequently hangs on the system.

Resolution: This issue has been fixed by maintaining a global retry queue and restarting the pending write I/O operations whenever write I/Os complete in the DRL module.

Binary / Version: vxio.sys / 6.0.31.363
                  vxconfig.dll / 6.0.31.363

-------------------------------------------------------+

[44] Hotfix name: Hotfix_3_4_554_0_3226691

Symptom: The VEA GUI may time out while connecting to a server.

Description: In some cases, when you try to log on to the VEA GUI, the connection times out while it is trying to connect to the VxSVC service on an SFW server. This happens if, during the connection, the SSL initialization process takes a long time and exceeds the connection timeout value of 30 seconds.

Resolution: This issue has been resolved by optimizing the SSL initialization process so that it now happens while the VxSVC service is starting up rather than during the VEA GUI connection.

Binary / Version: vxvea3.dll / 3.4.554.0

-------------------------------------------------------+

[45] Hotfix name: Hotfix_6_0_00032_362_3252941

Symptom: In a Microsoft Failover Cluster environment, creating a virtual machine fails if the volume has folder mount points.

Description: In a Microsoft Failover Cluster environment, when you try to create a virtual machine on a volume that has folder mount points, Microsoft Failover Cluster sends the IOCTL_DISK_GET_DRIVE_LAYOUT_EX control code. Because of improper handling of this IOCTL control code, the virtual machine creation fails and incorrect error messages are returned for the code requests. For boot devices, when checking the buffer size, the VKE_EINVAL message is incorrectly sent instead of STATUS_BUFFER_TOO_SMALL. For non-boot devices, the request is not processed and the VKE_ENXIO message is incorrectly sent.
The error message values and their descriptions are as follows:
VKE_EINVAL: An invalid parameter was passed to a service or function.
STATUS_BUFFER_TOO_SMALL: The buffer is too small to contain the entry. No information has been written to the buffer.
VKE_ENXIO: The specified request is not a valid operation for the target device.

Resolution: This issue has been fixed by correcting the handling of the IOCTL control code. Now, for boot devices, when checking the buffer size, the STATUS_BUFFER_TOO_SMALL message is sent if the supplied buffer is too small. For non-boot devices, the request is processed successfully by passing the call to the next layer of drivers in the stack.

Binary / Version: vxio.sys / 6.0.00032.362

-------------------------------------------------------+
[46] Hotfix name: Hotfix_6_0_00024_3300658

Symptom: When NetBIOS is disabled, a FileShare service group fails to come online.

Description: This issue occurs if the value of the VirtualName attribute in the Lanman resource is specified in lowercase characters and NetBIOS is disabled. The Lanman resource faults and, therefore, the FileShare service group fails to come online.

Resolution: The hotfix addresses this issue, and the FileShare service group now comes online successfully.

Binary / Version: FileShare.dll / 6.0.24.694

-------------------------------------------------------+

[47] Hotfix name: Hotfix_6_0_00025_3300654

Symptom: If the cluster name string in the registry and the main.cf file is not an exact match, upgrading to SSO authentication or reconfiguring SSO fails. However, the operation is incorrectly reported to be successful.

Description: The upgrade or reconfiguration fails because, even though the other configuration settings are updated correctly, the SecureClus cluster attribute is not set to 1 (true).

Resolution: The hotfix addresses this issue, and you can now successfully upgrade to SSO authentication or reconfigure SSO even if there is a case mismatch between the cluster name string recorded in the registry and the one in the main.cf file. See the verification note after this entry.

Binary / Version: VCWDlgs.dll / 6.0.25.694
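As a quick, optional check after upgrading to SSO authentication or reconfiguring SSO, you can verify the attribute value from any cluster node using the standard VCS command line (shown here only as an illustrative sketch):

   haclus -value SecureClus

A returned value of 1 indicates that the secure cluster attribute has been set; 0 indicates that it has not.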
-------------------------------------------------------+

[48] Hotfix name: Hotfix_6_0_00033_362_3377535

Symptom: In a disaster recovery fire drill configuration, the disks containing the snapshot of the replicating data appear offline, and the SRDFSnap resource from the fire drill service group fails.

Description: In a disaster recovery fire drill configuration, the disks containing the snapshot of the replicating application data are offline. This issue typically occurs in the following environment:
- The disks are shared between cluster nodes on the secondary site.
- The cluster systems run the Windows Server 2008 or Windows Server 2008 R2 operating system.
As per the Windows SAN policy for shared disks, the shared disks are initialized in the Offline Shared mode. Since the disks are offline, SFW fails to tag them. As a result, the SRDFSnap resource from the fire drill service group fails, and the service group fails to come online on the secondary site.

Resolution: The SFW behavior has been updated so that the disks are brought online before they are tagged. This enables the SRDFSnap resource to come online.

Binary / Version: vxconfig.dll / 6.0.33.362

-------------------------------------------------------+

[49] Hotfix name: Hotfix_6_0_00034_362_3378041

Symptom: The server crashes during high write I/O operations on mirrored volumes.

Description: This issue occurs when heavy write I/O operations are performed on mirrored volumes. During such high I/O operations, the server crashes because of a problem in managing the memory for the data buffers.

Resolution: This issue has been resolved by appropriately mapping the system address space described by the MDL for the write I/Os on mirrored volumes.

Binary / Version: vxio.sys / 6.0.00034.362

-------------------------------------------------------+

Additional notes
================|

[+] To confirm the list of patches installed on a system, run the following command from the directory where the CP files are extracted:
    vxhfbatchinstaller.exe /list
    The output of this command displays a list of patches and the hotfixes that are installed as part of a CP. This command also displays the hotfixes that are included in a CP but are not installed on the system.

[+] To confirm the installation of the hotfixes, perform one of the following:
    - Run the following command:
      vxhf.exe /list
      The output of this command lists the hotfixes installed on the system.
    - In Windows Add/Remove Programs, click "View installed updates" to view the list of the hotfixes installed on the system.

[+] The CP installer (vxhfbatchinstaller.exe) creates and stores logs at:
    "%allusersprofile%\Application Data\Veritas\VxHF\VxHFBatchInstaller.txt"

[+] The hotfix installer (vxhf.exe) creates and stores logs at:
    "%allusersprofile%\Application Data\Veritas\VxHF\VxHF.txt"

[+] For general information about the hotfix installer (vxhf.exe), please refer to the following technote:
    http://www.symantec.com/docs/TECH73446

[+] To view a list of hotfixes already installed on a system, please refer to the steps mentioned in the following technote:
    http://www.symantec.com/docs/TECH73438

[+] For information on uninstalling a hotfix, please refer to the steps mentioned in the following technote:
    http://www.symantec.com/docs/TECH73443

Disclaimer
==========|

This fix is provided without warranty of any kind including the warranties of title or implied warranties of merchantability, fitness for a particular purpose and non-infringement. Symantec disclaims all liability relating to or arising out of this fix. It is recommended that the fix be evaluated in a test environment before implementing it in your production environment. When the fix is incorporated into a Storage Foundation for Windows maintenance release, the resulting Hotfix or Service Pack must be installed as soon as possible. Symantec Technical Services will notify you when the maintenance release (Hotfix or Service Pack) is available if you sign up for notifications from the Symantec support site http://www.symantec.com/business/support and/or from Symantec Operations Readiness Tools (SORT) http://sort.symantec.com.