* * * READ ME * * *
* * * Symantec Storage Foundation 6.1 * * *
* * * Patch 6.1.0.600 * * *

Patch Date: 2017-02-21

This document provides the following information:

* PATCH NAME
* OPERATING SYSTEMS SUPPORTED BY THE PATCH
* PACKAGES AFFECTED BY THE PATCH
* BASE PRODUCT VERSIONS FOR THE PATCH
* SUMMARY OF INCIDENTS FIXED BY THE PATCH
* DETAILS OF INCIDENTS FIXED BY THE PATCH
* INSTALLATION PRE-REQUISITES
* INSTALLING THE PATCH
* REMOVING THE PATCH

PATCH NAME
----------
Symantec Storage Foundation 6.1 Patch 6.1.0.600

OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
Windows Server 2008 R2 X64
Windows Server 2012 X64
Windows Server 2012 R2 X64

BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
* Symantec Storage Foundation 6.1

SUMMARY OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
Patch ID: SFW CP6

* 3500210 (3497449) A cluster disk group loses access to the majority of its disks due to a SCSI error.
* 3329771 (3511276) While creating a virtual machine, storage warnings occur and a dependency is not created on Symantec storage class resources.
* 3525531 (3525528) VxSVC crashes with heap corruption if the paging file is disabled.
* 3547461 (3547460) The Add Disk to Dynamic Disk Group wizard stops responding on the second panel if the wizard is launched from a disk without sufficient free space.
* 3558216 (3456746) The VxSvc service crashes with heap corruption in VRAS.dll.
* 3568040 (3568039) The VDID module fails to generate a Unique Disk ID for the Fujitsu ETERNUS array LUNs.
* 3592939 (3594163) Failover Cluster Manager does not display volume information for SFW resources and crashes while accessing the Shadow Copies tab for a resource.
* 3622272 (3622271) When you import a cluster disk group with a large number of LUNs and volumes, the server stops responding.
* 3627063 (3589195) For SQL Server 2014, in the Quick Recovery wizard, only the schedule for VSS Snapshots gets created. The vxsnap prepare and vxsnap create commands are not executed.
* 3684124 (3684123) When you convert a basic disk with a partition to a cluster disk group, the volume created on the basic disk may be marked as Missing.
* 3691649 (3691647) The system crashes when you perform a read/write operation on a mirrored volume with Dirty Region Logging (DRL).
* 3709295 (3687466) In some cases, mount points configured under Microsoft Failover Cluster are lost.
* 3745619 (3745616) An enclosure is not formed for LUNs created on EMC Invista (VPLEX) arrays.
* 3755803 (3755802) The system stops responding when replication is active.
* 3771885 (3771877) Storage Foundation for Windows does not provide array-specific support for Infinidat arrays other than DSM.
* 3553667 (3553664) The Fire Drill (FD), Disaster Recovery (DR), and Quick Recovery (QR) wizards, and the Solutions Configuration Center (SCC) do not support Microsoft SQL Server 2014.
* 3752571 (3752570) A volume becomes inaccessible after you perform multiple snapshot operations followed by multiple snapback (resync from replica) operations.
* 3767113 (3767110) Mirrored volumes with DCO log volumes and DRL logs are not resynchronized using DRL after a system crash.
* 3775640 (3775638) The VMDg resource in a File Server role fails to restart after a failover.
* 3768192 (3819634) The storage reclamation operation may hang on an array that supports the UNMAP command.
* 3839102 (3839096) Storage Foundation for Windows (SFW) broadcasts the license keys over the network on the UDP port number 2164.
* 3826326 (3826318) When you add a large number of disks and perform a Rescan, the Veritas Enterprise Administrator (VEA) GUI hangs on the 'Disks View'.
* 3856800 (3856819) When you enable fast failover, the volume drive letters are lost during resource failover.
* 3859624 (3859623) The VxSVC service crashes during the snapshot operation.
* 3861094 (3861093) When you reattach a mirror plex to a volume, the system crashes (BSOD).
* 3838760 (3838759) A system crashes when an excessive number of large write I/O requests is received on volumes mirrored using Dirty Region Logging (DRL).
* 3842738 (3842736) Onlining or offlining a Volume Manager Disk Group (VMDg) resource causes the Resource Host Monitor (RHS.exe) service to crash.
* 3853963 (3854164) When the logged-on user is a nested domain user with administrative privileges, the user is not able to perform Storage Foundation for Windows (SFW) administrative tasks on the SFW 6.1 server.
* 3853418 (3853417)
  Issue 1: Disk groups created on EMC Invista devices are automatically deported during another disk group import.
  Issue 2: The value of the Vendor Disk ID (VDID) attribute for the EMC Invista devices is not consistent.
  Issue 3: The failure of the SCSI INQUIRY command in the ddlprov.dll file causes incorrect generation of the VDID.
* 3863759 (3862346) After reclaiming storage on disks that support thin provisioning and storage reclamation, SQL Server data might get corrupted.
* 3872010 (3854164) When the logged-on user is a nested domain user with administrative privileges, the user is not able to perform Storage Foundation for Windows (SFW) administrative tasks on the SFW 6.1 server.
* 3874444 (3874443) After upgrading from SFW 6.1 CP3 to CP4, the disk queue length keeps increasing.
* 3880433 (3880457) A generic VDID is displayed for LUNs belonging to the Dell PowerVault MD3000 array series.
* 3889886 (3889885) VVR replication switches between the "Active" and "Activating" states.
* 3898318 (3898317) In Microsoft Failover Cluster Manager, volumes created on a Volume Manager disk group are not auto-mounted after an offline/online/failover operation of the VMDg resource.
* 3901194 (3898322) A system may crash when a disk that has a volume snapshot is removed.

DETAILS OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
This patch fixes the following incidents:

Patch ID: SFW CP6

* 3500210 (Tracking ID: 3497449)

SYMPTOM:
A cluster disk group loses access to the majority of its disks due to a SCSI error.

DESCRIPTION:
This issue occurs while processing a request to renew or query the SCSI reservation on a disk belonging to a cluster disk group. The operation fails because of the following error in the SCSI command:
Unit Attention - inquiry parameters changed (6/3F/03)
Because of this, the cluster disk group loses access to the majority of its disks.

RESOLUTION:
This issue has been resolved by retrying the SCSI reservation renew or query request.

File Name / Version:
vxio.sys / 6.1.00001.445

* 3329771 (Tracking ID: 3511276)

SYMPTOM:
While creating a virtual machine, storage warnings occur and a dependency is not created on Symantec storage class resources.

DESCRIPTION:
This issue occurs while creating a virtual machine in a Microsoft Failover Clustering environment. If the underlying volumes are managed by the VMDg or Volume Manager Shared Volume resources, the wizard shows the following warning message:
Disk path 'path' is not a path to storage in the cluster or to storage that can be added to the cluster.
You must ensure this storage is available to every node in the cluster to make this virtual machine highly available.
The wizard also does not create a resource dependency on the VMDg or Volume Manager Shared Volume resources. Moreover, on Windows Server 2008 R2 operating systems, when you refresh the virtual machine configuration, it displays a warning message and deletes any resource dependencies that you may have created manually.
The issue occurs because the Microsoft code for this operation was limited to a physical disk resource. Also, after fixing the Microsoft code, it was found that the VMDg and Volume Manager Shared Volume resource types were unable to handle a few control codes.

RESOLUTION:
This issue has been resolved by implementing the required control codes in the VMDg and Volume Manager Shared Volume resource types. In the case of VMDg, the control code returns the disk signature of the first subdisk of the first volume in the disk group.

File Name / Version:
vxres.dll / 6.1.00003.445
vxvolres.dll / 6.1.00003.445
cluscmd.dll / 6.1.00003.445

* 3525531 (Tracking ID: 3525528)

SYMPTOM:
VxSVC crashes with heap corruption if the paging file is disabled.

DESCRIPTION:
This issue occurs if the Windows paging file is disabled. The Veritas Enterprise Administrator (VxSVC) service crashes with heap corruption while loading the mount provider, because the provider variable was not initialized properly.

RESOLUTION:
This issue has been resolved by initializing the provider variable properly.
NOTE: Additional information has been added to the hotfix installation instructions. Follow them to resolve the issue completely.

File Name / Version:
mount.dll / 6.1.00004.445

* 3547461 (Tracking ID: 3547460)

SYMPTOM:
The Add Disk to Dynamic Disk Group wizard stops responding on the second panel if the wizard is launched from a disk without sufficient free space.

DESCRIPTION:
This issue occurs when you launch the Add Disk to Dynamic Disk Group wizard from a disk that does not have sufficient free space. When you click "Next" on the second panel of the wizard, it gives a null pointer exception. Therefore, you cannot proceed to the next panel and the wizard needs to be closed.

RESOLUTION:
This issue has been fixed by modifying the code to handle the null check so that the wizard now gives a proper error message instead of the null pointer exception.

File Name / Version:
VxVmCE.jar / N/A

* 3558216 (Tracking ID: 3456746)

SYMPTOM:
The VxSvc service crashes with heap corruption in VRAS.dll.

DESCRIPTION:
VRAS discards a malformed packet that it receives because the size of the packet is too large. While freeing the IpmHandle pointer, it encounters an issue and eventually crashes.

RESOLUTION:
This hotfix resolves the crash that occurred during the handling of the malformed packet.

File Name / Version:
vras.dll / 6.1.00007.445

* 3568040 (Tracking ID: 3568039)

SYMPTOM:
The VDID module fails to generate a Unique Disk ID for the Fujitsu ETERNUS array LUNs.

DESCRIPTION:
This issue occurs during generic Veritas Disk ID (VDID) formation for Fujitsu ETERNUS array LUNs. During this, the SFWVDID module claims the wrong descriptor ID and fails to generate a Unique Disk ID for the array LUNs.

RESOLUTION:
This issue has been fixed in the SFWVDID binary veritas.dll, which discovers the VDID generically for the given LUNs. SFW now correctly discovers the VDID generically for the Fujitsu ETERNUS array LUNs.
File Name / Version:
veritas.dll / 6.1.00008.445

* 3592939 (Tracking ID: 3594163)

SYMPTOM:
Failover Cluster Manager does not display volume information for SFW resources and crashes while accessing the Shadow Copies tab for a resource.

DESCRIPTION:
On Windows Server 2012 R2 operating systems, this issue occurs while trying to view volume information for two SFW resources in the Failover Cluster Manager snap-in. Failover Cluster Manager does not display the volume information for the Volume Manager Disk Group (VMDg) and cluster-shared disk group (CSDG) resources of SFW. Moreover, it crashes when you try to access the Shadow Copies tab for an SFW resource. This happens because the SFW resources did not have the CLUS_RESSUBCLASS_STORAGE_DISK flag, which is required by Failover Cluster Manager for fetching volume information.

RESOLUTION:
This issue has been resolved by adding the CLUS_RESSUBCLASS_STORAGE_DISK flag to the two SFW resources so that Failover Cluster Manager now fetches and displays the volume information.

File Name / Version:
vxres.dll / 6.1.00010.445
vxvolres.dll / 6.1.00010.445

* 3622272 (Tracking ID: 3622271)

SYMPTOM:
When you import a cluster disk group with a large number of LUNs and volumes, the server stops responding.

DESCRIPTION:
When you import a cluster disk group with a large number of LUNs and volumes, it causes a deadlock between the Mount Manager and the vxio driver, due to which the Windows server on which SFW or SFW HA is installed stops responding.

RESOLUTION:
This issue has been resolved by modifying the vxio driver.

FILE / VERSION:
vxio.sys / 6.1.00011.445

* 3627063 (Tracking ID: 3589195)

SYMPTOM:
For SQL Server 2014, in the Quick Recovery wizard, only the schedule for VSS Snapshots gets created. The vxsnap prepare and vxsnap create commands are not executed.

DESCRIPTION:
When you run the Quick Recovery wizard for SQL Server 2014, only the schedule for VSS Snapshots gets created. The vxsnap prepare and vxsnap create commands are not executed. Due to this, the scheduled snapshot does not occur.

RESOLUTION:
This issue has been resolved by adding support for SQL Server 2014.

FILE / VERSION:
vxsnapschedule.dll / 6.1.00012.445

* 3684124 (Tracking ID: 3684123)

SYMPTOM:
When you convert a basic disk with a partition to a cluster disk group, the volume created on the basic disk may be marked as Missing.

DESCRIPTION:
When you create a volume on a basic disk and upgrade the basic disk to an SFW dynamic disk by creating a cluster disk group on it, the volume may be marked as Missing instead of Healthy. To view the correct state of the volume, you need to deport and then import the cluster disk group.
Note: To upgrade a basic disk to an SFW dynamic disk, you must have a minimum of 16 MB of free space on the basic disk.

RESOLUTION:
This issue has been resolved by ensuring that the device name and other attributes for the VxVM volumes do not get overwritten.

FILE / VERSION:
ftdisk.dll / 6.1.00013.445

* 3691649 (Tracking ID: 3691647)

SYMPTOM:
When you create a mirrored volume with DRL and then perform a read/write operation on the volume, the system crashes (BSOD).

DESCRIPTION:
When you create a mirrored volume with DRL and then perform a read/write operation on the volume, the system crashes (BSOD) and the following error message is displayed:
STOP: 0x000000B8 ATTEMPTED_SWITCH_FROM_DPC

RESOLUTION:
This issue was caused by a fault in the SFW vxio driver. It has been resolved by modifying the vxio driver.
FILE / VERSION:
vxio.sys / 6.1.00014.445

* 3709295 (Tracking ID: 3687466)

SYMPTOM:
In some cases, mount points configured under Microsoft Failover Cluster are lost.

DESCRIPTION:
This issue may occur while assigning mount points in a Microsoft Failover Cluster environment or during a failover. During this, Microsoft's "GetVolumePathNamesForVolumeName" function, which is used by the Volume Manager Disk Group (VMDg) and volume resource mount handling, fails to return mount point information even though the mount points exist on the system. This happens because of an issue with the "GetVolumePathNamesForVolumeName" function. As a result of this behavior, the VMDg resource removes the mount points from the cluster database during the volume arrival notification or failover.

RESOLUTION:
This issue has been resolved by modifying the present handling of the Microsoft function GetVolumePathNamesForVolumeName.

File Name / Version:
cluscmd.dll / 6.1.00019.445
vxres.dll / 6.1.00019.445
vxvolres.dll / 6.1.00019.445
mount.dll / 6.1.00019.445

* 3745619 (Tracking ID: 3745616)

SYMPTOM:
SFW does not form an enclosure for the LUNs created on EMC Invista (VPLEX) arrays.

DESCRIPTION:
SFW currently does not provide support for advanced reporting on EMC Invista (VPLEX) arrays. As a result, enclosures are not formed and a VDID is not generated for the LUNs created on these arrays.

RESOLUTION:
This hotfix addresses the issue by providing support for advanced reporting on EMC Invista (VPLEX) arrays. SFW now generates a VDID and forms enclosures for the LUNs created on EMC Invista (VPLEX) arrays.

FILE / VERSION:
emc.dll / 6.1.00016.445

* 3755803 (Tracking ID: 3755802)

SYMPTOM:
The system stops responding when replication is active.

DESCRIPTION:
When replication is active, the application I/O may hang when Volume Replicator replicates from the replicator log. This happens because, at times, Volume Replicator may incorrectly allocate memory twice for data that is read back from the log, freeing it only once. If this happens, the memory module runs out of memory and causes the application I/Os to hang.

RESOLUTION:
The issue has been fixed by allocating memory to the READBACK memory pool only once during the read-back operation.

FILE / VERSION:
vxio.sys / 6.1.00017.445

* 3771885 (Tracking ID: 3771877)

SYMPTOM:
Storage Foundation does not provide array-specific support for Infinidat arrays other than DSM.

DESCRIPTION:
Storage Foundation does not provide any array-specific support for Infinidat arrays, except DSM. As a result, Storage Foundation is unable to perform any operations related to enclosures, thin provisioning reclamation, and track alignment on the LUNs created on Infinidat arrays.

RESOLUTION:
This hotfix addresses the issue by providing support for enclosures, thin provisioning reclamation, and track alignment for Infinidat arrays.
Known issue: When the reclaim operation for the disk group is in progress and you disconnect a disk path, the reclaim operation fails for the last disk in the disk group.
Workaround: Retry the disk group reclaim operation.
Note: When you install this hotfix, a registry key is created to add track alignment support for the Infinidat array. This registry key is not deleted when you uninstall this hotfix.

FILE / VERSION:
NFINIDAT.dll / 6.1.00020.445
ddlprov.dll / 6.1.00020.445

* 3553667 (Tracking ID: 3553664)

SYMPTOM:
Unable to configure SQL Server 2014 applications using the FD, DR, or QR wizards.
DESCRIPTION:
Support for SQL Server 2014 needs to be provided in the FD, DR, and QR wizards, and the SCC.

RESOLUTION:
This hotfix provides support for SQL Server 2014 in the FD, DR, and QR wizards, and the SCC.

FILE / VERSION:
DRPluginProxy.dll / 6.1.00007.351
QuickRecovery.dll / 6.1.00007.351
CCFEngine.exe.config / -
00_HA_Solutions.adv-xml / -

Note: For the QR wizard to function properly with SQL Server 2014, you must apply Hotfix_6_1_00012_445_3627063 along with the current hotfix.

* 3752571 (Tracking ID: 3752570)

SYMPTOM:
A volume becomes inaccessible after you perform multiple snapshot operations followed by multiple snapback (resync from replica) operations.

DESCRIPTION:
When you perform multiple snapshot operations followed by multiple snapback (resync from replica) operations, the Data Change Object (DCO) maps for the snapshot volumes are not updated correctly. This results in the corruption of the main volume on the subsequent restore operation, and the volume becomes inaccessible.

RESOLUTION:
This issue has been resolved by modifying the code to correctly update the DCO map during the snapback operation.

File Name / Version:
vxconfig.dll / 6.1.00018.445

* 3767113 (Tracking ID: 3767110)

SYMPTOM:
Mirrored volumes with DCO log volumes and DRL logs are not resynchronized using DRL after a system crash.

DESCRIPTION:
This issue occurs for mirrored volumes with Data Change Object (DCO) log volumes and Dirty Region Logging (DRL) logs in case of a system failure. Here, instead of using DRL to quickly resynchronize all the copies of a mirrored volume, the system restores all mirrors of the volume by copying the full contents of the volume between its mirrors. This is a lengthy and I/O intensive process.

RESOLUTION:
This hotfix resolves the issue by modifying the SFW behavior.
Note: If your existing mirrored volumes have DCO log volumes and DRL logs, you must delete and then recreate the DRL logs.

FILE / VERSION:
vxio.sys / 6.1.22.445
vxboot.sys / 6.1.22.445
vxconfig.dll / 6.1.22.445

* 3775640 (Tracking ID: 3775638)

SYMPTOM:
After a failover, the VMDg resource in a File Server role hangs in an "Online Pending" state.

DESCRIPTION:
After a failover of the VMDg resource in a File Server role, the resources are brought online in the following order:
VMDg -> File Server -> Network Name
During the online process, the VMDg resource queries the Network Name resource to recreate the file shares. However, if the Network Name resource has not restarted by then, the VMDg resource remains in an "Online Pending" state and hangs after the resource hosting subsystem crashes due to the timeout.

RESOLUTION:
This hotfix modifies the VMDg resource online process to address the issue. To recreate the file shares, the VMDg resource now queries the File Server resource instead of the Network Name resource. After a failover, the File Server resource restarts before the VMDg resource queries it. This behavior prevents the VMDg resource from going into an "Online Pending" state.

FILE / VERSION:
vxres.dll / 6.1.21.448
cluscmd.dll / 6.1.21.448

* 3768192 (Tracking ID: 3819634)

SYMPTOM:
The storage reclamation operation may hang and the system may become unresponsive on an array that supports the UNMAP command.

DESCRIPTION:
SFW wizards and CLI commands provide support for reclaiming the unused storage from the disks that support reclamation. If these disks belong to an array that supports the UNMAP command, the SFW operations that are initiated for reclamation may hang. As a result, the system becomes unresponsive.
This issue occurs because of a deadlock in vxio.sys.

RESOLUTION:
The SFW behavior to reclaim the unused storage has been modified as part of this hotfix. SFW now overcomes the deadlock faced in vxio.sys during the reclamation of the unused storage.
Note: This issue is seen only on Windows Server 2012/R2.

FILE / VERSION:
vxio.sys / 6.1.00023.445

* 3839102 (Tracking ID: 3839096)

SYMPTOM:
SFW broadcasts the license keys over the network on the UDP port number 2164.

DESCRIPTION:
Even though the network duplication check is disabled, SFW broadcasts the license keys on UDP port 2164 every 30 minutes. SFW broadcasts these license keys with LIC_CHECK_SEND in the header.

RESOLUTION:
This hotfix resolves the issue by disabling the broadcast if the network duplication check is disabled.

FILE / VERSION:
sysprov.dll / 6.1.26.445

* 3826326 (Tracking ID: 3826318)

SYMPTOM:
When you add a large number of disks and perform a Rescan, the VEA GUI hangs on the 'Disks View'.

DESCRIPTION:
This issue occurs when you add a large number of Logical Units (LUNs) or disks to the host and perform a Rescan. When you select the 'Disks View' on the VEA GUI, the GUI stops responding.

RESOLUTION:
This issue has been resolved by adding two VEA Refresh Timeout tunables. You can use the following tunables to control the intervals at which the view is refreshed:
- Quick Refresh Timeout - The minimum time interval after which any change is reflected in the GUI. The Quick Refresh Timeout value can range from 20 to 2500 milliseconds.
- Delayed Refresh Timeout - Used to optimize the refreshes in case of a large number of events or notifications. If a large number of notifications arrive within the 'Quick Refresh Timeout' interval, then instead of refreshing the view after the 'Quick Refresh Timeout' interval, the view is refreshed after the 'Delayed Refresh Timeout' interval. The Delayed Refresh Timeout value can range from 100 to 25000 milliseconds.
For example, suppose the Quick Refresh Timeout is set to 20 ms and the Delayed Refresh Timeout is set to 100 ms. When the first update notification arrives, the view is refreshed after 20 ms. But if more than one update notification arrives within the 20 ms interval, then instead of refreshing the view after 20 ms, the view is refreshed after 100 ms.
To access these tunables, select Preferences from the VEA Tools menu. In the dialog box that appears, select the Advanced tab.
Note: The value for the VEA Refresh Timeout tunables will vary depending on your environment.
Prerequisite: Before you proceed with installing the current hotfix, make sure that Hotfix_6_1_00011_445_3622272 is installed.

FILE / VERSION:
VxVmCE.jar
OBGUI.jar
ci.jar
obCommon.jar

* 3856800 (Tracking ID: 3856819)

SYMPTOM:
When you enable fast failover, the volume drive letters are lost during resource failover.

DESCRIPTION:
This issue occurs when the fast failover flag is set to TRUE. When a VMDg group resource failover is triggered through a system reboot, the drive letters assigned to the volumes are intermittently lost when the disk group is imported.

RESOLUTION:
This issue has been resolved by making internal code changes.

FILE / VERSION:
cluster.dll / 6.1.00030.445

* 3859624 (Tracking ID: 3859623)

SYMPTOM:
The VxSVC service crashes during the snapshot operation.

DESCRIPTION:
During the snapshot operation, a new volume record is created in the vold for the snapshot volume. This record is incomplete until the transaction commits.
During a parallel activity (like Refresh), this incomplete record may get accessed by the provider, which causes the VxSVC service to crash.

RESOLUTION:
This issue has been resolved by ensuring that incomplete records are not exposed.

FILE / VERSION:
vxconfig.dll / 6.1.00031.445

* 3861094 (Tracking ID: 3861093)

SYMPTOM:
When you reattach a mirror plex to a volume, the system crashes (BSOD).

DESCRIPTION:
This issue occurs when you enable FastResync (FR). When the re-synchronization task is performed with multiple threads, the system crashes due to memory allocation issues.

RESOLUTION:
This hotfix resolves the issue by modifying the multi-threaded re-synchronization behavior.

FILE / VERSION:
vxconfig.dll / 6.1.00032.445

* 3838760 (Tracking ID: 3838759)

SYMPTOM:
A system crashes when an excessive number of large write I/O requests is received on volumes mirrored using DRL.

DESCRIPTION:
A system crashes when an excessive number of I/O requests that involve more than 1 MB of data write operations is received on a volume that is mirrored using DRL. The issue occurs because the excessive number of large write I/O requests results in a low number of free System Page Table Entries (PTEs). As a result, the SFW driver fails to map the application I/O buffer to the system address space.

RESOLUTION:
The SFW behavior to map the application I/O buffer to the system address space has been modified as part of this hotfix. SFW now breaks the larger I/Os into smaller I/Os and then tries to map the application buffer into the system address space.

FILE / VERSION:
vxio.sys / 6.1.00027.446

* 3842738 (Tracking ID: 3842736)

SYMPTOM:
Onlining or offlining a VMDg resource causes the RHS.exe service to crash.

DESCRIPTION:
This issue occurs in a clustered environment with groups and VMDg resources. During the onlining or offlining of a VMDg resource, when the code tries to read memory beyond the allocated boundary, the RHS.exe service crashes.

RESOLUTION:
This hotfix resolves the issue by increasing the memory allocation to avoid the buffer overrun.

FILE / VERSION:
vxres.dll / 6.1.00028.445

* 3853963 (Tracking ID: 3854164)

SYMPTOM:
When the logged-on user is a nested domain user with administrative privileges, the user is not able to perform SFW administrative tasks on the SFW 6.1 server.

DESCRIPTION:
When you connect to the SFW server through the logged-on user option as a nested domain user (with administrative rights on the server), you are able to connect to the VEA server but are unable to perform any SFW-related administrative operation on the SFW server. This is due to a limitation in the existing Windows API for the nested domain user.

RESOLUTION:
This issue has been resolved by using the LDAP provider to fetch the user information from the Active Directory. The LDAP provider checks the nested user privileges and reports if the user has admin privileges.

FILE / VERSION:
vxvea3.dll / 3.4.730.0
OBGUI.jar / NA
ci.jar / NA
obCommon.jar / NA

* 3853418 (Tracking ID: 3853417)

SYMPTOM:
Issue 1: Disk groups created on EMC Invista devices are automatically deported during another disk group import.
Issue 2: The value of the VDID attribute for the EMC Invista devices is not consistent.
Issue 3: The failure of the SCSI INQUIRY command in the ddlprov.dll file causes incorrect generation of the VDID.

DESCRIPTION:
The VDID is used to uniquely identify a disk in the vold. In some cases, the device discovery module (EMC.DLL) incorrectly categorizes the disks as Count Key Data (CKD) devices, leading to a change in the VDID.
Due to this change, the vold is unable to find the disk with the original VDID. A failure of the SCSI INQUIRY command in the ddlprov.dll file on a disk can also cause an inconsistent VDID. Subsequently, when you perform a refresh/rescan or import a disk group, the previously imported disk groups that are associated with disks with an inconsistent VDID may get deported.

RESOLUTION:
This issue has been resolved by ensuring that the EMC Invista devices are not discovered as CKD devices and by fixing the logic in the ddlprov.dll file.

FILE / VERSION:
emc.dll / 7.0.05301.1
ddlprov.dll / 7.0.05300.2

* 3863759 (Tracking ID: 3862346)

SYMPTOM:
After reclaiming storage on disks that support thin provisioning and storage reclamation, SQL Server data might get corrupted.

DESCRIPTION:
A file system bitmap is acquired during a reclaim storage operation. The reclaim region and the region that is represented by the file system bitmap may not be exactly aligned. Therefore, some region beyond the reclaim boundary may get reclaimed, and this region may be in active use. In such a scenario, the data in the region that is in active use can get corrupted.

RESOLUTION:
This hotfix addresses the issue by updating the boundary condition check to consider that the file system map may not completely match the reclaim region.

FILE / VERSION:
vxconfig.dll / 6.1.00033.445
vxio.sys / 6.1.00033.445

* 3872010 (Tracking ID: 3854164)

SYMPTOM:
When the logged-on user is a nested domain user with administrative privileges, the user is not able to perform SFW administrative tasks on the SFW 6.1 server.

DESCRIPTION:
When you connect to the SFW server through the logged-on user option as a nested domain user (with administrative rights on the server), you are able to connect to the VEA server but are unable to perform any SFW-related administrative operation on the SFW server. This is due to a limitation in the existing Windows API for the nested domain user.

RESOLUTION:
This issue has been resolved by using the Windows AuthZ APIs to fetch the user information from the Active Directory. These APIs check the nested user privileges and report if the user has admin privileges.

File Name / Version:
vxvea3.dll / 3.4.731.0

* 3874444 (Tracking ID: 3874443)

SYMPTOM:
After upgrading from SFW 6.1 CP3 to CP4, the disk queue length keeps increasing.

DESCRIPTION:
After upgrading from SFW 6.1 CP3 to CP4, the disk queue length steadily increases. When you perform an I/O operation on the drives, the disk queue length increases but does not reduce after the operation is complete. After some time, the SCOM server sends out alerts regarding the high disk queue length on the server.

RESOLUTION:
This hotfix addresses an issue where the value of the disk queue length performance counter does not reduce after the I/O operation is complete.

File Name / Version:
vxio.sys / 6.1.5305.445

* 3880433 (Tracking ID: 3880457)

SYMPTOM:
Although SFW creates separate enclosures for the Dell PowerVault MD3000 array series, a generic VDID appears for LUNs that belong to these arrays.

DESCRIPTION:
When LUNs from any of the following Dell PowerVault MD3000 series arrays are exposed to a system, SFW generates array-specific enclosures but fails to generate proper VDIDs:
- MD3400
- MD3600
- MD3800
A generic VDID is displayed in the VEA for the LUNs that belong to these arrays. This issue occurs due to an internal error.

RESOLUTION:
The SFW behavior to generate VDIDs for the Dell PowerVault MD3000 array series has been modified as part of this hotfix.
SFW now uses an updated Dell VDID library 'dell.dll' to populate proper VDIDs for LUNs that are exposed from the Dell PowerVault MD3000 array series.

File Name / Version:
dell.dll / 7.0.16101.4

* 3889886 (Tracking ID: 3889885)

SYMPTOM:
The VVR replication switches between the "Active" and "Activating" states, and then switches to the DCM mode.

DESCRIPTION:
Volume Replicator enables you to perform a takeover operation to transfer a Primary role to a Secondary node. This operation is typically performed when an original Primary node fails or is brought down for maintenance. After a takeover operation with fast-failback enabled, if you fail over the Volume Replicator disk group (vvr_dg) from the old Primary to a passive node in a cluster, then both the nodes (the active old Primary and the passive node) continue to send heartbeats to the new Primary node. As a result, the VVR replication switches between the "Active" and "Activating" states. This issue occurs due to an internal error.

RESOLUTION:
The VVR behavior is modified with this hotfix. It now de-references a node when it transitions from the PRIMARY to an ACTING SECONDARY role. As a result, only an active node can send heartbeats to a secondary node.

FILE / VERSION:
vxio.sys / 6.1.06102.445

* 3898318 (Tracking ID: 3898317)

SYMPTOM:
In Microsoft Failover Cluster Manager, volumes created on a Volume Manager disk group are not auto-mounted after an online/offline/failover operation of the VMDg resource.

DESCRIPTION:
In Microsoft Failover Cluster Manager, the volumes that are created on a Volume Manager disk group may fail to appear or are not auto-mounted after the VMDg resource is failed over, brought online, or taken offline. As a result, the volumes are inaccessible. This issue occurs due to an internal error.

RESOLUTION:
The VMDg resource behavior is modified as part of this hotfix. The volumes are now mounted correctly after a VMDg resource offline/online/failover operation.

FILE / VERSION:
vxres.dll / 6.1.6103.445
cluscmd.dll / 6.1.6103.445

* 3901194 (Tracking ID: 3898322)

SYMPTOM:
A system may crash when a disk that has a volume snapshot is removed.

DESCRIPTION:
When you enable FastResync or take a volume snapshot, the disk having a mirrored volume is detached and a Data Change Object (DCO) is activated. The DCO keeps track of any new I/Os served and the updates that take place while the mirrored volume is detached. When the updates are logged in the DCO, at any given point in time, the vxio driver processes 256 updates from a single StagedIO (SIO) and cleans up the underlying memory buffer in that SIO. If there are more than 256 updates in the SIO, the additional updates are processed in the next batch. During the processing of the additional updates, if the vxio driver accesses the cleaned-up memory, then a system crash may occur.

RESOLUTION:
The vxio behavior to clean up the underlying memory buffer from an SIO is modified with this hotfix. The memory is now cleaned up only after all the updates are processed.

FILE / VERSION:
vxio.sys / 6.1.06104.445

INSTALLING THE PATCH
--------------------

What's new in this CP
=====================|
The following hotfixes have been added in this CP:
- Hotfix_6_1_06102_445_3889886
- Hotfix_6_1_06103_445_3898318
- Hotfix_6_1_06104_445_3901194
For more information about these hotfixes, see the "DETAILS OF INCIDENTS FIXED BY THE PATCH" section in this Readme.

Install instructions
====================|
Download the appropriate cumulative public patch (CP) executable file to a temporary location on your system.
Each cumulative public patch includes the individual hotfixes that contain enhancements and fixes related to reported issues. See the "DETAILS OF INCIDENTS FIXED BY THE PATCH" section for details.

Before you begin
----------------:
[1] Ensure that the logged-on user has privileges to install the CP on the systems.
[2] One or more hotfixes that are included with this CP may require a reboot. Before proceeding with the installation, ensure that the system can be rebooted.
[3] Veritas recommends that you close the Cluster Manager (Java Console) and the Veritas Enterprise Administrator (VEA) Console before installing this CP.
[4] Ensure that you close the Windows Event Viewer before proceeding with the installation.

To install the CP in the silent mode
-----------------------------------:
Perform the following steps:
[1] Double-click the CP executable file to start the CP installation.
    The installer performs the following tasks:
    - Extracts all the individual hotfix executable files
      The files are extracted at %commonprogramfiles(x86)%\Veritas Shared\WxRTPrivates\
    - Runs the pre-install tasks
    - Installs all the hotfixes sequentially
    - Runs the post-install tasks
    The installation progress status is displayed in the command window.
[2] After all the hotfixes are installed, the installer prompts you to restart the system.
    Type Y to restart the system immediately, or type N to restart the system later. You must restart the system for the changes to take effect.
    Note that the installer prompts for a system restart only if hotfixes that require a reboot are included in the CP and are installed.

To install the CP using the command line
----------------------------------------:
Use the VxHFBatchInstaller.exe utility to install a CP from the command line. The syntax options for this utility are as follows:

vxhfbatchinstaller.exe /CP:<CPName> [/PreInstallScript:<PreInstallScript.pl>] [/silent [/forcerestart]]

where,
- CPName is the cumulative patch executable file name without the platform, architecture, and .exe extension.
  For example, if the CP executable name is CP4_SFW_61_W2K12_x64.exe, specify it as CP4_SFW_61.
- PreInstallScript.pl is the Perl script that includes the pre-installation steps. These steps forcefully kill the required services and processes in case a graceful stop request does not succeed.
  Veritas recommends that you use this option and script only in case the CP installer fails repeatedly while performing the pre-installation tasks.
- /silent indicates that the installation is run in a non-verbose mode; the installer does not prompt for any inputs during the installation.
- /forcerestart indicates that the system is automatically restarted, if required, after the installation is complete.

Perform the following steps:
[1] From the command prompt, navigate to the directory where the CP executable file is located and then run the file to extract the contents to a default location on the system. The installer displays a list of hotfixes that are included in the CP.
    - On 64-bit systems, the hotfixes executable files are extracted to:
      "%commonprogramfiles(x86)%\Veritas Shared\WxRTPrivates\"
[2] In the same command window, run the following command to begin the CP installation in the silent mode:
    vxhfbatchinstaller.exe /CP:<CPName> /silent
    For example, to install an SFW 6.1 CP for Windows Server 2012, the command is:
    vxhfbatchinstaller.exe /CP:CP4_SFW_61 /silent
    The installer performs the following tasks:
    - Extracts all the individual hotfix executable files
      The files are extracted at %commonprogramfiles(x86)%\Veritas Shared\WxRTPrivates\
    - Runs the pre-install tasks
    - Installs all the hotfixes sequentially
    - Runs the post-install tasks
    The installation progress status is displayed in the command window.
[3] After all the hotfixes are installed, the installer displays a message for restarting the system.
    You must restart the system for the changes to take effect.
    Note that the installer prompts for a system restart only if hotfixes that require a reboot are included in the CP and are installed. The installer automatically restarts the system if you had specified the /forcerestart option in step 2 earlier.

VxHFBatchInstaller usage example
---------------------------------:
[+] Install the CP in silent mode, restart automatically:
    vxhfbatchinstaller.exe /CP:CP4_SFW_61 /silent /forcerestart

Post-install steps
==================|
The following section describes the steps that must be performed after installing the hotfixes included in this CP.
Ensure that the VIP_PATH environment variable is set to "C:\Program Files\Veritas\Veritas Object Bus\bin" and NOT to "C:\\Veritas Object Bus\bin", assuming that C:\ is the default installation drive. (An example of checking and updating this variable is provided under "Additional notes" below.)

Known issues
============|
There are no known issues identified in this CP.

-------------------------------------------------------+

REMOVING THE PATCH
------------------
NO

SPECIAL INSTRUCTIONS
--------------------
DISCLAIMER: This fix is provided without warranty of any kind including the warranties of title or implied warranties of merchantability, fitness for a particular purpose and non-infringement. Veritas disclaims all liability relating to or arising out of this fix. It is recommended that the fix be evaluated in a test environment before implementing it in your production environment. When the fix is incorporated into an InfoScale for Windows maintenance release, the resulting Hotfix or Service Pack must be installed as soon as possible. Veritas Technical Services will notify you when the maintenance release (Hotfix or Service Pack) is available if you sign up for notifications from the Veritas support site http://www.veritas.com/support and/or from Services Operations Readiness Tools (SORT) http://sort.veritas.com.

Additional notes
================|
[+] To confirm the list of cumulative patches installed on a system, run the following command from the directory where the CP files are extracted:
    vxhfbatchinstaller.exe /list
    The output of this command displays a list of cumulative patches and the hotfixes that are installed as part of a CP. This command also displays the hotfixes that are included in a CP but are not installed on the system.
[+] To confirm the installation of the hotfixes, perform one of the following:
    - Run the following command:
      vxhf.exe /list
      The output of this command lists the hotfixes installed on the system.
    - In the Windows Add/Remove program, click "View installed updates" to view the list of the hotfixes installed on the system.
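[+] The post-install steps in this Readme require the VIP_PATH environment variable to point to the Veritas Object Bus bin directory. The following commands are an illustrative example (not part of the CP installer) of how you might check and correct the system-wide value from an elevated command prompt, assuming that C:\ is the default installation drive:
    rem Display the current value of the VIP_PATH variable
    echo %VIP_PATH%
    rem Set the system-wide value (the /M switch requires an elevated command prompt)
    setx VIP_PATH "C:\Program Files\Veritas\Veritas Object Bus\bin" /M
    Note that setx does not update the environment of the current command window or of already-running services; open a new command prompt, and restart the affected Veritas services or the system, for the new value to take effect.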
[+] To confirm the latest cumulative patch installed on a system, run the following command from the directory where the CP files are extracted:
    vxhfbatchinstaller.exe /cplevel
    The output of this command displays the latest CP that is installed, the CP status, and a list of all hotfixes that were a part of the CP but not installed on the system.
[+] The CP installer (vxhfbatchinstaller.exe) creates and stores logs at:
    "%allusersprofile%\Veritas\VxHF\VxHFBatchInstaller.txt"
[+] The hotfix installer (vxhf.exe) creates and stores logs at:
    "%allusersprofile%\Veritas\VxHF\VxHF.txt"
[+] For general information about the hotfix installer (vxhf.exe), please refer to the following technote:
    http://www.veritas.com/docs/000039694
[+] To view a list of hotfixes already installed on a system, please refer to the steps mentioned in the following technote:
    http://www.veritas.com/docs/000039691
[+] For information on uninstalling a hotfix, please refer to the following technotes:
    http://www.veritas.com/docs/000023795
    http://www.veritas.com/docs/000039693
[+] For general information about the CP, please refer to the following technote:
    http://www.veritas.com/docs/000023789

OTHERS
------
NONE