* * * READ ME * * *
* * * InfoScale 7.0 P2 * * *
* * * Patch * * *

Patch Date: 2018-02-02

This document provides the following information:

* PATCH NAME
* OPERATING SYSTEMS SUPPORTED BY THE PATCH
* PACKAGES AFFECTED BY THE PATCH
* BASE PRODUCT VERSIONS FOR THE PATCH
* SUMMARY OF INCIDENTS FIXED BY THE PATCH
* DETAILS OF INCIDENTS FIXED BY THE PATCH
* INSTALLATION PRE-REQUISITES
* INSTALLING THE PATCH
* REMOVING THE PATCH

PATCH NAME
----------
InfoScale 7.0 P2 Patch

OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
Windows Server 2008 R2 X64
Windows Server 2012 X64
Windows Server 2012 R2 X64

BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
* InfoScale Storage 7.0

SUMMARY OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
Patch ID: IS 7.0 CP2

* 3853667 (3853666) This hotfix addresses multiple issues.
* 3862187 (3861093) This hotfix addresses multiple issues.
* 3866918 (3854164) When the logged-on user is a nested domain user with administrative privileges, the user cannot perform Storage Foundation for Windows (SFW) administrative tasks on the SFW server.
* 3876898 (3862346) After reclaiming storage on disks that support thin provisioning and storage reclamation, SQL Server data might get corrupted.
* 3877099 (3877089) An incorrect memory allocation in the VMDg resource DLL (vxres.dll) causes the RHS.exe service to crash.
* 3889098 (3880457) A generic VDID is displayed for LUNs that belong to the Dell PowerVault MD3000 array series.
* 3904127 (3898322) A system may crash when a disk that has a volume snapshot is removed.
* 3904953 (3904022) Deporting disk groups with striped volumes takes a long time.

DETAILS OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
This patch fixes the following incidents:

Patch ID: IS 7.0 CP2

* 3853667 (Tracking ID: 3853666)

SYMPTOM:
This hotfix addresses the following issues:

INCIDENT: 3853667 | TRACKING_ID: 3853666
SYMPTOM: The VVRRVG resource fails intermittently during a takeover or a failback operation.

INCIDENT: 3787075 | TRACKING_ID: 3775638
SYMPTOM: After a failover, the VMDg resource in a File Server role hangs in an 'Online Pending' state.

INCIDENT: 3821353 | TRACKING_ID: 3819634
SYMPTOM:
Issue 1: The storage reclamation operation may hang and the system may become unresponsive on an array that supports the UNMAP command.
Issue 2: The storage reclamation operation may fail on an array that supports the UNMAP command.
NOTE: These issues are seen only on Windows Server 2012/R2.

INCIDENT: 3850785 | TRACKING_ID: 3839096
SYMPTOM: SFW broadcasts the license keys over the network on UDP port 2164.

INCIDENT: 3850945 | TRACKING_ID: 3838759
SYMPTOM: A system crashes when an excessive number of large write I/O requests are received on volumes mirrored using DRL.

INCIDENT: 3851224 | TRACKING_ID: 3767110
SYMPTOM: Mirrored volumes with DCO log volumes and DRL logs are not resynchronized using DRL after a system crash.

INCIDENT: 3851278 | TRACKING_ID: 3842736
SYMPTOM: Onlining or offlining a Volume Manager Disk Group (VMDg) resource causes the Resource Host Monitor (RHS.exe) service to crash.

INCIDENT: 3851722 | TRACKING_ID: 3826318
SYMPTOM: When you add a large number of disks and perform a Rescan, the VEA GUI hangs on the 'Disks View'.

INCIDENT: 3852370 | TRACKING_ID: 3849187
SYMPTOM: The VEA GUI displays an incorrect status for the historical data collection even after you have disabled it.
INCIDENT: 3853418 | TRACKING_ID: 3853417
SYMPTOM:
Issue 1: Disk groups created on EMC Invista devices are automatically deported during another disk group import.
Issue 2: The value of the VDID attribute for the EMC Invista devices is not consistent.
Issue 3: The failure of the SCSI INQUIRY command in the ddlprov.dll file causes incorrect generation of the VDID.

DESCRIPTION:
The detailed descriptions of the issues that are addressed in this hotfix are as follows:

INCIDENT: 3853667 | TRACKING_ID: 3853666
DESCRIPTION: The read-write permissions on a volume are changed when the volume is migrated from one system to another, and the VVRRVG agent was unable to handle this change. Because a volume is migrated during a takeover or failback operation, the agent reported the VVRRVG resource as Faulted during such operations.
This hotfix addresses the issue by updating the VVRRVG agent, which now interprets the change of permissions on a volume based on the status of a flag. The agent is now able to handle the change, and the VVRRVG resource no longer faults.

INCIDENT: 3787075 | TRACKING_ID: 3775638
DESCRIPTION: After a failover of a VMDg resource in a File Server role, the resources are brought online in the following order:
VMDg -> File Server -> Network Name
During the online process, the VMDg resource queries the Network Name resource to recreate the file shares. However, if the Network Name resource has not restarted by then, the VMDg resource remains in an 'Online Pending' state and hangs after the resource hosting subsystem crashes due to the timeout.
This hotfix modifies the VMDg resource online process to address the issue. To recreate the file shares, the VMDg resource now queries the File Server resource instead of the Network Name resource. After a failover, the File Server resource restarts before the VMDg resource queries it. This behavior prevents the VMDg resource from going into an 'Online Pending' state.

INCIDENT: 3821353 | TRACKING_ID: 3819634
DESCRIPTION:
Issue 1: SFW wizards and CLI commands provide support for reclaiming the unused storage from the disks that support reclamation. If these disks belong to an array that supports the UNMAP command, the SFW operations that are initiated for reclamation may fail. As a result, the system becomes unresponsive. This issue occurs because of a deadlock in vxio.sys.
The SFW behavior to reclaim the unused storage has been modified as part of this hotfix. SFW now overcomes the deadlock in vxio.sys during the reclamation of the unused storage.
Issue 2: SFW wizards and CLI commands provide support for reclaiming the unused storage from the disks that support reclamation. If these disks belong to an array that supports the UNMAP command, the SFW operations that are initiated for reclamation may fail. This issue occurs when a TRIM command fails with the status 'STATUS_DATA_OVERRUN'.
The SFW behavior to reclaim the unused storage has been modified as part of this hotfix. A TRIM command that completes with the status 'STATUS_DATA_OVERRUN' is now considered a successful operation.

INCIDENT: 3850785 | TRACKING_ID: 3839096
DESCRIPTION: Even though the network duplication check is disabled, SFW broadcasts the license keys on UDP port 2164 every 30 minutes. SFW broadcasts these license keys with LIC_CHECK_SEND in the header.
RESOLUTION: This hotfix resolves the issue by disabling the broadcast if the network duplication check is disabled.
INCIDENT: 3850945 | TRACKING_ID: 3838759
DESCRIPTION: A system crashes when an excessive number of I/O requests that involve more than 1 MB of data write operations are received on a volume that is mirrored using DRL. The excessive number of large write I/O requests leaves the system with few free System Page Table Entries (PTEs). Because of the low number of system PTEs, the SFW driver fails to map the application I/O buffer to the system address space.
The SFW behavior to map the application I/O buffer to the system address space has been modified as part of this hotfix. SFW now breaks the larger I/Os into smaller I/Os and then maps the application buffer into the system address space.

INCIDENT: 3851224 | TRACKING_ID: 3767110
DESCRIPTION: This issue occurs for mirrored volumes with Disk Change Object (DCO) log volumes and Dirty Region Logging (DRL) logs in case of a system failure. Instead of using DRL to quickly resynchronize all the copies of a mirrored volume, the system restores all mirrors of the volume by copying the full contents of the volume between its mirrors. This is a lengthy and I/O-intensive process.
This hotfix resolves the issue by modifying the SFW behavior.
Note: If your existing mirrored volumes have DCO log volumes and DRL logs, you must delete and then recreate the DRL logs.

INCIDENT: 3851278 | TRACKING_ID: 3842736
DESCRIPTION: This issue occurs in a clustered environment with groups and VMDg resources. During the onlining or offlining of a VMDg resource, the code tries to read memory beyond the allocated boundary, and the RHS.exe service crashes.
This hotfix resolves the issue by increasing the memory allocation to avoid the buffer overrun.

INCIDENT: 3851722 | TRACKING_ID: 3826318
DESCRIPTION: This issue occurs when you add a large number of Logical Units (LUNs) or disks to the host and perform a Rescan. When you select the 'Disks View' on the VEA GUI, the GUI stops responding.
This issue has been resolved by adding two VEA Refresh Timeout tunables. You can use the following tunables to control the intervals at which the view is refreshed:
- Quick Refresh Timeout - The minimum time interval after which a change is reflected in the GUI. The Quick Refresh Timeout value can range from 20 to 2500 milliseconds.
- Delayed Refresh Timeout - Used to optimize the refreshes in case of a large number of events or notifications. If a large number of notifications arrive within the 'Quick Refresh Timeout' interval, the view is refreshed after the 'Delayed Refresh Timeout' interval instead of the 'Quick Refresh Timeout' interval. The Delayed Refresh Timeout value can range from 100 to 25000 milliseconds.
For example, suppose the Quick Refresh Timeout is set to 20 ms and the Delayed Refresh Timeout is set to 100 ms. When the first update notification arrives, the view is refreshed after 20 ms. But if more than one update notification arrives within the 20 ms interval, then instead of refreshing the view after 20 ms, the view is refreshed after 100 ms.
To access these tunables, select Preferences from the VEA Tools menu. In the dialog box that appears, select the Advanced tab.
NOTE: The appropriate values for the VEA Refresh Timeout tunables vary depending on your environment.

INCIDENT: 3852370 | TRACKING_ID: 3849187
DESCRIPTION: When you stop the historical data collection, the VVR provider records the state in the ISIS bus. The ISIS bus gets updated whenever an event, such as an RLINK disconnect, is triggered.
During this update cycle, the default Started state of the historical data collection is populated in the ISIS bus. Due to this, even though the historical data collection is stopped, the VEA GUI displays 'Stop Historical Data Collection'.
This hotfix resolves the issue by ensuring that the ISIS bus picks up the correct state of the historical data collection.

INCIDENT: 3853418 | TRACKING_ID: 3853417
DESCRIPTION: The VDID is used to uniquely identify a disk in vold. In some cases, the device discovery module (EMC.DLL) incorrectly categorizes the disks as Count Key Data (CKD) devices, leading to a change in the VDID. Due to this change, vold is unable to find the disk with the original VDID. A failure of the SCSI INQUIRY command in the ddlprov.dll file on a disk can also cause an inconsistent VDID. Subsequently, when you perform a refresh/rescan or import a disk group, the previously imported disk groups that are associated with disks with an inconsistent VDID may get deported.
This issue has been resolved by ensuring that the EMC Invista devices are not discovered as CKD devices and by fixing the logic in the ddlprov.dll file.

RESOLUTION:
This hotfix updates the product to address all the aforementioned issues.

* 3862187 (Tracking ID: 3861093)

SYMPTOM:
This hotfix addresses the following issues:

INCIDENT: 3862187 | TRACKING_ID: 3861093
SYMPTOM: When you reattach a mirror plex to a volume, the system crashes (BSOD).

INCIDENT: 3853667 | TRACKING_ID: 3853666
SYMPTOM: The VVRRVG resource fails intermittently during a takeover or a failback operation.

INCIDENT: 3787075 | TRACKING_ID: 3775638
SYMPTOM: After a failover, the VMDg resource in a File Server role hangs in an 'Online Pending' state.

INCIDENT: 3821353 | TRACKING_ID: 3819634
SYMPTOM:
Issue 1: The storage reclamation operation may hang and the system may become unresponsive on an array that supports the UNMAP command.
Issue 2: The storage reclamation operation may fail on an array that supports the UNMAP command.
NOTE: These issues are seen only on Windows Server 2012/R2.

INCIDENT: 3850785 | TRACKING_ID: 3839096
SYMPTOM: SFW broadcasts the license keys over the network on UDP port 2164.

INCIDENT: 3850945 | TRACKING_ID: 3838759
SYMPTOM: A system crashes when an excessive number of large write I/O requests are received on volumes mirrored using DRL.

INCIDENT: 3851224 | TRACKING_ID: 3767110
SYMPTOM: Mirrored volumes with DCO log volumes and DRL logs are not resynchronized using DRL after a system crash.

INCIDENT: 3851278 | TRACKING_ID: 3842736
SYMPTOM: Onlining or offlining a Volume Manager Disk Group (VMDg) resource causes the Resource Host Monitor (RHS.exe) service to crash.

INCIDENT: 3851722 | TRACKING_ID: 3826318
SYMPTOM: When you add a large number of disks and perform a Rescan, the VEA GUI hangs on the 'Disks View'.

INCIDENT: 3852370 | TRACKING_ID: 3849187
SYMPTOM: The VEA GUI displays an incorrect status for the historical data collection even after you have disabled it.

INCIDENT: 3853418 | TRACKING_ID: 3853417
SYMPTOM:
Issue 1: Disk groups created on EMC Invista devices are automatically deported during another disk group import.
Issue 2: The value of the VDID attribute for the EMC Invista devices is not consistent.
Issue 3: The failure of the SCSI INQUIRY command in the ddlprov.dll file causes incorrect generation of the VDID.
DESCRIPTION:
The detailed descriptions of the issues that are addressed in this hotfix are as follows:

INCIDENT: 3862187 | TRACKING_ID: 3861093
DESCRIPTION: This issue occurs when you enable FastResync (FR). When the re-synchronization task is performed with multiple threads, the system crashes due to memory allocation issues.
This hotfix resolves the issue by modifying the multi-threaded re-synchronization behavior.

INCIDENT: 3853667 | TRACKING_ID: 3853666
DESCRIPTION: The read-write permissions on a volume are changed when the volume is migrated from one system to another, and the VVRRVG agent was unable to handle this change. Because a volume is migrated during a takeover or failback operation, the agent reported the VVRRVG resource as Faulted during such operations.
This hotfix addresses the issue by updating the VVRRVG agent, which now interprets the change of permissions on a volume based on the status of a flag. The agent is now able to handle the change, and the VVRRVG resource no longer faults.

INCIDENT: 3787075 | TRACKING_ID: 3775638
DESCRIPTION: After a failover of a VMDg resource in a File Server role, the resources are brought online in the following order:
VMDg -> File Server -> Network Name
During the online process, the VMDg resource queries the Network Name resource to recreate the file shares. However, if the Network Name resource has not restarted by then, the VMDg resource remains in an 'Online Pending' state and hangs after the resource hosting subsystem crashes due to the timeout.
This hotfix modifies the VMDg resource online process to address the issue. To recreate the file shares, the VMDg resource now queries the File Server resource instead of the Network Name resource. After a failover, the File Server resource restarts before the VMDg resource queries it. This behavior prevents the VMDg resource from going into an 'Online Pending' state.

INCIDENT: 3821353 | TRACKING_ID: 3819634
DESCRIPTION:
Issue 1: SFW wizards and CLI commands provide support for reclaiming the unused storage from the disks that support reclamation. If these disks belong to an array that supports the UNMAP command, the SFW operations that are initiated for reclamation may fail. As a result, the system becomes unresponsive. This issue occurs because of a deadlock in vxio.sys.
The SFW behavior to reclaim the unused storage has been modified as part of this hotfix. SFW now overcomes the deadlock in vxio.sys during the reclamation of the unused storage.
Issue 2: SFW wizards and CLI commands provide support for reclaiming the unused storage from the disks that support reclamation. If these disks belong to an array that supports the UNMAP command, the SFW operations that are initiated for reclamation may fail. This issue occurs when a TRIM command fails with the status 'STATUS_DATA_OVERRUN'.
The SFW behavior to reclaim the unused storage has been modified as part of this hotfix. A TRIM command that completes with the status 'STATUS_DATA_OVERRUN' is now considered a successful operation.

INCIDENT: 3850785 | TRACKING_ID: 3839096
DESCRIPTION: Even though the network duplication check is disabled, SFW broadcasts the license keys on UDP port 2164 every 30 minutes. SFW broadcasts these license keys with LIC_CHECK_SEND in the header.
RESOLUTION: This hotfix resolves the issue by disabling the broadcast if the network duplication check is disabled.
INCIDENT: 3850945 | TRACKING_ID: 3838759
DESCRIPTION: A system crashes when an excessive number of I/O requests that involve more than 1 MB of data write operations are received on a volume that is mirrored using DRL. The excessive number of large write I/O requests leaves the system with few free System Page Table Entries (PTEs). Because of the low number of system PTEs, the SFW driver fails to map the application I/O buffer to the system address space.
The SFW behavior to map the application I/O buffer to the system address space has been modified as part of this hotfix. SFW now breaks the larger I/Os into smaller I/Os and then maps the application buffer into the system address space.

INCIDENT: 3851224 | TRACKING_ID: 3767110
DESCRIPTION: This issue occurs for mirrored volumes with Disk Change Object (DCO) log volumes and Dirty Region Logging (DRL) logs in case of a system failure. Instead of using DRL to quickly resynchronize all the copies of a mirrored volume, the system restores all mirrors of the volume by copying the full contents of the volume between its mirrors. This is a lengthy and I/O-intensive process.
This hotfix resolves the issue by modifying the SFW behavior.
Note: If your existing mirrored volumes have DCO log volumes and DRL logs, you must delete and then recreate the DRL logs.

INCIDENT: 3851278 | TRACKING_ID: 3842736
DESCRIPTION: This issue occurs in a clustered environment with groups and VMDg resources. During the onlining or offlining of a VMDg resource, the code tries to read memory beyond the allocated boundary, and the RHS.exe service crashes.
This hotfix resolves the issue by increasing the memory allocation to avoid the buffer overrun.

INCIDENT: 3851722 | TRACKING_ID: 3826318
DESCRIPTION: This issue occurs when you add a large number of Logical Units (LUNs) or disks to the host and perform a Rescan. When you select the 'Disks View' on the VEA GUI, the GUI stops responding.
This issue has been resolved by adding two VEA Refresh Timeout tunables. You can use the following tunables to control the intervals at which the view is refreshed:
- Quick Refresh Timeout - The minimum time interval after which a change is reflected in the GUI. The Quick Refresh Timeout value can range from 20 to 2500 milliseconds.
- Delayed Refresh Timeout - Used to optimize the refreshes in case of a large number of events or notifications. If a large number of notifications arrive within the 'Quick Refresh Timeout' interval, the view is refreshed after the 'Delayed Refresh Timeout' interval instead of the 'Quick Refresh Timeout' interval. The Delayed Refresh Timeout value can range from 100 to 25000 milliseconds.
For example, suppose the Quick Refresh Timeout is set to 20 ms and the Delayed Refresh Timeout is set to 100 ms. When the first update notification arrives, the view is refreshed after 20 ms. But if more than one update notification arrives within the 20 ms interval, then instead of refreshing the view after 20 ms, the view is refreshed after 100 ms.
To access these tunables, select Preferences from the VEA Tools menu. In the dialog box that appears, select the Advanced tab.
NOTE: The appropriate values for the VEA Refresh Timeout tunables vary depending on your environment.

INCIDENT: 3852370 | TRACKING_ID: 3849187
DESCRIPTION: When you stop the historical data collection, the VVR provider records the state in the ISIS bus. The ISIS bus gets updated whenever an event, such as an RLINK disconnect, is triggered.
During this update cycle, the default Started state of the historical data collection is populated in the ISIS bus. Due to this, even though the historical data collection is stopped, the VEA GUI displays 'Stop Historical Data Collection'.
This hotfix resolves the issue by ensuring that the ISIS bus picks up the correct state of the historical data collection.

INCIDENT: 3853418 | TRACKING_ID: 3853417
DESCRIPTION: The VDID is used to uniquely identify a disk in vold. In some cases, the device discovery module (EMC.DLL) incorrectly categorizes the disks as Count Key Data (CKD) devices, leading to a change in the VDID. Due to this change, vold is unable to find the disk with the original VDID. A failure of the SCSI INQUIRY command in the ddlprov.dll file on a disk can also cause an inconsistent VDID. Subsequently, when you perform a refresh/rescan or import a disk group, the previously imported disk groups that are associated with disks with an inconsistent VDID may get deported.
This hotfix resolves the issue by ensuring that the EMC Invista devices are not discovered as CKD devices and by fixing the logic in the ddlprov.dll file.

RESOLUTION:
This hotfix updates the product to address all the aforementioned issues.

* 3866918 (Tracking ID: 3854164)

SYMPTOM:
When the logged-on user is a nested domain user with administrative privileges, the user cannot perform SFW administrative tasks on the SFW server.

DESCRIPTION:
When you connect to the SFW server through the logged-on user option as a nested domain user (with administrative rights on the server), you are able to connect to the VEA server but are unable to perform any SFW-related administrative operation on the server. This is due to a limitation in the existing Windows API for nested domain users.

RESOLUTION:
This issue has been resolved by using the LDAP provider to fetch the user information from Active Directory. The LDAP provider checks the nested user's privileges and reports whether the user has administrative privileges.

FILE / VERSION:
vxvea3.dll / 3.4.951.0
OBGUI.jar / NA
ci.jar / NA
obCommon.jar / NA

* 3876898 (Tracking ID: 3862346)

SYMPTOM:
After reclaiming storage on disks that support thin provisioning and storage reclamation, SQL Server data might get corrupted.

DESCRIPTION:
A file system bitmap is acquired during a reclaim storage operation. The reclaim region and the region that is represented by the file system bitmap may not be exactly aligned. Therefore, some region beyond the reclaim boundary may get reclaimed, and this region may be in active use. In such a scenario, the data in the region that is in active use can get corrupted.

RESOLUTION:
This hotfix addresses the issue by updating the boundary condition check to consider that the file system map may not completely match the reclaim region.

FILE / VERSION:
vxio.sys / 7.0.15304.129
vxconfig.dll / 7.0.15304.129

* 3877099 (Tracking ID: 3877089)

SYMPTOM:
An incorrect memory allocation in the VMDg resource DLL (vxres.dll) causes the RHS.exe service to crash.

DESCRIPTION:
When you upgrade Storage Foundation from version 6.1 to InfoScale 7.0, or install an InfoScale 7.0 product, the product is installed with Microsoft Failover Cluster already configured. If the name of a failover cluster file share resource, or any of its dependents, is 16 bytes long (or a multiple of 16 bytes), and you bring the resource online, an incorrect memory allocation in vxres.dll causes the RHS.exe service to crash.
RESOLUTION:
This hotfix addresses the issue by updating the vxres.dll file with the proper memory allocation code.

FILE / VERSION:
vxres.dll / 7.0.15305.129

* 3889098 (Tracking ID: 3880457)

SYMPTOM:
Even though separate enclosures are created for the Dell PowerVault MD3000 array series, a generic VDID appears for LUNs that belong to these arrays.

DESCRIPTION:
When LUNs from any of the following Dell PowerVault MD3000 array series are exposed to a system, SFW generates array-specific enclosures, but fails to generate proper VDIDs:
- MD3400
- MD3600
- MD3800
A generic VDID is displayed in VEA for the LUNs that belong to these arrays. This issue occurs due to an internal error.

RESOLUTION:
The SFW behavior to generate VDIDs for the Dell PowerVault MD3000 array series has been modified as part of this hotfix. SFW now uses an updated Dell VDID library 'dell.dll' to populate proper VDIDs for LUNs that are exposed from the Dell PowerVault MD3000 array series.

FILE / VERSION:
dell.dll / 7.0.16101.4

* 3904127 (Tracking ID: 3898322)

SYMPTOM:
A system may crash when a disk that has a volume snapshot is removed.

DESCRIPTION:
When you enable FastResync or take a volume snapshot, and the disk that has a mirrored volume is detached, a Data Change Object (DCO) is activated. The DCO keeps track of any new I/Os served and the updates that take place while the mirrored volume is detached. While these updates are being logged in the DCO, the vxio driver processes 256 updates at a time from a single StagedIO (SIO) and then cleans up the underlying memory buffer in that SIO. If there are more than 256 updates in the SIO, the additional updates are processed in the next batch. During the processing of the additional updates, if the vxio driver accesses the cleaned-up memory, a system crash may occur.

RESOLUTION:
The vxio driver's behavior for cleaning up the underlying memory buffer of an SIO is modified with this hotfix. The memory is now cleaned up only after all the updates are processed.

FILE / VERSION:
vxio.sys / 7.0.16102.129

* 3904953 (Tracking ID: 3904022)

SYMPTOM:
Deporting disk groups with striped volumes takes a long time.

DESCRIPTION:
It takes a long time to deport disk groups with striped volumes. This issue occurs because of a delay in the lock volume operation while deporting a disk group. The delay in the lock volume operation is caused by the slow processing of striped volumes.

RESOLUTION:
This hotfix addresses the issue by optimizing the vxio driver operations for striped volumes.

FILE / VERSION:
vxio.sys / 7.0.16103.129

INSTALLING THE PATCH
--------------------

What's new in this CP
=====================|
The following hotfixes have been added in this CP:
- Hotfix_7_0_10002_129_3862187_x64
- Hotfix_7_0_15303_129_3866918_x64
- Hotfix_7_0_15304_129_3876898_x64
- Hotfix_7_0_15305_129_3877099_x64
- Hotfix_7_0_16101_129_3889098_x64
- Hotfix_7_0_16102_129_3904127_x64
- Hotfix_7_0_16103_129_3904953_x64
For more information about these hotfixes, see the "FIXED_INCIDENTS" section in this Readme.

Install instructions
====================|
Download the appropriate cumulative public patch (CP) executable file to a temporary location on your system.

Note: You can also install the CP directly on the host machine, without downloading the executable file, by using the Veritas InfoScale Operations Manager (VIOM) server.

Each cumulative public patch includes the individual hotfixes that contain enhancements and fixes related to reported issues.
See the "FIXED_INCIDENTS" section for details.

Before you begin
----------------:
[1] Ensure that the logged-on user has privileges to install the CP on the systems. (An optional quick check is shown after this list.)
[2] One or more hotfixes that are included with this CP may require a reboot. Before proceeding with the installation, ensure that the system can be rebooted.
[3] Veritas recommends that you close the Cluster Manager (Java Console) and the Veritas Enterprise Administrator (VEA) Console before installing this CP.
[4] Ensure that you close the Windows Event Viewer before proceeding with the installation.
[5] If you plan to install the CP by using the Veritas InfoScale Operations Manager (VIOM) server, ensure that the VIOM server is deployed.
[6] If you install the CP by using the VIOM server, the machines on which you want to install the CP must be added as hosts.
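For item [1], one common (and entirely optional) way to confirm that the current command prompt is running with administrative privileges is the standard Windows check below. It is only a convenience sketch using built-in commands, not part of the official procedure; note that 'net session' can also fail if the Server service is stopped, which would give a false negative:

net session >nul 2>&1 && echo Running with administrative privileges || echo NOT running with administrative privileges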
Before you begin ----------------: [1] Ensure that the logged-on user has privileges to install the CP on the systems. [2] One or more hotfixes that are included with this CP may require a reboot. Before proceeding with the installation ensure that the system can be rebooted. [3] Veritas recommends that you close the Cluster Manager (Java Console) and the Veritas Enterprise Administrator (VEA) Console before installing this CP. [4] Ensure that you close the Windows Event Viewer before proceeding with the installation. [5] Ensure Veritas InfoScale Operations Manager server is deployed [6] In case of installing the CP using Veritas InfoScale Operations Manager (VIOM) Server, the machines must be added as host on which you want to install the CP. To install the CP in the silent mode -----------------------------------: Perform the following steps: [1] Double-click the CP executable file to start the CP installation. The installer performs the following tasks: - Extracts all the individual hotfix executable files The files are extracted at %commonprogramfiles(x86)%\Veritas Shared\WxRTPrivates\ - Runs the pre-install tasks - Installs all the hotfixes sequentially - Runs the post-install tasks The installation progress status is displayed in the command window. [2] After all the hotfixes are installed, the installer prompts you to restart the system. Type Y to restart the system immediately, or type N to restart the system later. You must restart the system for the changes to take effect. Note that the installer prompts for a system restart only if hotfixes that require a reboot are included in the CP and are installed. To install the CP using the command line ----------------------------------------: Use the VxHFBatchInstaller.exe utility to install a CP from the command line. The syntax options for this utility are as follows: vxhfbatchinstaller.exe /Patch: [/PreInstallScript:] [/silent [/suppress]] where, - CPName is the cumulative patch executable file name without the platform, architecture, and .exe extension. For example, if CP executable name is CP2_IS_70_W2K12_x64.exe, specify it as CP2_IS_70. - PreInstallScript.pl is the Perl script that includes the pre-installation steps. These steps forcefully kill the required services and processes in case a graceful stop request does not succeed. Veritas recommends that you use this option and script only in case the CP installer fails repeatedly while performing the pre-installation tasks. - /silent indicates the installation is run in a non-verbose mode; the installer does not prompt for any inputs during the installation. - /suppress indicates that the system is automatically restarted, if required, after the installation is complete. Perform the following steps: [1] From the command prompt, navigate to the directory where the CP executable file is located and then run the file to extract the contents to a default location on the system. The installer displays a list of hotfixes that are included in the CP. 
Perform the following steps:

[1] From the command prompt, navigate to the directory where the CP executable file is located and then run the file to extract the contents to a default location on the system. The installer displays a list of the hotfixes that are included in the CP.
- On 64-bit systems, the hotfix executable files are extracted to:
  "%commonprogramfiles(x86)%\Veritas Shared\WxRTPrivates\"

[2] In the same command window, run the following command to begin the CP installation in the silent mode:
vxhfbatchinstaller.exe /Patch:<CPName> /silent

For example, to install an IS 7.0 CP for Windows Server 2012, the command is:
vxhfbatchinstaller.exe /Patch:CP2_IS_70 /silent

The installer performs the following tasks:
- Extracts all the individual hotfix executable files
  The files are extracted at %commonprogramfiles(x86)%\Veritas Shared\WxRTPrivates\
- Runs the pre-install tasks
- Installs all the hotfixes sequentially
- Runs the post-install tasks
The installation progress status is displayed in the command window.

[3] After all the hotfixes are installed, the installer displays a message for restarting the system.
You must restart the system for the changes to take effect.
Note that the installer prompts for a system restart only if hotfixes that require a reboot are included in the CP and are installed. The installer automatically restarts the system if you specified the /suppress option in step [2] earlier.

VxHFBatchInstaller usage example
---------------------------------:
[+] Install the CP in the silent mode and restart the system automatically:

vxhfbatchinstaller.exe /Patch:CP2_IS_70 /silent /suppress

Post-install steps
==================|
The following section describes the steps that must be performed after installing the hotfixes included in this CP.

Ensure that the VIP_PATH environment variable is set to "C:\Program Files\Veritas\Veritas Object Bus\bin" and NOT to "C:\\Veritas Object Bus\bin". This assumes that C:\ is the default installation drive.
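One optional way to check and, if necessary, correct the variable from an elevated command prompt is shown below. This is a convenience sketch using built-in Windows commands, not a required procedure; the expected value assumes the default C:\ installation drive mentioned above, and the setx command is needed only if the displayed value is wrong (the new value applies to processes started after the change):

echo %VIP_PATH%
setx VIP_PATH "C:\Program Files\Veritas\Veritas Object Bus\bin" /M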
Known issues
============|
1. CP uninstallation from Add or Remove Programs (ARP) is not supported.
-------------------------------------------------------+

REMOVING THE PATCH
------------------
NO SPECIAL INSTRUCTIONS
--------------------

DISCLAIMER:
This fix is provided without warranty of any kind, including the warranties of title or implied warranties of merchantability, fitness for a particular purpose, and non-infringement. Veritas disclaims all liability relating to or arising out of this fix. It is recommended that the fix be evaluated in a test environment before implementing it in your production environment. When the fix is incorporated into an InfoScale for Windows maintenance release, the resulting Hotfix or Service Pack must be installed as soon as possible. Veritas Technical Services will notify you when the maintenance release (Hotfix or Service Pack) is available, if you sign up for notifications from the Veritas support site http://www.veritas.com/support and/or from Services Operations Readiness Tools (SORT) http://sort.veritas.com.

Additional notes
================|
[+] To confirm the list of cumulative patches installed on a system, run the following command from the directory where the CP files are extracted:
vxhfbatchinstaller.exe /list
The output of this command displays a list of cumulative patches and the hotfixes that are installed as part of a CP. This command also displays the hotfixes that are included in a CP but are not installed on the system.

[+] To confirm the installation of the hotfixes, perform one of the following:
- Run the following command:
  vxhf.exe /list
  The output of this command lists the hotfixes installed on the system.
- In the Windows Add/Remove Programs, click "View installed updates" to view the list of the hotfixes installed on the system.

[+] To confirm the latest cumulative patch installed on a system, run the following command from the directory where the CP files are extracted:
vxhfbatchinstaller.exe /patchlevel
The output of this command displays the latest CP that is installed, the CP status, and a list of all hotfixes that were a part of the CP but are not installed on the system.

[+] The CP installer (vxhfbatchinstaller.exe) creates and stores logs at:
"%allusersprofile%\Veritas\VxHF\VxHFBatchInstaller.txt"

[+] The hotfix installer (vxhf.exe) creates and stores logs at:
"%allusersprofile%\Veritas\VxHF\VxHF.txt"

[+] For general information about the hotfix installer (vxhf.exe), please refer to the following technote:
http://www.veritas.com/docs/000039694

[+] To view a list of hotfixes already installed on a system, please refer to the steps mentioned in the following technote:
http://www.veritas.com/docs/000039691

[+] For information on uninstalling a hotfix, please refer to the following technote:
http://www.veritas.com/docs/000039693

[+] For general information about the CP installer (VxHFBatchInstaller.exe), please refer to the following technote:
http://www.veritas.com/docs/000023795

OTHERS
------
NONE