vrp-centos68_x86_64-Patch-2.0.0.500

 Basic information
Release type: Patch
Release date: 2016-11-24
OS update support: None
Technote: None
Documentation: None
Popularity: 538 viewed    downloaded
Download size: 1.19 GB
Checksum: 3508608109

 Applies to one or more of the following products:
Resiliency Platform 2.0 On CentOS 6.8 x86-64

 Obsolete patches, incompatibilities, superseded patches, or other requirements:

This patch supersedes the following patches:

Patch                                    Status      Release date
vrp-centos68_x86_64-Patch-2.0.0.400      obsolete    2016-10-21
vrp-centos68_x86_64-Patch-2.0.0.300      obsolete    2016-10-06
vrp-centos68_x86_64-Patch-2.0.0.200      obsolete    2016-10-03
vrp-centos68_x86_64-Patch-2.0.0.100      obsolete    2016-09-11

 Fixes the following incidents:
3898032, 3898928, 3899841, 3900017, 3900209, 3901142, 3901884, 3902520, 3902614, 3902763, 3905661, 3905664, 3905669, 3905673, 3905676, 3905679, 3905712, 4891, 4945, 5080, 5083, 5084, 5085, 5086, 5113, 5122, 5125, 5172, 5310, 5348, 5425, 5447

 Patch ID:
None.

Readme file
                          * * * READ ME * * *
              * * * Veritas Resiliency Platform 2.0 * * *
                         * * * Patch 500 * * *
                         Patch Date: 2016-11-25


This document provides the following information:

   * PATCH NAME
   * OPERATING SYSTEMS SUPPORTED BY THE PATCH
   * PACKAGES AFFECTED BY THE PATCH
   * BASE PRODUCT VERSIONS FOR THE PATCH
   * SUMMARY OF INCIDENTS FIXED BY THE PATCH
   * DETAILS OF INCIDENTS FIXED BY THE PATCH
   * INSTALLATION PRE-REQUISITES
   * INSTALLING THE PATCH
   * REMOVING THE PATCH


PATCH NAME
----------
Veritas Resiliency Platform 2.0 Patch 500


OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
CentOS 6.8 x86-64



BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
   * Veritas Resiliency Platform 2.0


SUMMARY OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
Patch ID: 2.0.0.500
* 3905661 (3905662) Race condition found in the Linux kernel's memory 
subsystem.
* 3905664 (3905665) Empty lines within the Network Pairing wizard/popup screen.
* 3905669 (3905671) Need to refresh VBSs after modifying RGs or editing the 
dependency structure within VCS.
* 3905673 (3905674) Exceptions are not getting set on RGs in VBS post VBS edit.
* 3905676 (3905677) Monitor RGs in VBS are skipped from operations.
* 3905679 (3905680) Need to reflect changes in VBS if any of its RG gets 
deleted.
* 3905712 (3905713) Group 2 Resiliency Plan Execution Failure.
* 3899841 (3902821) RG/VBS state does not get cleared.
* 3901884 (3902824) Performance issues with RG.
* 3902520 (3902831) Resiliency Plan execution failed to abort.
* 3902614 (3902975) Cannot create resiliency group for virtual machines.
* 3902763 (3902835) Failed to create Bronze RG for ESX VM.
* 3900017 (3901443) Application and virtual machine based resiliency group in 
Virtual Business Service should be operated in the tier order.
* 3898928 (3901132) 3PAR unable to handle synchronising state during migrate
* 3900209 (3901135) VM shutdown in Stop RG operation times out if it takes more 
than 5 minutes
* 3901142 (3901139) Stop only global SG in InfoScale RG operations
* 3898032 (3898027) List of issues fixed in Patch.
* 5085 (5085) Replication was not considered while applying recover service objective.
* 4891 (4891) Ability to start every element of the stack after replication role change on the source DC.
* 4945 (4945) VRP Data Mover licensing should consider Per Giga Byte as well as Per Virtual Machine licensing
* 5113 (5113) Clustered Shared Volume (CSV) Object ID was changing after takeover, and the new ID was not available in the stored blob to perform the operation.
* 5080 (5080) Resync operation was successful but underlying CSV object was still present which caused the failure of next migrate operation.
* 5310 (5310) Add functionality for discovery and operations of VMware distributed virtual switches.
* 5084 (5084) In an InfoScale Availability (VCS) SG dependency where a local SG is the parent and a global SG is the child, DR is not supported.
* 5086 (5086) Rehearsal operation is not supported if a multi-level SG dependency is involved in InfoScale Availability (VCS).
* 5083 (5083) When there are multiple global SGs at the top level, multiple RGs can be configured using one SG each.
* 5348 (5348) Remote IMS calls happen twice for VIOM; because of this, custom scripts that are part of an RP are executed twice.
* 5425 (5425) Custom scripts show application hosts reported to VIOM as script targets; user confirmation should be required before allowing this.
* 5447 (5447) If all the nodes for a standalone application or an InfoScale Availability managed application stop heartbeating to IMS/VIOM, a risk is displayed on the RG managing that application and on the Risk Dashboard.
* 5172 (5172) When protection is configured using VRP datamover and the VMs are under heavy IO loads, the VM console may display buffer IO errors and the VM may become unresponsive.
* 5125 (5125) A CLISH option is provided to clear Admin Wait on the Replication Gateway.
* 5122 (5122) A new CLISH option in RM has been provided to upgrade VRP Datamover components on the ESX Cluster.


DETAILS OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
This patch fixes the following incidents:

Patch ID: 2.0.0.500

* 3905661 (Tracking ID: 3905662)

SYMPTOM:
A race condition was found in the way the Linux kernel's memory 
subsystem handled the copy-on-write (COW) breakage of private read-only memory 
mappings.

DESCRIPTION:
An unprivileged local user could use this flaw to gain write 
access to otherwise read-only memory mappings and thus increase their 
privileges on the system. This could be abused by an attacker to modify 
existing setuid files with instructions to elevate privileges.

RESOLUTION:
CentOS RPMs have been updated to the fixed version.
For the changes to take effect, the appliance needs a reboot after the upgrade.

* 3905664 (Tracking ID: 3905665)

SYMPTOM:
When a VMware VM connected to a dvSwitch is migrated without VLAN mapping, the 
VM NICs on the target side get disconnected but carry stale dvSwitch 
information, causing invalid dvSwitch objects to be created. These objects also 
have no name and thus appear as empty lines in the pairing wizard.

DESCRIPTION:
The disconnected VM NICs carrying the stale dvSwitch information 
is an issue.

RESOLUTION:
If the VM NIC was earlier connected to a dvSwitch on the primary site and is 
now in a disconnected state, its dvSwitch uuid is no longer reported as part of 
NIC discovery.

* 3905669 (Tracking ID: 3905671)

SYMPTOM:
After an edit resiliency group operation, a manual edit is needed for all the 
virtual business services that contain this resiliency group.

DESCRIPTION:
This was because the changes in resiliency group configuration 
were not properly reflected to the virtual business service definition.

RESOLUTION:
The fix carries all changes made in the resiliency group into the virtual 
business service definition and also regenerates all the operation workflows of 
that virtual business service. No manual edit of the virtual business service 
is needed after editing a resiliency group; it is handled internally.

* 3905673 (Tracking ID: 3905674)

SYMPTOM:
After an edit operation on a virtual business service, if there are any 
exceptions (Optional/Starts after) set on any of its resiliency groups, they do 
not get reflected after the operation is submitted.

DESCRIPTION:
The exceptions set on resiliency groups in a virtual business 
service were getting skipped from the definition on an edit operation.

RESOLUTION:
Made the fix to save the changes made to exceptions set on 
resiliency groups on edit operation of virtual business service.

* 3905676 (Tracking ID: 3905677)

SYMPTOM:
If a virtual business service contains resiliency groups with Monitor 
service objective applied, such resiliency groups cannot be considered for 
disaster recovery operations. Ideally, they have to be stopped (if configured 
on source data centre) or started (if configured on target data centre).
But they were completely getting skipped from disaster recovery operations.

DESCRIPTION:
Only resiliency groups with recover service objective applied 
were being considered for disaster recovery operations.

RESOLUTION:
Made the fix to consider monitor resiliency groups and based on 
the data centre on which they are configured, added start/stop for them in the 
virtual business service disaster recovery operations.

* 3905679 (Tracking ID: 3905680)

SYMPTOM:
If any of the resiliency groups in a virtual business service is 
deleted, changes should be reflected in the definition of the virtual business 
service.

DESCRIPTION:
Virtual business service was unaware of the resiliency groups in 
it getting deleted.

RESOLUTION:
The fix listens for the resiliency group delete event and regenerates all 
operation workflows for the virtual business service. If the resiliency group 
being deleted is the last one in a virtual business service, the virtual 
business service itself is deleted automatically.

* 3905712 (Tracking ID: 3905713)

SYMPTOM:
Migrate VBS action during Resiliency Plan execution fails with "Login 
Failed" error.

DESCRIPTION:
The failure occurs during concurrent IMS API execution where they 
try to update the preferences view during post login.

RESOLUTION:
The preferences view updates are not required for IMS. Hence the 
post-login processing was removed, thereby avoiding the concurrent updates to 
the raw view that caused the failure.

* 3899841 (Tracking ID: 3902821)

SYMPTOM:
The RG/VBS state does not get cleared if VBS operation fails when 
run through RP.

DESCRIPTION:
The wrapper VBS task in RP does not get created and hence the
callback (which takes care of resetting the RG/VBS state) does not get 
called.

RESOLUTION:
We have added the wrapper task which clears the RG/VBS states post 
VBS operation failure in RP.

* 3901884 (Tracking ID: 3902824)

SYMPTOM:
An RG with InfoScale application(s) where the application Service Group is 
part of a multiple Service Group dependency takes more time in the 
Migrate/Takeover operation.

DESCRIPTION:
We wait for incrementally increasing intervals (3 sec, 6, 12, ...) to check the 
SG state after firing an online/offline operation on the SG, which may make us 
wait longer than the SG actually takes to go online/offline.

RESOLUTION:
We have added two optimisations:
1) Do not wait for incrementally increasing intervals (3 sec, 6, 12, ...); 
instead, wait for a constant interval of 5 sec to check the SG state after 
firing an online/offline operation on the SG.
2) Trigger the online/offline operation on all SGs at the same level and then 
monitor their state. Earlier, we triggered the online/offline operation on one 
SG at a time, monitored its state, and so on.

* 3902520 (Tracking ID: 3902831)

SYMPTOM:
When a VBS start/stop executes as part of an RP, if any of the RGs in 
this VBS was edited before the RP execution, the VBS workflow in the RP aborts 
but the VBS/RG states remain as is.

DESCRIPTION:
Post RG edit operation, VBS operation workflows are generated. 
The workflow version with which these operation workflows are regenerated is 
incorrect. Due to this, the RP tries to execute the VBS workflow with the 
wrong version.

RESOLUTION:
We have corrected the version with which the VBS workflows are 
regenerated post RG edit operation.

* 3902614 (Tracking ID: 3902975)

SYMPTOM:
RG creation fails at Generate Orchestration step where it complains 
of an error while fetching information for VMware.

DESCRIPTION:
There are a large number of lun objects being discovered, which is 
causing scale issues with the query to find corresponding ESX servers for a 
given set of luns in the VMware service.

RESOLUTION:
Optimise the query to get ESX for the given luns by not putting the 
_vrp_type=vrp_lun clause, which causes problems when we have a large number of 
lun objects in the database.

* 3902763 (Tracking ID: 3902835)

SYMPTOM:
RG creation fails at the Generate Orchestration step, where errors appear in 
the log while fetching and processing networking information.

DESCRIPTION:
There were some null entries for ecosystems and technologies that were not 
handled properly.

RESOLUTION:
Added null checks at appropriate places to make sure the processing continues.

* 3900017 (Tracking ID: 3901443)

SYMPTOM:
Virtual Business Service stops an application-based Resiliency Group before a 
Virtual Machine-based Resiliency Group, even if they are placed in a lower 
tier.

DESCRIPTION:
Tier ordering is not honored for a mix of Application-based and Virtual 
Machine-based Resiliency Groups in a Virtual Business Service.

RESOLUTION:
The tier order is now honored for a mix of Application-based and Virtual 
Machine-based Resiliency Groups while performing operations on a Virtual 
Business Service.

* 3898928 (Tracking ID: 3901132)

SYMPTOM:
Migrate operation fails if 3PAR replication state is in synchronising state

DESCRIPTION:
When the migrate operation is run from VRP and the 3PAR replication state is 
synchronising, the migrate operation fails with the error 'Invalid remote copy 
status'.

RESOLUTION:
The 3PAR sync operation now waits if the consistency group is in the syncing 
state and then runs syncrcopygroup again.
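
As an illustration only (the remote copy group name below is a placeholder, 
and the exact CLI syntax may vary by 3PAR release), the retry behaviour 
corresponds to checking the remote copy group state and re-issuing the 
synchronisation once the group is no longer syncing:

    cli% showrcopy groups RCG_example      # check the current remote copy group state
    cli% syncrcopygroup RCG_example        # re-run synchronisation after the syncing state clears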

* 3900209 (Tracking ID: 3901135)

SYMPTOM:
VM shutdown in Stop RG operation times out if it takes more than 5 
minutes

DESCRIPTION:
If a VM shutdown operation takes more than 5 minutes, the RG operation fails 
to complete.

RESOLUTION:
Increased timeout value to 60 minutes

* 3901142 (Tracking ID: 3901139)

SYMPTOM:
RG operations stopping all SGs in InfoScale RG

DESCRIPTION:
The default behaviour of InfoScale RG was to stop all SGs in it

RESOLUTION:
Changed default behaviour not to stop local SGs in any RG operations

* 3898032 (Tracking ID: 3898027)

SYMPTOM:
Patch fixes issues in VRP.

DESCRIPTION:
More details about individual issues can be accessed using JIRA

-------------
JIRA IDs
-------------
SANTAMARIA-5085
SANTAMARIA-4891
SANTAMARIA-4945
SANTAMARIA-5113
SANTAMARIA-5080
SANTAMARIA-5310
SANTAMARIA-5084
SANTAMARIA-5086
SANTAMARIA-5083
SANTAMARIA-5348
SANTAMARIA-5425
SANTAMARIA-5447
SANTAMARIA-5172
SANTAMARIA-5125
SANTAMARIA-5122

RESOLUTION:
Not applicable

* 5085 (Tracking ID: 5085)

SYMPTOM:
Application can be configured with recover service objective even if replication is not configured

DESCRIPTION:
Even if an application had no replication configured, it could be configured with the recover service objective. This allowed Takeover and Rehearsal operations, which do not mean anything for an application that has no replication configured.

RESOLUTION:
The recover service objective is now not allowed if there is no replication configured for the application.

* 4891 (Tracking ID: 4891)

SYMPTOM:
Replication is already active (source) on the DC. The entire stack after replication needs to be started.

DESCRIPTION:
Start operation provides an option to "start" every element of the stack after replication role change. It includes storage, network, compute and customization. Deep start is possible and relevant when data/replication is already active (source) on that DC. It is recommended to use this option if it is known that a refresh of the entire stack is needed.

RESOLUTION:
select the "start compute, storage and network" option in start operation.

Note: After the upgrade the Recover RG should be configured again (edit RG) for this option to be available.

* 4945 (Tracking ID: 4945)

SYMPTOM:
License counts do not change for Per Virtual Machine

DESCRIPTION:
While counting licenses for workloads that are protected using VRP Data Mover, two counts need to be considered. One is Per Giga Byte for the used file system sizes being replicated; the other is Per Virtual Machine for the number of Virtual Machines being protected. The Per Virtual Machine counts were not being properly accounted for.

RESOLUTION:
Upon upgrade, the daily license schedule will correct the Per Virtual Machine counts. Alternatively, the Per Virtual Machine counts are corrected when the RG is edited.
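
For illustration only (the numbers are hypothetical): protecting three virtual machines whose replicated file systems use 40 GB, 60 GB, and 100 GB would be counted as 200 GB under Per Giga Byte licensing and as 3 under Per Virtual Machine licensing; with this fix both counts are reported.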

* 5113 (Tracking ID: 5113)

SYMPTOM:
CSV goes into unknown state after takeover.

DESCRIPTION:
In a real disaster scenario, when Hyper-V servers are brought up after recovery, the Clustered Shared Volume (CSV) Object ID changes after takeover. This change was not getting updated in the database. Since we were using the old Object ID, we were unable to perform the resync operation successfully.

RESOLUTION:
Use cluster resource Id instead of CSV Object Id to perform the unmount operation.

* 5080 (Tracking ID: 5080)

SYMPTOM:
CSV object is in unknown state even after resync.

DESCRIPTION:
In a real disaster scenario, when Hyper-V servers are brought up after recovery, the Clustered Shared Volume (CSV) Object ID changes after takeover. This change was not getting updated in the database. Since we were using the old Object ID, we were unable to perform the resync operation successfully. We perform the resync operation after takeover. Even though the resync operation showed success, the CSV was still in an unknown state. When we tried the migrate operation, since the CSV was still present, mounting the CSVFS volume failed.

RESOLUTION:
Use cluster resource Id instead of CSV Object Id to perform the unmount operation.

* 5310 (Tracking ID: 5310)

SYMPTOM:
Distributed virtual switches and their port groups are not discovered and hence cannot be mapped.

DESCRIPTION:
Virtual switch discovery for VMware did not include DVS discovery. Since DVS and its port groups were not discovered, those were also not listed for network mapping.

RESOLUTION:
Added support to discover DV Switches and their port groups.

* 5084 (Tracking ID: 5084)

SYMPTOM:
DR operations fail at the VCS side for such an SG dependency (a local SG as parent with a global SG as child).

DESCRIPTION:
DR operations were allowed for an RG with such an SG dependency, even though these operations would always fail. No validation was in place.

RESOLUTION:
Added a validation during RG creation. If this type of dependency is present in the GD object, the recover SO is not applied to the RG (it is not shown in the list of applicable SOs in the create RG wizard).

* 5086 (Tracking ID: 5086)

SYMPTOM:
Both the rehearsal and cleanup rehearsal operations fail when a multi-level SG dependency is involved.

DESCRIPTION:
InfoScale Availability (VCS) integration is via VIOM and VIOM is not aware of the firedrill SGs for the whole dependency of SGs. That is why the operation doesn't succeed as a whole.

RESOLUTION:
Added a validation during configuration of the RG. If the validation fails, rehearsal and cleanup rehearsal are not available for this scenario.

* 5083 (Tracking ID: 5083)

SYMPTOM:
Multiple RGs for a single group dependency can be configured and orchestrated, resulting in delays and timing issues.

DESCRIPTION:
The VCS discovery at VIOM did not fetch the SG information from the SG dependency set in the cluster. Also, it did not have support for the multiple replication based SGs present as part of the dependency.

RESOLUTION:
Added validation so that all top level global SGs in a group dependency are added to the RG.

* 5348 (Tracking ID: 5348)

SYMPTOM:
Custom scripts that are part of an RP execute twice for remote VIOM servers.

DESCRIPTION:
The Assets Lookup table has two entries for VIOM; due to this, remote API calls happen twice for VIOM.

RESOLUTION:
Modified the command facade logic to send the Remote API call only once even if there are multiple entries in the lookup table.

* 5425 (Tracking ID: 5425)

SYMPTOM:
Custom scripts show application hosts reported to VIOM as script targets.

DESCRIPTION:
VIOM Managed Hosts is a different management and authorization domain than VRP. A VRP user adds VIOM to VRP for orchestrating DR of VCS (InfoScale Availability) managed applications. As such, the authorized interactions are the VIOM APIs for VCS Service Groups, apart from visibility and discovery. Although assets on VIOM managed hosts are discovered, they are used only for internal logic. The problem here is that the managed hosts were made available as script execution hosts for Resiliency Plan script tasks. Note that the VRP user then has access to scripts which can run as root on the VIOM managed hosts.

RESOLUTION:
At the time of adding VIOM, a checkbox "Allow managed hosts in this VIOM domain to execute Resiliency Plan scripts" is displayed. It is unchecked by default. This property can also be toggled when editing the VIOM server.

Only those VIOM managed hosts for which permission has been explicitly provided are visible as script execution targets.

* 5447 (Tracking ID: 5447)

SYMPTOM:
When all the nodes for a standalone application or an InfoScale Availability managed application are down, there is no indication of that in VRP.

DESCRIPTION:
No risk is generated in VRP to indicate that all hosts for the application are unreachable from VIOM/IMS. In this case the user does not know of the problem and cannot decide whether to check the network connectivity of the hosts from VIOM/IMS, or to check whether the hosts are available and the xprtld process is running on them.

RESOLUTION:
A risk is now generated in VRP to indicate that all hosts for the application are unreachable from VIOM/IMS. If this risk is present on applications and their container RG, certain RG operations will fail to execute.

* 5172 (Tracking ID: 5172)

SYMPTOM:
Under heavy IO loads, the virtual machine may become unresponsive and the console may display buffer IO errors.

DESCRIPTION:
There are various optimizations done in the VRP datamover component for enhanced performance and scalability.

RESOLUTION:
The VRP datamover replication now continues without any errors under heavy IO loads.

* 5125 (Tracking ID: 5125)

SYMPTOM:
Due to several possible issues in the Replication Gateway, an Admin Wait error may be displayed at the RG level.

DESCRIPTION:
Due to several issues, including disk errors on the Replication Gateway, the replication state may display an Admin Wait error. After the issues are fixed, there is a CLISH option to clear Admin Wait on that Replication Gateway. Refer to the product documentation for the detailed steps.

RESOLUTION:
An enhanced method is provided via a CLISH option to clear Admin Wait on the Replication Gateway.

* 5122 (Tracking ID: 5122)

SYMPTOM:
There were no options available in the product to upgrade the VRP datamover components.

DESCRIPTION:
If the RM is upgraded to VRP 2.0 HF1, the procedure to upgrade the VRP Datamover component is documented in the product guide.

RESOLUTION:
A new CLISH option has been added to upgrade the VRP Datamover components on the ESX Cluster.



INSTALLING THE PATCH
--------------------
DOWNLOADING THE PATCH

This Patch is applicable for VRP 2.0

1. Download repository bundle using 'Setup Repository Bundle Download' link on patch page

* Veritas_Resiliency_Platform_Setup_Repository_2.0.0.500.tar.gz
      This bundle contains setup_conf_repo.pl and other files used for setting up the patch on the repository server

2. Download the file vrp-centos68_x86_64-Patch-2.0.0.500.tar.gz to your repository server
3. Extract the file vrp-centos68_x86_64-Patch-2.0.0.500.tar.gz on your repository server (a sample command sequence follows this list)
4. Multiple files will be extracted within the patches folder:

* Veritas_Resiliency_Platform_Upgrade_Bundle_2.0.0.500.tar.gz
      The file to be configured using setup_conf_repo.pl on repository server for RM+IMS fixes

* Veritas_DataMover_Upgrade_Bundle_2.0.0.500.tar.gz
      The file to be configured using setup_conf_repo.pl on repository server for DataMover fixes

* VRTSsfmitrp-2.0.0.500.sfa
      This add-on is to be uploaded via the VIOM management server UI when InfoScale support from VRP is used

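A sample extraction sequence on the repository server is shown below for illustration only (the working directory and shell prompt are examples; run setup_conf_repo.pl against the extracted bundles as described above):

    # Example only: extract the patch bundle and list the extracted files
    tar -xzf vrp-centos68_x86_64-Patch-2.0.0.500.tar.gz
    ls patches/
    # Expected contents (per the list above):
    #   Veritas_Resiliency_Platform_Upgrade_Bundle_2.0.0.500.tar.gz
    #   Veritas_DataMover_Upgrade_Bundle_2.0.0.500.tar.gz
    #   VRTSsfmitrp-2.0.0.500.sfa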

INSTALLATION INSTRUCTIONS
--------------------------------

Perform the following steps in given order to apply the patch:

1. Check the Veritas Resiliency Platform's Deployment Guide for detailed instructions on updating the resiliency platform

2. Stop environmental services

  * Stop all the IMS services at production data center through CLISH:
        manage>  services ims stop ALL

  * Stop all the Resiliency Manager services at production data center through CLISH:
        manage>  services rm stop ALL
                
  * Stop all the IMS services at recovery data center through CLISH:
        manage>  services ims stop ALL

  * Stop all the Resiliency Manager services at recovery data center through CLISH:
        manage>  services rm stop ALL


3. Apply update on Resiliency Manager and IMS

  * Apply update on Resiliency Manager at production data center
  * Apply update on Resiliency Manager at recovery data center
  * Apply update on IMS at production data center
  * Apply update on IMS at recovery data center

      For information on how to apply update on virtual appliances, see the "Updating Resiliency Platform" chapter in Veritas Resiliency Platform 2.0 Deployment Guide.


4. If a Veritas InfoScale Operations Manager server is added to Resiliency Manager, apply update on Veritas Resiliency Platform Enablement add-on (VRTSsfmitrp) for Veritas InfoScale Operations Manager:

  * Log into Veritas InfoScale Operations Manager server. 
        * Go to Settings > Deployment > Upload Solutions and upload the .sfa file that you downloaded and extracted within the patches folder.

  * Right click and install the VRTSsfmitrp-2.0.0.500.sfa on Veritas InfoScale Operations Manager server.

  * Click "Restart Web Server" on the task bar.

  * Log into Veritas InfoScale Operations Manager server. 
        * Go to Settings > Deployment and install the VRTSsfmitrp-2.0.0.500.sfa on the required hosts.

  * Log in to the Resiliency Manager console.

  * Remove the Veritas InfoScale Operations Manager servers from production as well as recovery data center once and then add them again.

  * Edit the resiliency groups that were created before applying the update. You will be prompted to include all the service groups belonging to a single group dependency. 


5. If any resiliency group has already been configured using Resiliency Platform Data Mover before applying update to IMS, you need to apply update to Resiliency Platform Data Mover bundle:
                
  * Use the following command of CLISH:
       * manage>  datamover vmware-iofilter upgrade

  * The command will display a list of applicable hosts, and you need to enter the name of the host where you want to upgrade the Resiliency Platform Data Mover bundle.

  Note: If automatic DRS is not enabled, you need to put the ESX hosts into maintenance mode to go ahead with the bundle update.


REMOVING THE PATCH
------------------
Not applicable


SPECIAL INSTRUCTIONS
--------------------
Note #1. For incident 3905661, a reboot of the server is required for the fixed new kernel to be used. The older kernel can be removed using the "yum remove [older kernel version]" command; see the illustrative commands below.
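
For illustration only (the kernel package name below is a placeholder; substitute the actual older version installed on the appliance):

    uname -r                            # confirm the appliance is running the new kernel after reboot
    rpm -q kernel                       # list all installed kernel packages
    yum remove kernel-<older-version>   # remove the older kernel package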

Note #2. For already created VBSs that have both Monitor SO and Recover SO RGs, Monitor SO RGs were skipped during DR operations (HF4 issue fixed in HF5). To exercise the fix for the above issue, these VBSs need to be edited after the HF5 upgrade.


OTHERS
------
NONE