infoscale-sles15_x86_64-Patch-8.0.2.1200

 Basic information
Release type: Patch
Release date: 2023-09-26
OS update support: SLES15 x86-64 SP 4
Technote: None
Documentation: None
Download size: 358.09 MB
Checksum: 1540483815

 Applies to one or more of the following products:
InfoScale Availability 8.0.2 On SLES15 x86-64
InfoScale Enterprise 8.0.2 On SLES15 x86-64
InfoScale Foundation 8.0.2 On SLES15 x86-64
InfoScale Storage 8.0.2 On SLES15 x86-64

 Obsolete patches, incompatibilities, superseded patches, or other requirements:
None.

 Fixes the following incidents:
4113391, 4119267, 4120300, 4121230, 4123065, 4123069, 4123080, 4124086, 4124291, 4124702, 4124794, 4124796, 4124960, 4124963, 4124964, 4124966, 4124968, 4125162, 4125322, 4125392, 4125811, 4125870, 4125871, 4125873, 4125875, 4125878, 4125891, 4125895, 4126104, 4126262, 4126266, 4127509, 4127510, 4127518, 4127519, 4127524, 4127525, 4127527, 4127528, 4127594, 4127720, 4127785, 4128127, 4128249, 4128723, 4128835, 4128886, 4129494, 4129664, 4129681, 4129708, 4129715, 4129766, 4129838, 4130206, 4130402, 4130816, 4130827, 4130947, 4131320, 4133009, 4133131, 4133132, 4133133

 Patch ID:
VRTSrest-3.0.10-linux
VRTSllt-8.0.2.1200-SLES15
VRTSvxfen-8.0.2.1200-SLES15
VRTSdbac-8.0.2.1100-SLES15
VRTSgab-8.0.2.1200-SLES15
VRTSamf-8.0.2.1200-SLES15
VRTSvcs-8.0.2.1200-SLES15
VRTSvcsag-8.0.2.1200-SLES15
VRTSsfmh-8.0.2.111_Linux.rpm
VRTSvxfs-8.0.2.1200-SLES15
VRTSodm-8.0.2.1200-SLES15
VRTSglm-8.0.2.1200-SLES15
VRTSgms-8.0.2.1200-SLES15
VRTSveki-8.0.2.1200-SLES15
VRTSvxvm-8.0.2.1200-SLES15
VRTSaslapm-8.0.2.1200-SLES15

Readme file
                          * * * READ ME * * *
                      * * * InfoScale 8.0.2 * * *
                         * * * Patch 1200 * * *
                         Patch Date: 2023-09-14


This document provides the following information:

   * PATCH NAME
   * OPERATING SYSTEMS SUPPORTED BY THE PATCH
   * PACKAGES AFFECTED BY THE PATCH
   * BASE PRODUCT VERSIONS FOR THE PATCH
   * SUMMARY OF INCIDENTS FIXED BY THE PATCH
   * DETAILS OF INCIDENTS FIXED BY THE PATCH
   * INSTALLATION PRE-REQUISITES
   * INSTALLING THE PATCH
   * REMOVING THE PATCH
   * KNOWN ISSUES


PATCH NAME
----------
InfoScale 8.0.2 Patch 1200


OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
SLES15 x86-64


PACKAGES AFFECTED BY THE PATCH
------------------------------
VRTSamf
VRTSaslapm
VRTSdbac
VRTSgab
VRTSglm
VRTSgms
VRTSllt
VRTSodm
VRTSrest
VRTSsfmh
VRTSvcs
VRTSvcsag
VRTSveki
VRTSvxfen
VRTSvxfs
VRTSvxvm


BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
   * InfoScale Availability 8.0.2
   * InfoScale Enterprise 8.0.2
   * InfoScale Foundation 8.0.2
   * InfoScale Storage 8.0.2


SUMMARY OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
Patch ID: VRTSveki-8.0.2.1200
* 4130816 (4130815) Generate and add changelog in VEKI rpm
Patch ID: VRTSveki-8.0.2.1100
* 4120300 (4110457) Veki packaging was failing due to a dependency
Patch ID: VRTSrest-3.0.10
* 4124960 (4130028) GET APIs for VM and file system failed because of a datatype mismatch between the spec and the actual output when client code is generated from the specs
* 4124963 (4127170) The API to modify the system list of a service group failed when the service group had a dependency
* 4124964 (4127167) The -force option is now used by default when deleting an RVG, and a new -online option is added in PATCH of an RVG
* 4124966 (4127171) The Systems API returned nodelist instead of nodename in the href for excluded disks
* 4124968 (4127168) A GET request on RVGs did not list all data volumes correctly
* 4125162 (4127169) The GET disks API failed when CVM was down on any node
Patch ID: VRTSsfmh-vom-HF0802111
* 4131320 (4131319) vxdcli status shows 'RUNNING_NEEDS_RESTART' on RHEL9U2 platform.
Patch ID: VRTSvcsag-8.0.2.1200
* 4130206 (4127320) The ProcessOnOnly agent fails to bring online a resource when a user shell is set to /sbin/nologin.
Patch ID: VRTSvcs-8.0.2.1200
* 4113391 (4124956) GCO configuration with hostname is not working.
Patch ID: VRTSvcs-8.0.2.1100
* 4124702 (4103073) Upgraded the Netsnmp component to fix security vulnerabilities.
Patch ID: VRTSdbac-8.0.2.1100
* 4133133 (4133130) Veritas InfoScale Availability qualification for the latest SLES15 kernels is provided.
Patch ID: VRTSvxfen-8.0.2.1200
* 4124086 (4124084) Security vulnerabilities exist in the Curl third-party components used by VCS.
* 4125891 (4113847) Support for an even number of coordination disks in CVM disk-based fencing
* 4125895 (4108561) Reading vxfen reservations did not work
Patch ID: VRTSamf-8.0.2.1200
* 4133131 (4133130) Veritas InfoScale Availability qualification for the latest SLES15 kernels is provided.
Patch ID: VRTSllt-8.0.2.1200
* 4128886 (4128887) During rmmod of llt package, warning trace is observed on kernel versions higher than 5.14 on RHEL9 and SLES15.
Patch ID: VRTSgab-8.0.2.1200
* 4133132 (4133130) Veritas InfoScale Availability qualification for the latest SLES15 kernels is provided.
Patch ID: VRTSvxvm-8.0.2.1200
* 4119267 (4113582) In VVR environments, reboot on VVR primary nodes results in RVG going into passthru mode.
* 4123065 (4113138) 'vradmin repstatus' invoked on the secondary site shows stale information
* 4123069 (4116609) VVR Secondary logowner change is not reflected with virtual hostnames.
* 4123080 (4111789) VVR does not utilize the network provisioned for it.
* 4124291 (4111254) vradmind dumps core while associating an rlink to an RVG because of a NULL pointer dereference.
* 4124794 (4114952) With virtual hostnames, pause replication operation fails.
* 4124796 (4108913) Vradmind dumps core because of memory corruption.
* 4125392 (4114193) 'vradmin repstatus' incorrectly shows replication status as inconsistent.
* 4125811 (4090772) vxconfigd/vx commands hung when fdisk opened a secondary volume and the secondary logowner panicked.
* 4128127 (4132265) Machines with attached NVMe devices may panic.
* 4128835 (4127555) Unable to configure replication using diskgroup id.
* 4129766 (4128380) With virtual hostnames, 'vradmin resync' command may fail if invoked from DR site.
* 4130402 (4107801) /dev/vx/.dmp hardware path entries are not getting created on SLES15SP3 onwards.
* 4130827 (4098391) Continuous system crash is observed during VxVM installation.
* 4130947 (4124725) With virtual hostnames, 'vradmin delpri' command may hang.
* 4129664 (4129663) Generate and add changelog in vxvm rpm
Patch ID: VRTSaslapm-8.0.2.1200
* 4133009 (4133010) Generate and add changelog in aslapm rpm
Patch ID: VRTSvxvm-8.0.2.1100
* 4125322 (4119950) Security vulnerabilities exists in third party components [curl and libxml].
Patch ID: VRTSgms-8.0.2.1200
* 4126266 (4125932) no symbol version warning for ki_get_boot in dmesg after SFCFSHA configuration.
* 4127527 (4107112) When finding a GMS module with a version matching the kernel version, the kernel-build number needs to be considered.
* 4127528 (4107753) If a GMS module with a version matching the kernel version is not present, the kernel-build number needs to be considered to calculate the best-fit module.
* 4129708 (4129707) Generate and add changelog in GMS rpm
Patch ID: VRTSglm-8.0.2.1200
* 4127524 (4107114) When finding a GLM module with a version matching the kernel version, the kernel-build number needs to be considered.
* 4127525 (4107754) If a GLM module with a version matching the kernel version is not present, the kernel-build number needs to be considered to calculate the best-fit module.
* 4129715 (4129714) Generate and add changelog in GLM rpm
Patch ID: VRTSodm-8.0.2.1200
* 4126262 (4126256) no symbol version warning for VEKI's symbol in dmesg after SFCFSHA configuration
* 4127518 (4107017) When finding an ODM module with a version matching the kernel version, the kernel-build number needs to be considered.
* 4127519 (4107778) If an ODM module with a version matching the kernel version is not present, the kernel-build number needs to be considered to calculate the best-fit module.
* 4129838 (4129837) Generate and add changelog in ODM rpm
Patch ID: VRTSvxfs-8.0.2.1200
* 4121230 (4119990) Recovery stuck while flushing and invalidating the buffers
* 4125870 (4120729) Incorrect file replication (VFR) job status at the VFR target site while replication is in the running state at the source.
* 4125871 (4114176) After failover, job sync fails with error "Device or resource busy".
* 4125873 (4108955) VFR job hangs on source if thread creation fails on target.
* 4125875 (4112931) vxfsrepld consumes a lot of virtual memory when it has been running for long time.
* 4125878 (4096267) Veritas File Replication jobs might fail when a large number of jobs run in parallel.
* 4126104 (4122331) Enhanced the VxFS error messages that are logged while marking a bitmap or inode as "BAD".
* 4127509 (4107015) When finding a VxFS module with a version matching the kernel version, the kernel-build number needs to be considered.
* 4127510 (4107777) If a VxFS module with a version matching the kernel version is not present, the kernel-build number needs to be considered to calculate the best-fit module.
* 4127594 (4126957) System crashes with VxFS stack.
* 4127720 (4127719) Added fallback logic in fsdb binary and made changes to fstyp binary such that it now dumps uuid.
* 4127785 (4127784) Earlier, the fsppadm binary gave only a warning for an invalid UID or GID number. After this change, providing an invalid UID/GID, e.g. "1ABC" (UID/GID values are always numbers), results in an error and parsing stops.
* 4128249 (4119965) VxFS mount binary failed to mount VxFS with SELinux context.
* 4128723 (4114127) Hang in VxFS internal LM Conformance - inotify test
* 4129494 (4129495) Kernel panic observed in internal VxFS LM conformance testing.
* 4129681 (4129680) Generate and add changelog in VxFS rpm


DETAILS OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
This patch fixes the following incidents:

Patch ID: VRTSveki-8.0.2.1200

* 4130816 (Tracking ID: 4130815)

SYMPTOM:
The VEKI rpm does not have a changelog.

DESCRIPTION:
A changelog in the rpm helps identify missing incidents with respect to other versions.

RESOLUTION:
A changelog is generated and added to the VEKI rpm.
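
Once a changelog is present in the rpm, it can be inspected with standard rpm tooling, for example:

    # rpm -q --changelog VRTSveki | head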

Patch ID: VRTSveki-8.0.2.1100

* 4120300 (Tracking ID: 4110457)

SYMPTOM:
Veki packaging failed due to missing storageapi-specific files.

DESCRIPTION:
While creating the build area for components such as GLM, GMS, ORAODM, unixvm, and VxFS, the Veki build area creation failed because the storageapi changes were not handled in the Veki mk-symlink and build scripts.

RESOLUTION:
Added support for creating the storageapi build area, for the storageapi packaging changes via Veki, and for the storageapi build via Veki from the Veki makefiles. This helps package storageapi along with Veki and resolves all interdependencies.

Patch ID: VRTSrest-3.0.10

* 4124960 (Tracking ID: 4130028)

SYMPTOM:
GET APIs for VM and file system failed because of a datatype mismatch between the spec and the actual output when client code is generated from the specs.

DESCRIPTION:
The GET API returned a response different from what was specified in the specs.

RESOLUTION:
Changed the response of the VM and FS GET APIs to match the specs. After these changes, client-generated code does not get an error.

* 4124963 (Tracking ID: 4127170)

SYMPTOM:
The API to modify the system list of a service group failed when the service group had a dependency.

DESCRIPTION:
The API to modify the system list failed when a service-group dependency existed, so the system list could not be modified if the service group depended on another service group.

RESOLUTION:
The API code is modified so that the system list of a service group can be modified even when a dependency exists.

* 4124964 (Tracking ID: 4127167)

SYMPTOM:
DELETE of an RVG failed when replication was in progress.

DESCRIPTION:
DELETE of an RVG failed when replication was in progress, so the deletion needed the -force option for the RVG to be deleted successfully. In addition, PATCH of an RVG needed an explicit way for the user to request an online volume add.

RESOLUTION:
The -force option is now used during deletion so that the RVG is deleted successfully, and a new online option is added in PATCH of an RVG so that the user can explicitly request an online volume add.

* 4124966 (Tracking ID: 4127171)

SYMPTOM:
The Systems API returned nodelist instead of nodename in the href for excluded disks. When the user tried a GET on that link, the request failed.

DESCRIPTION:
The GET system list API returned wrong reference links for excluded disks, so a GET on such a link failed.

RESOLUTION:
The GET system API now returns the correct href for excluded disks.

* 4124968 (Tracking ID: 4127168)

SYMPTOM:
A GET request on RVGs did not list all the data volumes in the RVGs correctly.

DESCRIPTION:
The command used to get the list of data volumes of an RVG did not return all the data volumes, because of which the API did not return all the data volumes of the RVG.

RESOLUTION:
Changed the command used to get the data volumes of an RVG. A GET on an RVG now returns all the data volumes associated with that RVG.

* 4125162 (Tracking ID: 4127169)

SYMPTOM:
The GET disks API failed when CVM was down on any node.

DESCRIPTION:
When a node was out of the CVM cluster, the GET disks API failed and did not give proper output.

RESOLUTION:
Used the appropriate checks to get the proper list of disks from the GET disks API.

Patch ID: VRTSsfmh-vom-HF0802111

* 4131320 (Tracking ID: 4131319)

SYMPTOM:
vxdcli status shows 'RUNNING_NEEDS_RESTART' on RHEL9U2 platform.

DESCRIPTION:
VxlistPluginVersion was not updated in the dcli_ini.conf file on dcli service start, because of which the status of dcli shows as 'RUNNING_NEEDS_RESTART'.

RESOLUTION:
Updated the correct VxlistPluginVersion in dcli_ini.conf on dcli service start.

Patch ID: VRTSvcsag-8.0.2.1200

* 4130206 (Tracking ID: 4127320)

SYMPTOM:
The ProcessOnOnly agent fails to bring online a resource when a user shell is set to /sbin/nologin.

DESCRIPTION:
The agent fails to bring online a resource when the shell for the user is set to /sbin/nologin.

RESOLUTION:
The ProcessOnOnly agent is enhanced to support the /sbin/nologin shell. If the shell is set to /sbin/nologin, the agent uses /bin/bash as the shell to start the process.
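
For reference, a user's configured login shell can be checked with standard tools (the user name below is a placeholder):

    # getent passwd appuser | cut -d: -f7
    /sbin/nologin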

Patch ID: VRTSvcs-8.0.2.1200

* 4113391 (Tracking ID: 4124956)

SYMPTOM:
Traditionally, virtual IP addresses are used as cluster addresses. The cluster address is also used for peer-to-peer communication in a GCO-DR deployment. Thus, the gcoconfig utility was accustomed to IPv4 and IPv6 addresses and gave an error if a hostname was provided as the cluster address.

DESCRIPTION:
In cloud ecosystems, hostnames are widely used. The gcoconfig utility must therefore accept both hostnames and virtual IPs as the cluster address.

RESOLUTION:
To address the limitation (gcoconfig does not accept a hostname as the cluster address), the gcoconfig utility is enhanced to support the following:
1. NIC and IP configuration:
   i. Continue using NIC and IP configuration.
2. Hostname as cluster address along with the corresponding DNS:
   i. On-premises DNS:
      a. The utility takes the following inputs: Domain, Resource Records, TSIGKeyFile (if secured DNS is opted for), and StealthMasters (optional).
      b. Accordingly, the gcoconfig utility creates a DNS resource in the cluster service group.
   ii. AWSRoute53 DNS: the Amazon DNS web service.
      a. The utility takes the following inputs: Hosted Zone ID, Resource Records, AWS Binaries, and Directory Path.
      b. Accordingly, the gcoconfig utility creates an AWSRoute53 DNS type resource in the cluster service group.
   iii. AzureDNSZone: the Microsoft DNS web service.
      a. The utility takes the following inputs: Azure DNS Zone Resource ID and Resource Records, which are mandatory. Additionally, the user must provide either the Managed Identity Client ID or the Azure Auth Resource.
      b. Accordingly, the gcoconfig utility creates an AzureDNSZone type resource in the cluster service group.

For the endpoints mentioned in Resource Records, the gcoconfig utility can neither ensure their accessibility nor manage their lifecycle. Hence, these are not within the scope of the gcoconfig utility.

Patch ID: VRTSvcs-8.0.2.1100

* 4124702 (Tracking ID: 4103073)

SYMPTOM:
Security vulnerabilities are present in the existing version of the Netsnmp component.

DESCRIPTION:
The existing version of the Netsnmp component has known security vulnerabilities.

RESOLUTION:
Upgraded the Netsnmp component to a version that fixes the security vulnerabilities.

Patch ID: VRTSdbac-8.0.2.1100

* 4133133 (Tracking ID: 4133130)

SYMPTOM:
Veritas Infoscale Availability does not qualify latest sles15 kernels.

DESCRIPTION:
Veritas Infoscale Availability qualification for latest sles15 kernels has been provided.

RESOLUTION:
Veritas Infoscale Availability qualify latest sles15 kernels.

Patch ID: VRTSvxfen-8.0.2.1200

* 4124086 (Tracking ID: 4124084)

SYMPTOM:
Security vulnerabilities exist in the Curl third-party components used by VCS.

DESCRIPTION:
The versions of the Curl third-party components used by VCS contain known security vulnerabilities.

RESOLUTION:
Curl is upgraded in which the security vulnerabilities have been addressed.

* 4125891 (Tracking ID: 4113847)

SYMPTOM:
An even number of CP disks is not supported by design. This enhancement is part of AFA, wherein a faulted disk needs to be replaced as soon as possible while fencing is up and running, even if that temporarily leaves an even number of coordination disks.

DESCRIPTION:
Handling a regular split or network partition requires an odd number of disks. Support for an even number of CP disks is provided through the cp_count setting; fencing is not allowed to come up without a cp_count/2+1 majority. If cp_count is not defined in the vxfenmode file, a minimum of 3 CP disks is required by default, otherwise vxfen does not start.

RESOLUTION:
When the number of CP disks is even, another disk is added so that the number of CP disks is odd and fencing keeps running.
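
For illustration, the relevant setting lives in the vxfenmode file referenced above; a minimal fragment might look like this (values are examples only):

    # /etc/vxfenmode (illustrative fragment)
    vxfen_mode=scsi3          # disk-based fencing
    scsi3_disk_policy=dmp
    cp_count=4                # total coordination point disks; an even count is now tolerated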

* 4125895 (Tracking ID: 4108561)

SYMPTOM:
The internal vxfen 'print keys' utility did not work because of an internal array overrun.

DESCRIPTION:
The utility returned garbage values when the number of keys exceeded 8: the 8-byte array keylist[i].key was overrun at byte offset 8 using index y (which evaluates to 8).

RESOLUTION:
Restricted the internal loop to VXFEN_KEYLEN. Reading reservations now works correctly.

Patch ID: VRTSamf-8.0.2.1200

* 4133131 (Tracking ID: 4133130)

SYMPTOM:
Veritas Infoscale Availability does not qualify latest sles15 kernels.

DESCRIPTION:
Veritas Infoscale Availability qualification for latest sles15 kernels has been provided.

RESOLUTION:
Veritas Infoscale Availability qualify latest sles15 kernels.

Patch ID: VRTSllt-8.0.2.1200

* 4128886 (Tracking ID: 4128887)

SYMPTOM:
The following warning trace is observed while unloading the llt module:
[171531.684503] Call Trace:
[171531.684505]  <TASK>
[171531.684509]  remove_proc_entry+0x45/0x1a0
[171531.684512]  llt_mod_exit+0xad/0x930 [llt]
[171531.684533]  ? find_module_all+0x78/0xb0
[171531.684536]  __do_sys_delete_module.constprop.0+0x178/0x280
[171531.684538]  ? exit_to_user_mode_loop+0xd0/0x130

DESCRIPTION:
While unloading the llt module, the vxnet/llt directory was not removed properly, due to which the warning trace is observed.

RESOLUTION:
The proc_remove API is now used, which cleans up the whole subtree.

Patch ID: VRTSgab-8.0.2.1200

* 4133132 (Tracking ID: 4133130)

SYMPTOM:
Veritas Infoscale Availability does not qualify latest sles15 kernels.

DESCRIPTION:
Veritas Infoscale Availability qualification for latest sles15 kernels has been provided.

RESOLUTION:
Veritas Infoscale Availability qualify latest sles15 kernels.

Patch ID: VRTSvxvm-8.0.2.1200

* 4119267 (Tracking ID: 4113582)

SYMPTOM:
In VVR environments, reboot on VVR primary nodes results in RVG going into passthru mode.

DESCRIPTION:
A reboot of the primary nodes resulted in missing write completions for updates on the primary SRL volume. After the node came up, the last update received by the VVR secondary was incorrectly compared against the missing updates.

RESOLUTION:
Fixed the check to correctly compare the last received update by VVR secondary.

* 4123065 (Tracking ID: 4113138)

SYMPTOM:
In CVR environments configured with a virtual hostname, after node reboots on the VVR Primary and Secondary, 'vradmin repstatus' invoked on the secondary site shows stale information with the following warning message:
VxVM VVR vradmin INFO V-5-52-1205 Primary is unreachable or RDS has configuration error. Displayed status information is from Secondary and can be out-of-date.

DESCRIPTION:
This issue occurs when an explicit RVG logowner is set on the CVM master, due to which the old connection between vradmind and its remote peer is dropped and a new connection is not formed.

RESOLUTION:
Fixed the issue with the vradmind connection with its remote peer.

* 4123069 (Tracking ID: 4116609)

SYMPTOM:
In CVR environments where replication is configured using virtual hostnames, vradmind on VVR primary loses connection with its remote peer after a planned RVG logowner change on the VVR secondary site.

DESCRIPTION:
vradmind on the VVR primary was unable to detect an RVG logowner change on the VVR secondary site.

RESOLUTION:
Enabled primary vradmind to detect RVG logowner change on the VVR secondary site.

* 4123080 (Tracking ID: 4111789)

SYMPTOM:
In VVR/CVR environments, VVR would use any IP/NIC/network to replicate the data and may not utilize the high performance NIC/network configured for VVR.

DESCRIPTION:
The default value of the tunable was set to 'any_ip'.

RESOLUTION:
The default value of the tunable is now set to 'replication_ip'.

* 4124291 (Tracking ID: 4111254)

SYMPTOM:
vradmind dumps core with the following stack:

#3  0x00007f3e6e0ab3f6 in __assert_fail () from /root/cores/lib64/libc.so.6
#4  0x000000000045922c in RDS::getHandle ()
#5  0x000000000056ec04 in StatsSession::addHost ()
#6  0x000000000045d9ef in RDS::addRVG ()
#7  0x000000000046ef3d in RDS::createDummyRVG ()
#8  0x000000000044aed7 in PriRunningState::update ()
#9  0x00000000004b3410 in RVG::update ()
#10 0x000000000045cb94 in RDS::update ()
#11 0x000000000042f480 in DBMgr::update ()
#12 0x000000000040a755 in main ()

DESCRIPTION:
vradmind tried to access a NULL pointer (Remote Host Name) in an rlink object because the Remote Host attribute of the rlink had not been set.

RESOLUTION:
The issue has been fixed by making code changes.

* 4124794 (Tracking ID: 4114952)

SYMPTOM:
With VVR configured with a virtual hostname, after node reboots on the DR site, the 'vradmin pauserep' command failed with the following error:
VxVM VVR vradmin ERROR V-5-52-421 vradmind server on host <host> not responding or hostname cannot be resolved.

DESCRIPTION:
The virtual host mapped to multiple IP addresses, and vradmind used an incorrectly mapped IP address.

RESOLUTION:
Fixed by using the correct IP address mapping for the virtual host.

* 4124796 (Tracking ID: 4108913)

SYMPTOM:
Vradmind dumps core with the following stacks:
#3  0x00007f2c171be3f6 in __assert_fail () from /root/coredump/lib64/libc.so.6
#4  0x00000000005d7a90 in VList::concat () at VList.C:1017
#5  0x000000000059ae86 in OpMsg::List2Msg () at Msg.C:1280
#6  0x0000000000441bf6 in OpMsg::VList2Msg () at ../../include/Msg.h:389
#7  0x000000000043ec33 in DBMgr::processStatsOpMsg () at DBMgr.C:2764
#8  0x00000000004093e9 in process_message () at srvmd.C:418
#9  0x000000000040a66d in main () at srvmd.C:733

#0  0x00007f4d23470a9f in raise () from /root/core.Jan18/lib64/libc.so.6
#1  0x00007f4d23443e05 in abort () from /root/core.Jan18/lib64/libc.so.6
#2  0x00007f4d234b3037 in __libc_message () from /root/core.Jan18/lib64/libc.so.6
#3  0x00007f4d234ba19c in malloc_printerr () from /root/core.Jan18/lib64/libc.so.6
#4  0x00007f4d234bba9c in _int_free () from /root/core.Jan18/lib64/libc.so.6
#5  0x00000000005d5a0a in ValueElem::_delete_val () at Value.C:491
#6  0x00000000005d5990 in ValueElem::~ValueElem () at Value.C:480
#7  0x00000000005d7244 in VElem::~VElem () at VList.C:480
#8  0x00000000005d8ad9 in VList::~VList () at VList.C:1167
#9  0x000000000040a71a in main () at srvmd.C:743

#0  0x000000000040b826 in DList::head () at ../include/DList.h:82
#1  0x00000000005884c1 in IpmHandle::send () at Ipm.C:1318
#2  0x000000000056e101 in StatsSession::sendUCastStatsMsgToPrimary () at StatsSession.C:1157
#3  0x000000000056dea1 in StatsSession::sendStats () at StatsSession.C:1117
#4  0x000000000046f610 in RDS::collectStats () at RDS.C:6011
#5  0x000000000043f2ef in DBMgr::collectStats () at DBMgr.C:2799
#6  0x00007f98ed9131cf in start_thread () from /root/core.Jan26/lib64/libpthread.so.0
#7  0x00007f98eca4cdd3 in clone () from /root/core.Jan26/lib64/libc.so.6

DESCRIPTION:
There is a race condition in vradmind that may cause memory corruption and unpredictable results. vradmind periodically forks a child thread to collect VVR statistics data and send it to the remote site. Meanwhile, the main thread may also send data using the same handler object, so member variables of the handler object are accessed in parallel from multiple threads and may become corrupted.

RESOLUTION:
The code changes have been made to fix the issue.

* 4125392 (Tracking ID: 4114193)

SYMPTOM:
'vradmin repstatus' command showed replication data status incorrectly as 'inconsistent'.

DESCRIPTION:
vradmind relied on the replication data status from both the primary and the DR site.

RESOLUTION:
Fixed the replication data status to rely on the primary's data status.
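
For reference, the command involved is typically invoked as follows (disk group and RVG names are placeholders):

    # vradmin -g appdg repstatus app_rvg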

* 4125811 (Tracking ID: 4090772)

SYMPTOM:
vxconfigd/vx commands hang on secondary site in a CVR environment.

DESCRIPTION:
Due to a window with unmatched SRL positions, any application (e.g. fdisk) trying to open the secondary
RVG volume acquires a lock and waits for the SRL positions to match. Any VxVM transaction kicked off
during this window also has to wait for the same lock. In this case the logowner node panicked, which
triggered the logowner change protocol; that protocol hung because the earlier transaction was stuck.
Since the logowner change protocol could not complete, there was no valid logowner, the SRL positions
could not match, and the resulting deadlock led to the vxconfigd and vx command hang.

RESOLUTION:
Added changes to allow read operations on the volume even if the SRL positions are unmatched. Write I/Os
are still blocked and only the open() call for read-only operations is allowed, so there are no data
consistency or integrity issues.

* 4128127 (Tracking ID: 4132265)

SYMPTOM:
A machine with NVMe disks panics with the following stack:
blk_update_request
blk_mq_end_request
dmp_kernel_nvme_ioctl
dmp_dev_ioctl
dmp_send_nvme_passthru_cmd_over_node
dmp_pr_do_nvme_read
dmp_pgr_read
dmpioctl
dmp_ioctl
blkdev_ioctl
__x64_sys_ioctl
do_syscall_64

DESCRIPTION:
The issue applied to setups with NVMe devices that do not support SCSI3-PR: an ioctl was called without correctly checking whether SCSI3-PR was supported.

RESOLUTION:
Fixed the check to avoid calling the ioctl on devices which do not support SCSI3-PR.

* 4128835 (Tracking ID: 4127555)

SYMPTOM:
While adding a secondary site using the 'vradmin addsec' command, the command fails with the following error if a diskgroup ID is used in place of the diskgroup name:
VxVM vxmake ERROR V-5-1-627 Error in field remote_dg=<dgid>: name is too long

DESCRIPTION:
Diskgroup names can be 32 characters long, whereas diskgroup IDs can be 64 characters long. This was not handled by the vradmin commands.

RESOLUTION:
Fixed the vradmin commands to handle longer diskgroup IDs used in place of diskgroup names.
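
The length difference can be seen in the disk group record itself, assuming 'vxdg list <dg>' prints a dgid field (the disk group name below is a placeholder):

    # vxdg list appdg | grep dgid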

* 4129766 (Tracking ID: 4128380)

SYMPTOM:
If VVR is configured using a virtual hostname and the 'vradmin resync' command is invoked from a DR-site node, it fails with the following error:
VxVM VVR vradmin ERROR V-5-52-405 Primary vradmind server disconnected.

DESCRIPTION:
When the virtual hostname maps to multiple IPs, the vradmind service on the DR site was unable to reach the VVR logowner node on the primary site because an incorrect IP address mapping was used.

RESOLUTION:
Fixed vradmind to use the correctly mapped IP address of the primary vradmind.
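
Whether a virtual hostname resolves to multiple addresses can be checked with standard tools (the hostname below is a placeholder):

    # getent ahosts vvr-vhost.example.com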

* 4130402 (Tracking ID: 4107801)

SYMPTOM:
/dev/vx/.dmp hardware path entries are not getting created on SLES15SP3 onwards.

DESCRIPTION:
The vxpath-links script is responsible for creating the hardware path entries under /dev/vx/.dmp and is
invoked from /lib/udev/vxpath_links. The /lib/udev folder is not present on SLES15 SP3; it is explicitly
removed from SLES15 SP3 onwards, and Veritas-specific scripts/libraries are expected to be invoked from a
vendor-specific folder.

RESOLUTION:
Code changes have been made to invoke "/etc/vx/vxpath-links" instead of "/lib/udev/vxpath-links".

* 4130827 (Tracking ID: 4098391)

SYMPTOM:
Kernel panic is observed with following stack:

#6 [ffffa479c21cf6f0] page_fault at ffffffffb240130e
    [exception RIP: bfq_bio_bfqg+37]
    RIP: ffffffffb1e78135  RSP: ffffa479c21cf7a0  RFLAGS: 00010002
    RAX: 000000000000001f  RBX: 0000000000000000  RCX: ffffa479c21cf860
    RDX: ffff8bd779775000  RSI: ffff8bd795b2fa00  RDI: ffff8bd795b2fa00
    RBP: ffff8bd78f136000   R8: 0000000000000000   R9: ffff8bd793a5b800
    R10: ffffa479c21cf828  R11: 0000000000001000  R12: ffff8bd7796b6e60
    R13: ffff8bd78f136000  R14: ffff8bd795b2fa00  R15: ffff8bd7946ad0bc
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
#7 [ffffa479c21cf7b0] bfq_bic_update_cgroup at ffffffffb1e78458
#8 [ffffa479c21cf7e8] bfq_bio_merge at ffffffffb1e6f47f
#9 [ffffa479c21cf840] blk_mq_submit_bio at ffffffffb1e48c09
#10 [ffffa479c21cf8c8] submit_bio_noacct at ffffffffb1e3c7e3
#11 [ffffa479c21cf958] submit_bio at ffffffffb1e3c87b
#12 [ffffa479c21cf9a8] submit_bh_wbc at ffffffffb1d2536a
#13 [ffffa479c21cf9e0] block_read_full_page at ffffffffb1d27ac1
#14 [ffffa479c21cfa90] do_read_cache_page at ffffffffb1c2f7e5
#15 [ffffa479c21cfb48] read_part_sector at ffffffffb1e546b5
#16 [ffffa479c21cfb60] read_lba at ffffffffb1e595d2
#17 [ffffa479c21cfba8] efi_partition at ffffffffb1e59f4d
#18 [ffffa479c21cfcb8] blk_add_partitions at ffffffffb1e54377
#19 [ffffa479c21cfcf8] bdev_disk_changed at ffffffffb1d2a8fa
#20 [ffffa479c21cfd30] __blkdev_get at ffffffffb1d2c16c
#21 [ffffa479c21cfda0] blkdev_get at ffffffffb1d2c2b4
#22 [ffffa479c21cfdb8] __device_add_disk at ffffffffb1e5107e
#23 [ffffa479c21cfe20] dmp_register_disk at ffffffffc0e68ae7 [vxdmp]
#24 [ffffa479c21cfe50] dmp_reconfigure_db at ffffffffc0e8d8bd [vxdmp]
#25 [ffffa479c21cfe80] dmpioctl at ffffffffc0e75cd5 [vxdmp]
#26 [ffffa479c21cfe90] dmp_ioctl at ffffffffc0e9d469 [vxdmp]
#27 [ffffa479c21cfea8] blkdev_ioctl at ffffffffb1e4ed19
#28 [ffffa479c21cfef0] block_ioctl at ffffffffb1d2a719
#29 [ffffa479c21cfef8] ksys_ioctl at ffffffffb1cfb262
#30 [ffffa479c21cff30] __x64_sys_ioctl at ffffffffb1cfb296
#31 [ffffa479c21cff38] do_syscall_64 at ffffffffb1a0538b
#32 [ffffa479c21cff50] entry_SYSCALL_64_after_hwframe at ffffffffb240008c

DESCRIPTION:
VxVM causes a kernel panic because of a NULL pointer dereference in kernel code when the BFQ disk I/O scheduler is used. This is observed on SLES15 SP3 minor kernels >= 5.3.18-150300.59.68.1 and SLES15 SP4 minor kernels >= 5.14.21-150400.24.11.1.

RESOLUTION:
Code changes have been done to fix this issue in IS-8.0 and IS-8.0.2.

* 4130947 (Tracking ID: 4124725)

SYMPTOM:
With VVR configured using virtual hostnames, 'vradmin delpri' command could hang after doing the RVG cleanup.

DESCRIPTION:
The 'vradmin delsec' command used prior to the 'vradmin delpri' command had left the cleanup in an incomplete state, resulting in the next cleanup command hanging.

RESOLUTION:
Fixed to make sure that the 'vradmin delsec' command executes its cleanup workflow correctly.

* 4129664 (Tracking ID: 4129663)

SYMPTOM:
The vxvm rpm does not have a changelog.

DESCRIPTION:
A changelog in the rpm helps identify missing incidents with respect to other versions.

RESOLUTION:
A changelog is generated and added to the vxvm rpm.

Patch ID: VRTSaslapm-8.0.2.1200

* 4133009 (Tracking ID: 4133010)

SYMPTOM:
The aslapm rpm does not have a changelog.

DESCRIPTION:
A changelog in the rpm helps identify missing incidents with respect to other versions.

RESOLUTION:
A changelog is generated and added to the aslapm rpm.

Patch ID: VRTSvxvm-8.0.2.1100

* 4125322 (Tracking ID: 4119950)

SYMPTOM:
Vulnerabilities have been reported in the third-party components [curl and libxml] that are used by VxVM.

DESCRIPTION:
The current versions of the third-party components [curl and libxml] used by VxVM have reported security
vulnerabilities that need to be addressed.

RESOLUTION:
[curl and libxml] have been upgraded to newer versions in which the reported security vulnerabilities have been addressed.

Patch ID: VRTSgms-8.0.2.1200

* 4126266 (Tracking ID: 4125932)

SYMPTOM:
A 'no symbol version' warning for ki_get_boot appears in dmesg after SFCFSHA configuration.

DESCRIPTION:
The GMS module was built without the correct kbuild symbol versions, resulting in the 'no symbol version' warning for ki_get_boot in dmesg after SFCFSHA configuration.

RESOLUTION:
Updated the code to build GMS with the correct kbuild symbols.

* 4127527 (Tracking ID: 4107112)

SYMPTOM:
The GMS module fails to load on a Linux minor kernel.

DESCRIPTION:
This issue occurs due to changes in the minor kernel.

RESOLUTION:
Modified existing modinst-gms script to consider kernel-build version in exact-version-module version calculation.

* 4127528 (Tracking ID: 4107753)

SYMPTOM:
The GMS module fails to load on a Linux minor kernel.

DESCRIPTION:
This issue occurs due to changes in the minor kernel.

RESOLUTION:
Modified existing modinst-gms script to consider kernel-build version in best-fit-module-version calculation if exact-version-module is not 
present.

* 4129708 (Tracking ID: 4129707)

SYMPTOM:
The GMS rpm does not have a changelog.

DESCRIPTION:
A changelog in the rpm helps identify missing incidents with respect to other versions.

RESOLUTION:
A changelog is generated and added to the GMS rpm.

Patch ID: VRTSglm-8.0.2.1200

* 4127524 (Tracking ID: 4107114)

SYMPTOM:
The GLM module fails to load on a Linux minor kernel.

DESCRIPTION:
This issue occurs due to changes in the minor kernel.

RESOLUTION:
Modified existing modinst-glm script to consider kernel-build version in exact-version-module version calculation.

* 4127525 (Tracking ID: 4107754)

SYMPTOM:
The GLM module fails to load on a Linux minor kernel.

DESCRIPTION:
This issue occurs due to changes in the minor kernel.

RESOLUTION:
Modified existing modinst-glm script to consider kernel-build version in best-fit-module-version calculation if exact-version-module is not 
present.

* 4129715 (Tracking ID: 4129714)

SYMPTOM:
The GLM rpm does not have a changelog.

DESCRIPTION:
A changelog in the rpm helps identify missing incidents with respect to other versions.

RESOLUTION:
A changelog is generated and added to the GLM rpm.

Patch ID: VRTSodm-8.0.2.1200

* 4126262 (Tracking ID: 4126256)

SYMPTOM:
A 'no symbol version' warning for "ki_get_boot" appears in dmesg after SFCFSHA configuration.

DESCRIPTION:
modpost is unable to read VEKI's Module.symvers while building the ODM module, which results in a 'no symbol version' warning for VEKI's "ki_get_boot" symbol.

RESOLUTION:
Modified the code to make sure that modpost picks up all the dependent symbols while building the ODM module.

* 4127518 (Tracking ID: 4107017)

SYMPTOM:
The ODM module fails to load on a Linux minor kernel.

DESCRIPTION:
This issue occurs due to changes in the minor kernel.

RESOLUTION:
Modified existing modinst-odm script to consider kernel-build version in exact-version-module version calculation.

* 4127519 (Tracking ID: 4107778)

SYMPTOM:
The ODM module fails to load on a Linux minor kernel.

DESCRIPTION:
This issue occurs due to changes in the minor kernel.

RESOLUTION:
Modified existing modinst-odm script to consider kernel-build version in best-fit-module-version calculation if exact-version-module is not 
present.

* 4129838 (Tracking ID: 4129837)

SYMPTOM:
The ODM rpm does not have a changelog.

DESCRIPTION:
A changelog in the rpm helps identify missing incidents with respect to other versions.

RESOLUTION:
A changelog is generated and added to the ODM rpm.

Patch ID: VRTSvxfs-8.0.2.1200

* 4121230 (Tracking ID: 4119990)

SYMPTOM:
Some nodes in the cluster are in a hang state and recovery is stuck.

DESCRIPTION:
There is a deadlock in which one thread locks a buffer and waits for recovery to complete, while recovery may get stuck flushing and invalidating buffers from the buffer cache because it cannot lock that buffer.

RESOLUTION:
If recovery is in progress, the buffer is released and VX_ERETRY is returned so that callers retry the operation. In some cases a lock is taken on two buffers; for those cases the VX_NORECWAIT flag is passed, which retries the operation after releasing both buffers.

* 4125870 (Tracking ID: 4120729)

SYMPTOM:
Incorrect file replication (VFR) job status at the VFR target site while replication is in the running state at the source.

DESCRIPTION:
If a full sync is started in recovery mode, the state on the target is not updated at the start of replication (from 'failed' to 'full-sync running'). This missed state change causes issues with the states of subsequent incremental syncs.

RESOLUTION:
Updated the code to set the correct state at the target when a VFR full sync is started in recovery mode.

* 4125871 (Tracking ID: 4114176)

SYMPTOM:
After failover, job sync fails with error "Device or resource busy".

DESCRIPTION:
If a job was in the failed state on the target because of a job failure on the source side, repld did not update the job state when it was restarted in recovery mode, so the job state remained 'running' even after successful replication on the target. With this state on the target, if the job was promoted, the replication process did not create a new ckpt for the first sync after failover, which corrupted the state file on the new source. Because of this incorrect/corrupt state file, job sync from the new source failed with the error "Device or resource busy".

RESOLUTION:
Code is modified to correct the state on the target when a job is started in recovery mode.

* 4125873 (Tracking ID: 4108955)

SYMPTOM:
VFR job hangs on source if thread creation fails on target.

DESCRIPTION:
On the target, if thread creation for pass completion fails because of high memory usage, the repld daemon does not send that failure reply to the source. This can cause the vxfsreplicate process to remain in a waiting state indefinitely for the pass-completion reply from the target, which leads to a job hang on the source that needs manual intervention to kill the job.

RESOLUTION:
Code is modified to retry thread creation on the target; if it fails after 5 retries, the target replies to the source with an appropriate error.

* 4125875 (Tracking ID: 4112931)

SYMPTOM:
vxfsrepld consumes a lot of virtual memory when it has been running for a long time.

DESCRIPTION:
The current VxFS thread pool is not efficient when used by a daemon process like vxfsrepld. It did not release the underlying resources used by newly created threads, which in turn increased the virtual memory consumption of the process. Thread resources are released either when pthread_join() is called on them or when threads are created with the detached attribute. In the current implementation, pthread_join() was called only when the thread pool was destroyed as part of cleanup; with vxfsrepld, pool_destroy() is not called after every successful job but only when repld is stopped. This accumulated thread resources and increased the virtual memory usage of the process.

RESOLUTION:
Code is modified to detach threads when they exit.

* 4125878 (Tracking ID: 4096267)

SYMPTOM:
Veritas File Replication jobs might fail when a large number of jobs run in parallel.

DESCRIPTION:
With a large number of jobs configured and running in parallel, there is a chance of referring to a job that has already been freed, due to which the replication service dumps core and the job might fail.

RESOLUTION:
Updated the code to take a hold on the job while checking for an invalid job configuration.

* 4126104 (Tracking ID: 4122331)

SYMPTOM:
Block number, device ID information, and in-core inode state are missing from the error messages logged in syslog while marking a bitmap/inode as "BAD".

DESCRIPTION:
Block number, device ID information, and in-core inode state are missing from the error messages logged in syslog upon encountering bitmap corruption or while marking an inode "BAD".

RESOLUTION:
Code changes have been done to include required missing information in corresponding error messages.

* 4127509 (Tracking ID: 4107015)

SYMPTOM:
The VxFS module fails to load on a Linux minor kernel.

DESCRIPTION:
This issue occurs due to changes in the minor kernel.

RESOLUTION:
Modified existing modinst-vxfs script to consider kernel-build version in exact-version-module version calculation.

* 4127510 (Tracking ID: 4107777)

SYMPTOM:
The VxFS module fails to load on a Linux minor kernel.

DESCRIPTION:
This issue occurs due to changes in the minor kernel.

RESOLUTION:
Modified existing modinst-vxfs script to consider kernel-build version in best-fit-module-version calculation if exact-version-module is not present.
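
The best-fit idea can be sketched as follows. This is a minimal illustration only, not the actual modinst-vxfs logic, and the module inventory is made up:

    #!/bin/sh
    # Pick the highest module version-build string that does not exceed the
    # running kernel's version-build string (sort -V orders such strings).
    kver=$(uname -r)              # e.g. 5.14.21-150400.24.46.1-default
    kver=${kver%-default}         # strip the flavor suffix for comparison
    best=""
    for m in 5.14.21-150400.24.11.1 5.14.21-150400.24.46.1; do
        if [ "$(printf '%s\n%s\n' "$m" "$kver" | sort -V | tail -n1)" = "$kver" ]; then
            best=$m               # m <= kver, so m is a candidate
        fi
    done
    echo "best-fit module version: ${best:-none}"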

* 4127594 (Tracking ID: 4126957)

SYMPTOM:
If "fsadm -o mntunlock=<string> <mountpoint>" and "umount -f <mountpoint>" operations are run in parallel,
system may crash with following stack:

 vx_aioctl_unsetmntlock+0xd3/0x2a0 [vxfs]
 vx_aioctl_vfs+0x256/0x2d0 [vxfs]
 vx_admin_ioctl+0x156/0x2f0 [vxfs]
 vxportalunlockedkioctl+0x529/0x660 [vxportal]
 do_vfs_ioctl+0xa4/0x690
 ksys_ioctl+0x64/0xa0
 __x64_sys_ioctl+0x16/0x20
 do_syscall_64+0x5b/0x1b0

DESCRIPTION:
There is a race condition between these two operations: by the time the fsadm thread tries to access the
FS data structures, the umount operation may have already freed them, which leads to the panic.

RESOLUTION:
As a fix, the fsadm thread first checks whether the umount operation is in progress. If so, it fails rather than continuing.

* 4127720 (Tracking ID: 4127719)

SYMPTOM:
The fsdb binary fails to open the device on a VVR secondary volume in RW mode although it has write permissions. The fstyp binary could not dump the fs_uuid value.

DESCRIPTION:
We have observed that fsdb, when run on a VVR secondary volume, bails out.
At the file system level the volume has write permission, but since it is a secondary from the VVR perspective, the block layer does not allow opening it in write mode.
The fstyp binary could not dump the fs_uuid value along with the other superblock fields.

RESOLUTION:
Added fallback logic to fsdb: if fs_open fails to open the device in read-write mode, it retries in read-only mode. Fixed the fstyp binary to dump the fs_uuid value along with the other superblock fields.
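
With the fix, the UUID should appear in the verbose superblock dump (the device path is a placeholder, reused from the mount example elsewhere in this readme):

    # /opt/VRTS/bin/fstyp -v /dev/vx/dsk/testdg/vol1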

* 4127785 (Tracking ID: 4127784)

SYMPTOM:
/opt/VRTS/bin/fsppadm validate /mnt4 invalid_uid.xml
UX:vxfs fsppadm: WARNING: V-3-26537: Invalid USER id 1xx specified at or near line 10

DESCRIPTION:
Before this fix, the fsppadm command did not stop parsing and treated an invalid UID/GID as a warning only. Here an invalid UID/GID means one that is not an integer number; a UID/GID that is numeric but does not exist still produces only a warning.

RESOLUTION:
Code added to give the user a proper error if invalid user/group IDs are provided.

* 4128249 (Tracking ID: 4119965)

SYMPTOM:
VxFS mount binary failed to mount VxFS with SELinux context.

DESCRIPTION:
Mounting the file system using the VxFS mount binary with a specific SELinux context shows the error below:
/FSQA/fsqa/vxfsbin/mount -t vxfs /dev/vx/dsk/testdg/vol1 /mnt1 -ocontext="system_u:object_r:httpd_sys_content_t:s0"
UX:vxfs mount: ERROR: V-3-28681: Selinux context is invalid or option/operation is not supported. Please look into the syslog for more information.

RESOLUTION:
The VxFS mount command is modified to pass context options to the kernel only if SELinux is enabled.

* 4128723 (Tracking ID: 4114127)

SYMPTOM:
Hang in VxFS internal LM Conformance - inotify test

DESCRIPTION:
On the SLES15 SP4 kernel, we observed that an internal test hangs with the process stack below:

[<0>] fsnotify_sb_delete+0x19d/0x1e0
[<0>] generic_shutdown_super+0x3f/0x120
[<0>] deactivate_locked_super+0x3c/0x70
[<0>] vx_unmount_cleanup_notify.part.37+0x96/0x150 [vxfs]
[<0>] vx_kill_sb+0x91/0x2b0 [vxfs]
[<0>] deactivate_locked_super+0x3c/0x70
[<0>] cleanup_mnt+0xb8/0x150
[<0>] task_work_run+0x70/0xb0
[<0>] exit_to_user_mode_prepare+0x224/0x230
[<0>] syscall_exit_to_user_mode+0x18/0x40
[<0>] do_syscall_64+0x67/0x80
[<0>] entry_SYSCALL_64_after_hwframe+0x44/0xae

RESOLUTION:
Code changes are done to resolve the hang.

* 4129494 (Tracking ID: 4129495)

SYMPTOM:
Kernel panic observed in internal VxFS LM conformance testing.

DESCRIPTION:
A kernel panic was observed in internal VxFS testing: the OS writeback thread marks an inode for writeback and then calls the file system hook vx_writepages. The OS writeback thread is not expected to get inside iput(), as it would self-deadlock while waiting on writeback. This deadlock caused a tsrapi command hang, which further caused the kernel panic.

RESOLUTION:
Modified the code to avoid deallocation of the inode while inode writeback is in progress.

* 4129681 (Tracking ID: 4129680)

SYMPTOM:
The VxFS rpm does not have a changelog.

DESCRIPTION:
A changelog in the rpm helps identify missing incidents with respect to other versions.

RESOLUTION:
A changelog is generated and added to the VxFS rpm.



INSTALLING THE PATCH
--------------------
Run the Installer script to automatically install the patch:
-----------------------------------------------------------
Please note that the installation of this P-Patch will cause downtime.

To install the patch, perform the following steps on at least one node in the cluster:
1. Copy the patch infoscale-sles15_x86_64-Patch-8.0.2.1200.tar.gz to /tmp
2. Untar infoscale-sles15_x86_64-Patch-8.0.2.1200.tar.gz to /tmp/hf
    # mkdir /tmp/hf
    # cd /tmp/hf
    # gunzip /tmp/infoscale-sles15_x86_64-Patch-8.0.2.1200.tar.gz
    # tar xf /tmp/infoscale-sles15_x86_64-Patch-8.0.2.1200.tar
3. Install the hotfix (this step causes the downtime noted above):
    # cd /tmp/hf
    # ./installVRTSinfoscale802P1200 [<host1> <host2>...]
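
Optionally, before extracting in step 2, the download can be verified against the published checksum, assuming the 'Checksum' value in the basic information above is a standard cksum(1) sum:

    # cksum /tmp/infoscale-sles15_x86_64-Patch-8.0.2.1200.tar.gz

Compare the first field of the output with the published value.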

You can also install this patch together with the 8.0.2 base release using Install Bundles:
1. Download this patch and extract it to a directory.
2. Change to the Veritas InfoScale 8.0.2 directory and invoke the installer script
   with the -patch_path option, where -patch_path points to the patch directory:
    # ./installer -patch_path [<path to this patch>] [<host1> <host2>...]

Install the patch manually:
--------------------------
Manual installation is not recommended.


REMOVING THE PATCH
------------------
Manual uninstallation is not recommended.


KNOWN ISSUES
------------
* Tracking ID: 4059982

SYMPTOM: During a secondary-site upgrade, the application must be migrated or failed over to the upgraded
secondary site. This failed with the error [RVGSharedPri:online:migrate of RVG failed with error code 256].

WORKAROUND: Upgrade the secondary site first. After the secondary-site upgrade, switch the application to
the upgraded (secondary) site. If GCO is configured, the -switch operation is normally used to fail the
application over to the secondary; instead of -switch, use the 'vradmin migrate' command to fail the RVG
over to the secondary. After this, bring the application online on the secondary, then proceed with the
primary-site upgrade. An illustrative command sequence follows below.
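
For illustration only, the migrate-and-online sequence might look like this (disk group, RVG, service group, and host names are placeholders):

    # vradmin -g appdg migrate app_rvg new_primary_host
    # hagrp -online appsg -sys new_primary_host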



SPECIAL INSTRUCTIONS
--------------------
NONE


OTHERS
------
NONE