infoscale-rhel8_x86_64-Patch-8.0.2.1400
Obsolete
The latest patch(es): infoscale-rhel8_x86_64-Patch-8.0.2.1500

 Basic information
Release type: Patch
Release date: 2024-01-09
OS update support: None
Technote: None
Documentation: None
Popularity: 285 viewed    downloaded
Download size: 98.13 MB
Checksum: 3138611663

 Applies to one or more of the following products:
InfoScale Availability 8.0.2 On RHEL8 x86-64
InfoScale Enterprise 8.0.2 On RHEL8 x86-64
InfoScale Foundation 8.0.2 On RHEL8 x86-64
InfoScale Storage 8.0.2 On RHEL8 x86-64

 Obsolete patches, incompatibilities, superseded patches, or other requirements:

This patch is obsolete. It is superseded by: infoscale-rhel8_x86_64-Patch-8.0.2.1500 (Release date: 2024-02-08)

 Fixes the following incidents:
4121230, 4123715, 4123834, 4125870, 4125871, 4125873, 4125875, 4125878, 4126104, 4126262, 4127509, 4127510, 4127518, 4127519, 4127594, 4127720, 4127785, 4128249, 4129494, 4129681, 4129838, 4131312, 4141666, 4143509, 4144274

 Patch ID:
VRTSvxfs-8.0.2.1400-0095_RHEL8
VRTSodm-8.0.2.1400-0095_RHEL8
VRTSpython-3.9.16.2-RHEL8

Readme file
                          * * * READ ME * * *
                      * * * InfoScale 8.0.2 * * *
                         * * * Patch 1400 * * *
                         Patch Date: 2024-01-01


This document provides the following information:

   * PATCH NAME
   * OPERATING SYSTEMS SUPPORTED BY THE PATCH
   * PACKAGES AFFECTED BY THE PATCH
   * BASE PRODUCT VERSIONS FOR THE PATCH
   * SUMMARY OF INCIDENTS FIXED BY THE PATCH
   * DETAILS OF INCIDENTS FIXED BY THE PATCH
   * INSTALLATION PRE-REQUISITES
   * INSTALLING THE PATCH
   * REMOVING THE PATCH


PATCH NAME
----------
InfoScale 8.0.2 Patch 1400


OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
RHEL8 x86-64


PACKAGES AFFECTED BY THE PATCH
------------------------------
VRTSodm
VRTSpython
VRTSvxfs


BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
   * InfoScale Availability 8.0.2
   * InfoScale Enterprise 8.0.2
   * InfoScale Foundation 8.0.2
   * InfoScale Storage 8.0.2


SUMMARY OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
Patch ID: VRTSvxfs-8.0.2.1400
* 4141666 (4141665) Security vulnerabilities exist in the Zlib third-party components used by VxFS.
Patch ID: VRTSvxfs-8.0.2.1200
* 4121230 (4119990) Recovery stuck while flushing and invalidating the buffers
* 4123715 (4113121) VxFS support for RHEL 8.8.
* 4125870 (4120729) Incorrect file replication (VFR) job status at the VFR target site while replication is in running state at the source.
* 4125871 (4114176) After failover, job sync fails with error "Device or resource busy".
* 4125873 (4108955) VFR job hangs on source if thread creation fails on target.
* 4125875 (4112931) vxfsrepld consumes a lot of virtual memory when it has been running for a long time.
* 4125878 (4096267) Veritas File Replication jobs might fail when a large number of jobs run in parallel.
* 4126104 (4122331) Enhancement in the VxFS error messages that are logged while marking the bitmap or inode as "BAD".
* 4127509 (4107015) When finding a VxFS module with the same version as the kernel version, the kernel-build number needs to be considered.
* 4127510 (4107777) If a VxFS module with the same version as the kernel version is not present, the kernel-build number needs to be considered to calculate the best-fit module.
* 4127594 (4126957) System crashes with VxFS stack.
* 4127720 (4127719) Added fallback logic in the fsdb binary and changed the fstyp binary so that it now dumps the uuid.
* 4127785 (4127784) Earlier, the fsppadm binary only gave a warning in case of an invalid UID/GID number. After this change, providing an invalid UID/GID, e.g.
"1ABC" (UID/GID values are always numbers), results in an error and parsing stops.
* 4128249 (4119965) VxFS mount binary failed to mount VxFS with SELinux context.
* 4129494 (4129495) Kernel panic observed in internal VxFS LM conformance testing.
* 4129681 (4129680) Generate and add changelog in VxFS rpm
* 4131312 (4128895) On servers with SELinux enabled, VxFS mount command may throw error.
Patch ID: VRTSpython-3.9.16.2
* 4143509 (4143508) Upgraded multiple vulnerable modules under VRTSpython to address open exploitable security vulnerabilities.
Patch ID: VRTSodm-8.0.2.1400
* 4144274 (4144269) After installing VRTSvxfs-8.0.2.1400, ODM fails to start.
Patch ID: VRTSodm-8.0.2.1200
* 4123834 (4113118) ODM support for RHEL 8.8.
* 4126262 (4126256) A "no symbol version" warning for VEKI's symbol appears in dmesg after SFCFSHA configuration.
* 4127518 (4107017) When finding an ODM module with the same version as the kernel version, the kernel-build number needs to be considered.
* 4127519 (4107778) If an ODM module with the same version as the kernel version is not present, the kernel-build number needs to be considered to calculate the best-fit module.
* 4129838 (4129837) Generate and add changelog in ODM rpm


DETAILS OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
This patch fixes the following incidents:

Patch ID: VRTSvxfs-8.0.2.1400

* 4141666 (Tracking ID: 4141665)

SYMPTOM:
Security vulnerabilities exist in the Zlib third-party components used by VxFS.

DESCRIPTION:
VxFS uses Zlib third-party components with some security vulnerabilities.

RESOLUTION:
VxFS is updated to use a newer version of Zlib third-party components in which the security vulnerabilities have been addressed.

Patch ID: VRTSvxfs-8.0.2.1200

* 4121230 (Tracking ID: 4119990)

SYMPTOM:
Some nodes in the cluster are in a hang state and recovery is stuck.

DESCRIPTION:
There is a deadlock where one thread locks the buffer and waits for the recovery to complete. Recovery, on the other hand, may get stuck while flushing and
invalidating the buffers from the buffer cache because it cannot lock the buffer.

RESOLUTION:
If recovery is in progress, the buffer is released and VX_ERETRY is returned; callers will retry the operation. There are some cases where the lock is taken on two buffers. For
those cases, the VX_NORECWAIT flag is passed, which retries the operation after releasing both buffers.

* 4123715 (Tracking ID: 4113121)

SYMPTOM:
The VxFS module fails to load on RHEL8.8.

DESCRIPTION:
This issue occurs due to changes in RHEL 8.8.

RESOLUTION:
Updated VxFS to support RHEL 8.8.

* 4125870 (Tracking ID: 4120729)

SYMPTOM:
Incorrect file replication (VFR) job status at the VFR target site while replication is in running state at the source.

DESCRIPTION:
If a full sync is started in recovery mode, the state on the target is not updated at the start of replication (from failed to full-sync running). This missed state change causes issues with the states for subsequent incremental syncs.

RESOLUTION:
Updated the code to set the correct state at the target when a VFR full sync is started in recovery mode.

* 4125871 (Tracking ID: 4114176)

SYMPTOM:
After failover, job sync fails with error "Device or resource busy".

DESCRIPTION:
If the job is in a failed state on the target because of a job failure on the source side, repld did not update its state when it was restarted in recovery mode. Because of this, the job state remained in the running state even after successful replication on the target. With this state on the target, if the job was promoted, the replication process did not create a new checkpoint for the first sync after failover, which corrupted the state file on the new source. Because of this incorrect/corrupt state file, job sync from the new source failed with the error "Device or resource busy".

RESOLUTION:
Code is modified to correct the state on the target when the job is started in recovery mode.

* 4125873 (Tracking ID: 4108955)

SYMPTOM:
VFR job hangs on source if thread creation fails on target.

DESCRIPTION:
On the target, if thread creation for pass completion fails because of high memory usage, the repld daemon does not send that failure reply to the source. This can leave the vxfsreplicate process waiting indefinitely for the pass-completion reply from the target, which hangs the job on the source and requires manual intervention to kill the job.

RESOLUTION:
Code is modified to retry thread creation on the target; if it still fails after 5 retries, the target replies to the source with an appropriate error.

* 4125875 (Tracking ID: 4112931)

SYMPTOM:
vxfsrepld consumes a lot of virtual memory when it has been running for a long time.

DESCRIPTION:
The current VxFS thread pool is not efficient when used by a daemon process such as vxfsrepld. It did not release the underlying resources used by newly created threads, which in turn increased the virtual memory consumption of the process. The underlying resources of a thread are released either when pthread_join() is called on it or when the thread is created with the detached attribute. With the current implementation, pthread_join() is called only when the thread pool is destroyed as part of cleanup, but vxfsrepld is not expected to call pool_destroy() every time a job succeeds; pool_destroy() is called only when repld is stopped. This led to accumulating thread resources and increasing the virtual memory usage of the process.

RESOLUTION:
Code is modified to detach threads when they exit.

* 4125878 (Tracking ID: 4096267)

SYMPTOM:
Veritas File Replication jobs might fail when a large number of jobs run in parallel.

DESCRIPTION:
File replication jobs might fail when a large number of jobs are configured and running in parallel with Veritas File Replication.
With a large number of jobs, there is a chance of referring to a job that has already been freed, due to which a core is generated in the replication service and
the job might fail.

RESOLUTION:
Updated the code to take a hold on the job while checking for an invalid job configuration.

* 4126104 (Tracking ID: 4122331)

SYMPTOM:
The block number, device ID information, and in-core inode state are missing from the error messages logged in syslog while marking a bitmap/inode as "BAD".

DESCRIPTION:
The block number, device ID information, and in-core inode state are missing from the error messages logged in syslog upon encountering bitmap corruption or while marking an inode "BAD".

RESOLUTION:
Code changes have been done to include required missing information in corresponding error messages.

* 4127509 (Tracking ID: 4107015)

SYMPTOM:
The VxFS module fails to load on a Linux minor kernel.

DESCRIPTION:
This issue occurs due to changes in the minor kernel.

RESOLUTION:
Modified existing modinst-vxfs script to consider kernel-build version in exact-version-module version calculation.

* 4127510 (Tracking ID: 4107777)

SYMPTOM:
The VxFS module fails to load on a Linux minor kernel.

DESCRIPTION:
This issue occurs due to changes in the minor kernel.

RESOLUTION:
Modified existing modinst-vxfs script to consider kernel-build version in best-fit-module-version calculation if exact-version-module is not present.
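
For reference, a hedged way to cross-check the running kernel's build number against the VxFS module that the updated modinst-vxfs logic selects (this sketch assumes the vxfs module is already installed and visible to modinfo):

    # uname -r
    # modinfo vxfs | grep -iE '^(version|vermagic)'

The release string printed by uname -r includes the kernel build number component (for example, the 477.<...> portion of a 4.18.0-477.<...>.el8_8.x86_64 kernel on RHEL 8.8), which the script now factors into the exact-version and best-fit module calculations.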

* 4127594 (Tracking ID: 4126957)

SYMPTOM:
If "fsadm -o mntunlock=<string> <mountpoint>" and "umount -f <mountpoint>" operations are run in parallel,
system may crash with following stack:

 vx_aioctl_unsetmntlock+0xd3/0x2a0 [vxfs]
 vx_aioctl_vfs+0x256/0x2d0 [vxfs]
 vx_admin_ioctl+0x156/0x2f0 [vxfs]
 vxportalunlockedkioctl+0x529/0x660 [vxportal]
 do_vfs_ioctl+0xa4/0x690
 ksys_ioctl+0x64/0xa0
 __x64_sys_ioctl+0x16/0x20
 do_syscall_64+0x5b/0x1b0

DESCRIPTION:
There is a race condition between these two operations: by the time the fsadm thread tries to access the
FS data structures, the umount operation may have already freed them, which leads to a panic.

RESOLUTION:
As a fix, the fsadm thread first checks if the umount operation is in progress. If so, it fails rather than continuing.

* 4127720 (Tracking ID: 4127719)

SYMPTOM:
The fsdb binary fails to open the device of a VVR secondary volume in RW mode although it has write permissions. The fstyp binary could not dump the fs_uuid value.

DESCRIPTION:
We have observed that fsdb, when run on a VVR secondary volume, bails out.
At the file system level the volume has write permission, but since it is a secondary from the VVR perspective, the block layer does not allow it to be opened in write mode.
The fstyp binary could not dump the fs_uuid value along with the other superblock fields.

RESOLUTION:
Added fallback logic in fsdb: if fs_open fails to open the device in read-write mode, it tries to open it in read-only mode. Fixed the fstyp binary to dump the fs_uuid value along with the other superblock fields.
Code changes have been made to reflect these changes.
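
With this fix in place, a quick hedged check of the new fstyp behavior (it assumes the standard /opt/VRTS/bin install path and reuses the illustrative VxFS device path that appears elsewhere in this readme; substitute your own volume):

    # /opt/VRTS/bin/fstyp -v /dev/vx/dsk/testdg/vol1 | grep -i uuid

The -v option of fstyp prints the superblock fields, which after this change include the fs_uuid value.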

* 4127785 (Tracking ID: 4127784)

SYMPTOM:
/opt/VRTS/bin/fsppadm validate /mnt4 invalid_uid.xml
UX:vxfs fsppadm: WARNING: V-3-26537: Invalid USER id 1xx specified at or near line 10

DESCRIPTION:
Before this fix, the fsppadm command did not stop parsing and treated an invalid UID/GID as a warning only. Here, an invalid UID/GID means one that is not
an integer number. If the given UID/GID does not exist, it is still only a warning.

RESOLUTION:
Code added to give the user a proper error if invalid user/group IDs are provided.
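
A hedged way to confirm the new behavior is to rerun the validation command shown in the SYMPTOM above (the mount point and policy file name are illustrative):

    # /opt/VRTS/bin/fsppadm validate /mnt4 invalid_uid.xml

If the policy file contains a non-numeric UID/GID such as "1ABC", the command is now expected to stop parsing and report an error instead of only a warning.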

* 4128249 (Tracking ID: 4119965)

SYMPTOM:
VxFS mount binary failed to mount VxFS with SELinux context.

DESCRIPTION:
Mounting the file system using the VxFS binary with a specific SELinux context shows the following error:
/FSQA/fsqa/vxfsbin/mount -t vxfs /dev/vx/dsk/testdg/vol1 /mnt1 -ocontext="system_u:object_r:httpd_sys_content_t:s0"
UX:vxfs mount: ERROR: V-3-28681: Selinux context is invalid or option/operation is not supported. Please look into the syslog for more information.

RESOLUTION:
VxFS mount command is modified to pass context options to kernel only if SELinux is enabled.
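
A hedged way to verify the fix on an SELinux-enabled host, reusing the illustrative volume, mount point, and context from the DESCRIPTION above:

    # getenforce
    # /FSQA/fsqa/vxfsbin/mount -t vxfs /dev/vx/dsk/testdg/vol1 /mnt1 -ocontext="system_u:object_r:httpd_sys_content_t:s0"
    # mount | grep /mnt1

getenforce reports whether SELinux is enforcing; with the patch, the context option is passed to the kernel only when SELinux is enabled, and the mount output should show the requested context for the mount point.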

* 4129494 (Tracking ID: 4129495)

SYMPTOM:
Kernel panic observed in internal VxFS LM conformance testing.

DESCRIPTION:
A kernel panic has been observed in internal VxFS testing: the OS writeback thread marks an inode for writeback and then calls the filesystem hook vx_writepages.
The OS writeback thread is not expected to get inside iput(), as it would self-deadlock while waiting on writeback. This deadlock caused the tsrapi command to hang, which further caused the kernel panic.

RESOLUTION:
Modified code to avoid deallocation of inode when the inode writeback is in progress.

* 4129681 (Tracking ID: 4129680)

SYMPTOM:
The VxFS rpm does not have a changelog.

DESCRIPTION:
A changelog in the rpm helps to find missing incidents with respect to other versions.

RESOLUTION:
A changelog is generated and added to the VxFS rpm.
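
Once the patched package is installed, the changelog can be read directly from the rpm database; a minimal example using the standard rpm option:

    # rpm -q --changelog VRTSvxfs | head -20

The same query works for the VRTSodm package covered later in this readme.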

* 4131312 (Tracking ID: 4128895)

SYMPTOM:
On servers with SELinux enabled, the VxFS mount command may throw an error such as the following.
Error message: UX:vxfs mount: ERROR: V-3-21264: <volume> is already mounted, <mount_point> is busy,
                 or the allowable number of mount points has been exceeded.

DESCRIPTION:
VxFS mount commands now run with the vxfs_mount_t SELinux context. This context was missing permissions to execute VxVM commands, so the mount command was not able to confirm whether the filesystem was already mounted elsewhere. Hence it may throw an error stating that the volume is already mounted.

RESOLUTION:
Permissions to run VxVM commands under the vxfs_mount_t SELinux context have been added.
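
If a similar mount failure is still seen, a hedged way to check for SELinux denials involving this context (it assumes auditd is running, which is the RHEL8 default):

    # ausearch -m avc -ts recent | grep vxfs_mount_t

Remaining AVC denials for the vxfs_mount_t context would indicate that SELinux, rather than an actual duplicate mount, is blocking the VxVM commands that the mount helper runs.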

Patch ID: VRTSpython-3.9.16.2

* 4143509 (Tracking ID: 4143508)

SYMPTOM:
There are open exploitable CVEs with High/Critical CVSS scores in multiple modules under VRTSpython.

DESCRIPTION:
There are open exploitable CVEs with High/Critical CVSS scores in the current modules shipped under VRTSpython.

RESOLUTION:
Upgraded multiple modules under VRTSpython to address open exploitable security vulnerabilities.

Patch ID: VRTSodm-8.0.2.1400

* 4144274 (Tracking ID: 4144269)

SYMPTOM:
After installing VRTSvxfs-8.0.2.1400, ODM fails to start.

DESCRIPTION:
Because of the VxFS version update, the ODM module needs to be repackaged due to an
internal dependency on VxFS version.

RESOLUTION:
As part of this fix, the ODM module has been repackaged to support the updated
VxFS version.

Patch ID: VRTSodm-8.0.2.1200

* 4123834 (Tracking ID: 4113118)

SYMPTOM:
The ODM module fails to load on RHEL8.8.

DESCRIPTION:
This issue occurs due to changes in RHEL 8.8.

RESOLUTION:
Updated ODM to support RHEL 8.8.

* 4126262 (Tracking ID: 4126256)

SYMPTOM:
A "no symbol version" warning for "ki_get_boot" appears in dmesg after SFCFSHA configuration.

DESCRIPTION:
modpost is unable to read VEKI's Module.symvers while building the ODM module, which results in a "no symbol version" warning for VEKI's "ki_get_boot" symbol.

RESOLUTION:
Modified the code to make sure that modpost picks up all the dependent symbols while building the ODM module.
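
After applying the patch, a hedged check that the warning quoted in the SYMPTOM no longer appears:

    # dmesg | grep -i "no symbol version"

With the fix, loading the ODM module after SFCFSHA configuration should not log a "no symbol version" warning for the ki_get_boot symbol.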

* 4127518 (Tracking ID: 4107017)

SYMPTOM:
The ODM module fails to load on a Linux minor kernel.

DESCRIPTION:
This issue occurs due to changes in the minor kernel.

RESOLUTION:
Modified existing modinst-odm script to consider kernel-build version in exact-version-module version calculation.

* 4127519 (Tracking ID: 4107778)

SYMPTOM:
The ODM module fails to load on a Linux minor kernel.

DESCRIPTION:
This issue occurs due to changes in the minor kernel.

RESOLUTION:
Modified existing modinst-odm script to consider kernel-build version in best-fit-module-version calculation if exact-version-module is not present.

* 4129838 (Tracking ID: 4129837)

SYMPTOM:
The ODM rpm does not have a changelog.

DESCRIPTION:
A changelog in the rpm helps to find missing incidents with respect to other versions.

RESOLUTION:
A changelog is generated and added to the ODM rpm.



INSTALLING THE PATCH
--------------------
Run the Installer script to automatically install the patch:
-----------------------------------------------------------
Please note that the installation of this P-Patch will cause downtime.

To install the patch, perform the following steps on at least one node in the cluster:
1. Copy the patch infoscale-rhel8_x86_64-Patch-8.0.2.1400.tar.gz to /tmp
2. Untar infoscale-rhel8_x86_64-Patch-8.0.2.1400.tar.gz to /tmp/hf
    # mkdir /tmp/hf
    # cd /tmp/hf
    # gunzip /tmp/infoscale-rhel8_x86_64-Patch-8.0.2.1400.tar.gz
    # tar xf /tmp/infoscale-rhel8_x86_64-Patch-8.0.2.1400.tar
3. Install the hotfix (note that the installation of this P-Patch will cause downtime):
    # cd /tmp/hf
    # ./installVRTSinfoscale802P1400 [<host1> <host2>...]

You can also install this patch together with the 8.0.2 base release using Install Bundles:
1. Download this patch and extract it to a directory.
2. Change to the Veritas InfoScale 8.0.2 directory and invoke the installer script
   with the -patch_path option, where -patch_path should point to the patch directory:
    # ./installer -patch_path [<path to this patch>] [<host1> <host2>...]
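
After installing by either method, a hedged way to confirm the patch level on each node (the package names are the ones listed in this readme):

    # rpm -qa | grep -E 'VRTSvxfs|VRTSodm|VRTSpython'

The query should report VRTSvxfs-8.0.2.1400, VRTSodm-8.0.2.1400, and VRTSpython-3.9.16.2.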

Install the patch manually:
--------------------------
Manual installation is not recommended.


REMOVING THE PATCH
------------------
Manual uninstallation is not recommended.


SPECIAL INSTRUCTIONS
--------------------
Vulnerabilities fixed:
CVE-2023-0464 (BDSA-2023-0610), CVE-2023-2650 (BDSA-2023-1337), BDSA-2022-0284, CVE-2023-0466,
CVE-2023-0465, CVE-2023-5678 (BDSA-2023-3046), CVE-2023-3817 (BDSA-2023-1972), BDSA-2023-1866,
CVE-2023-32681 (BDSA-2023-1278), CVE-2023-37920 (BDSA-2023-2109), CVE-2023-43804 (BDSA-2023-2618),
BDSA-2023-2814 (CVE-2023-45803), CVE-2022-37434 (BDSA-2022-2183)


OTHERS
------
NONE