infoscale-rhel8_x86_64-Patch-8.0.0.3200

 Basic information
Release type: Patch
Release date: 2024-04-01
OS update support: RHEL8 x86-64 Update 9
Technote: None
Documentation: None
Popularity: 2159 viewed
Download size: 610.78 MB
Checksum: 3286427371

 Applies to one or more of the following products:
InfoScale Availability 8.0 On RHEL8 x86-64
InfoScale Enterprise 8.0 On RHEL8 x86-64
InfoScale Foundation 8.0 On RHEL8 x86-64
InfoScale Storage 8.0 On RHEL8 x86-64

 Obsolete patches, incompatibilities, superseded patches, or other requirements:
None.

 Fixes the following incidents:
4055808, 4056684, 4057420, 4057432, 4058590, 4061114, 4061158, 4062606, 4062799, 4064783, 4064786, 4065565, 4065628, 4065651, 4065841, 4066063, 4066213, 4066259, 4066667, 4067237, 4067609, 4067635, 4068407, 4068960, 4070098, 4071108, 4072228, 4073695, 4078335, 4078520, 4078531, 4079142, 4079173, 4079345, 4079637, 4079662, 4080041, 4080105, 4080122, 4080269, 4080276, 4080277, 4080579, 4080630, 4080845, 4080846, 4081150, 4081684, 4081790, 4082260, 4082865, 4083335, 4083337, 4083948, 4085619, 4085623, 4085839, 4086085, 4087233, 4087439, 4087791, 4088076, 4088341, 4088483, 4088762, 4092518, 4093140, 4095889, 4096656, 4097466, 4100923, 4100925, 4101232, 4102502, 4102924, 4102973, 4107367, 4108381, 4108392, 4108585, 4108933, 4108947, 4109554, 4110560, 4111442, 4111457, 4112417, 4112549, 4113310, 4113324, 4113357, 4113661, 4113663, 4113664, 4113666, 4113911, 4113912, 4114019, 4114020, 4114021, 4114654, 4114656, 4114963, 4115251, 4115252, 4115381, 4116421, 4116548, 4116551, 4116557, 4116559, 4116562, 4116565, 4116567, 4117110, 4118108, 4118111, 4118733, 4118795, 4118845, 4119023, 4119087, 4119257, 4119276, 4119279, 4119438, 4120350, 4121241, 4121714, 4123143, 4126124, 4130643, 4132798, 4134024, 4134791, 4135413, 4135420, 4136152, 4136241, 4144027, 4144029, 4144042, 4144043, 4144059, 4144060, 4144061, 4144063, 4144074, 4144082, 4144086, 4144117, 4144119, 4144128, 4144301, 4145248, 4145797, 4146456, 4146458, 4146462, 4146472, 4149249, 4149423, 4149436, 4154821, 4154855, 4154894, 4156794, 4156815, 4156824, 4156836, 4156837, 4156839, 4156841, 4157270, 4158230

 Patch ID:
VRTSodm-8.0.0.3200-RHEL8
VRTSaslapm-8.0.0.2800-RHEL8
VRTSvxfs-8.0.0.3200-RHEL8
VRTSvxvm-8.0.0.2800-RHEL8
VRTSllt-8.0.0.2600-RHEL8
VRTSsfmh-8.0.0.510_Linux.rpm

Readme file
                          * * * READ ME * * *
                       * * * InfoScale 8.0 * * *
                         * * * Patch 3200 * * *
                         Patch Date: 2024-03-28


This document provides the following information:

   * PATCH NAME
   * OPERATING SYSTEMS SUPPORTED BY THE PATCH
   * PACKAGES AFFECTED BY THE PATCH
   * BASE PRODUCT VERSIONS FOR THE PATCH
   * SUMMARY OF INCIDENTS FIXED BY THE PATCH
   * DETAILS OF INCIDENTS FIXED BY THE PATCH
   * INSTALLATION PRE-REQUISITES
   * INSTALLING THE PATCH
   * REMOVING THE PATCH
   * KNOWN ISSUES


PATCH NAME
----------
InfoScale 8.0 Patch 3200


OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
RHEL8 x86-64


PACKAGES AFFECTED BY THE PATCH
------------------------------
VRTSaslapm
VRTSllt
VRTSodm
VRTSsfmh
VRTSvxfs
VRTSvxvm


BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
   * InfoScale Availability 8.0
   * InfoScale Enterprise 8.0
   * InfoScale Foundation 8.0
   * InfoScale Storage 8.0


SUMMARY OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
Patch ID: VRTSvxfs-8.0.0.3200
* 4096656 (4090032) System might panic in vx_dev_strategy() during Sybase or Oracle configuration.
* 4119279 (4119281) Higher page-in requests on Solaris 11 SPARC.
* 4126124 (4121691) Unknown messages in /var/log/messages regarding VxFS rules file (/etc/udev/rules.d/60-vxca.rules) for udev
* 4136241 (4136110) The "umount -l" command unmounts mount points even after mntlock is set, on SLES12 and SLES15.
* 4144027 (4126957) System crashes with VxFS stack.
* 4144029 (4137040) System hang observed.
* 4144042 (4112056) Hitting assert "f:vx_vnode_deinit:1" during in-house FS testing.
* 4144043 (4126943) Create lost+found directory in VxFS file system with default ACL permissions as 700.
* 4144059 (4134661) Hang seen in the cp command in case of checkpoint promote in cluster filesystem environment.
* 4144060 (4132435) Failures seen in FSQA cmds->fsck tests, panic in get_dotdotlst
* 4144061 (4092440) fsppadm returns code 0 (success) even when policy enforcement fails.
* 4144063 (4116887) Running fsck -y on a large metasave with many hardlinks consumes a huge amount of system memory.
* 4144074 (4117342) System might panic due to hard lock up detected on CPU
* 4144082 (4134194) vxfs/glm worker thread panic with kernel NULL pointer dereference
* 4144086 (4136235) Includes module parameter for changing pnlct merge frequency.
* 4144117 (4099740) UX:vxfs mount: ERROR: V-3-21264: <device> is already mounted, <mount-point> is busy,
                 or the allowable number of mount points has been exceeded.
* 4144119 (4134884) Unable to deport Diskgroup. Volume or plex device is open or attached
* 4145797 (4145203) Invoking veki through systemctl inside vxfs-startup script.
* 4149436 (4141665) Security vulnerabilities exist in the Zlib third-party components used by VxFS.
Patch ID: VRTSvxfs-8.0.0.3100
* 4154855 (4141665) Security vulnerabilities exist in the Zlib third-party components used by VxFS.
Patch ID: VRTSvxfs-8.0.0.2900
* 4092518 (4096267) Veritas File Replication jobs might fail when a large number of jobs run in parallel.
* 4097466 (4114176) After failover, job sync fails with error "Device or resource busy".
* 4107367 (4108955) VFR job hangs on source if thread creation fails on target.
* 4111457 (4117827) For EO compliance, three types of log file permissions must be supported: 600 (default), 640, and 644. A new eo_perm tunable is added to the vxtunefs command to manage the log file permissions.
* 4112417 (4094326) mdb invocation displays message "failed to add vx_sl_node_level walker: walk name already in use"
* 4118795 (4100021) Running setfacl followed by getfacl resulting in "No such device or address" error.
* 4119023 (4116329) While checking FS sanity with the "fsck -o full -n" command, fsck tried to correct the FS flag value (WORM/SoftWORM) but failed because the -n (read-only) option was given.
* 4123143 (4123144) fsck binary generating coredump
Patch ID: VRTSvxfs-8.0.0.2700
* 4113911 (4113121) VXFS support for RHEL 8.8.
* 4114019 (4067505) invalid VX_AF_OVERLAY aflags error in fsck
* 4114020 (4083056) Hang observed while punching the smaller hole over the bigger hole.
* 4114021 (4101634) Directory inode getting incorrect file-type error in fsck.
Patch ID: VRTSvxfs-8.0.0.2600
* 4114654 (4114652) VXFS support for RHEL 8.7 minor kernel 4.18.0-425.19.2.
Patch ID: VRTSvxfs-8.0.0.2300
* 4108381 (4107777) VxFS support for RHEL 8.7 minor kernel.
Patch ID: VRTSvxfs-8.0.0.2200
* 4100925 (4100926) VxFS module failed to load on RHEL8.7
Patch ID: VRTSvxfs-8.0.0.2100
* 4095889 (4095888) Security vulnerabilities exist in the Sqlite third-party components used by VxFS.
Patch ID: VRTSvxfs-8.0.0.1800
* 4068960 (4073203) Veritas file replication might generate a core while replicating the files to target.
* 4071108 (3988752) Use ldi_strategy() routine instead of bdev_strategy() for IO's in solaris.
* 4072228 (4037035) VxFS should have the ability to control the number of inactive processing threads.
* 4078335 (4076412) Addresses the initial requirements of Executive Order (EO) 14028, which is intended to improve the Federal Government's investigative and remediation capabilities related to cybersecurity incidents.
* 4078520 (4058444) Loop mounts using files on VxFS fail on Linux systems.
* 4079142 (4077766) VxFS kernel module might leak memory during readahead of directory blocks.
* 4079173 (4070217) Command fsck might fail with 'cluster reservation failed for volume' message for a disabled cluster-mounted filesystem.
* 4082260 (4070814) Security Vulnerability observed in Zlib a third party component VxFS uses.
* 4082865 (4079622) Existing migration handling of the read/write iter operations is not fully functional because VxFS uses only the normal read/write file operations.
* 4083335 (4076098) Fix migration issues seen with falcon-sensor.
* 4085623 (4085624) While running fsck, fsck might dump core.
* 4085839 (4085838) Command fsck may generate core due to processing of zero size attribute inode.
* 4086085 (4086084) VxFS mount operation causes system panic.
* 4088341 (4065575) Write operation might be unresponsive on a local mounted VxFS filesystem in a no-space condition
Patch ID: VRTSvxfs-8.0.0.1700
* 4081150 (4079869) Security Vulnerability in VxFS third party components
* 4083948 (4070814) Security Vulnerability in VxFS third party component Zlib
Patch ID: VRTSvxfs-8.0.0.1200
* 4055808 (4062971) Enable partition directory on WORM file system
* 4056684 (4056682) Information about new features on a file system is not displayed by fsadm (the file system administration utility) when run against the underlying device.
* 4062606 (4062605) Minimum retention time cannot be set if the maximum retention time is not set.
* 4065565 (4065669) Creating non-WORM checkpoints fails when the tunables - minimum retention time and maximum retention time are set.
* 4065651 (4065666) Enable partition directory on WORM file system having WORM enabled on files with retention period not expired.
Patch ID: VRTSvxfs-8.0.0.1100
* 4061114 (4052883) VxFS support for RHEL 8.5.
Patch ID: VRTSsfmh-HF0800510
* 4157270 (4157265) sfmh for IS 8.0 RHEL8.9 Platform Patch
Patch ID: VRTSllt-8.0.0.2600
* 4135413 (4084657) RHEL8/7.4.1 new installation, fencing/LLT panic while using TCP over LLT.
* 4135420 (3989372) When the CPU load and memory consumption is high in a VMware environment, some nodes in an InfoScale cluster may get fenced out.
* 4136152 (4124759) Panic happened with llt_ioship_recv on a server running in AWS.
* 4145248 (4139781) Unexpected or corrupted skb, memory type missing in buffer header.
* 4156794 (4135825) Once the root file system becomes full during LLT start, the LLT module fails to load forever.
* 4156815 (4087543) Node panic observed at llt_rdma_process_ack+189
* 4156824 (4138779) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 Update 9(RHEL8.9).
Patch ID: VRTSllt-8.0.0.2400
* 4116421 (4113340) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 
Update 8(RHEL8.8).
Patch ID: VRTSllt-8.0.0.2200
* 4108947 (4107779) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 
Update 7 on latest minor kernel 4.18.0-425.10.1.el8_7.x86_64(RHEL8.7).
Patch ID: VRTSllt-8.0.0.2100
* 4101232 (4100203) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 
Update 7(RHEL8.7).
Patch ID: VRTSllt-8.0.0.1800
* 4061158 (4061156) IO error on /sys/kernel/slab folder
* 4079637 (4079636) LLT over IPsec is causing node crash
* 4079662 (3981917) Support LLT UDP multiport on 1500 MTU based networks.
* 4080630 (4046953) Delete / disable confusing messages.
Patch ID: VRTSllt-8.0.0.1500
* 4073695 (4072335) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 
Update 6(RHEL8.6).
Patch ID: VRTSllt-8.0.0.1200
* 4066063 (4066062) Node panic is observed while using llt udp with multiport enabled.
* 4066667 (4040261) During LLT configuration, if set-verbose is set to 1 in /etc/llttab, an lltconfig core dump is observed.
Patch ID: VRTSllt-8.0.0.1100
* 4064783 (4053171) Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 
Update 5(RHEL8.5).
Patch ID: VRTSvxvm-8.0.0.2800
* 4093140 (4093067) System panic occurs because of NULL pointer in block device structure.
* 4130643 (4130642) A node failed to rejoin the cluster after switching from master to slave, due to the failure of the replicated diskgroup import.
* 4132798 (4132799) No detailed error messages when a CVM join fails.
* 4134024 (4134023) vxconfigrestore (diskgroup configuration restoration) for a H/W replicated diskgroup failed.
* 4134791 (4134790) Hardware Replicated DG was marked with clone flag on SLAVEs.
* 4146456 (4128351) A system hang was observed when switching the log owner.
* 4146458 (4122061) A hang was observed after a resync operation; vxconfigd was waiting for the slaves' response.
* 4146462 (4087628) CVM goes into faulted state when the slave node of the primary is rebooted.
* 4146472 (4115078) A vxconfigd hang was observed when all nodes of the primary site were rebooted.
* 4149249 (4149248) Security vulnerabilities have been discovered in third-party components (OpenSSL, Curl, and libxml) employed by VxVM.
* 4149423 (4145063) An "unknown symbol" message is logged in syslog while inserting the vxio module.
* 4156836 (4152014) The excluded dmpnodes are visible after a system reboot when SELinux is disabled.
* 4156837 (4134790) Hardware Replicated DG was marked with clone flag on SLAVEs.
* 4156839 (4077944) In a VVR environment, application I/O operations may hang.
* 4156841 (4142054) The primary master panicked with a TED assert during the run.
Patch ID: VRTSaslapm 8.0.0.2800
* 4158230 (4158234) ASLAPM rpm Support on RHEL 8.9
Patch ID: VRTSvxvm-8.0.0.2700
* 4154821 (4149248) Security vulnerabilities have been discovered in third-party components (OpenSSL, Curl, and libxml) employed by VxVM.
Patch ID: VRTSvxvm-8.0.0.2600
* 4067237 (4058894) Messages in /var/log/messages regarding "ignore_device".
* 4109554 (4105953) System panic because VVR accessed a NULL pointer.
* 4111442 (4066785) A new option, usereplicatedev=only, is created to import only the replicated LUNs.
* 4112549 (4112701) Nodes stuck in a reconfig hang, and vxconfigd dumped core, after all nodes were rebooted with a delay of 5 minutes between them.
* 4113310 (4114601) Panic: in dmp_process_errbp() for disk pull scenario.
* 4113357 (4112433) Security vulnerabilities exist in third-party components [openssl, curl and libxml].
* 4114963 (4114962) [NBFS-3.1][DL]:MASTER and CAT_FS got corrupted while performing multiple NVMEs failure
* 4115251 (4115195) [NBFS-3.1][DL]:MEDIA_FS got corrupted after panic loop test
* 4115252 (4115193) Data corruption observed after the node fault and cluster restart in DR environment
* 4115381 (4091783) build script and mb.sh changes in unixvm for integration of storageapi
* 4116548 (4111254) vradmind dumps core while associating a rlink to rvg because of NULL pointer reference.
* 4116551 (4108913) Vradmind dumps core because of memory corruption.
* 4116557 (4085404) Huge perf drop after Veritas Volume Replicator (VVR) entered Data Change Map (DCM) mode, when a large size of Storage Replicator Log (SRL) is configured.
* 4116559 (4091076) SRL gets into pass-thru mode because of head error.
* 4116562 (4114257) Observed IO hung and high system load average after rebooted master and one slave node rejoins cluster.
* 4116565 (4034741) The current fix that limits IO load on the secondary causes a deadlock situation.
* 4116567 (4072862) Stop cluster hang because of RVGLogowner and CVMClus resources fail to offline.
* 4117110 (4113841) VVR panic in replication connection handshake request from network scan tool.
* 4118108 (4114867) systemd-udevd[2224]: invalid key/value pair in file /etc/udev/rules.d/41-VxVM-selinux.rules on line 20, starting at character 103 ('D')
* 4118111 (4065490) VxVM udev rules consumes more CPU and appears in "top" output when system has thousands of storage devices attached.
* 4118733 (4106689) Solaris Zones cannot be started due to Method "/lib/svc/method/fs-local" failed with exit status 95
* 4118845 (4116024) Machine panic due to accessing an illegal address.
* 4119087 (4067191) IS8.0_SUSE15_CVR: After a slave node was rebooted, the master node panicked.
* 4119257 (4090772) vxconfigd/vx commands hung if fdisk opened secondary volume and secondary logowner panic'd
* 4119276 (4090943) VVR Primary RLink cannot connect as secondary reports SRL log is full.
* 4119438 (4117985) EC volume corruption due to lockless access of FPU
* 4120350 (4120878) After enabling the dmp_native_support, system failed to boot.
* 4121241 (4114927) Failed to mount /boot on dmp device after enabling dmp_native_support.
* 4121714 (4081740) The vxdg flush command is slow because too many LUNs needlessly access /proc/partitions.
Patch ID: VRTSvxvm-8.0.0.2400
* 4110560 (4104927) Changing the attributes in vxvm-boot.service for SLES15 is causing regression in RHEL versions.
* 4113324 (4113323) VxVM Support on RHEL 8.8
* 4113661 (4091076) SRL gets into pass-thru mode because of head error.
* 4113663 (4095163) system panic due to a race freeing VVR update.
* 4113664 (4091390) vradmind service has dump core and stopped on few nodes
* 4113666 (4064772) After enabling slub debug, system could hang with IO load.
Patch ID: VRTSvxvm-8.0.0.2200
* 4058590 (4058867) VxVM rpm Support on RHEL 8.7 minor kernel 4.18.0-425.10.1.el8_7.x86_64
* 4108392 (4107802) Fix for calculating best-fit module for upcoming RHEL8.7 minor kernels (higher than 4.18.0-425.10.1.el8_7.x86_64).
Patch ID: VRTSaslapm-8.0.0.2200
* 4108933 (4107932) ASLAPM rpm Support on RHEL 8.7 minor kernel 4.18.0-425.10.1.el8_7.x86_64
Patch ID: VRTSvxvm-8.0.0.2100
* 4102502 (4102501) A security vulnerability exists in the third-party component libcurl.
Patch ID: VRTSaslapm-8.0.0.2100
* 4102502 (4102501) A security vulnerability exists in the third-party component libcurl.
Patch ID: VRTSvxvm-8.0.0.1900
* 4102924 (4101128) VxVM rpm Support on RHEL 8.7 kernel
Patch ID: VRTSaslapm-8.0.0.1900
* 4102973 (4101139) ASLAPM rpm Support on RHEL 8.7 kernel
Patch ID: VRTSvxvm-8.0.0.1800
* 4067609 (4058464) vradmin resizevol fails when FS is not mounted on master.
* 4067635 (4059982) vradmind need not check for rlink connect during migrate.
* 4070098 (4071345) Unplanned fallback synchronisation is unresponsive
* 4078531 (4075860) The tutil/putil rebalance flag is not cleared during addition of 4 or more nodes.
* 4079345 (4069940) FS mount failed during Cluster configuration on 24-node physical HP BOM2 setup.
* 4080041 (4056953) 3PAR PE LUNs are reported in error state by 3PAR ASL.
* 4080105 (4045837) Sub disks are in relocate state after exceed fault slave node panic.
* 4080122 (4044068) After disk replacement, the Replace Node operation failed at the Configure Netbackup stage.
* 4080269 (4044898) Copy rlink tags from reprec to info rec, through vxdg upgrade path.
* 4080276 (4065145) multivolume and vset not able to overwrite encryption tags on secondary.
* 4080277 (3966157) SRL batching feature is broken
* 4080579 (4077876) The system crashed when EC log replay was in progress after a node reboot.
* 4080845 (4058166) Increase DCM log size based on volume size without exceeding the region size limit of 4 MB.
* 4080846 (4058437) Replication between 8.0 and 7.4.x fails due to sector size field.
* 4081790 (4080373) SFCFSHA configuration failed on RHEL 8.4.
* 4083337 (4081890) On RHEL8 NBFS/Access, commands like python3, sort, sudo, ssh, etc. generate core dumps during execution of the mkfs.vxfs and mkfs.ext4 commands.
* 4085619 (4086718) VxVM modules fail to load with latest minor kernel of SLES15SP2
* 4087233 (4086856) For Appliance FLEX product using VRTSdocker-plugin, need to add platform-specific dependencies service ( docker.service and podman.service ) change.
* 4087439 (4088934) Kernel Panic while running LM/CFS CONFORMANCE - variant (SLES15SP3)
* 4087791 (4087770) NBFS: Data corruption due to skipped full-resync of detached mirrors of volume after DCO repair operation
* 4088076 (4054685) In a CVR environment, RVG recovery hangs on Linux platforms.
* 4088483 (4088484) Failed to load DMP_APM NVME modules
* 4088762 (4087099) DG is not imported after upgrade to InfoScale 8.0u1 on RHEL8.6.
Patch ID: VRTSaslapm-8.0.0.1800
* 4080041 (4056953) 3PAR PE LUNs are reported in error state by 3PAR ASL.
* 4088762 (4087099) DG is not imported after upgrade to InfoScale 8.0u1 on RHEL8.6.
Patch ID: VRTSvxvm-8.0.0.1700
* 4081684 (4082799) A security vulnerability exists in the third-party component libcurl.
Patch ID: VRTSvxvm-8.0.0.1600
* 4057420 (4060462) Nidmap information is not cleared after a node leaves, resulting in add node failure subsequently.
* 4062799 (4064208) Node failed to join the existing cluster after bits are upgraded to a newer version.
* 4065841 (4065495) Add support for DELL EMC PowerStore.
* 4066213 (4052580) Supporting multipathing for NVMe devices under VxVM.
* 4068407 (4068404) ASL request for HPE 3PAR/Primera/Alletra 9000 ALUA support.
Patch ID: VRTSaslapm-8.0.0.1600
* 4065841 (4065495) Add support for DELL EMC PowerStore.
* 4068407 (4068404) ASL request for HPE 3PAR/Primera/Alletra 9000 ALUA support.
Patch ID: VRTSvxvm-8.0.0.1200
* 4066259 (4062576) hastop -local never finishes on RHEL8.4 and RHEL8.5 servers with latest minor kernels due to a hang in the vxdg deport command.
Patch ID: VRTSvxvm-8.0.0.1100
* 4064786 (4053230) VxVM support for RHEL 8.5
* 4065628 (4065627) VxVM modules failed to load after OS upgrade.
Patch ID: VRTSodm-8.0.0.3200
* 4144128 (4126256) A "no symbol version" warning for VEKI symbols appears in dmesg after SFCFSHA configuration.
* 4144301 (4118154) System may panic in simple_unlock_mem() when errcheckdetail is enabled.
Patch ID: VRTSodm-8.0.0.3100
* 4154894 (4144269) After installing VRTSvxfs ODM fails to start.
Patch ID: VRTSodm-8.0.0.2900
* 4057432 (4056673) Rebooting the system results in emergency mode due to corruption of module dependency files. Incorrect vxgms dependency in the odm service file.
Patch ID: VRTSodm-8.0.0.2700
* 4113912 (4113118) ODM support for RHEL 8.8.
Patch ID: VRTSodm-8.0.0.2600
* 4114656 (4114655) ODM support for RHEL 8.7 minor kernel 4.18.0-425.19.2.
Patch ID: VRTSodm-8.0.0.2300
* 4108585 (4107778) ODM support for RHEL 8.7 minor kernel.
Patch ID: VRTSodm-8.0.0.2200
* 4100923 (4100922) ODM module failed to load on RHEL8.7


DETAILS OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
This patch fixes the following incidents:

Patch ID: VRTSvxfs-8.0.0.3200

* 4096656 (Tracking ID: 4090032)

SYMPTOM:
System might panic in vx_dev_strategy() during Sybase or Oracle configuration;
the panic stack looks like the following:
vx_dev_strategy
vx_dio_physio
vx_dio_rdwri
vx_write_direct
vx_write1
vx_write_common_slow
vx_write_common
vx_write
fop_write
pwrite

DESCRIPTION:
When a different buffer is allocated, vx_dev_strategy() is unable to find the LDI handle.

RESOLUTION:
Code is modified to fix this issue.

* 4119279 (Tracking ID: 4119281)

SYMPTOM:
Higher page-in requests on Solaris 11 SPARC.

DESCRIPTION:
After upgrading Infoscale, page-in requests are much higher. "vmstat" output looks normal but "sar" output looks abnormal (showing high page-in requests).
"sar" is taking absolute samples for some reason; "sar" is not supposed to use these values.

RESOLUTION:
Code changes are done to solve this issue.

* 4126124 (Tracking ID: 4121691)

SYMPTOM:
Unknown messages regarding udev rules for ignore_device are observed in /var/log/messages:
systemd-udevd[841]: Configuration file /etc/udev/rules.d/60-vxca.rules is marked executable. Please remove executable permission bits. Proceeding anyway.

DESCRIPTION:
The VxFS udev rules file (/etc/udev/rules.d/60-vxca.rules) is installed with its executable permission bits set, which causes systemd-udevd to log the warning shown above.

RESOLUTION:
Required changes have been done to handle this defect.

* 4136241 (Tracking ID: 4136110)

SYMPTOM:
cmd "umount -l" is unmounting mount points even after adding mntlock in sles12 and sles15.

DESCRIPTION:
cmd "umount -l" is unmounting mount points even after adding mntlock in sles12 and sles15 due to missing umount.vxfs binary in /sbin directory.

RESOLUTION:
Code changes have been done to install the umount.vxfs binary in the /sbin directory on SLES12 and SLES15.
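
The mntlock protection involved here can be exercised as follows; the lock string and mount point are illustrative:

   /opt/VRTS/bin/fsadm -o mntlock=app1 /mnt1      # lock the mount point
   umount -l /mnt1                                # refused while locked
   /opt/VRTS/bin/fsadm -o mntunlock=app1 /mnt1    # release the lock
   umount /mnt1                                   # unmount now proceeds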

* 4144027 (Tracking ID: 4126957)

SYMPTOM:
If "fsadm -o mntunlock=<string> <mountpoint>" and "umount -f <mountpoint>" operations are run in parallel,
system may crash with following stack:

 vx_aioctl_unsetmntlock+0xd3/0x2a0 [vxfs]
 vx_aioctl_vfs+0x256/0x2d0 [vxfs]
 vx_admin_ioctl+0x156/0x2f0 [vxfs]
 vxportalunlockedkioctl+0x529/0x660 [vxportal]
 do_vfs_ioctl+0xa4/0x690
 ksys_ioctl+0x64/0xa0
 __x64_sys_ioctl+0x16/0x20
 do_syscall_64+0x5b/0x1b0

DESCRIPTION:
There is a race condition between these two operations. By the time the fsadm thread tries to access
the FS data structures, the umount operation may have already freed them, which leads to the panic.

RESOLUTION:
As a fix, the fsadm thread first checks if the umount operation is in progress. If so, it fails rather than continuing.

* 4144029 (Tracking ID: 4137040)

SYMPTOM:
The system hung due to a missing unlock on a file directory. This issue can be hit if a lot of mv(1) operations happen against one large VxFS directory.

DESCRIPTION:
In a large VxFS directory, LDH (alternate indexing) is activated once the number of directory entries crosses the large directory threshold (vx_dexh_sz). LDH creates a hash attribute inode in the main directory. The exclusive lock of this LDH inode is required during file system rename operations (mv(1)). In case of multiple rename operations happening against one large directory, the trylock of the LDH inode may fail due to contention, and VX_EDIRLOCK is returned.

In case of VX_EDIRLOCK, VxFS should release the exclusive lock of the source directory and update the locker list, then retry the operation. However, VxFS wrongly releases the exclusive lock of the target directory instead of the source, and doesn't update the locker list. During the retry operation, although it happens to release the right lock (target equals source if the rename happens within the same directory), the locker list isn't updated; this locker record remains in the locker list, and consequently the same lock is never released because of this extra record.

RESOLUTION:
Release the source directory lock instead of the target, and update the locker list accordingly.

* 4144042 (Tracking ID: 4112056)

SYMPTOM:
The inode fields i_acl and i_default_acl have an incorrect value of 0; the expected value is ACL_NOT_CACHED (-1).

DESCRIPTION:
VxFS does not set the get_acl() callback in inode_operations (i_op). Hence, whenever the kernel (version 4.x and above) checks for the presence of this callback and does not
find it, it sets the i_acl and i_default_acl fields to 0.

RESOLUTION:
Corrected the bug with code changes.

* 4144043 (Tracking ID: 4126943)

SYMPTOM:
Create lost+found directory in VxFS file system with default ACL permissions as 700.

DESCRIPTION:
For security reasons, there was an ask to create the lost+found directory in the VxFS file system with default ACL permissions of 700, so that, except for root, no other users are able to access files under the lost+found directory.

RESOLUTION:
VxFS file system creation with the mkfs command now results in creation of the lost+found directory with default ACL permissions of 700.
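
A quick way to verify the new default, reusing the example device names that appear elsewhere in this readme:

   mkfs -t vxfs /dev/vx/rdsk/testdg/vol1
   mount -t vxfs /dev/vx/dsk/testdg/vol1 /mnt1
   ls -ld /mnt1/lost+found      # permissions should now be drwx------ (700)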

* 4144059 (Tracking ID: 4134661)

SYMPTOM:
Hang seen in the cp command in case of checkpoint promote in cluster filesystem environment.

DESCRIPTION:
The hang is seen in the cp command because inode blocks marked as overlay could not be pulled.

RESOLUTION:
Code changes are made to pull the inode blocks marked as overlay.

* 4144060 (Tracking ID: 4132435)

SYMPTOM:
Failures seen in FSQA cmds->fsck tests, panic in get_dotdotlst

DESCRIPTION:
The inode getting processed in pass_unload->clean_dotdotlst()
was not in the incore imap table, so its related dotdot list was not created.
Because the dotdot list was not initialized, a null pointer
dereference was hit in clean_dotdotlst, hence the panic.

RESOLUTION:
Code changes are done to check for inode allocation status 
in incore imap table while cleaning the dotdot list.

* 4144061 (Tracking ID: 4092440)

SYMPTOM:
# /opt/VRTS/bin/fsppadm enforce /mnt4
UX:vxfs fsppadm: ERROR: V-3-27988: Placement policy file does not exist for mount point /mnt4: No such file or directory
# echo $?
0

DESCRIPTION:
The fsppadm command was returning rc 0 even in case of an error during policy enforcement.

RESOLUTION:
Fixed the issue by code change.
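
With the fix, the failure shown in the SYMPTOM section is reflected in the exit status; the exact non-zero value may differ:

   # /opt/VRTS/bin/fsppadm enforce /mnt4
   UX:vxfs fsppadm: ERROR: V-3-27988: Placement policy file does not exist for mount point /mnt4: No such file or directory
   # echo $?
   1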

* 4144063 (Tracking ID: 4116887)

SYMPTOM:
Running fsck -y on large size metasave with lots of hardlinks is consuming huge amount of system memory.

DESCRIPTION:
A FS with a lot of hardlinks requires a lot of memory for storing dotdot information in memory.
Pass1d populates this dotdot linklist but never frees the space. During the whole fsck run,
if some change in the structural files is required, fsck rebuilds. Every rebuild
adds to the already consumed memory, and this way the total memory consumption becomes huge.

RESOLUTION:
Code changes are done to free the dotdot list.

* 4144074 (Tracking ID: 4117342)

SYMPTOM:
System might panic due to hard lock up detected on CPU

DESCRIPTION:
When purging the dentries, there is a possible race which can
lead to a corrupted vnode flag. Because of this corrupted flag,
vxfs tries to purge the dentry again and gets stuck on the vnode lock
which was taken in the current thread context, leading to a
deadlock/softlockup.

RESOLUTION:
Code is modified to protect vnode flag with vnode lock.

* 4144082 (Tracking ID: 4134194)

SYMPTOM:
vxfs/glm worker thread panic with kernel NULL pointer dereference

DESCRIPTION:
In vx_dir_realloc(), when the directory block is full, the directory block is reallocated into a larger extent to fit a new file entry.
As the new extent gets allocated, the old cbuf becomes part of the new extent.
But the old cbuf is not invalidated during dir_realloc, which leaves a stale cbuf in the cache.
This stale buffer can cause a buffer overflow issue.

RESOLUTION:
Code changes are done to invalidate the cbuf immediately after the realloc.

* 4144086 (Tracking ID: 4136235)

SYMPTOM:
System with a higher number of attribute inodes and pnlct inodes may see a higher number of IOs on an idle system.

DESCRIPTION:
System with a higher number of attribute inodes and pnlct inodes may see a higher number of IOs on an idle CFS. Hence, reducing the pnlct merge frequency may show
some performance improvement.

RESOLUTION:
A module parameter is added to change the pnlct merge frequency.
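
This readme does not name the new module parameter. As an illustration only, the sketch below shows how a vxfs module parameter is set persistently on Linux, using a hypothetical parameter name pnlct_merge_freq:

   # /etc/modprobe.d/vxfs.conf -- the parameter name below is hypothetical
   options vxfs pnlct_merge_freq=600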

* 4144117 (Tracking ID: 4099740)

SYMPTOM:
While mounting a file system, the mount fails with an EBUSY error, although the same device does not appear as "mounted" on the system.

DESCRIPTION:
While mounting a filesystem, if an error is encountered in kernel space, a hold count on the block device is leaked. This falsely implies to any future mount that the block device is still open. Because of that, when the user retries the mount, it fails with EBUSY. It also causes a memory leak for the same reason.

RESOLUTION:
Code changes are done to release the hold count on the block device properly.

* 4144119 (Tracking ID: 4134884)

SYMPTOM:
After unmounting the FS, when the diskgroup deport is initiated, it gives the below error:
vxvm:vxconfigd: V-5-1-16251 Disk group deport of testdg failed with error 70 - Volume or plex device is open or attached

DESCRIPTION:
During the mount of a dirty file system, a vxvm device open count is leaked, and consequently the deport of the vxvm DG fails.
During the VxFS mount operation, the corresponding vxvm device is opened.
If the FS is not clean, mount performs a log replay; after the log replay completes, the mount succeeds.
But this device open count leak causes the diskgroup deport to fail.

RESOLUTION:
Code changes are done to address the device open count leak.
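
With the fix, a typical sequence (names are illustrative) no longer hits the V-5-1-16251 error, even when the file system needed a log replay at mount time:

   mount -t vxfs /dev/vx/dsk/testdg/vol1 /mnt1    # dirty FS: log replay runs
   umount /mnt1
   vxdg deport testdg                             # deport now succeeds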

* 4145797 (Tracking ID: 4145203)

SYMPTOM:
The vxfs startup script fails to invoke veki for kernel versions higher than 3.

DESCRIPTION:
The vxfs startup script failed to start veki, as it was calling the System V init script to start veki instead of the systemctl interface.

RESOLUTION:
The code now checks whether the kernel version is greater than 3.x; if systemd is present, the systemctl interface is used, otherwise the System V interface is used.
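
A minimal sketch of the selection logic described above; the veki unit and init script names are assumptions, not the actual startup script:

   # use systemctl on systemd systems with kernel major version > 3,
   # otherwise fall back to the System V init script
   kmajor=$(uname -r | cut -d. -f1)
   if [ "$kmajor" -gt 3 ] && [ -d /run/systemd/system ]; then
       systemctl start veki
   else
       /etc/init.d/veki start
   fi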

* 4149436 (Tracking ID: 4141665)

SYMPTOM:
Security vulnerabilities exist in the Zlib third-party components used by VxFS.

DESCRIPTION:
VxFS uses Zlib third-party components with some security vulnerabilities.

RESOLUTION:
VxFS is updated to use a newer version of Zlib third-party components in which the security vulnerabilities have been addressed.

Patch ID: VRTSvxfs-8.0.0.3100

* 4154855 (Tracking ID: 4141665)

SYMPTOM:
Security vulnerabilities exist in the Zlib third-party components used by VxFS.

DESCRIPTION:
VxFS uses Zlib third-party components with some security vulnerabilities.

RESOLUTION:
VxFS is updated to use a newer version of Zlib third-party components in which the security vulnerabilities have been addressed.

Patch ID: VRTSvxfs-8.0.0.2900

* 4092518 (Tracking ID: 4096267)

SYMPTOM:
Veritas File Replication jobs might fail when a large number of jobs run in parallel.

DESCRIPTION:
File replication jobs might fail when a large number of jobs are configured and run in parallel with Veritas File Replication. With a large number of jobs, there is a chance of referring to a job which has already been freed, due to which a core is generated by the replication service and the job might fail.

RESOLUTION:
Code is updated to take a hold on the job while checking for an invalid job configuration.

* 4097466 (Tracking ID: 4114176)

SYMPTOM:
After failover, job sync fails with error "Device or resource busy".

DESCRIPTION:
If a job is in the failed state on the target because of a job failure on the source side, repld was not updating the job's state when it was restarted in recovery mode. Because of this, the job state remained "running" even after successful replication on the target. With this state on the target, if the job was promoted, the replication process did not create a new ckpt for the first sync after failover, which corrupted the state file on the new source. Because of this incorrect/corrupt state file, job sync from the new source was failing with the error "Device or resource busy".

RESOLUTION:
Code is modified to correct the state on the target when the job is started in recovery mode.

* 4107367 (Tracking ID: 4108955)

SYMPTOM:
VFR job hangs on source if thread creation fails on target.

DESCRIPTION:
On the target, if thread creation for pass completion fails because of high memory usage, the repld daemon doesn't send that failure reply to the source. This can leave the vxfsreplicate process waiting indefinitely for the pass-completion reply from the target, leading to a job hang on the source that needs manual intervention to kill the job.

RESOLUTION:
Code is modified to retry thread creation on the target; if it still fails after 5 retries, the target replies to the source with an appropriate error.

* 4111457 (Tracking ID: 4117827)

SYMPTOM:
Without a tunable change, the log file permission will always be 600 (EO compliant).

DESCRIPTION:
Tunable values and behavior:

   Value        Behavior
   -----        --------
   0 (default)  600 permissions; update existing file permissions on upgrade
   1            640 permissions; update existing file permissions on upgrade
   2            644 permissions; update existing file permissions on upgrade
   3            Inherit umask; update existing file permissions on upgrade
   10           600 permissions; don't touch existing file permissions on upgrade
   11           640 permissions; don't touch existing file permissions on upgrade
   12           644 permissions; don't touch existing file permissions on upgrade
   13           Inherit umask; don't touch existing file permissions on upgrade

The new tunable is added as part of the vxtunefs command and is a per-node global tunable (not per filesystem). For the Executive Order, CPI will have a workflow to update the tunable during installation/upgrade/configuration, which takes care of updating it on all nodes.

RESOLUTION:
New tunable is added to vxtunefs command. How to set tunable: /opt/VRTS/bin/vxtunefs -D eo_perm=1
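
For example, to switch new log files to 640 permissions and verify on a subsequently created log file (the path follows the VFR log naming convention noted later in this readme; names and output are illustrative):

   /opt/VRTS/bin/vxtunefs -D eo_perm=1
   stat -c '%a' /var/VRTSvxfs/replication/log/mnt1-job1.log
   640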

* 4112417 (Tracking ID: 4094326)

SYMPTOM:
mdb invocation displays message "failed to add vx_sl_node_level walker: walk name already in use"

DESCRIPTION:
In vx_sl_kmcache_init(), a kmcache is initialized for each level (in this case 8) separately. For passing the cache name as an argument to kmem_cache_create(), a macro is used:

   #define VX_SL_KMCACHE_NAME(level) "vx_sl_node_"#level
   #define VX_SL_KMCACHE_CREATE(level) \
       kmem_cache_create(VX_SL_KMCACHE_NAME(level), \
           VX_KMEM_SIZE(VX_SL_KMCACHE_SIZE(level)),\
           0, NULL, NULL, NULL, NULL, NULL, 0);

While using this macro, "level" was passed as a variable, so the stringizing operator expanded the cache name to the literal "vx_sl_node_level" for all 8 levels of the `for` loop. This caused the caches for all 8 levels to be allocated with the same name.

RESOLUTION:
A separate literal value (the level value) is passed to VX_SL_KMCACHE_NAME, as is done in vx_wb_sl_kmcache_init().

* 4118795 (Tracking ID: 4100021)

SYMPTOM:
Running setfacl followed by getfacl resulting in "No such device or address" error.

DESCRIPTION:
When the setfacl command is run on some of the directories which have the VX_ATTR_INDIRECT type of acl attribute, the existing acl attribute is not removed before a new one is added, which ideally should not happen. This results in the failure of getfacl with the "No such device or address" error.

RESOLUTION:
Code changes have been done to remove the VX_ATTR_INDIRECT type acl in the setfacl code path.

* 4119023 (Tracking ID: 4116329)

SYMPTOM:
fsck -o full -n command will fail with error: "ERROR: V-3-28446: bc_write failure devid = 0, bno = 8, len = 1024"

DESCRIPTION:
Previously, while correcting the file system WORM/SoftWORM flags, fsck did not check whether the user wanted to correct the pflags or just wanted to validate whether the flag value is missing. Also, fsck was not capable of handling the SOFTWORM flag.

RESOLUTION:
Code is added to not attempt the fix if the user ran fsck with the -n option. The SOFTWORM scenario is also handled.
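
For reference, the two fsck modes involved, using the device name from the coredump example later in this readme:

   # report-only check: with the fix, fsck no longer attempts (and fails)
   # to rewrite the WORM/SoftWORM pflags in read-only mode
   fsck -t vxfs -o full -n /dev/vx/rdsk/testdg/vol1

   # full repair run: corrections, including pflags fixes, are applied
   fsck -t vxfs -o full -y /dev/vx/rdsk/testdg/vol1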

* 4123143 (Tracking ID: 4123144)

SYMPTOM:
fsck binary generating coredump

DESCRIPTION:
In internal testing, the fsck binary generated a coredump due to the below-mentioned assert when repairing a corrupted file system using the following command:
./fsck -o full -y /dev/vx/rdsk/testdg/vol1
ASSERT(fset >= VX_FSET_STRUCT_INDEX)

RESOLUTION:
Added code to set default (primary) fileset by scanning the fset header list.

Patch ID: VRTSvxfs-8.0.0.2700

* 4113911 (Tracking ID: 4113121)

SYMPTOM:
The VxFS module fails to load on RHEL8.8.

DESCRIPTION:
This issue occurs due to changes in the RHEL8.8 kernel.

RESOLUTION:
Updated VXFS to support RHEL 8.8.

* 4114019 (Tracking ID: 4067505)

SYMPTOM:
Fsck reports error invalid VX_AF_OVERLAY aflags

DESCRIPTION:
If the inode does not have push linkage (inode not allocated / inode and data already pushed), pushing the data blocks is skipped when the inode is removed. The inode will have overlay data blocks, gen bumped up, and IEREMOVE set. During extop processing, size is set to 0 and the bmap is cleared. This is a valid scenario. Fsck, while validating the inodes with the overlay flag set, expects gen to be different only if the overlay inode has IEREMOVE set and it is the last clone in the chain.

RESOLUTION:
If the push inode is not present, allow gen to be different even if the clone is not the last in the chain.

* 4114020 (Tracking ID: 4083056)

SYMPTOM:
Hang observed while punching the smaller hole over the bigger hole.

DESCRIPTION:
We observed the hang while punching a smaller hole over a bigger hole in the file, due to a tight race between processing the hole punch on the file and flushing it to disk.

RESOLUTION:
Code changes checked in.

* 4114021 (Tracking ID: 4101634)

SYMPTOM:
Fsck reports error directory block containing inode has incorrect file-type and directory contains invalid directory blocks.

DESCRIPTION:
While doing directory sanity checks in fsck, updating the new directory type on disk is skipped in case of a filetype error; hence fsck reports the incorrect file-type error and invalid directory blocks.

RESOLUTION:
While doing directory sanity checks in fsck, the new directory type is now updated on disk in case of a filetype error.

Patch ID: VRTSvxfs-8.0.0.2600

* 4114654 (Tracking ID: 4114652)

SYMPTOM:
The VxFS module fails to load on RHEL8.7 minor kernel 4.18.0-425.19.2.

DESCRIPTION:
This issue occurs due to changes in the RHEL8.7 minor kernel.

RESOLUTION:
Updated VXFS to support RHEL 8.7 minor kernel 4.18.0-425.19.2.

Patch ID: VRTSvxfs-8.0.0.2300

* 4108381 (Tracking ID: 4107777)

SYMPTOM:
The VxFS module fails to load on RHEL8.7 minor kernel.

DESCRIPTION:
This issue occurs due to changes in the RHEL8.7 minor kernel.

RESOLUTION:
Modified existing modinst-vxfs script to accommodate the changes in the kernel and load the correct module.

Patch ID: VRTSvxfs-8.0.0.2200

* 4100925 (Tracking ID: 4100926)

SYMPTOM:
VxFS module failed to load on RHEL8.7

DESCRIPTION:
RHEL8.7 is a new release, and it has some kernel changes which caused the VxFS module to fail to load on it.

RESOLUTION:
Added code to support VxFS on RHEL8.7.

Patch ID: VRTSvxfs-8.0.0.2100

* 4095889 (Tracking ID: 4095888)

SYMPTOM:
Security vulnerabilities exist in the Sqlite third-party components used by VxFS.

DESCRIPTION:
VxFS uses the Sqlite third-party components in which some security vulnerabilities exist.

RESOLUTION:
VxFS is updated to use a newer version of these third-party components in which the security vulnerabilities have been addressed.

Patch ID: VRTSvxfs-8.0.0.1800

* 4068960 (Tracking ID: 4073203)

SYMPTOM:
Veritas file replication might generate a core while replicating files to the target when rename and unlink operations are performed on a file with FCL (file change log) mode on.

DESCRIPTION:
The vxfsreplicate process of Veritas File Replicator might get a segmentation fault, with file change log mode on, when rename and unlink operations are performed on a file.

RESOLUTION:
Addressed the issue to replicate the files in scenarios involving rename and unlink operations with FCL mode on.

* 4071108 (Tracking ID: 3988752)

SYMPTOM:
Use ldi_strategy() routine instead of bdev_strategy() for IO's in solaris.

DESCRIPTION:
bdev_strategy() is deprecated in Solaris and was causing performance issues when used for IOs. Solaris has recommended using the LDI framework for all IOs.

RESOLUTION:
Code is modified to use the LDI framework for all IOs in Solaris.

* 4072228 (Tracking ID: 4037035)

SYMPTOM:
VxFS should have the ability to control the number of inactive processing threads.

DESCRIPTION:
VxFS may spawn a large number of worker threads that become inactive over time. As a result, heavy lock contention occurs during the removal of inactive threads on high-end servers.

RESOLUTION:
To avoid the contention, a new tunable, vx_ninact_proc_threads, is added. You can use vx_ninact_proc_threads to adjust the number of inactive processing threads based on your server configuration and workload.
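
This readme does not state how vx_ninact_proc_threads is applied. Under the assumption that it is exposed as a vxfs module parameter, a persistent setting would look like the following (the value is illustrative; consult the product documentation for the actual mechanism and recommended values):

   # assumed mechanism: vxfs module parameter set via modprobe configuration
   echo 'options vxfs vx_ninact_proc_threads=16' >> /etc/modprobe.d/vxfs.conf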

* 4078335 (Tracking ID: 4076412)

SYMPTOM:
Addressing Executive Order (EO) 14028 initial requirements, which are intended to improve the Federal Government's investigative and remediation capabilities related to cybersecurity incidents. The Executive Order helps improve the nation's cybersecurity and also enhances any organization's cybersecurity and software supply chain integrity.

DESCRIPTION:
The Executive Order helps improve the nation's cybersecurity and also enhances any organization's cybersecurity and software supply chain integrity. Some of the initial requirements enable logging which is compliant with the Executive Order. This comprises command logging, logging unauthorized access in the filesystem, and logging WORM events on the filesystem. It also includes changes to display the IP address for Veritas File Replication at the control plane, based on a tunable.

RESOLUTION:
The initial requirements of EO are addressed in this release. As per the Executive Order (EO), some of the requirements are tunable based; for example, IP logging wherever applicable (for VFR at the control plane, not for every data transfer), and the logging of some kernel events, such as WORM events (logged to syslog). A new tunable, eo_logging_enable, is introduced. There is a protocol change because of the introduction of the tunable. Though the changes are planned for TOT first and will then go to the update patch on 80all maint for the EO release, this protocol change also impacts the update patch; the protocol change might need an intermediate protocol version between the existing protocol version and the new protocol version introduced for EO. For VFR, the IP addresses of the source and destination need to be logged as part of EO. IP addresses are included in the log while starting/resuming a job in VFR. Log location: /var/VRTSvxfs/replication/log/mount_point-job_name.log. There are two ways to fetch the IP addresses of the source and target. One is to use the IP addresses stored in the link structure of a session; these are obtained by resolving the source and target hostnames, may contain both IPv4 and IPv6 addresses for a node, and give no way to tell over which IP the actual connection happened. The second is to get the socket descriptor from an active connection of the session and use it to fetch the source and target IPs associated with it; this yields the actual IP addresses used for the connection between source and target. The change fetches the IP addresses from the socket descriptor after establishing connections. More details on EO logging and the respective handling for the initial VxFS release: https://confluence.community.veritas.com/pages/viewpage.action?spaceKey=VEStitle=EO+VxFS+Scrum+Page

* 4078520 (Tracking ID: 4058444)

SYMPTOM:
Loop mounts using files on VxFS fail on Linux systems running kernel version 4.1 or higher.

DESCRIPTION:
Starting with the 4.1 version of the Linux kernel, the driver loop.ko uses a new API for read and write requests to the file which was not previously implemented in VxFS. This causes the virtual disk reads during mount to fail while using the -o loop option , causing the mount to fail as well. The same functionality worked in older kernels (such as the version found in RHEL7).

RESOLUTION:
Implemented a new API for all regular files on VxFS, allowing usage of the loop device driver against files on VxFS as well as any other kernel drivers using the same functionality.
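
With the new API in place, a file-backed loop mount on VxFS works the same way as on other Linux file systems; a small illustrative sequence:

   dd if=/dev/zero of=/mnt1/disk.img bs=1M count=512   # backing file on VxFS
   mkfs.ext4 -F /mnt1/disk.img                         # file system inside it
   mount -o loop /mnt1/disk.img /mnt2                  # loop mount now works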

* 4079142 (Tracking ID: 4077766)

SYMPTOM:
VxFS kernel module might leak memory during readahead of directory blocks.

DESCRIPTION:
VxFS kernel module might leak memory during readahead of directory blocks due to missing free operation of readahead-related structures.

RESOLUTION:
Code in readahead of directory blocks is modified to free up readahead-related structures.

* 4079173 (Tracking ID: 4070217)

SYMPTOM:
Command fsck might fail with 'cluster reservation failed for volume' message for a disabled cluster-mounted filesystem.

DESCRIPTION:
On a disabled cluster-mounted filesystem, release of cluster reservation might fail during unmount operation resulting in a failure of command fsck with 'cluster reservation failed for volume' message.

RESOLUTION:
Code is modified to release cluster reservation in unmount operation properly even for cluster-mounted filesystem.

* 4082260 (Tracking ID: 4070814)

SYMPTOM:
Security Vulnerability observed in Zlib a third party component VxFS uses.

DESCRIPTION:
In an internal security scans vulnerabilities in Zlib were found.

RESOLUTION:
Upgrading the third party component Zlib to address these vulnerabilities.

* 4082865 (Tracking ID: 4079622)

SYMPTOM:
Migration uses the normal read/write file operations instead of the read/write iter functions. VxFS requires the read/write iter functions from Linux kernel 5.14 onward.

DESCRIPTION:
Starting with the 5.14 version of the Linux kernel, vxfs uses the read/write iter file operations for migration.

RESOLUTION:
Developed a common read/write function which gets called for both the normal and the iter read/write file operations.

* 4083335 (Tracking ID: 4076098)

SYMPTOM:
FS migration from ext4 to vxfs on Linux machines with falcon-sensor enabled, may fail

DESCRIPTION:
The falcon-sensor driver installed on the test machines taps system calls such as close and performs some additional VFS calls such as read. Due to this, the vxfs driver received a read file operation call from the fsmigbgcp process context. The read operation is allowed only on special files from the fsmigbgcp process context. Since the file in question was not a special file, the vxfs debug code asserted.

RESOLUTION:
As a fix, the read on non-special files from the fsmigbgcp process context is now allowed. (Note: other related issues were fixed in this incident, but they are unlikely to be hit in customer environments, as they are negative test scenarios, such as trying to overwrite the migration special file - deflist.)

* 4085623 (Tracking ID: 4085624)

SYMPTOM:
While running fsck with -o full and -y on a corrupted FS, fsck may dump core.

DESCRIPTION:
Fsck builds various in-core maps based on on-disk structural files, one such map is dotdotmap (which stores info about parent directory). For regular fset (like 999), the dotdotmap is initialized only for primary ilist (inode list for regular inodes). It is skipped for attribute ilist (inode list for attribute inodes). This is because attribute inodes do not have parent directories as is the case for regular inodes. While attempting to resolve inconsistencies in FS metadata, fsck tries to clean up dotdotmap for attribute ilist. In the absence of a check, dotdotmap is re-initialized for attribute ilist causing segmentation fault.

RESOLUTION:
In the codepath where fsck attempts to reinitialize the dotdotmap, a check is added to skip reinitialization of the dotdotmap for the attribute ilist.

* 4085839 (Tracking ID: 4085838)

SYMPTOM:
Command fsck may generate core due to processing of zero size attribute inode.

DESCRIPTION:
Command fsck fails due to allocating memory for a zero-size attribute inode and dereferencing it.

RESOLUTION:
Command fsck is modified to skip processing of zero-size attribute inodes.

* 4086085 (Tracking ID: 4086084)

SYMPTOM:
VxFS mount operation causes system panic when -o context is used.

DESCRIPTION:
VxFS mount operation supports context option to override existing extended attributes, or to specify a different, default context for file systems that do not support extended attributes. System panic observed when -o context is used.

RESOLUTION:
Required code changes are added to avoid panic.
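
For reference, a context mount of the form that previously panicked; the SELinux context value and device names are examples:

   mount -t vxfs -o context="system_u:object_r:tmp_t:s0" \
       /dev/vx/dsk/testdg/vol1 /mnt1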

* 4088341 (Tracking ID: 4065575)

SYMPTOM:
Write operation might be unresponsive on a local mounted VxFS filesystem in a no-space condition

DESCRIPTION:
Write operation might be unresponsive on a local mounted VxFS filesystem in a no-space condition, due to a race between two writer threads taking the read-write lock on the file to do a delayed allocation operation on it.

RESOLUTION:
Code is modified to allow the thread which is already holding the read-write lock to complete the delayed allocation operation; the other thread will skip over that file.

Patch ID: VRTSvxfs-8.0.0.1700

* 4081150 (Tracking ID: 4079869)

SYMPTOM:
Security Vulnerability found in VxFS while running security scans.

DESCRIPTION:
In our internal security scans we found some vulnerabilities in VxFS third-party components. Attackers can exploit these security vulnerabilities to attack the system.

RESOLUTION:
Upgrading the third party components to resolve these vulnerabilities.

* 4083948 (Tracking ID: 4070814)

SYMPTOM:
Security Vulnerability found in VxFS while running security scans.

DESCRIPTION:
In our internal security scans we found some vulnerabilities in the VxFS third-party component Zlib.

RESOLUTION:
Upgrading the third party component Zlib to resolve these vulnerabilities.

Patch ID: VRTSvxfs-8.0.0.1200

* 4055808 (Tracking ID: 4062971)

SYMPTOM:
All operations, like ls and create, are blocked on the file system.

DESCRIPTION:
In a WORM file system we do not allow directory rename. When partition directory is enabled, new directories are created and files are moved under the leaf directory based on hash. Due to the WORM FS, this rename operation was blocked and splitting could not complete, blocking all operations on the file system.

RESOLUTION:
Allow directory renaming in the context of partition directory split and merge.

* 4056684 (Tracking ID: 4056682)

SYMPTOM:
Information about new features on a file system is not displayed by fsadm (the file system administration utility) when it is run against the underlying device.

DESCRIPTION:
Information about new features like WORM (Write once read many) and auditlog is correctly reported for a mounted file system through the fsadm utility, but on the underlying device the new feature information is not displayed.

RESOLUTION:
Updated fsadm utility to display the new feature information correctly.

* 4062606 (Tracking ID: 4062605)

SYMPTOM:
Minimum retention time cannot be set if the maximum retention time is not set.

DESCRIPTION:
The tunable - minimum retention time cannot be set if the tunable - maximum retention time is not set. This was implemented to ensure that the minimum time is lower than the maximum time.

RESOLUTION:
Setting of minimum and maximum retention time is independent of each other. Minimum retention time can be set without the maximum retention time being set.

* 4065565 (Tracking ID: 4065669)

SYMPTOM:
Creating non-WORM checkpoints fails when the tunables - minimum retention time and maximum retention time are set.

DESCRIPTION:
Creation of non-WORM checkpoints fails as all WORM-related validations are extended to non-WORM checkpoints also.

RESOLUTION:
WORM-related validations restricted to WORM fsets only, allowing non-WORM checkpoints to be created.
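
For example, creating a regular (non-WORM) checkpoint now succeeds even when the retention-time tunables are set; the fsckptadm usage below is illustrative:

   # create a non-WORM checkpoint of the file system mounted at /mnt1
   /opt/VRTS/bin/fsckptadm create ckpt1 /mnt1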

* 4065651 (Tracking ID: 4065666)

SYMPTOM:
All operations, like ls and create, are blocked on a file system directory where there are WORM-enabled files whose retention period has not expired.

DESCRIPTION:
For a WORM file system, files whose retention period has not expired cannot be renamed. When partition directory is enabled, new directories are created and files are moved under the leaf directory based on hash. Due to the WORM FS, this rename operation was blocked and splitting could not complete, blocking all operations on the file system.

RESOLUTION:
Allow directory renaming of files, even if the retention period has not expired, in the context of partition directory split and merge.

Patch ID: VRTSvxfs-8.0.0.1100

* 4061114 (Tracking ID: 4052883)

SYMPTOM:
The VxFS module fails to load on RHEL 8.5.

DESCRIPTION:
This issue occurs due to changes in the RHEL 8.5 kernel.

RESOLUTION:
VxFS module is updated to accommodate the changes in the kernel and load as expected on RHEL 8.5.

Patch ID: VRTSsfmh-HF0800510

* 4157270 (Tracking ID: 4157265)

SYMPTOM:
NA

DESCRIPTION:
NA

RESOLUTION:
NA

Patch ID: VRTSllt-8.0.0.2600

* 4135413 (Tracking ID: 4084657)

SYMPTOM:
RHEL8/7.4.1 new installation, fencing/LLT panic while using TCP over LLT.

DESCRIPTION:
The customer has a new environment configuring Infoscale 7.4.1 with RHEL8.4, using TCP for LLT communications. The same configuration works fine with RHEL7, but the system panics in the RHEL8 environment.

RESOLUTION:
Code change done in the llt binary to resolve the customer panic issue.

* 4135420 (Tracking ID: 3989372)

SYMPTOM:
When the CPU load and memory consumption is high in a VMware environment, some nodes in an InfoScale cluster may get fenced out.

DESCRIPTION:
Occasionally, in a VMware environment, the operating system may not schedule LLT contexts on time. Consequently, heartbeats from some of the cluster nodes may be lost, and those nodes may get fenced out. This situation typically occurs when the CPU load or the memory usage is high or when the VMDK snapshot or vMotion operations are in progress.

RESOLUTION:
This fix attempts to make clusters more resilient to transient issues by heartbeating using threads bound to every vCPU.

* 4136152 (Tracking ID: 4124759)

SYMPTOM:
Panic happened with llt_ioship_recv on a server running in AWS.

DESCRIPTION:
In an AWS environment, packets can be duplicated even though LLT is configured over UDP; over UDP this is not expected.

RESOLUTION:
To avoid the panic, we now check whether the packet is already in the send queue of the bucket and treat it as an invalid/duplicate packet.

* 4145248 (Tracking ID: 4139781)

SYMPTOM:
System panics occasionally in LLT stack where LLT over ether enabled.

DESCRIPTION:
LLT allocates skb memory from its own cache for messages larger than 4k and sets a field of skb_shared_info pointing to an LLT function; it later uses this field to determine whether an skb was allocated from its own cache. When receiving a packet, the OS also allocates an skb, from the system cache, and does not reset the field before passing the skb to LLT. Occasionally a stale pointer in memory can mislead LLT into thinking such an skb is from its own cache, and LLT uses its own free API by mistake.

RESOLUTION:
LLT now records the skbs allocated from its own cache in a hash table and no longer sets the field of skb_shared_info.
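
The idea behind the fix can be sketched in user-space C as follows: ownership of a buffer is recorded in an external hash table and checked before choosing a free routine, instead of tagging a field inside the buffer that a foreign allocator may leave stale. This is a simplified sketch with invented names, not the LLT code.

#include <stdio.h>
#include <stdlib.h>

#define NBUCKETS 1024

struct node { void *ptr; struct node *next; };
static struct node *table[NBUCKETS];

static unsigned hashp(void *p)
{
    return ((unsigned long)p >> 4) % NBUCKETS;   /* simple pointer hash */
}

/* Record a buffer allocated from our own cache. */
static void own_record(void *p)
{
    struct node *n = malloc(sizeof(*n));
    n->ptr = p;
    n->next = table[hashp(p)];
    table[hashp(p)] = n;
}

/* Return 1 and forget the buffer if it is ours, else 0. */
static int own_remove(void *p)
{
    struct node **pp = &table[hashp(p)];
    for (; *pp; pp = &(*pp)->next) {
        if ((*pp)->ptr == p) {
            struct node *n = *pp;
            *pp = n->next;
            free(n);
            return 1;
        }
    }
    return 0;
}

static void buf_free(void *p)
{
    if (own_remove(p))
        printf("freeing %p via our own cache\n", p);   /* llt free analogue */
    else
        printf("freeing %p via the system allocator\n", p);
    free(p);
}

int main(void)
{
    void *ours = malloc(8192), *foreign = malloc(8192);
    own_record(ours);
    buf_free(ours);
    buf_free(foreign);
    return 0;
}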

* 4156794 (Tracking ID: 4135825)

SYMPTOM:
If the root file system is full when LLT starts, the LLT module fails to load permanently, even after space is freed.

DESCRIPTION:
When the disk is full and the user reboots the system or restarts the product, LLT deletes the existing links while loading and tries to create new ones from the link names using "/bin/ln -f -s". Because the disk is full, it is unable to create the links. Even after space is freed, it fails to create the links because the originals were already deleted, so the LLT module fails to load.

RESOLUTION:
Logic has been added to derive the required file names and create new links when the existing links are not present.

* 4156815 (Tracking ID: 4087543)

SYMPTOM:
Node panic observed at llt_rdma_process_ack+189

DESCRIPTION:
In llt_rdma_process_ack(), LLT tries to access a header mblk that has become invalid because the network/RDMA layer sends an unnecessary acknowledgement. When ib_post_send() fails, the OS returns an error and LLT handles it by sending the packet through the non-RDMA channel. However, even though the OS returned the error, the packet is still sent down and LLT receives an acknowledgement for it. LLT thus receives two acknowledgements for the same buffer: one sent by the RDMA layer although it reported errors while sending, and the other that LLT simulates (by design) after delivering the packet through the non-RDMA channel.
The first acknowledgement context frees the buffer, so when LLT calls llt_rdma_process_ack() again for the same buffer, it accesses already-freed memory and the node panics in that function.

RESOLUTION:
A check has been added that prevents the buffer from being freed twice, so the node no longer panics at llt_rdma_process_ack+189.
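
The double-free guard can be sketched in user-space C as follows, using an atomic test-and-set so that only the first acknowledgement releases the buffer. This is a minimal sketch with invented names, not the actual LLT code.

#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct rdma_buf {
    atomic_int freed;      /* 0 = live, 1 = already released */
    char data[64];
};

/* Called once per acknowledgement; only the first caller releases the buffer. */
static void process_ack(struct rdma_buf *b)
{
    if (atomic_exchange(&b->freed, 1)) {
        printf("duplicate ack ignored\n");   /* second ack: do nothing */
        return;
    }
    printf("buffer released\n");
    /* the real code path would free the buffer here */
}

int main(void)
{
    struct rdma_buf *b = calloc(1, sizeof(*b));
    process_ack(b);   /* ack from the RDMA layer */
    process_ack(b);   /* simulated ack from the non-RDMA path */
    free(b);
    return 0;
}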

* 4156824 (Tracking ID: 4138779)

SYMPTOM:
Veritas InfoScale Availability modules do not work on Red Hat Enterprise Linux 8 Update 9 (RHEL8.9).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 Update 9(RHEL8.9).

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 8 Update 9(RHEL8.9) is now introduced.

Patch ID: VRTSllt-8.0.0.2400

* 4116421 (Tracking ID: 4113340)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 
Update 8(RHEL8.8).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL8 Update 7.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 8 Update 
8(RHEL8.8) is now introduced.

Patch ID: VRTSllt-8.0.0.2200

* 4108947 (Tracking ID: 4107779)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 
Update 7 on latest minor kernel 4.18.0-425.10.1.el8_7.x86_64(RHEL8.7).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL8 Update 7 GA kernel(4.18.0-425.3.1.el8.x86_64).

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 8 Update 
7 on latest minor kernel 4.18.0-425.10.1.el8_7.x86_64(RHEL8.7) is now introduced.

Patch ID: VRTSllt-8.0.0.2100

* 4101232 (Tracking ID: 4100203)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 
Update 7(RHEL8.7).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL8 Update 6.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 8 Update 
7(RHEL8.7) is now introduced.

Patch ID: VRTSllt-8.0.0.1800

* 4061158 (Tracking ID: 4061156)

SYMPTOM:
IO error on /sys/kernel/slab folder

DESCRIPTION:
After the LLT module is loaded, the ls command throws an I/O error on the /sys/kernel/slab folder.

RESOLUTION:
The I/O error on the /sys/kernel/slab folder after loading the LLT module has been fixed.

* 4079637 (Tracking ID: 4079636)

SYMPTOM:
The kernel panics with a NULL pointer dereference in llt_dump_mblk when LLT is configured over IPsec.

DESCRIPTION:
LLT uses the skb's sp pointer to chain socket buffers internally. When LLT is configured over IPsec, LLT receives skbs from the IP layer with the sp pointer already set. These skbs were wrongly identified by LLT as chained skbs. The sp pointer field is now reset before being reused for internal chaining.

RESOLUTION:
With the sp pointer field reset before reuse, no panic is observed after applying this patch.

* 4079662 (Tracking ID: 3981917)

SYMPTOM:
LLT UDP multiport was previously supported only on 9000 MTU networks.

DESCRIPTION:
Previously, the LLT UDP multiport configuration required network links with a 9000 MTU. The UDP multiport code has been enhanced so that this LLT feature can now be configured and run on 1500 MTU links as well.

RESOLUTION:
LLT UDP multiport can be configured on 1500 MTU based networks.

* 4080630 (Tracking ID: 4046953)

SYMPTOM:
During LLT configuration, messages related to 9000 MTU are printed as errors.

DESCRIPTION:
On Azure, error messages related to 9000 MTU are logged. These messages indicate that 9000 MTU networks should be used for optimal performance; they are suggestions, not actual errors.

RESOLUTION:
Because 9000 MTU is not supported on Azure, these messages have been removed to avoid confusion.

Patch ID: VRTSllt-8.0.0.1500

* 4073695 (Tracking ID: 4072335)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 
Update 6(RHEL8.6).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL8 Update 5.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 8 Update 
6(RHEL8.6) is now introduced.

Patch ID: VRTSllt-8.0.0.1200

* 4066063 (Tracking ID: 4066062)

SYMPTOM:
Node panic

DESCRIPTION:
A node panic is observed in an LLT UDP multiport configuration with the vx ioship stack.

RESOLUTION:
Previously, when LLT received an acknowledgement, it freed the packet and the corresponding client frags blindly, without checking the client status. If the client is unregistered, the free functions of the frags are invalid and must not be called; LLT now checks the client status before calling them.
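
The pattern of the fix can be sketched in user-space C as follows: the client's registration status is checked before its frag free callback is invoked. The names are invented for illustration; this is not the LLT code.

#include <stdio.h>
#include <stdlib.h>

struct frag { void *mem; void (*free_fn)(void *); };

struct client {
    int registered;               /* cleared on unregister */
    struct frag frag;
};

static void client_free_fn(void *mem)
{
    printf("client frag free callback invoked\n");
    free(mem);
}

/* Ack handler: only call the client's free routine while it is registered. */
static void ack_release(struct client *c)
{
    if (c->registered && c->frag.free_fn) {
        c->frag.free_fn(c->frag.mem);
    } else {
        /* Client is gone: its callback pointer may be invalid, so fall back. */
        printf("client unregistered, reclaiming frag without callback\n");
        free(c->frag.mem);
    }
    c->frag.mem = NULL;
}

int main(void)
{
    struct client c = { 1, { malloc(32), client_free_fn } };
    c.registered = 0;             /* client unregisters before the ack arrives */
    ack_release(&c);
    return 0;
}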

* 4066667 (Tracking ID: 4040261)

SYMPTOM:
During LLT configuration, if set-verbose is set to 1 in /etc/llttab, an lltconfig core dump is observed.

DESCRIPTION:
Some log messages may have IDs like 00000. When such logs are encountered, it may lead to a core dump by the lltconfig process.

RESOLUTION:
VCS is updated to use appropriate message IDs for logs so that such issues do not occur.

Patch ID: VRTSllt-8.0.0.1100

* 4064783 (Tracking ID: 4053171)

SYMPTOM:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 8 
Update 5(RHEL8.5).

DESCRIPTION:
Veritas Infoscale Availability does not support Red Hat Enterprise Linux 
versions later than RHEL8 Update 4.

RESOLUTION:
Veritas Infoscale Availability support for Red Hat Enterprise Linux 8 Update 
5(RHEL8.5) is now introduced.

Patch ID: VRTSvxvm-8.0.0.2800

* 4093140 (Tracking ID: 4093067)

SYMPTOM:
System panicked in the following stack:

#9  [] page_fault at  [exception RIP: bdevname+26]
#10 [] get_dip_from_device  [vxdmp]
#11 [] dmp_node_to_dip at [vxdmp]
#12 [] dmp_check_nonscsi at [vxdmp]
#13 [] dmp_probe_required at [vxdmp]
#14 [] dmp_check_disabled_policy at [vxdmp]
#15 [] dmp_initiate_restore at [vxdmp]
#16 [] dmp_daemons_loop at [vxdmp]

DESCRIPTION:
After getting the block_device from the OS, DMP did not check block_device->bd_part for NULL. The NULL pointer later caused a system panic when bdevname() was called.

RESOLUTION:
The code changes have been done to fix the problem.
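
The defensive pattern of the fix can be sketched in user-space C as follows: a nested member is checked for NULL and the operation fails gracefully instead of dereferencing the pointer. The structures are simplified stand-ins for illustration, not the kernel's block_device.

#include <stdio.h>

/* Illustrative stand-ins for block_device/bd_part; not kernel structures. */
struct partition { int partno; };
struct block_device { struct partition *bd_part; };

/* Build a device name only when the partition info is actually present. */
static int devname_safe(struct block_device *bdev, char *buf, size_t len)
{
    if (bdev == NULL || bdev->bd_part == NULL)
        return -1;                       /* fail the probe instead of panicking */
    snprintf(buf, len, "disk-part%d", bdev->bd_part->partno);
    return 0;
}

int main(void)
{
    struct block_device dead = { NULL }; /* device vanished: bd_part is NULL */
    char name[32];
    if (devname_safe(&dead, name, sizeof(name)) != 0)
        printf("device gone, skipping probe\n");
    return 0;
}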

* 4130643 (Tracking ID: 4130642)

SYMPTOM:
A node failed to rejoin the cluster after switching from master to slave, because the replicated diskgroup import failed.
The following error message can be found in /var/VRTSvcs/log/CVMCluster_A.log:
CVMCluster:cvm_clus:monitor:vxclustadm nodestate return code:[101] with output: [state: out of cluster
reason: Replicated dg record is found: retry to add a node failed]

DESCRIPTION:
The flag indicating that the diskgroup was imported with usereplicatedev=only was not set the last time the diskgroup was imported.
The missing flag caused the replicated diskgroup import to fail, which in turn caused the node rejoin failure.

RESOLUTION:
Code changes have been made to set the flag on the diskgroup after it is imported with usereplicatedev=only.

* 4132798 (Tracking ID: 4132799)

SYMPTOM:
If GLM is not loaded, starting CVM fails with the following errors:
# vxclustadm -m gab startnode
VxVM vxclustadm INFO V-5-2-9687 vxclustadm: Fencing driver is in disabled mode - 
VxVM vxclustadm ERROR V-5-1-9743 errno 3

DESCRIPTION:
Only the error number, not the error message, is printed when joining CVM fails.

RESOLUTION:
The code changes have been made to fix the issue.

* 4134024 (Tracking ID: 4134023)

SYMPTOM:
vxconfigrestore (diskgroup configuration restoration) for a hardware-replicated diskgroup failed with the following error:
# vxconfigrestore -p LINUXSRDF
VxVM vxconfigrestore INFO V-5-2-6198 Diskgroup LINUXSRDF configuration restoration started ......
VxVM vxdg ERROR V-5-1-0 Disk group LINUXSRDF: import failed:
Replicated dg record is found.
Did you want to import hardware replicated LUNs?
Try vxdg [-o usereplicatedev=only] import option with -c[s]
Please refer to system log for details.
... ...
VxVM vxconfigrestore ERROR V-5-2-3706 Diskgroup configuration restoration for LINUXSRDF failed.

DESCRIPTION:
A hardware-replicated diskgroup can be imported only with the option "-o usereplicatedev=only". vxconfigrestore did not check for hardware-replicated diskgroups, so the import failed because the proper import option was not supplied.

RESOLUTION:
Code changes have been made to check for hardware-replicated diskgroups in vxconfigrestore.

* 4134791 (Tracking ID: 4134790)

SYMPTOM:
Hardware Replicated DG was marked with clone flag on SLAVEs after failover operation was done on storage side.

DESCRIPTION:
The udid_mismatch flag is set on hardware-replicated devices after the source storage is switched with the target storage. A disk with udid_mismatch is treated as a clone device, so the replicated disks were treated as cloned disks as well. With the clearclone option, the Master removes this flag in the last stage of the diskgroup import, but the Slaves could not. Hence the clone flag was observed on the Slaves only.

RESOLUTION:
Code changes have been made to perform an extra hardware-replicated disk check.

* 4146456 (Tracking ID: 4128351)

SYMPTOM:
A system hang is observed when switching the log owner.

DESCRIPTION:
VVR mdship SIOs might be throttled, for example on reaching the maximum allocation count. These SIOs hold the I/O count. When a log owner change kicks in, it quiesces the RVG, and the log owner change SIO then waits for the I/O count to drop to zero before proceeding. Meanwhile, VVR mdship requests from the log client are returned with EAGAIN because the RVG is quiesced. The throttled mdship SIOs can only be driven by upcoming mdship requests, hence a deadlock, which caused the system hang.

RESOLUTION:
Code changes have been made to flush the mdship queue before the VVR log owner change SIO waits for the I/O drain.

* 4146458 (Tracking ID: 4122061)

SYMPTOM:
A hang is observed after a resync operation; vxconfigd keeps waiting for the slaves' response.

DESCRIPTION:
The VVR logowner was in a transaction and returned VOLKMSG_EAGAIN to CVM_MSG_GET_METADATA, which is expected. Once a client received VOLKMSG_EAGAIN, it would sleep 10 jiffies and retry the kmsg. In a busy cluster, the retried kmsgs plus the new kmsgs could build up and hit the kmsg flow control before the VVR logowner transaction completed. Once a client refused kmsgs due to the flow control, the transaction on the VVR logowner could get stuck, because it required kmsg responses from all the slave nodes.

RESOLUTION:
Code changes have been made to increase the kmsg flow-control limit and to keep the kmsg receiver from falling asleep; the kmsg is instead handled in a restart function.

* 4146462 (Tracking ID: 4087628)

SYMPTOM:
When DCM is active in replication mode on mounted volumes with large regions for DCM to sync, a slave node reboot might cause CVM to go into a faulted state.

DESCRIPTION:
During resiliency tests, the following sequence of operations was performed:
1. On an AWS FSS-CVR setup, replication is started across the sites for 2 RVGs.
2. The logowner service groups for both the RVGs are online on a slave node.
3. Another slave node, where the logowner is not online, is rebooted.
4. After the slave node comes back from the reboot, it is unable to join the CVM cluster.
5. vx commands also hang on the CVM master and the logowner slave node.

RESOLUTION:
In the RU SIO, the I/O count is now dropped before requesting vxfs_free_region() and held again afterwards. Because the transaction has been locked (vol_ktrans_locked = 1) right before calling vxfs_free_region(), the I/O count is not needed to keep the RVG from being removed.

* 4146472 (Tracking ID: 4115078)

SYMPTOM:
A vxconfigd hang was observed when all nodes of the primary site were rebooted.

DESCRIPTION:
The VVR logowner node was not configured on the Master. VVR recovery was triggered by a node leaving; if a data volume was in recovery, the VVR logowner would send an ilock request to the Master node. The Master granted the ilock request and sent a response to the VVR logowner, but due to a bug, the logowner detected a mismatch in the ilock requesting node ID. The logowner concluded that the ilock grant had failed, the mdship I/O went into a permanent hang, and vxconfigd got stuck waiting for the I/O to drain.

RESOLUTION:
Code changes have been made to set the correct ilock requesting node ID in the ilock request in this case.

* 4149249 (Tracking ID: 4149248)

SYMPTOM:
Third-party components (OpenSSL, curl, and libxml) used by VxVM exhibit security vulnerabilities.

DESCRIPTION:
VxVM utilizes current versions of OpenSSL, curl, and libxml, which have been reported to have security vulnerabilities.

RESOLUTION:
Upgrades to newer versions of OpenSSL, curl, and libxml have been implemented to address the reported security vulnerabilities.

* 4149423 (Tracking ID: 4145063)

SYMPTOM:
vxio Module fails to load post VxVM package installation.

DESCRIPTION:
The following message is seen in dmesg:
[root@dl360g10-115-v23 ~]# dmesg | grep symbol
[ 2410.561682] vxio: no symbol version for storageapi_associate_blkg

RESOLUTION:
Because of incorrectly nested IF blocks in "src/linux/kernel/vxvm/Makefile.target", the code for the RHEL 9 block was not executed, so certain symbols were missing from the vxio.mod.c file. This in turn caused the symbol warning shown above in dmesg.
The improper nesting of the IF conditions has been fixed.

* 4156836 (Tracking ID: 4152014)

SYMPTOM:
The excluded dmpnodes are visible after a system reboot when SELinux is disabled.

DESCRIPTION:
During system reboot, the disks' hardware soft links were not created before the DMP exclusion function ran, so DMP failed to recognize the excluded dmpnodes.

RESOLUTION:
Code changes have been made to reduce the latency in the creation of the hardware soft links and to remove the tmpfs /dev/vx on platforms where SELinux is disabled.

* 4156837 (Tracking ID: 4134790)

SYMPTOM:
Hardware Replicated DG was marked with clone flag on SLAVEs after failover operation was done on storage side.

DESCRIPTION:
The udid_mismatch flag is set on hardware-replicated devices after the source storage is switched with the target storage. A disk with udid_mismatch is treated as a clone device, so the replicated disks were treated as cloned disks as well. With the clearclone option, the Master removes this flag in the last stage of the diskgroup import, but the Slaves could not. Hence the clone flag was observed on the Slaves only.

RESOLUTION:
Code changes have been made to perform an extra hardware-replicated disk check.

* 4156839 (Tracking ID: 4077944)

SYMPTOM:
In VVR environment, when I/O throttling gets activated and deactivated by VVR, it may result in an application I/O hang.

DESCRIPTION:
When VVR throttles and then unthrottles I/O, the driving of the throttled I/O is not done in one of the cases.

RESOLUTION:
Resolved the issue by making sure that the throttled application I/Os get driven in all the cases.

* 4156841 (Tracking ID: 4142054)

SYMPTOM:
System panicked in the following stack:

[ 9543.195915] Call Trace:
[ 9543.195938]  dump_stack+0x41/0x60
[ 9543.195954]  panic+0xe7/0x2ac
[ 9543.195974]  vol_rv_inactive+0x59/0x790 [vxio]
[ 9543.196578]  vol_rvdcm_flush_done+0x159/0x300 [vxio]
[ 9543.196955]  voliod_iohandle+0x294/0xa40 [vxio]
[ 9543.197327]  ? volted_getpinfo+0x15/0xe0 [vxio]
[ 9543.197694]  voliod_loop+0x4b6/0x950 [vxio]
[ 9543.198003]  ? voliod_kiohandle+0x70/0x70 [vxio]
[ 9543.198364]  kthread+0x10a/0x120
[ 9543.198385]  ? set_kthread_struct+0x40/0x40
[ 9543.198389]  ret_from_fork+0x1f/0x40

DESCRIPTION:
- From the SIO stack, it can be seen that this is a case of the done routine being called twice.
- In vol_rvdcm_flush_start(), when a child SIO is created, it is added directly to the global SIO queue.
- This can cause a child SIO to start while vol_rvdcm_flush_start() is still generating other child SIOs.
- As a result, if the first child SIO completes, it can find the children count at zero and call the done routine.
- The next child SIO can also independently find the children count at zero and call the done routine.

RESOLUTION:
The code changes have been done to fix the problem.

Patch ID: VRTSaslapm 8.0.0.2800

* 4158230 (Tracking ID: 4158234)

SYMPTOM:
Support for ASLAPM on RHEL 8.9

DESCRIPTION:
RHEL 8.9 is a new release, and hence the APM module must be recompiled with the new kernel.

RESOLUTION:
Compiled APM with RHEL 8.9 kernel.

Patch ID: VRTSvxvm-8.0.0.2700

* 4154821 (Tracking ID: 4149248)

SYMPTOM:
Third-party components (OpenSSL, curl, and libxml) used by VxVM exhibit security vulnerabilities.

DESCRIPTION:
VxVM utilizes current versions of OpenSSL, curl, and libxml, which have been reported to have security vulnerabilities.

RESOLUTION:
Upgrades to newer versions of OpenSSL, curl, and libxml have been implemented to address the reported security vulnerabilities.

Patch ID: VRTSvxvm-8.0.0.2600

* 4067237 (Tracking ID: 4058894)

SYMPTOM:
After package installation and reboot, messages regarding udev rules for ignore_device are observed in /var/log/messages:
systemd-udevd[774]: /etc/udev/rules.d/40-VxVM.rules:25 Invalid value for OPTIONS key, ignoring: 'ignore_device'

DESCRIPTION:
From SLES15 SP3 onwards, ignore_device is deprecated in udev rules and is no longer available for use. Hence these messages are observed in the system logs.

RESOLUTION:
Required changes have been done to handle this defect.

* 4109554 (Tracking ID: 4105953)

SYMPTOM:
System panic with below stack in CVR environment.

 #9 [] page_fault at 
    [exception RIP: vol_ru_check_update_done+183]
#10 [] vol_rv_write2_done at [vxio]
#11 [] voliod_iohandle at [vxio]
#12 [] voliod_loop at [vxio]
#13 [] kthread at

DESCRIPTION:
In a CVR environment, when I/O is issued in writeack sync mode, the application is acknowledged once the data volume write is done on either the log client or the logowner, depending on where the I/O was issued. In writeack sync mode it could happen that VVR freed the metadata I/O update after the SRL write was done, but the update was then accessed again after being freed, resulting in a NULL pointer dereference.

RESOLUTION:
Code changes have been made to avoid accessing the NULL pointer.

* 4111442 (Tracking ID: 4066785)

SYMPTOM:
When the replicated disks are in SPLIT mode, importing their disk group failed with "Device is a hardware mirror".

DESCRIPTION:
When the replicated disks are in SPLIT mode, they are readable and writable, but importing their disk group failed with "Device is a hardware mirror". The third-party array does not expose a disk attribute that shows when a disk is in SPLIT mode. With this enhancement, the replicated disk group can be imported with the option `-o usereplicatedev=only`.

RESOLUTION:
The code is enhanced to import the replicated disk group with option `-o usereplicatedev=only`.

* 4112549 (Tracking ID: 4112701)

SYMPTOM:
A reconfiguration hang was observed on an 8-node ISO VM setup after rebooting all nodes with a delay of 5 minutes between them, because vxconfigd dumped core on the master node.

DESCRIPTION:
1. A reconfiguration hang occurred on an 8-node ISO VM setup after rebooting all nodes with a delay of 5 minutes.
2. A fork failed during command shipping, which caused the vxconfigd core dump on the master node. All reconfigurations after that failed to process.

RESOLUTION:
Reboot the master node on which vold dumped core.

* 4113310 (Tracking ID: 4114601)

SYMPTOM:
The system panics and reboots.

DESCRIPTION:
RCA:
When I/O is started on a volume device and its disk is pulled out of the machine, the following panic is hit on RHEL8:

 dmp_process_errbp
 dmp_process_errbuf.cold.2+0x328/0x429 [vxdmp]
 dmpioctl+0x35/0x60 [vxdmp]
 dmp_flush_errbuf+0x97/0xc0 [vxio]
 voldmp_errbuf_sio_start+0x4a/0xc0 [vxio]
 voliod_iohandle+0x43/0x390 [vxio]
 voliod_loop+0xc2/0x330 [vxio]
 ? voliod_iohandle+0x390/0x390 [vxio]
 kthread+0x10a/0x120
 ? set_kthread_struct+0x50/0x50

As the disk is pulled out of the machine, VxIO hits an I/O error and routes that I/O to the DMP layer via a kernel-kernel IOCTL for error analysis.
The following is the code path for the I/O routing:

voldmp_errbuf_sio_start()-->dmp_flush_errbuf()--->dmpioctl()--->dmp_process_errbuf()

dmp_process_errbuf() retrieves the device number of the underlying path (OS device)
and tries to get the bdev (i.e. block_device) pointer from the path-device number.
As the path/OS device was removed by the disk pull, Linux returns a fake bdev for the path-device number.
This fake bdev has no gendisk associated with it (bdev->bd_disk is NULL).

This NULL bdev->bd_disk is set on the I/O buffer routed from vxio,
which leads to a panic in dmp_process_errbp.

RESOLUTION:
If bdev->bd_disk is found to be NULL, the DMP_CONN_FAILURE error is set on the I/O buffer and DKE_ENXIO is returned to the vxio driver.

* 4113357 (Tracking ID: 4112433)

SYMPTOM:
Vulnerabilities have been reported in third party components, [openssl, curl and libxml] that are used by VxVM.

DESCRIPTION:
Third-party components [openssl, curl and libxml], in the versions currently used by VxVM, have been reported with security vulnerabilities which need to be addressed.

RESOLUTION:
[openssl, curl and libxml] have been upgraded to newer versions in which the reported security vulnerabilities have been addressed.

* 4114963 (Tracking ID: 4114962)

SYMPTOM:
File system data corruption with mirrored volumes in Flexible Storage Sharing (FSS) environments during beyond fault storage failure situations.

DESCRIPTION:
In FSS environments, the data change object (DCO) provides the functionality to track changes on detached mirrors using bitmaps. The bitmap is later used to resync the detached mirrors' data (the change delta).
When the DCO volume and the data volume share the same set of devices, the failure of the DCO volume's last mirror means that I/Os on the data volume are also going to fail. In such cases, instead of invalidating the DCO volume, the I/O is proactively failed.
This helps protect the DCO so that, when the entire storage comes back, an optimal recovery of the mirrors can be performed.
When the disk for one of the mirrors of the DCO object becomes available, a bug in the DCO update incorrectly updates the DCO metadata, which leads to valid DCO maps being ignored during the actual volume recovery; the newly recovered mirrors of the volume therefore miss blocks of valid application data. This leads to corruption when read I/Os are serviced from the newly recovered mirrors.

RESOLUTION:
The logic of the FMR map update transaction when enabling disks is fixed to resolve the bug. This ensures that all valid bitmaps are considered for the recovery of mirrors and avoids data loss.
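
For illustration, the region-bitmap tracking that a DCO provides can be sketched in user-space C as follows: writes mark the region they touch, and only marked regions are copied when the mirror is reattached. The region size and layout here are invented; this is a sketch of the concept, not the on-disk DCO format.

#include <stdio.h>
#include <string.h>

#define REGION_SIZE (256 * 1024ULL)      /* illustrative region granularity */
#define NREGIONS    64

static unsigned char bitmap[NREGIONS / 8];

/* Record that a write touched the region containing this offset. */
static void mark_dirty(unsigned long long offset)
{
    unsigned r = (unsigned)(offset / REGION_SIZE);
    bitmap[r / 8] |= (unsigned char)(1u << (r % 8));
}

static int is_dirty(unsigned r)
{
    return bitmap[r / 8] & (1u << (r % 8));
}

int main(void)
{
    memset(bitmap, 0, sizeof(bitmap));
    mark_dirty(0);                       /* write while a mirror is detached */
    mark_dirty(5 * REGION_SIZE + 100);

    /* Reattach: only regions marked dirty need to be copied to the mirror. */
    for (unsigned r = 0; r < NREGIONS; r++)
        if (is_dirty(r))
            printf("resync region %u\n", r);
    return 0;
}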

* 4115251 (Tracking ID: 4115195)

SYMPTOM:
Data corruption on file systems with erasure coded volumes.

DESCRIPTION:
When erasure coded (EC) volumes are used in Flexible Storage Sharing (FSS) environments, the data change object (DCO) is used to track changes in a volume with faulted columns. The DCO provides a bitmap of all changed regions for use during the rebuild of the faulted columns. When an EC volume starts with a few faulted columns, the log-replay I/O cannot be performed on those columns, and those additional writes are expected to be tracked in the DCO bitmap. Due to a bug, these I/Os were not tracked in the DCO bitmap, because the DCO bitmaps were not yet enabled when the log replay was triggered. Hence, when the remaining columns were attached back, the data blocks of those log-replay I/Os were skipped during the rebuild. This leads to data corruption when reads are serviced from those columns.

RESOLUTION:
Code changes are done to ensure log-replay on EC volume is triggered only after ensuring DCO tracking is enabled. This ensures that all IOs from log-replay operations are tracked in DCO maps for remaining faulted columns of volume.

* 4115252 (Tracking ID: 4115193)

SYMPTOM:
Data corruption on VVR primary with storage loss beyond fault tolerance level in replicated environment.

DESCRIPTION:
In a Flexible Storage Sharing (FSS) environment, any node fault can lead to storage failure. On the VVR primary, when the last mirror of the SRL (Storage Replicator Log) volume faults while application writes are in progress, replication is expected to go into pass-through mode.
This information is persistently recorded in the kernel log (KLOG). In the event of cascaded storage node failures, the KLOG update protocol could not update a quorum number of copies. This mismatch between the on-disk and in-core state of the VVR objects leads to data loss due to missing recovery when all storage faults are resolved.

RESOLUTION:
Code changes to handle the KLOG update failure in SRL IO failure handling is done to ensure configuration on-disk and in-core is consistent and subsequent application IO will not be allowed to avoid data corruption.

* 4115381 (Tracking ID: 4091783)

SYMPTOM:
Build area creation for unixvm was failing.

DESCRIPTION:
The unixvm build was failing because of a dependency on storageapi during the creation of the build area for unixvm and veki.

This in turn caused issues in Veki packaging during unixvm builds and in the vxio driver compilation dependency.

RESOLUTION:
Support has been added for storageapi build area creation, and storageapi is built internally from the unixvm build scripts.

* 4116548 (Tracking ID: 4111254)

SYMPTOM:
vradmind dumps core with the following stack:

#3  0x00007f3e6e0ab3f6 in __assert_fail () from /root/cores/lib64/libc.so.6
#4  0x000000000045922c in RDS::getHandle ()
#5  0x000000000056ec04 in StatsSession::addHost ()
#6  0x000000000045d9ef in RDS::addRVG ()
#7  0x000000000046ef3d in RDS::createDummyRVG ()
#8  0x000000000044aed7 in PriRunningState::update ()
#9  0x00000000004b3410 in RVG::update ()
#10 0x000000000045cb94 in RDS::update ()
#11 0x000000000042f480 in DBMgr::update ()
#12 0x000000000040a755 in main ()

DESCRIPTION:
vradmind was trying to access a NULL pointer (the remote host name) in an rlink object, as the Remote Host attribute of the rlink had not been set.

RESOLUTION:
The issue has been fixed by making code changes.

* 4116551 (Tracking ID: 4108913)

SYMPTOM:
Vradmind dumps core with the following stacks:
#3  0x00007f2c171be3f6 in __assert_fail () from /root/coredump/lib64/libc.so.6
#4  0x00000000005d7a90 in VList::concat () at VList.C:1017
#5  0x000000000059ae86 in OpMsg::List2Msg () at Msg.C:1280
#6  0x0000000000441bf6 in OpMsg::VList2Msg () at ../../include/Msg.h:389
#7  0x000000000043ec33 in DBMgr::processStatsOpMsg () at DBMgr.C:2764
#8  0x00000000004093e9 in process_message () at srvmd.C:418
#9  0x000000000040a66d in main () at srvmd.C:733

#0  0x00007f4d23470a9f in raise () from /root/core.Jan18/lib64/libc.so.6
#1  0x00007f4d23443e05 in abort () from /root/core.Jan18/lib64/libc.so.6
#2  0x00007f4d234b3037 in __libc_message () from /root/core.Jan18/lib64/libc.so.6
#3  0x00007f4d234ba19c in malloc_printerr () from /root/core.Jan18/lib64/libc.so.6
#4  0x00007f4d234bba9c in _int_free () from /root/core.Jan18/lib64/libc.so.6
#5  0x00000000005d5a0a in ValueElem::_delete_val () at Value.C:491
#6  0x00000000005d5990 in ValueElem::~ValueElem () at Value.C:480
#7  0x00000000005d7244 in VElem::~VElem () at VList.C:480
#8  0x00000000005d8ad9 in VList::~VList () at VList.C:1167
#9  0x000000000040a71a in main () at srvmd.C:743

#0  0x000000000040b826 in DList::head () at ../include/DList.h:82
#1  0x00000000005884c1 in IpmHandle::send () at Ipm.C:1318
#2  0x000000000056e101 in StatsSession::sendUCastStatsMsgToPrimary () at StatsSession.C:1157
#3  0x000000000056dea1 in StatsSession::sendStats () at StatsSession.C:1117
#4  0x000000000046f610 in RDS::collectStats () at RDS.C:6011
#5  0x000000000043f2ef in DBMgr::collectStats () at DBMgr.C:2799
#6  0x00007f98ed9131cf in start_thread () from /root/core.Jan26/lib64/libpthread.so.0
#7  0x00007f98eca4cdd3 in clone () from /root/core.Jan26/lib64/libc.so.6

DESCRIPTION:
There is a race condition in vradmind that may cause memory corruption and unpredictable results. vradmind periodically forks a child thread to collect VVR statistics and send them to the remote site. The main thread may also be sending data using the same handler object; thus member variables in the handler object are accessed in parallel from multiple threads and may become corrupted.

RESOLUTION:
The code changes have been made to fix the issue.
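
The general shape of such a fix can be sketched in user-space C as follows: a mutex serializes access to the shared handler object so that the main thread and the periodic stats thread cannot corrupt its members. This is a minimal sketch with invented names, not the vradmind code.

#include <pthread.h>
#include <stdio.h>

/* Shared session handler used by both the main and the stats threads. */
struct session {
    pthread_mutex_t lock;
    int seq;                  /* example member that both threads mutate */
};

static struct session sess = { PTHREAD_MUTEX_INITIALIZER, 0 };

static void session_send(const char *who)
{
    pthread_mutex_lock(&sess.lock);    /* serialize access to shared members */
    sess.seq++;
    printf("%s sent message #%d\n", who, sess.seq);
    pthread_mutex_unlock(&sess.lock);
}

static void *stats_thread(void *arg)
{
    (void)arg;
    for (int i = 0; i < 5; i++)
        session_send("stats");
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, stats_thread, NULL);
    for (int i = 0; i < 5; i++)
        session_send("main");
    pthread_join(t, NULL);
    return 0;
}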

* 4116557 (Tracking ID: 4085404)

SYMPTOM:
A huge performance drop is seen after Veritas Volume Replicator (VVR) enters Data Change Map (DCM) mode, when a large Storage Replicator Log (SRL) is configured.

DESCRIPTION:
The active map flush caused RVG serialization. Once the RVG gets serialized, all I/Os are queued in the restart queue until the active map flush is finished. Too-frequent active map flushes caused the huge I/O drop while flushing the SRL to the DCM.

RESOLUTION:
The code is modified to adjust the frequency of active map flush and balance the application IO and SRL flush.

* 4116559 (Tracking ID: 4091076)

SYMPTOM:
SRL gets into pass-thru mode when it's about to overflow.

DESCRIPTION:
The primary initiated a log search for the requested update sent from the secondary. The search aborted with a head error because a check condition was not set correctly.

RESOLUTION:
Fixed the check condition to resolve the issue.

* 4116562 (Tracking ID: 4114257)

SYMPTOM:
VxVM cmd is hung and file system was waiting for io to complete.

file system stack:
#3 [] wait_for_completion at 
#4 [] vx_bc_biowait at [vxfs]
#5 [] vx_biowait at [vxfs]
#6 [] vx_isumupd at [vxfs]
#7 [] __switch_to_asm at 
#8 [] vx_process_revokedele at [vxfs]
#9 [] vx_recv_revokedele at [vxfs]
#10 [] vx_recvdele at [vxfs]
#11 [] vx_msg_process_thread at [vxfs]

vxconfigd stack:
[<0>] volsync_wait+0x106/0x180 [vxio]
[<0>] vol_ktrans+0x9f/0x2c0 [vxio]
[<0>] volconfig_ioctl+0x82a/0xdf0 [vxio]
[<0>] volsioctl_real+0x38a/0x450 [vxio]
[<0>] vols_ioctl+0x6d/0xa0 [vxspec]
[<0>] vols_unlocked_ioctl+0x1d/0x20 [vxspec]

One of vxio thread was waiting for IO drain with below stack.

 #2 [] schedule_timeout at 
 #3 [] vol_rv_change_sio_start at [vxio]
 #4 [] voliod_iohandle at [vxio]

DESCRIPTION:
The VVR rvdcm flush SIO was triggered by a VVR logowner change and set the ru_state throttle flags, which caused the MDATA_SHIP SIOs to be queued in rv_mdship_throttleq. As the MDATA_SHIP SIOs were active, the rvdcm flush SIO was unable to proceed. In the end, the rvdcm_flush SIO was waiting for the SIOs in rv_mdship_throttleq to complete, while the SIOs in rv_mdship_throttleq were waiting for the rvdcm_flush SIO to complete. Hence a deadlock.

RESOLUTION:
Code changes have been made to resolve the deadlock.

* 4116565 (Tracking ID: 4034741)

SYMPTOM:
Because a common RVIOMEM pool is used by multiple RVGs, a deadlock scenario gets created, causing a high load average and a system hang.

DESCRIPTION:
An earlier fix limits the I/O load on the secondary by retaining the updates in the NMCOM pool until the data volume write is done, which makes the RVIOMEM pool easy to fill up, and a deadlock may occur, especially under a high workload on multiple RVGs or cross-direction RVGs. All RVGs share the same RVIOMEM pool, while the NMCOM pool, the RDBACK pool, and the network/DV update lists are all per-RVG, so the RVIOMEM pool becomes the bottleneck on the secondary: it fills up easily and runs into a deadlock situation.

RESOLUTION:
Code changes have been made to use a per-RVG RVIOMEM pool, which resolves the deadlock issue.

* 4116567 (Tracking ID: 4072862)

SYMPTOM:
When RVGLogowner resources are onlined on slave nodes, stopping the whole cluster may fail and the RVGLogowner resources go into the offline_propagate state.

DESCRIPTION:
While stopping the whole cluster, a race may occur between the CVM reconfiguration and the RVGLogowner change SIO.

RESOLUTION:
Code changes have been made to fix these races.

* 4117110 (Tracking ID: 4113841)

SYMPTOM:
A VVR panic occurred in the following code path:

kmsg_sys_poll()
nmcom_get_next_mblk() 
nmcom_get_hdr_msg() 
nmcom_get_next_msg() 
nmcom_wait_msg_tcp() 
nmcom_server_main_tcp()

DESCRIPTION:
When a network scan tool sends an unexpected request to VVR during the VVR connection handshake, the TCP connection may be terminated immediately by the tool, which may release the sock. VVR then panics when it tries to refer to the sock, hitting a NULL pointer during processing.

RESOLUTION:
A code change has been made to check that the sock is valid; otherwise, VVR returns without continuing with the connection.

* 4118108 (Tracking ID: 4114867)

SYMPTOM:
The following error messages are seen while adding new disks:
[root@server101 ~]# cat /etc/udev/rules.d/41-VxVM-selinux.rules | tail -1
KERNEL=="VxVM*", SUBSYSTEM=="block", ACTION=="add", RUN+="/bin/sh -c 'if [ `/usr/sbin/getenforce` != "Disabled" -a `/usr/sbin/
[root@server101 ~]#
[root@server101 ~]# systemctl restart systemd-udevd.service
[root@server101 ~]# udevadm test /block/sdb 2>&1 | grep "invalid"
invalid key/value pair in file /etc/udev/rules.d/41-VxVM-selinux.rules on line 20, starting at character 104 ('D')

DESCRIPTION:
The issue is the double quotation marks around "Disabled" and "disable" in /etc/udev/rules.d/41-VxVM-selinux.rules.

RESOLUTION:
Code changes have been made to correct the problem.

* 4118111 (Tracking ID: 4065490)

SYMPTOM:
systemd-udev threads consume more CPU during system boot-up or device discovery.

DESCRIPTION:
During disk discovery, when new storage devices are discovered, the VxVM udev rules are invoked to create hardware path symbolic links and set the SELinux security context on Veritas device files. To create the hardware path symbolic link for each storage device, the "find" command is used internally, which is a CPU-intensive operation. If many storage devices are attached to the system, the usage of the "find" command causes high CPU consumption.

Also, to set the appropriate SELinux security context on VxVM device files, restorecon was run irrespective of whether SELinux is enabled or disabled.

RESOLUTION:
Usage of "find" command is replaced with "udevadm" command. SELinux security context on VxVM device files is being set
only when SELinux is enabled on system.

* 4118733 (Tracking ID: 4106689)

SYMPTOM:
Solaris Zones cannot be started due to Method "/lib/svc/method/fs-local" failed with exit status 95. The error logs are observed as below:
Mounting ZFS filesystems: cannot mount 'rpool/export' on '/export': directory is not empty
cannot mount 'rpool/export' on '/export': directory is not empty
cannot mount 'rpool/export/home' on '/export/home': failure mounting parent dataset
cannot mount 'rpool/export/home/addm' on /export/home/addm': directory is not empty
.... ....
svc:/system/filesystem/local:default: WARNING: /usr/sbin/zfs mount -a failed: one or more file systems failed.

DESCRIPTION:
When DMP native support is enabled and "faulted" zpools are found, VxVM deports the faulty zpools and re-imports them. If fs-local is not started before vxvm-startup2, this error handling results in a non-empty /export, which further causes the zfs mount failure.

RESOLUTION:
Code changes have been made to guarantee the mount order of rpool and zpools.

* 4118845 (Tracking ID: 4116024)

SYMPTOM:
kernel panicked at gab_ifreemsg with following stack:
gab_ifreemsg
gab_freemsg
kmsg_gab_send
vol_kmsg_sendmsg
vol_kmsg_sender

DESCRIPTION:
In a CVR environment with an RVG of more than 600 data volumes, the vxvvrstatd daemon is enabled through the vxvm-recover service. vxvvrstatd calls ioctl(VOL_RV_APPSTATS); the latter generates a kmsg longer than 64k and triggers a kernel panic, because GAB/LLT does not support messages longer than 64k.

RESOLUTION:
Code changes have been made to limit the maximum number of data volumes for which ioctl(VOL_RV_APPSTATS) can request VVR statistics.

* 4119087 (Tracking ID: 4067191)

SYMPTOM:
In CVR environment after rebooting Slave node, Master node may panic with below stack:

Call Trace:
dump_stack+0x66/0x8b
panic+0xfe/0x2d7
volrv_free_mu+0xcf/0xd0 [vxio]
vol_ru_free_update+0x81/0x1c0 [vxio]
volilock_release_internal+0x86/0x440 [vxio]
vol_ru_free_updateq+0x35/0x70 [vxio]
vol_rv_write2_done+0x191/0x510 [vxio]
voliod_iohandle+0xca/0x3d0 [vxio]
wake_up_q+0xa0/0xa0
voliod_iohandle+0x3d0/0x3d0 [vxio]
voliod_loop+0xc3/0x330 [vxio]
kthread+0x10d/0x130
kthread_park+0xa0/0xa0
ret_from_fork+0x22/0x40

DESCRIPTION:
As part of a CVM master switch, rvg_recovery is triggered. In this step, a race condition can occur between the VVR objects, due to which an object value is not updated properly, and this can cause a panic.

RESOLUTION:
Code changes have been made to handle the race condition between the VVR objects.

* 4119257 (Tracking ID: 4090772)

SYMPTOM:
vxconfigd/vx commands hang on secondary site in a CVR environment.

DESCRIPTION:
Due to a window with unmatched SRL positions, any application (e.g. fdisk) trying to open the secondary RVG volume acquires a lock and waits for the SRL positions to match.
During this time, any VxVM transaction that kicks in also has to wait for the same lock.
Further, the logowner node panicked, which triggered the logowner-change protocol; this hung because the earlier transaction was stuck. As the logowner-change protocol could not complete, in the absence of a valid logowner the SRL positions could not match, causing a deadlock. That led to the vxconfigd and vx command hang.

RESOLUTION:
Changes have been added to allow read operations on the volume even if the SRL positions are unmatched. Write I/Os are still blocked and only open() calls for read-only operations are allowed, so there are no data consistency or integrity issues.
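
The gating logic can be sketched in user-space C as follows: opens that request write access are rejected while the SRL positions are unmatched, while read-only opens proceed. The flags and names are invented for illustration; this is not the actual vxio code.

#include <stdio.h>

#define MODE_READ  0x1
#define MODE_WRITE 0x2            /* illustrative flags, not kernel FMODE_* */

static int srl_positions_matched = 0;   /* window where positions differ */

/* Volume open: permit read-only opens even while SRL positions differ. */
static int rvg_volume_open(int mode)
{
    if ((mode & MODE_WRITE) && !srl_positions_matched)
        return -1;                /* writes must wait for matching positions */
    return 0;                     /* read-only opens proceed immediately */
}

int main(void)
{
    printf("read-only open: %s\n",
           rvg_volume_open(MODE_READ) == 0 ? "allowed" : "blocked");
    printf("read-write open: %s\n",
           rvg_volume_open(MODE_READ | MODE_WRITE) == 0 ? "allowed" : "blocked");
    return 0;
}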

* 4119276 (Tracking ID: 4090943)

SYMPTOM:
On Primary, RLink is continuously getting connected/disconnected with below message seen in secondary syslog:
  VxVM VVR vxio V-5-3-0 Disconnecting replica <rlink_name> since log is full on secondary.

DESCRIPTION:
When the RVG logowner node panics, RVG recovery happens in 3 phases.
At the end of the 2nd phase of the recovery, the in-memory and on-disk SRL positions remain incorrect, and if there is a logowner change during this time, the rlink does not get connected.

RESOLUTION:
The in-memory and on-disk SRL positions are now handled correctly.

* 4119438 (Tracking ID: 4117985)

SYMPTOM:
Memory/data corruption is hit for EC volumes.

DESCRIPTION:
This is a porting request; the original change was already reviewed: http://codereview.engba.veritas.com/r/42056/

The memory corruption in EC was fixed by calling kernel_fpu_begin() for kernel versions earlier than RHEL8.6. In later kernels the kernel_fpu_begin() symbol is not available, so it cannot be used. A separate module named 'storageapi' has therefore been created, which implements _fpu_begin and _fpu_end; the VxIO module depends on 'storageapi'.

RESOLUTION:
An FPU lock is taken for FPU-related operations.

* 4120350 (Tracking ID: 4120878)

SYMPTOM:
System doesn't come up on taking a reboot after enabling dmp_native_support. System goes into maintenance mode.

DESCRIPTION:
"vxio.ko" is dependent on the new "storageapi.ko" module. "storageapi.ko" was missing from VxDMP_initrd file, which is created when dmp_native_support is enabled. So on reboot, without "storageapi.ko" present, "vxio.ko" fails to load.

RESOLUTION:
Code changes have been made to include "storageapi.ko" in VxDMP_initrd.

* 4121241 (Tracking ID: 4114927)

SYMPTOM:
After enabling dmp_native_support and rebooting, /boot is not mounted on the VxDMP node.

DESCRIPTION:
When dmp_native_support is enabled, vxdmproot script is expected to modify the /etc/fstab entry for /boot so that on next boot up, /boot is mounted on dmp device instead of OS device. Also, this operation modifies SELinux context of file /etc/fstab. This causes the machine to go into maintenance mode because of a read permission denied error for /etc/fstab on boot up.

RESOLUTION:
Code changes have been done to make sure SELinux context is preserved for /etc/fstab file and /boot is mounted on dmp device when dmp_native_support is enabled.

* 4121714 (Tracking ID: 4081740)

SYMPTOM:
The vxdg flush command is slow because too many LUNs needlessly access /proc/partitions.

DESCRIPTION:
Linux BLOCK_EXT_MAJOR (block major 259) is used as the extended devt for block devices. When the partition number of a device is more than 15, the partition device gets assigned under major 259 to work around the sd limitation of 16 minors per device, which allows more partitions on one sd device. During "vxdg flush", for each LUN in the disk group, vxconfigd reads the file /proc/partitions line by line through fgets() to find all the partition devices with major number 259, which makes vxconfigd respond sluggishly if there is a large number of LUNs in the disk group.

RESOLUTION:
The code has been changed to remove the needless access to /proc/partitions for LUNs that do not use an extended devt.
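
For illustration, the following small C program shows the kind of /proc/partitions scan described above: it reads the file line by line with fgets() and reports entries under the extended-devt major 259. This is a sketch of the mechanism only; the fix avoids this scan for LUNs without an extended devt, and this is not the vxconfigd code.

#include <stdio.h>

int main(void)
{
    FILE *fp = fopen("/proc/partitions", "r");
    char line[256], name[64];
    unsigned major, minor;
    unsigned long long blocks;

    if (!fp) {
        perror("/proc/partitions");
        return 1;
    }
    /* Each data line is: major minor #blocks name; headers fail the sscanf. */
    while (fgets(line, sizeof(line), fp)) {
        if (sscanf(line, "%u %u %llu %63s", &major, &minor, &blocks, name) == 4
            && major == 259)
            printf("extended-devt partition: %s\n", name);
    }
    fclose(fp);
    return 0;
}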

Patch ID: VRTSvxvm-8.0.0.2400

* 4110560 (Tracking ID: 4104927)

SYMPTOM:
vxvm-boot.service fails to start on linux platforms other than SLES15

DESCRIPTION:
SLES15-specific attribute changes cause vxvm-boot.service to fail to start on other Linux platforms.

RESOLUTION:
A new vxvm-boot.service file has been added for SLES15; the existing vxvm-boot.service file serves the other Linux platforms.

* 4113324 (Tracking ID: 4113323)

SYMPTOM:
Existing package failed to load on RHEL 8.8 server.

DESCRIPTION:
RHEL 8.8 is a new release, and hence the VxVM module is compiled with the new kernel along with a few other changes.

RESOLUTION:
The VxVM code has been compiled against the RHEL 8.8 kernel, with changes to make it compatible.

* 4113661 (Tracking ID: 4091076)

SYMPTOM:
SRL gets into pass-thru mode when it's about to overflow.

DESCRIPTION:
The primary initiated a log search for the requested update sent from the secondary. The search aborted with a head error because a check condition was not set correctly.

RESOLUTION:
Fixed the check condition to resolve the issue.

* 4113663 (Tracking ID: 4095163)

SYMPTOM:
System panic with below stack:
 #6 [] invalid_op at 
    [exception RIP: __slab_free+414]
 #7 [] kfree at 
 #8 [] vol_ru_free_update at [vxio]
 #9 [] vol_ru_free_updateq at  [vxio]
#10 [] vol_rv_write2_done at [vxio]
#11 [] voliod_iohandle at [vxio]
#12 [] voliod_loop at [vxio]

DESCRIPTION:
The update gets freed as a part of VVR recovery. At the same time, the same update also gets freed in the second phase of the VVR write. Hence there is a race in freeing the updates, which caused the system panic.

RESOLUTION:
Code changes have been made to avoid the race in freeing the updates.

* 4113664 (Tracking ID: 4091390)

SYMPTOM:
vradmind dumped core while accessing pHdr, which was already freed.

DESCRIPTION:
While processing the config message CFG_UPDATE, the existing config message objects were incorrectly freed. Later, the objects were accessed again, which caused the vradmind core dump.

RESOLUTION:
Changes have been made to access the correct configuration objects.

* 4113666 (Tracking ID: 4064772)

SYMPTOM:
After enabling slub debug, the system could hang under I/O load.

DESCRIPTION:
When creating VxVM I/O memory, VxVM does not align the cache size. The unaligned length is treated as an invalid I/O length in the SCSI layer, which causes some I/O requests to get stuck in an invalid state, so those I/Os can never complete. Thus a system hang can be observed, especially after cache slub debug is enabled.

RESOLUTION:
Code changes have been done to align the cache size.
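
The alignment fix can be sketched in C as follows: a length is rounded up to the next sector multiple before being used for an I/O buffer. The 512-byte sector size here is an assumption made for illustration; this is not the VxVM code.

#include <stdio.h>

#define SECTOR_SIZE 512ULL   /* assumed sector granularity for the sketch */

/* Round a length up to the next sector multiple so the SCSI layer
 * never sees an unaligned I/O length. */
static unsigned long long align_up(unsigned long long len)
{
    return (len + SECTOR_SIZE - 1) & ~(SECTOR_SIZE - 1);
}

int main(void)
{
    printf("%llu -> %llu\n", 1000ULL, align_up(1000));   /* 1000 -> 1024 */
    printf("%llu -> %llu\n", 4096ULL, align_up(4096));   /* already aligned */
    return 0;
}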

Patch ID: VRTSvxvm-8.0.0.2200

* 4058590 (Tracking ID: 4058867)

SYMPTOM:
Old VxVM rpm fails to load on RHEL8.7 minor kernel 4.18.0-425.10.1.el8_7.x86_64

DESCRIPTION:
Red Hat made some critical changes in the latest kernel, which cause a soft-lockup issue for the VxVM kernel modules during installation.

RESOLUTION:
As suggested by Red Hat (https://access.redhat.com/solutions/6985596), the VxVM modules are compiled with the RHEL 8.7 minor kernel.

* 4108392 (Tracking ID: 4107802)

SYMPTOM:
vxdmp fails to load and the system hangs.

DESCRIPTION:
This issue occurs due to changes in the RHEL8.7 minor kernel, because of which an incorrect module is calculated as the best fit.

RESOLUTION:
The existing modinst-vxvm script has been modified to calculate the correct best-fit module.

Patch ID: VRTSaslapm-8.0.0.2200

* 4108933 (Tracking ID: 4107932)

SYMPTOM:
Support for ASLAPM on RHEL8.7 minor kernel 4.18.0-425.10.1.el8_7.x86_64

DESCRIPTION:
Red Hat made some critical changes in the latest kernel, which cause a soft-lockup issue for the kernel modules during installation.

RESOLUTION:
As suggested by Red Hat (https://access.redhat.com/solutions/6985596), the modules are compiled with the RHEL 8.7 minor kernel.

Patch ID: VRTSvxvm-8.0.0.2100

* 4102502 (Tracking ID: 4102501)

SYMPTOM:
A security vulnerability exists in the third-party component libcurl.

DESCRIPTION:
VxVM uses a third-party component named libcurl in which a security vulnerability exists.

RESOLUTION:
VxVM is updated to use a newer version of libcurl in which the security vulnerability has been addressed.

Patch ID: VRTSaslapm-8.0.0.2100

* 4102502 (Tracking ID: 4102501)

SYMPTOM:
A security vulnerability exists in the third-party component libcurl.

DESCRIPTION:
VxVM uses a third-party component named libcurl in which a security vulnerability exists.

RESOLUTION:
VxVM is updated to use a newer version of libcurl in which the security vulnerability has been addressed.

Patch ID: VRTSvxvm-8.0.0.1900

* 4102924 (Tracking ID: 4101128)

SYMPTOM:
Old VxVM rpm fails to load on RHEL8.7

DESCRIPTION:
RHEL8.7 is a new OS release with multiple kernel changes that made VxVM incompatible with OS kernel version 4.18.0-425.3.1.

RESOLUTION:
The required code changes have been made, and the VxVM module is compiled with the RHEL 8.7 kernel.

Patch ID: VRTSaslapm-8.0.0.1900

* 4102973 (Tracking ID: 4101139)

SYMPTOM:
Support for ASLAPM on RHEL 8.7 kernel

DESCRIPTION:
RHEL8.7 is a new release, and hence the APM module must be recompiled with the new kernel.

RESOLUTION:
Compiled APM with new kernel.

Patch ID: VRTSvxvm-8.0.0.1800

* 4067609 (Tracking ID: 4058464)

SYMPTOM:
vradmin resizevol fails when FS is not mounted on master.

DESCRIPTION:
The vradmin resizevol command resizes the data volume and the file system on the primary site, whereas on the secondary site it resizes only the data volume, because the file system is not mounted there. vradmin resizevol ships the command to the logowner at the vradmind level; vradmind on the logowner in turn ships the low-level vx commands to the master at the vradmind level, and the command is finally executed on the master.

RESOLUTION:
Changes have been introduced to ship the command to the node on which the file system is mounted. The CVM node name where the file system is mounted must be provided; vradmind then uses it to ship the command to that node.

* 4067635 (Tracking ID: 4059982)

SYMPTOM:
In a container environment, the vradmin migrate command fails multiple times because the rlink is not in the connected state.

DESCRIPTION:
In VVR, rlinks are disconnected and reconnected during the replication lifecycle. If the vradmin migrate command is executed in the meantime, it encounters errors. Internally, this causes vradmind to make configuration changes multiple times, which impacts subsequent vradmin commands.

RESOLUTION:
vradmin migrate requires the rlink data to be up to date on both the primary and the secondary. It internally executes low-level commands such as vxrvg makesecondary and vxrvg makeprimary to change the roles of the primary and the secondary. These commands do not depend on the rlink being in the connected state, so the rlink connection handling has been removed.

* 4070098 (Tracking ID: 4071345)

SYMPTOM:
Replication is unresponsive after the failed site comes back up.

DESCRIPTION:
Autosync and unplanned fallback synchronisation had issues with a mix of cloud and non-cloud volumes in an RVG: after a cloud volume was found, the rest of the volumes were ignored for synchronisation.

RESOLUTION:
The condition has been fixed so that it iterates over all volumes.

* 4078531 (Tracking ID: 4075860)

SYMPTOM:
On RHEL8, NBFS/Access commands like python3, sort, sudo, ssh, etc. generate core dumps while mkfs.vxfs and mkfs.ext4 run in parallel.

DESCRIPTION:
On RHEL8, NBFS/Access commands like python3, sort, sudo, ssh, etc. generate core dumps while mkfs.vxfs and mkfs.ext4 run in parallel. This was happening due to missing FPU protection around the FPU instruction set.

RESOLUTION:
A fix has been added to use FPU protection while using the FPU instruction set.

* 4079345 (Tracking ID: 4069940)

SYMPTOM:
FS mount failed during cluster configuration on a 24-node physical BOM setup.

DESCRIPTION:
FS mount failed during cluster configuration on a 24-node physical BOM setup because VxVM transactions were taking more time than the VCS timeouts.

RESOLUTION:
A fix has been added to reduce unnecessary transaction time on large-node setups.

* 4080041 (Tracking ID: 4056953)

SYMPTOM:
3PAR PE LUNs are reported in error state by 3PAR ASL

DESCRIPTION:
3PAR storage presents some special storage LUNs (3PAR PE) that need to be skipped by VxVM and not claimed. Multiple PE LUNs from different 3PAR enclosures caused an issue for VxDMP to handle.

RESOLUTION:
A fix has been added so that the 3PAR ASL skips the 3PAR PE LUNs, avoiding the disks being reported in the error state.

* 4080105 (Tracking ID: 4045837)

SYMPTOM:
DCL volume subdisks do not relocate after the node fault timeout and remain in the RELOCATE state.

DESCRIPTION:
If a DCO has failed plexes and the DCO is on different disks than the data, the DCO relocation needs to be triggered explicitly, because try_fss_reloc performs DCO relocation only in the context of the data, which may not succeed if sufficient data disks are not available (additional hosts/disks may be available where the DCO can relocate).

RESOLUTION:
A fix has been added to relocate the DCL subdisks to available spare disks.

* 4080122 (Tracking ID: 4044068)

SYMPTOM:
Replace Node is failing at Configuring NetBackup stage due to vxdisk init failed with error "Could not obtain requested lock".

DESCRIPTION:
Replace Node is failing at Configuring NetBackup stage due to vxdisk init failed with error "Could not obtain requested lock".

RESOLUTION:
A fix has been added to retry the transaction a few times if it fails with this error.

* 4080269 (Tracking ID: 4044898)

SYMPTOM:
Rlink tags from the info records were not visible with the vxrlink listtag command.

DESCRIPTION:
Making rlinks FIPS compliant has a second phase that deals with the disk group upgrade path, where the rlink encryption tags need to be copied to the info record and made FIPS compliant. Here, vxdg upgrade internally calls vxrlink and vxencrypt to upgrade the rlink and rekey the rlink keys respectively.

RESOLUTION:
All the encryption tags for the rlink are copied to the info record. When the disk group is upgraded, the rlink is internally upgraded as well, and this upgrade process copies the rlink tags to the info records.

* 4080276 (Tracking ID: 4065145)

SYMPTOM:
During addsec, encrypted volume tags could not be processed for multiple volumes and vsets. The following error was seen: $ vradmin -g dg2 -encrypted addsec dg2_rvg1 10.210.182.74 10.210.182.75 Error: Duplicate tag name vxvm.attr.enckeytype provided in input.

DESCRIPTION:
The number of tags was not defined, and all the tags were processed at once instead of processing the maximum number of tags per volume.

RESOLUTION:
A number-of-tags variable has been introduced that depends on the cipher method (CBC/GCM), and minor code issues have been fixed.

* 4080277 (Tracking ID: 3966157)

SYMPTOM:
The SRL batching feature was broken and could not be enabled, as it might cause problems.

DESCRIPTION:
Batching of updates needs to be done to get the benefit of combining multiple updates and thereby increasing performance.

RESOLUTION:
The design has been simplified: each small update within a batch is now aligned to a 4K size, so by default the whole batch is aligned. This removes the need for bookkeeping around the last update in a batch and reduces the overhead of the related calculations. By padding individual updates to 4K, the batch of updates is itself 4K aligned.
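
The padding scheme can be sketched in C as follows: each update is padded to a 4K multiple, so the running batch size stays 4K aligned without special bookkeeping for the last update. The sizes are invented for illustration; this is not the VVR code.

#include <stdio.h>

#define UPDATE_ALIGN 4096ULL

/* Pad each update to 4K so the whole batch stays 4K aligned and no
 * per-batch bookkeeping is needed for the last update. */
static unsigned long long padded(unsigned long long len)
{
    return (len + UPDATE_ALIGN - 1) & ~(UPDATE_ALIGN - 1);
}

int main(void)
{
    unsigned long long batch = 0, sizes[] = { 700, 5000, 4096 };
    for (int i = 0; i < 3; i++) {
        batch += padded(sizes[i]);
        printf("update of %llu bytes occupies %llu\n", sizes[i], padded(sizes[i]));
    }
    printf("batch total %llu (a 4K multiple)\n", batch);
    return 0;
}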

* 4080579 (Tracking ID: 4077876)

SYMPTOM:
When one cluster node is rebooted, EC log replay is triggered for shared EC volume. It is seen that system is crashed during this EC log replay.

DESCRIPTION:
Two flags were assigned the same value, so the system panicked during a flag check.

RESOLUTION:
The code flow has been changed to avoid checking the values of flags having the same value.

* 4080845 (Tracking ID: 4058166)

SYMPTOM:
While setting up VVR/CVR on large size data volumes (size > 3TB) with filesystems mounted on them, initial autosync operation takes a lot of time to complete.

DESCRIPTION:
While performing autosync on a VVR/CVR setup for a volume with a file system mounted, if the smartmove feature is enabled, the operation performs a smart sync by syncing only the regions dirtied by the file system instead of the entire volume, which completes faster than the normal case. However, for large volumes (size > 3TB), the smartmove feature does not get enabled even with a file system mounted, so autosync syncs the entire volume. This behaviour is due to the smaller DCM plexes allocated for such large volumes; autosync ends up performing a complete volume sync, taking much more time to complete.

RESOLUTION:
The limit on the DCM plex size (loglen) has been increased beyond 2MB so that the smartmove feature can be utilised properly.

* 4080846 (Tracking ID: 4058437)

SYMPTOM:
Replication between 8.0 and 7.4.x fails with an error due to sector size field.

DESCRIPTION:
The 7.4.x branch has the sector size set to zero, which internally indicates 512 bytes. This caused startrep and resumerep to fail with the following error message: Message from Primary: VxVM VVR vxrlink ERROR V-5-1-20387 sector size mismatch, Primary is having sector size 512, Secondary is having sector size 0

RESOLUTION:
A check has been added to support replication between 8.0 and 7.4.x.

* 4081790 (Tracking ID: 4080373)

SYMPTOM:
SFCFSHA configuration failed on RHEL 8.4 due to a 'chmod -R' error.

DESCRIPTION:
Failure messages were logged because all log permissions are changed to 600 during the upgrade and all log files are moved to '/var/log/vx'.

RESOLUTION:
The -f option has been added to the chmod command to suppress warnings, and errors from the mv command are redirected to /dev/null.

* 4083337 (Tracking ID: 4081890)

SYMPTOM:
On RHEL8, NBFS/Access commands like python3, sort, sudo, ssh, etc. generate core dumps while mkfs.vxfs and mkfs.ext4 run in parallel.

DESCRIPTION:
On RHEL8, NBFS/Access commands like python3, sort, sudo, ssh, etc. generate core dumps while mkfs.vxfs and mkfs.ext4 run in parallel. This was happening due to missing FPU protection around the FPU instruction set.

RESOLUTION:
A fix has been added to use FPU protection while using the FPU instruction set.

* 4085619 (Tracking ID: 4086718)

SYMPTOM:
VxVM fails to install because vxdmp module fails to load on latest minor kernel of SLES15SP2.

DESCRIPTION:
VxVM modules fail to load on the latest minor kernel of SLES15SP2. The following messages can be seen in the system logs: vxvm-boot[32069]: ERROR: No appropriate modules found. vxvm-boot[32069]: Error in loading module "vxdmp". See documentation. vxvm-boot[32069]: Modules not Loaded

RESOLUTION:
Code changes have been done to fix this issue.

* 4087233 (Tracking ID: 4086856)

SYMPTOM:
For the Appliance FLEX product using VRTSdocker-plugin, docker.service needs to be replaced, as it is not supported on RHEL8.

DESCRIPTION:
The Appliance FLEX product using VRTSdocker-plugin is switching to RHEL8, on which docker.service does not exist. vxinfoscale-docker.service must stop after all container services are stopped. podman.service shuts down after all container services are stopped, so docker.service can be replaced with podman.service.

RESOLUTION:
Platform-specific dependencies have been added for VRTSdocker-plugin. For RHEL8, podman.service is introduced.

* 4087439 (Tracking ID: 4088934)

SYMPTOM:
"dd" command on a simple volume results in kernel panic.

DESCRIPTION:
A kernel panic is observed with the following stack trace:
 #0 [ffffb741c062b978] machine_kexec at ffffffffa806fe01
 #1 [ffffb741c062b9d0] __crash_kexec at ffffffffa815959d
 #2 [ffffb741c062ba98] crash_kexec at ffffffffa815a45d
 #3 [ffffb741c062bab0] oops_end at ffffffffa8036d3f
 #4 [ffffb741c062bad0] general_protection at ffffffffa8a012c2
    [exception RIP: __blk_rq_map_sg+813]
    RIP: ffffffffa84419dd RSP: ffffb741c062bb88 RFLAGS: 00010202
    RAX: 0c2822c2621b1294 RBX: 0000000000010000 RCX: 0000000000000000
    RDX: ffffb741c062bc40 RSI: 0000000000000000 RDI: ffff8998fc947300
    RBP: fffff92f0cbe6f80 R8: ffff8998fcbb1200 R9: fffff92f0cbe0000
    R10: ffff8999bf4c9818 R11: 000000000011e000 R12: 000000000011e000
    R13: fffff92f0cbe0000 R14: 00000000000a0000 R15: 0000000000042000
    ORIG_RAX: ffffffffffffffff CS: 0010 SS: 0018
 #5 [ffffb741c062bc38] scsi_init_io at ffffffffc03107a2 [scsi_mod]
 #6 [ffffb741c062bc78] sd_init_command at ffffffffc056c425 [sd_mod]
 #7 [ffffb741c062bcd8] scsi_queue_rq at ffffffffc0311f6e [scsi_mod]
 #8 [ffffb741c062bd20] blk_mq_dispatch_rq_list at ffffffffa8447cfe
 #9 [ffffb741c062bdc0] __blk_mq_do_dispatch_sched at ffffffffa844cae0
 #10 [ffffb741c062be28] __blk_mq_sched_dispatch_requests at ffffffffa844d152
 #11 [ffffb741c062be68] blk_mq_sched_dispatch_requests at ffffffffa844d290
 #12 [ffffb741c062be78] __blk_mq_run_hw_queue at ffffffffa84466a3
 #13 [ffffb741c062be98] process_one_work at ffffffffa80bcd74
 #14 [ffffb741c062bed8] worker_thread at ffffffffa80bcf8d
 #15 [ffffb741c062bf10] kthread at ffffffffa80c30ad
 #16 [ffffb741c062bf50] ret_from_fork at ffffffffa8a001ff

RESOLUTION:
Code changes have been done to fix this issue.
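
The panic was triggered by plain I/O to a simple volume, for example (the disk group and volume names are placeholders):
    # dd if=/dev/zero of=/dev/vx/dsk/testdg/simplevol bs=1M count=100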

* 4087791 (Tracking ID: 4087770)

SYMPTOM:
Data corruption is seen after a mirror attach operation that follows a complete storage fault for DCO volumes.

DESCRIPTION:
DCO (data change object) tracks delta changes for faulted mirrors. During a complete storage loss of the DCO volume mirrors, the DCO object is marked BADLOG and becomes unusable for bitmap tracking. After the storage reconnects (for example, a node rejoin in FSS environments), the DCO is repaired for subsequent tracking. During the repair, if VxVM finds any detached mirrors of data volumes, those mirrors are expected to be marked for full resynchronization because the bitmap in the DCO no longer holds valid information. A bug in the repair-DCO logic prevented a mirror from being marked for full resynchronization when the repair operation was triggered before the data volume was started. As a result, a mirror could be attached without any data being copied from the good mirrors; reads serviced from such a mirror returned stale data, leading to file system corruption and data loss.

RESOLUTION:
Code has been added to ensure that the repair-DCO operation is performed only when the volume object is enabled, so that detached mirrors are appropriately marked for full resynchronization.
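
After storage reconnects and mirrors are reattached, the plex states of the data volume can be reviewed with, for example (the disk group and volume names are placeholders):
    # vxprint -g testdg -ht datavol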

* 4088076 (Tracking ID: 4054685)

SYMPTOM:
RVG recovery hangs in reconfiguration scenarios in CVR environments, leading to VxVM commands hanging on the master node.

DESCRIPTION:
As a part of RVG recovery, DCM and data volume recovery are performed. The data volume recovery takes a long time because of incorrect IOD handling on Linux platforms.

RESOLUTION:
Fixed the IOD handling mechanism to resolve the RVG recovery hang.

* 4088483 (Tracking ID: 4088484)

SYMPTOM:
The DMP_APM module does not load, and the following messages appear in the dmesg logs:
    Mod load failed for dmpnvme module: dependency conflict
    VxVM vxdmp V-5-0-1015 DMP_APM: DEPENDENCY CONFLICT

DESCRIPTION:
The NVMe APM declared a dependency on the dmpaa module. On systems without any A/A type disks, the dmpaa module is not loaded, which in turn causes the NVMe module load to fail.

RESOLUTION:
Removed A/A dependency from NVMe APM.

* 4088762 (Tracking ID: 4087099)

SYMPTOM:
The DG is not imported after an upgrade to InfoScale 8.0u1 on RHEL8.6, and NVMe disks are in an error state.

DESCRIPTION:
The minor number of NVMe disks changed when scandisks was performed. This led to incorrect major/minor information being present in the vold core database.

RESOLUTION:
Fixed the device open by passing O_RDONLY; opening with write permissions changed the minor number.
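
The behavior can be checked by comparing a device's major and minor numbers before and after a rescan (the device name is illustrative):
    # ls -l /dev/nvme0n1
    # vxdisk scandisks
    # ls -l /dev/nvme0n1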

Patch ID: VRTSaslapm-8.0.0.1800

* 4080041 (Tracking ID: 4056953)

SYMPTOM:
3PAR PE LUNs are reported in error state by 3PAR ASL

DESCRIPTION:
3PAR storage presents some special storage LUNs (3PAR PE) that VxVM must skip and not claim. Multiple PE LUNs from different 3PAR enclosures caused a handling issue for VxDMP.

RESOLUTION:
A fix is added so that the 3PAR ASL skips the 3PAR PE LUNs, which prevents the disks from being reported in the error state.
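
After the fix, one can confirm that no disks remain in the error state by checking the STATUS column of the disk list:
    # vxdisk list | grep -i error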

* 4088762 (Tracking ID: 4087099)

SYMPTOM:
The DG is not imported after an upgrade to InfoScale 8.0u1 on RHEL8.6, and NVMe disks are in an error state.

DESCRIPTION:
The minor number of NVMe disks changed when scandisks was performed. This led to incorrect major/minor information being present in the vold core database.

RESOLUTION:
Fixed the device open by passing O_RDONLY; opening with write permissions changed the minor number.

Patch ID: VRTSvxvm-8.0.0.1700

* 4081684 (Tracking ID: 4082799)

SYMPTOM:
A security vulnerability exists in the third-party component libcurl.

DESCRIPTION:
VxVM uses a third-party component named libcurl in which a security vulnerability exists.

RESOLUTION:
VxVM is updated to use a newer version of libcurl in which the security vulnerability has been addressed.

Patch ID: VRTSvxvm-8.0.0.1600

* 4057420 (Tracking ID: 4060462)

SYMPTOM:
System is unresponsive while adding new nodes.

DESCRIPTION:
After a node is removed and a node with a different node name is added, the system becomes unresponsive. When a node leaves the cluster, in-memory information related to the node is not cleared because of a race condition.

RESOLUTION:
Fixed the race condition so that the in-memory information of a node that leaves the cluster is cleared.

* 4062799 (Tracking ID: 4064208)

SYMPTOM:
A node becomes unresponsive while it is being added to the cluster.

DESCRIPTION:
When a node joins the cluster after its bits have been upgraded, the size of an object is interpreted incorrectly. The issue is observed when the number of objects is high, and on InfoScale 7.3.1 and later.

RESOLUTION:
Correct sizes are calculated for the data received from the master node.

* 4065841 (Tracking ID: 4065495)

SYMPTOM:
Support needs to be added for the new EMC PowerStore array.

DESCRIPTION:
EMC PowerStore is a new array that the current ASL does not support, so it is not claimed. Support for this array has now been added to the ASL.

RESOLUTION:
Code changes to support EMC PowerStore have been done.
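
Whether the array is claimed by the updated ASL can be verified with, for example:
    # vxddladm listsupport
    # vxdisk -e list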

* 4066213 (Tracking ID: 4052580)

SYMPTOM:
Multipathing is not supported for NVMe devices under VxVM.

DESCRIPTION:
Because NVMe devices are non-SCSI devices, they are not considered for multipathing.

RESOLUTION:
Changes introduced to support multipathing for NVMe devices.
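
Once NVMe devices are claimed, their DMP subpaths can be inspected with, for example:
    # vxdmpadm getsubpaths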

* 4068407 (Tracking ID: 4068404)

SYMPTOM:
Support needs to be added to claim ALUA disks on HPE 3PAR/Primera/Alletra 9000 arrays.

DESCRIPTION:
The current ASL does not support the HPE 3PAR/Primera/Alletra 9000 ALUA array type. Support for this ALUA array type has now been added to the ASL.

RESOLUTION:
Code changes to support HPE 3PAR/Primera/Alletra 9000 ALUA array have been done.

Patch ID: VRTSaslapm-8.0.0.1600

* 4065841 (Tracking ID: 4065495)

SYMPTOM:
Support needs to be added for the new EMC PowerStore array.

DESCRIPTION:
EMC PowerStore is a new array that the current ASL does not support, so it is not claimed. Support for this array has now been added to the ASL.

RESOLUTION:
Code changes to support EMC PowerStore have been done.

* 4068407 (Tracking ID: 4068404)

SYMPTOM:
Support needs to be added to claim ALUA disks on HPE 3PAR/Primera/Alletra 9000 arrays.

DESCRIPTION:
The current ASL does not support the HPE 3PAR/Primera/Alletra 9000 ALUA array type. Support for this ALUA array type has now been added to the ASL.

RESOLUTION:
Code changes to support HPE 3PAR/Primera/Alletra 9000 ALUA array have been done.

Patch ID: VRTSvxvm-8.0.0.1200

* 4066259 (Tracking ID: 4062576)

SYMPTOM:
When hastop -local is used to stop the cluster, the dg deport command hangs. The following stack trace is observed in the system logs:
    #0 [ffffa53683bf7b30] __schedule at ffffffffa834a38d
    #1 [ffffa53683bf7bc0] schedule at ffffffffa834a868
    #2 [ffffa53683bf7bd0] blk_mq_freeze_queue_wait at ffffffffa7e4d4e6
    #3 [ffffa53683bf7c18] blk_cleanup_queue at ffffffffa7e433b8
    #4 [ffffa53683bf7c30] vxvm_put_gendisk at ffffffffc3450c6b [vxio]
    #5 [ffffa53683bf7c50] volsys_unset_device at ffffffffc3450e9d [vxio]
    #6 [ffffa53683bf7c60] vol_rmgroup_devices at ffffffffc3491a6b [vxio]
    #7 [ffffa53683bf7c98] voldg_delete at ffffffffc34932fc [vxio]
    #8 [ffffa53683bf7cd8] vol_delete_group at ffffffffc3494d0d [vxio]
    #9 [ffffa53683bf7d18] volconfig_ioctl at ffffffffc3555b8e [vxio]
    #10 [ffffa53683bf7d90] volsioctl_real at ffffffffc355fc8a [vxio]
    #11 [ffffa53683bf7e60] vols_ioctl at ffffffffc124542d [vxspec]
    #12 [ffffa53683bf7e78] vols_unlocked_ioctl at ffffffffc124547d [vxspec]
    #13 [ffffa53683bf7e80] do_vfs_ioctl at ffffffffa7d2deb4
    #14 [ffffa53683bf7ef8] ksys_ioctl at ffffffffa7d2e4f0
    #15 [ffffa53683bf7f30] __x64_sys_ioctl at ffffffffa7d2e536

DESCRIPTION:
This issue is caused by kernel-side changes in request queue handling. The existing VxVM code sets the request handling entry point (make_request_fn) to vxvm_gen_strategy, and this functionality is affected by those changes.

RESOLUTION:
Code changes are added to handle the request queues using blk_mq_init_allocated_queue.
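
The affected path can also be exercised directly with, for example (the disk group name is a placeholder):
    # vxdg deport testdg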

Patch ID: VRTSvxvm-8.0.0.1100

* 4064786 (Tracking ID: 4053230)

SYMPTOM:
RHEL 8.5 support is to be provided with InfoScale 8.0.

DESCRIPTION:
RHEL 8.5 ZDS support is provided with InfoScale 8.0.

RESOLUTION:
VxVM packages are now available with RHEL 8.5 compatibility.

* 4065628 (Tracking ID: 4065627)

SYMPTOM:
VxVM modules are not loaded after an OS upgrade followed by a reboot.

DESCRIPTION:
After the stack installation is completed with configuration, following an OS upgrade the vxvm directory is not created under /lib/modules/<upgraded_kernel>/veritas/.

RESOLUTION:
The VxVM code is updated with the required changes.
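
After rebooting into the upgraded kernel, the presence of the module directory can be checked with, for example:
    # ls /lib/modules/$(uname -r)/veritas/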

Patch ID: VRTSodm-8.0.0.3200

* 4144128 (Tracking ID: 4126256)

SYMPTOM:
A "no symbol version" warning for "ki_get_boot" appears in dmesg after SFCFSHA configuration.

DESCRIPTION:
modpost is unable to read VEKI's Module.symvers while building the ODM module, which results in a "no symbol version" warning for the "ki_get_boot" symbol of VEKI.

RESOLUTION:
Modified the code to make sure that modpost picks all the dependent symbols while building ODM module.
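
After the rebuilt module is loaded, the absence of the warning can be confirmed with:
    # dmesg | grep ki_get_boot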

* 4144301 (Tracking ID: 4118154)

SYMPTOM:
The system may panic in simple_unlock_mem() when errcheckdetail is enabled, with the following stack trace:
		simple_unlock_mem()
		odm_io_waitreq()
		odm_io_waitreqs()
		odm_request_wait()
		odm_io()
		odm_io_stat()
		vxodmioctl()

DESCRIPTION:
odm_io_waitreq() takes a lock and waits for the I/O request to complete, but it is interrupted by odm_iodone(), which performs the I/O and releases the lock taken by odm_io_waitreq(). When odm_io_waitreq() then tries to unlock the lock, the system panics because the lock has already been unlocked.

RESOLUTION:
Code has been modified to resolve this issue.

Patch ID: VRTSodm-8.0.0.3100

* 4154894 (Tracking ID: 4144269)

SYMPTOM:
After installation, ODM fails to start.

DESCRIPTION:
Because of the VxFS version update, the ODM module needs to be repackaged due to an
internal dependency on the VxFS version.

RESOLUTION:
As part of this fix, the ODM module has been repackaged to support the updated
VxFS version.

Patch ID: VRTSodm-8.0.0.2900

* 4057432 (Tracking ID: 4056673)

SYMPTOM:
Rebooting the system causes it to enter emergency mode.

DESCRIPTION:
Module dependency files get corrupted due to parallel invocation of depmod.

RESOLUTION:
Serialized the invocation of depmod through a file lock, and corrected the vxgms dependency in the ODM service file.
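
A minimal sketch of the serialization technique named above, using flock(1); the lock file path here is illustrative, not the one used by the product scripts:
    # flock /var/lock/vxdepmod.lock depmod -a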

Patch ID: VRTSodm-8.0.0.2700

* 4113912 (Tracking ID: 4113118)

SYMPTOM:
The ODM module fails to load on RHEL8.8.

DESCRIPTION:
This issue occurs due to changes in the RHEL8.8.

RESOLUTION:
Updated ODM to support RHEL 8.8.

Patch ID: VRTSodm-8.0.0.2600

* 4114656 (Tracking ID: 4114655)

SYMPTOM:
The ODM module fails to load on RHEL8.7 minor kernel 4.18.0-425.19.2.

DESCRIPTION:
This issue occurs due to changes in the RHEL8.7 minor kernel.

RESOLUTION:
Updated ODM to support RHEL 8.7 minor kernel 4.18.0-425.19.2.

Patch ID: VRTSodm-8.0.0.2300

* 4108585 (Tracking ID: 4107778)

SYMPTOM:
The ODM module fails to load on RHEL8.7 minor kernel.

DESCRIPTION:
This issue occurs due to changes in the RHEL8.7 minor kernel.

RESOLUTION:
Modified existing modinst-odm script to accommodate the changes in the kernel and load the correct module.

Patch ID: VRTSodm-8.0.0.2200

* 4100923 (Tracking ID: 4100922)

SYMPTOM:
The ODM module fails to load on RHEL8.7.

DESCRIPTION:
RHEL8.7 is a new release with kernel changes that caused the ODM module to fail to load.

RESOLUTION:
Added code to support ODM on RHEL8.7.
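
After patching, the module load can be verified with, for example (assuming the ODM kernel module is named vxodm):
    # lsmod | grep vxodm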



INSTALLING THE PATCH
--------------------
Run the Installer script to automatically install the patch:
-----------------------------------------------------------
Note that the installation of this P-Patch causes downtime.

To install the patch perform the following steps on at least one node in the cluster:
1. Copy the patch infoscale-rhel8_x86_64-Patch-8.0.0.3200.tar.gz to /tmp
2. Untar infoscale-rhel8_x86_64-Patch-8.0.0.3200.tar.gz to /tmp/hf
    # mkdir /tmp/hf
    # cd /tmp/hf
    # gunzip /tmp/infoscale-rhel8_x86_64-Patch-8.0.0.3200.tar.gz
    # tar xf /tmp/infoscale-rhel8_x86_64-Patch-8.0.0.3200.tar
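   Equivalently, the archive can be extracted in one step:
    # tar xzf /tmp/infoscale-rhel8_x86_64-Patch-8.0.0.3200.tar.gz -C /tmp/hf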
3. Install the hotfix. (Note that the installation of this P-Patch causes downtime.)
    # cd /tmp/hf
    # ./installVRTSinfoscale800P3200 [<host1> <host2>...]

You can also install this patch together with the 8.0 base release by using Install Bundles:
1. Download this patch and extract it to a directory
2. Change to the Veritas InfoScale 8.0 directory and invoke the installer script
   with -patch_path option where -patch_path should point to the patch directory
    # ./installer -patch_path [<path to this patch>] [<host1> <host2>...]

Install the patch manually:
--------------------------
Manual installation is not recommended.


REMOVING THE PATCH
------------------
Manual uninstallation is not recommended.


KNOWN ISSUES
------------
* Tracking ID: 4091160

SYMPTOM: While two or more mount operations (of VxFS file systems) are running in parallel
underneath an existing VxFS mount point, if a forced unmount is attempted on the parent VxFS
mount point, the forced unmount operation sometimes hangs permanently.

WORKAROUND: There is no workaround other than rebooting the system.



SPECIAL INSTRUCTIONS
--------------------
NONE


OTHERS
------
NONE