Release type:        Patch
Release date:        2023-02-13
OS update support:   None
Technote:            None
Documentation:       None
Popularity:          147 viewed/downloaded
Download size:       86.67 MB
Checksum:            3509725176

Base products:
* InfoScale Availability 7.4.2 on SLES12 x86-64
* InfoScale Enterprise 7.4.2 on SLES12 x86-64
* InfoScale Foundation 7.4.2 on SLES12 x86-64
* InfoScale Storage 7.4.2 on SLES12 x86-64
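The checksum above can be used to verify the download before installation. A minimal verification sketch, assuming the published value is a POSIX `cksum`-style CRC (the page does not name the algorithm, so that is an assumption):

```shell
# Hedged sketch: assumes the published checksum is a POSIX cksum CRC.
expected=3509725176
file=infoscale-sles12_x86_64-Patch-7.4.2.2600.tar.gz

# cksum prints: <crc> <byte-count> <filename>
actual=$(cksum "$file" | awk '{print $1}')

if [ "$actual" = "$expected" ]; then
    echo "checksum OK"
else
    echo "checksum MISMATCH: got $actual, expected $expected" >&2
fi
```

If the values differ, re-download the archive rather than proceeding with installation.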
None.
4012765, 4013420, 4014720, 4015287, 4015834, 4015835, 4016721, 4017282, 4017818, 4017820, 4019877, 4020055, 4020056, 4020912, 4023556, 4040238, 4040608, 4040612, 4040618, 4042686, 4044184, 4046265, 4046266, 4046267, 4046271, 4046272, 4046829, 4047568, 4049091, 4049097, 4049440, 4057429, 4060549, 4060566, 4060584, 4060585, 4060805, 4061203, 4061527, 4079532, 4079828, 4080777
VRTSpython-3.7.4.36-SLES12
VRTSperl-5.30.0.3-SLES12
VRTSvxfs-7.4.2.3400-SLES12
VRTSodm-7.4.2.3400-SLES12
* * * READ ME * * *
* * * InfoScale 7.4.2 * * *
* * * Patch 2600 * * *
Patch Date: 2022-07-04

This document provides the following information:

* PATCH NAME
* OPERATING SYSTEMS SUPPORTED BY THE PATCH
* PACKAGES AFFECTED BY THE PATCH
* BASE PRODUCT VERSIONS FOR THE PATCH
* SUMMARY OF INCIDENTS FIXED BY THE PATCH
* DETAILS OF INCIDENTS FIXED BY THE PATCH
* INSTALLATION PRE-REQUISITES
* INSTALLING THE PATCH
* REMOVING THE PATCH

PATCH NAME
----------
InfoScale 7.4.2 Patch 2600

OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
SLES12 x86-64

PACKAGES AFFECTED BY THE PATCH
------------------------------
VRTSodm
VRTSperl
VRTSpython
VRTSvxfs

BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
* InfoScale Availability 7.4.2
* InfoScale Enterprise 7.4.2
* InfoScale Foundation 7.4.2
* InfoScale Storage 7.4.2

SUMMARY OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
Patch ID: VRTSodm-7.4.2.3400
* 4080777 (4080776) The VRTSodm driver does not load with the 7.4.2.3400 VRTSvxfs patch.

Patch ID: VRTSodm-7.4.2.2600
* 4057429 (4056673) Rebooting the system results in emergency mode due to corruption of module dependency files. Incorrect vxgms dependency in the odm service file.
* 4060584 (3868609) High CPU usage by a vxfs thread.

Patch ID: VRTSodm-7.4.2.2200
* 4049440 (4049438) The VRTSodm driver does not load with the 7.4.2.2200 VRTSvxfs patch.

Patch ID: VRTSodm-7.4.2.1500
* 4023556 (4023555) Unable to load the vxodm module on Linux.

Patch ID: VRTSpython-3.7.4.36
* 4079828 (4079827) Security vulnerabilities detected in the OpenSSL packaged with VRTSperl/VRTSpython for InfoScale 7.4.2 and its update releases.

Patch ID: VRTSperl-5.30.0.3
* 4079828 (4079827) Security vulnerabilities detected in the OpenSSL packaged with VRTSperl/VRTSpython for InfoScale 7.4.2 and its update releases.
Patch ID: VRTSvxfs-7.4.2.3400
* 4079532 (4079869) Security vulnerability in VxFS third-party components.

Patch ID: VRTSvxfs-7.4.2.2600
* 4015834 (3988752) Use the ldi_strategy() routine instead of bdev_strategy() for I/O on Solaris.
* 4040612 (4033664) Multiple different issues occur with hardlink replication using VFR.
* 4040618 (4040617) Veritas File Replicator does not perform as expected.
* 4060549 (4047921) Replication jobs hang when pause/resume operations are performed repeatedly.
* 4060566 (4052449) The cluster becomes unresponsive while invalidating pages due to duplicate page entries in the iowr structure.
* 4060585 (4042925) Intermittent performance issues with commands such as df and ls.
* 4060805 (4042254) A new feature has been added to vxupgrade that fails the disk-layout upgrade if sufficient space is not available in the file system.
* 4061203 (4005620) The internal counter of inodes in an Inode Allocation Unit (IAU) can become negative if the IAU is marked bad.
* 4061527 (4054386) If the systemd service fails to load the vxfs module, the service still shows its status as active instead of failed.

Patch ID: VRTSvxfs-7.4.2.2200
* 4013420 (4013139) The abort operation fails on an ongoing online migration from a native file system to VxFS on RHEL 8.x systems.
* 4040238 (4035040) The vfradmin stats command fails to show all fields in its output after a job is paused and resumed.
* 4040608 (4008616) The fsck command hangs.
* 4042686 (4042684) ODM resize fails for size 8192.
* 4044184 (3993140) Compclock was not giving accurate results.
* 4046265 (4037035) Added the new tunable "vx_ninact_proc_threads" to control the number of inactive processing threads.
* 4046266 (4043084) Panic in vx_cbdnlc_lookup.
* 4046267 (4034910) Asynchronous access/updates to the global list large_dirinfo can corrupt its values in multi-threaded execution.
* 4046271 (3993822) fsck stops running on a file system.
* 4046272 (4017104) Deleting a lot of files can cause resource starvation, causing panics or momentary hangs.
* 4046829 (3993943) The fsck utility dumps core due to a segmentation fault in get_dotdotlst().
* 4047568 (4046169) On RHEL8, moving a directory from one FS (ext4 or VxFS) to a migration VxFS can fail, and the FS is disabled.
* 4049091 (4035057) On RHEL8, I/O on a file system while another FS-to-VxFS migration is in progress can cause a panic.
* 4049097 (4049096) Dalloc changes ctime in the background during extent allocation.

Patch ID: VRTSvxfs-7.4.2.1600
* 4012765 (4011570) WORM attribute replication support in VxFS.
* 4014720 (4011596) Multiple issues were observed during glmdump using hacli for communication.
* 4015287 (4010255) "vfradmin promote" fails to promote the target FS with SELinux enabled.
* 4015835 (4015278) System panics during vx_uiomove_by_hand.
* 4016721 (4016927) In a multi-cloud-tier scenario, the system panics with a NULL pointer dereference when removing the second cloud tier.
* 4017282 (4016801) File system marked for full fsck.
* 4017818 (4017817) VFR performance enhancement changes.
* 4017820 (4017819) The add cloud tier operation fails when trying to add AWS GovCloud.
* 4019877 (4019876) Removed the license library dependency from the vxfsmisc.so library.
* 4020055 (4012049) Documented the "metasave" option and added one new option to the fsck binary.
* 4020056 (4012049) Documented the "metasave" option and added one new option to the fsck binary.
* 4020912 (4020758) File system mount, or fsck with -y, may hang during log replay.

DETAILS OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
This patch fixes the following incidents:

Patch ID: VRTSodm-7.4.2.3400

* 4080777 (Tracking ID: 4080776)

SYMPTOM: The VRTSodm driver does not load with the 7.4.2.3400 VRTSvxfs patch.
DESCRIPTION: VRTSodm must be recompiled with the latest VRTSvxfs.
RESOLUTION: Recompiled VRTSodm with the new VRTSvxfs.
Patch ID: VRTSodm-7.4.2.2600

* 4057429 (Tracking ID: 4056673)

SYMPTOM: Rebooting the system results in emergency mode.
DESCRIPTION: Module dependency files get corrupted due to parallel invocation of depmod.
RESOLUTION: Serialized the invocation of depmod through a file lock. Corrected the vxgms dependency in the odm service file.

* 4060584 (Tracking ID: 3868609)

SYMPTOM: While applying Oracle redo logs, a significant increase is observed in the CPU usage by the vxfs thread.
DESCRIPTION: To avoid memory deadlocks and to track exiting threads with outstanding ODM requests, the kernel's memory management was analysed. While the Oracle threads are being rescheduled, they hold the mmap_sem. The FDD threads keep waiting for mmap_sem to be released, which causes the contention and the high CPU usage.
RESOLUTION: The bouncing of the spinlock between the CPUs is removed to reduce the CPU spike.

Patch ID: VRTSodm-7.4.2.2200

* 4049440 (Tracking ID: 4049438)

SYMPTOM: The VRTSodm driver does not load with the 7.4.2.2200 VRTSvxfs patch.
DESCRIPTION: VRTSodm needed recompilation due to recent changes in VRTSvxfs.
RESOLUTION: Recompiled VRTSodm with the new changes in VRTSvxfs.

Patch ID: VRTSodm-7.4.2.1500

* 4023556 (Tracking ID: 4023555)

SYMPTOM: The VRTSodm module cannot be loaded on Linux.
DESCRIPTION: VRTSodm needed recompilation due to recent changes, because of which some symbols were not being resolved.
RESOLUTION: Recompiled VRTSodm so that the vxodm module loads.

Patch ID: VRTSpython-3.7.4.36

* 4079828 (Tracking ID: 4079827)

SYMPTOM: Security vulnerabilities detected in the OpenSSL packaged with VRTSperl/VRTSpython for InfoScale 7.4.2 and its update releases.
DESCRIPTION: Security vulnerabilities were detected in OpenSSL.
RESOLUTION: Upgraded the OpenSSL version and re-created the VRTSperl/VRTSpython packages to fix the vulnerabilities.

Patch ID: VRTSperl-5.30.0.3

* 4079828 (Tracking ID: 4079827)

SYMPTOM: Security vulnerabilities detected in the OpenSSL packaged with VRTSperl/VRTSpython for InfoScale 7.4.2 and its update releases.
DESCRIPTION: Security vulnerabilities were detected in OpenSSL.
RESOLUTION: Upgraded the OpenSSL version and re-created the VRTSperl/VRTSpython packages to fix the vulnerabilities.

Patch ID: VRTSvxfs-7.4.2.3400

* 4079532 (Tracking ID: 4079869)

SYMPTOM: A security vulnerability was found in VxFS while running security scans.
DESCRIPTION: Internal security scans found vulnerabilities in VxFS third-party components. Attackers can exploit these vulnerabilities to attack the system.
RESOLUTION: Upgraded the third-party components to resolve these vulnerabilities.

Patch ID: VRTSvxfs-7.4.2.2600

* 4015834 (Tracking ID: 3988752)

SYMPTOM: Use the ldi_strategy() routine instead of bdev_strategy() for I/O on Solaris.
DESCRIPTION: bdev_strategy() is deprecated in Solaris and was causing performance issues when used for I/O. Solaris recommends using the LDI framework for all I/O.
RESOLUTION: Code is modified to use the LDI framework for all I/O on Solaris.

* 4040612 (Tracking ID: 4033664)

SYMPTOM: Multiple issues occur with hardlink replication using VFR.
DESCRIPTION: Multiple different issues occur with hardlink replication using Veritas File Replicator (VFR).
RESOLUTION: VFR is updated to fix issues with hardlink replication in the following cases:
1. Files with multiple links
2. Data inconsistency after hardlink file replication
3. Rename and move operations dumping core in multiple different scenarios
4. WORM feature support

* 4040618 (Tracking ID: 4040617)

SYMPTOM: Veritas File Replicator does not perform as expected.
DESCRIPTION: Veritas File Replicator had bottlenecks at the networking layer as well as at the data-transfer level, which caused additional throttling in replication.
RESOLUTION: Performance optimisations were made at multiple places so that Veritas File Replicator makes proper use of the available resources.

* 4060549 (Tracking ID: 4047921)

SYMPTOM: A replication job hangs because of a deadlock involving the threads below:
Thread 1:
#0 0x00007f160581854d in __lll_lock_wait () from /lib64/libpthread.so.0
#1 0x00007f1605813e9b in _L_lock_883 () from /lib64/libpthread.so.0
#2 0x00007f1605813d68 in pthread_mutex_lock () from /lib64/libpthread.so.0
#3 0x000000000043be1f in replnet_sess_bulk_free ()
#4 0x000000000043b1e3 in replnet_server_dropchan ()
#5 0x000000000043ca07 in replnet_client_connstate ()
#6 0x00000000004374e3 in replnet_conn_changestate ()
#7 0x0000000000437c18 in replnet_conn_evalpoll ()
#8 0x000000000044ac39 in vxev_loop ()
#9 0x0000000000405ab2 in main ()
Thread 2:
#0 0x00007f1605815a35 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1 0x000000000043902b in replnet_msgq_waitempty ()
#2 0x0000000000439082 in replnet_bulk_recv_func ()
#3 0x00007f1605811ea5 in start_thread () from /lib64/libpthread.so.0
#4 0x00007f1603ef29fd in clone () from /lib64/libc.so.6
DESCRIPTION: When a replication job is paused/resumed in quick succession multiple times, a race condition may lead to a deadlock involving two threads.
RESOLUTION: Fixed the locking sequence and added additional holds on resources to avoid the race leading to the deadlock.

* 4060566 (Tracking ID: 4052449)

SYMPTOM: The cluster goes into an 'unresponsive' mode while invalidating pages due to duplicate page entries in the iowr structure.
DESCRIPTION: While finding pages for invalidation of inodes, VxFS traverses the radix tree under the RCU lock and fills the I/O structure's array with the dirty/writeback pages that need to be invalidated. This lock is efficient for reads but does not protect against parallel creation/deletion of nodes. Hence, when VxFS finds a page, its consistency is checked through radix_tree_exception()/radix_tree_deref_retry(). If the check fails, VxFS restarts the page lookup from the start offset, but it does not reset the array index, so the I/O structure's array is filled incorrectly with duplicate page entries. While trying to destroy these pages, VxFS takes a page lock on each page. Because of the duplicate entries, VxFS tries to take the page lock more than once on the same page, leading to a self-deadlock.
RESOLUTION: Code is modified to reset the array index correctly when the page lookup has to restart.

* 4060585 (Tracking ID: 4042925)

SYMPTOM: Intermittent performance issues with commands such as df and ls.
DESCRIPTION: Commands such as df and ls issue the stat system call to calculate file system statistics. In a CFS, a stat call compiles statistics from all nodes. When multiple df or ls commands are issued within a specified time limit, VxFS is optimized to return the cached statistics instead of recalculating them from all nodes. If multiple such commands are issued in succession and one of the older stat callers takes time, this optimization fails and VxFS recompiles the statistics from all nodes. This can lead to poor stat performance, making df and ls appear unresponsive.
RESOLUTION: Code is modified to protect the last-modified time of the stat system call with a sleep lock.

* 4060805 (Tracking ID: 4042254)

SYMPTOM: vxupgrade sets the fullfsck flag in the file system if it is unable to upgrade the disk layout version because of ENOSPC.
DESCRIPTION: If the file system is 100% full and its disk layout version is upgraded using vxupgrade, the utility starts the upgrade, later fails with ENOSPC, and ends up setting the fullfsck flag in the file system.
RESOLUTION: Code changes were introduced to first calculate the space required to perform the disk layout upgrade.
If the required space is not available, the upgrade fails gracefully without setting the fullfsck flag.

* 4061203 (Tracking ID: 4005620)

SYMPTOM: The inode count maintained in the inode allocation unit (IAU) can be negative when an IAU is marked bad. An error such as the following is logged:
V-2-4: vx_mapbad - vx_inoauchk - /fs1 file system free inode bitmap in au 264 marked bad
Due to the negative inode count, errors like the following might be observed, and processes might be stuck at inode allocation with a stack trace as shown:
V-2-14: vx_iget - inode table overflow
vx_inoauchk
vx_inofindau
vx_findino
vx_ialloc
vx_dirmakeinode
vx_dircreate
vx_dircreate_tran
vx_pd_create
vx_create1_pd
vx_do_create
vx_create1
vx_create0
vx_create
vn_open
open
DESCRIPTION: The inode count can become negative if VxFS tries to allocate an inode from an IAU where the counter for regular file and directory inodes is zero. In such a situation, the inode allocation fails and the IAU map is marked bad. But the code tries to further reduce the already-zero counters, resulting in negative counts that can cause a subsequent unresponsive situation.
RESOLUTION: Code is modified to not reduce the inode counters in the vx_mapbad code path if the result would be negative. A diagnostic message like the following is displayed:
"vxfs: Error: Incorrect values of ias->ifree and Aus rifree detected."

* 4061527 (Tracking ID: 4054386)

SYMPTOM: The VxFS systemd service may show active status despite the module not being loaded.
DESCRIPTION: If the systemd service fails to load the vxfs module, the service still shows its status as active instead of failed.
RESOLUTION: The script is modified to show the correct status in case of such failures.

Patch ID: VRTSvxfs-7.4.2.2200

* 4013420 (Tracking ID: 4013139)

SYMPTOM: The abort operation fails on an ongoing online migration from a native file system to VxFS on RHEL 8.x systems.
DESCRIPTION: The following error messages are logged when the abort operation fails:
umount: /mnt1/lost+found/srcfs: not mounted
UX:vxfs fsmigadm: ERROR: V-3-26835: umount of source device: /dev/vx/dsk/testdg/vol1 failed, with error: 32
RESOLUTION: The fsmigadm utility is updated to address the issue with the abort operation on an ongoing online migration.

* 4040238 (Tracking ID: 4035040)

SYMPTOM: After a replication job is paused and resumed, some fields are missing from the stats command output and remain missing on subsequent runs.
DESCRIPTION: rs_start for the current stat is initialized to the start time of the replication, and the default value of rs_start is zero. Stats do not show some fields when rs_start is zero:

if (rs->rs_start && dis_type == VX_DIS_CURRENT) {
        if (!rs->rs_done) {
                diff = rs->rs_update - rs->rs_start;
        } else {
                diff = rs->rs_done - rs->rs_start;
        }
        /*
         * The unit of time is in seconds, hence
         * assigning 1 if the amount of data
         * was too small
         */
        diff = diff ? diff : 1;
        rate = rs->rs_file_bytes_synced /
               (diff - rs->rs_paused_duration);
        printf("\t\tTransfer Rate: %s/sec\n", fmt_bytes(h, rate));
}

In replication, rs_start is initialized to zero and then updated with the start time, but the stats were not saved to disk. That small window leaves a case where, if replication is paused and started again, rs_start is still seen as zero. Now, after rs_start is initialized, it is written to disk in the same function; in the resume case, if rs_start is found to be zero, it is re-initialized to the current replication start time.
RESOLUTION: Write rs_start to disk, and add a check in the resume case to initialize rs_start if it is found to be zero.

* 4040608 (Tracking ID: 4008616)

SYMPTOM: The fsck command hangs.
DESCRIPTION: fsck got stuck in a deadlock: a thread that marked a buffer aliased was waiting on itself for the reference drain, while the get-block code was called with the NOBLOCK flag.
RESOLUTION: Honour the NOBLOCK flag.

* 4042686 (Tracking ID: 4042684)

SYMPTOM: The command fails to resize the file.
DESCRIPTION: There is a window where a parallel thread can clear the IDELXWRI flag when it should not.
RESOLUTION: Set the delayed extending write flag again in case any parallel thread has cleared it.

* 4044184 (Tracking ID: 3993140)

SYMPTOM: Every 60 seconds, compclock lagged approximately 1.44 seconds behind the actual time elapsed.
DESCRIPTION: Every 60 seconds, compclock lagged approximately 1.44 seconds behind the actual time elapsed.
RESOLUTION: Adjusted the logic responsible for calculating and updating the compclock timer.

* 4046265 (Tracking ID: 4037035)

SYMPTOM: Added the new tunable "vx_ninact_proc_threads" to control the number of inactive processing threads.
DESCRIPTION: On high-end servers, heavy lock contention was seen during inactive-removal processing, caused by the large number of inactive worker threads spawned by VxFS. To avoid the contention, the new tunable "vx_ninact_proc_threads" was added so that customers can adjust the number of inactive processing threads based on their server configuration and workload.
RESOLUTION: Added the new tunable "vx_ninact_proc_threads" to control the number of inactive processing threads.

* 4046266 (Tracking ID: 4043084)

SYMPTOM: Panic in vx_cbdnlc_lookup.
DESCRIPTION: A panic was observed with the following stack trace:
vx_cbdnlc_lookup+000140 ()
vx_int_lookup+0002C0 ()
vx_do_lookup2+000328 ()
vx_do_lookup+0000E0 ()
vx_lookup+0000A0 ()
vnop_lookup+0001D4 (??, ??, ??, ??, ??, ??)
getFullPath+00022C (??, ??, ??, ??)
getPathComponents+0003E8 (??, ??, ??, ??, ??, ??, ??)
svcNameCheck+0002EC (??, ??, ??, ??, ??, ??, ??)
kopen+000180 (??, ??, ??)
syscall+00024C ()
RESOLUTION: Code changes to handle memory pressure while changing FC connectivity.

* 4046267 (Tracking ID: 4034910)

SYMPTOM: Garbage values inside the global list large_dirinfo.
DESCRIPTION: Garbage values inside the global list large_dirinfo lead to fsck failure.
RESOLUTION: Made access/updates to the global list large_dirinfo synchronous throughout the fsck binary, so that garbage values due to the race condition are avoided.

* 4046271 (Tracking ID: 3993822)

SYMPTOM: Running fsck on a file system dumps core.
DESCRIPTION: In one thread, a buffer was marked busy without taking the buffer lock while getting a buffer from the freelist, while another thread was accessing this buffer through its local variable.
RESOLUTION: Mark the buffer busy within the buffer lock while getting a free buffer.

* 4046272 (Tracking ID: 4017104)

SYMPTOM: Deleting a huge number of inodes can consume a lot of system resources during inactivation, causing hangs or even a panic.
DESCRIPTION: Delicache inactivation dumps all the inodes in its inventory, all at once, for inactivation. This causes a surge in resource consumption due to which other processes can starve.
RESOLUTION: Process the inode inactivation gradually.

* 4046829 (Tracking ID: 3993943)

SYMPTOM: The fsck utility dumps core due to a segmentation fault in get_dotdotlst(). Below is the stack trace of the issue:
get_dotdotlst
check_dotdot_tbl
iproc_do_work
start_thread
clone ()
DESCRIPTION: Due to a bug in the fsck utility, a core dump was generated while running fsck on the file system, and the fsck operation aborted.
RESOLUTION: Code changes were made to fix this issue.

* 4047568 (Tracking ID: 4046169)

SYMPTOM: On RHEL8, while moving a directory from one FS (ext4 or VxFS) to a migration VxFS, the migration can fail and the FS is disabled. In debug testing, the issue was caught by an internal assert, with the following stack trace:
panic
ted_call_demon
ted_assert
vx_msgprint
vx_mig_badfile
vx_mig_linux_removexattr_int
__vfs_removexattr
__vfs_removexattr_locked
vfs_removexattr
removexattr
path_removexattr
__x64_sys_removexattr
do_syscall_64
DESCRIPTION: Due to the different implementation of the "mv" operation in RHEL8 (as compared to RHEL7), there is a removexattr call on the target FS, which in the migration case is the migration VxFS. In this removexattr call, the kernel asks for the "system.posix_acl_default" attribute to be removed from the directory being moved. But since the directory is not yet present on the target side (and hence has no extended attributes), the code returns ENODATA. When the code in vx_mig_linux_removexattr_int() encounters this error, it disables the FS, and in the debug package it calls assert.
RESOLUTION: The fix is to ignore the ENODATA error and not assert or disable the FS.

* 4049091 (Tracking ID: 4035057)

SYMPTOM: On RHEL8, I/O done on an FS while another FS-to-VxFS migration is in progress can cause a panic, with the following stack trace:
machine_kexec
__crash_kexec
crash_kexec
oops_end
no_context
do_page_fault
page_fault [exception RIP: memcpy+18]
_copy_to_iter
copy_page_to_iter
generic_file_buffered_read
new_sync_read
vfs_read
kernel_read
vx_mig_read
vfs_read
ksys_read
do_syscall_64
DESCRIPTION: As part of the RHEL8 support changes, the vfs_read and vfs_write calls were replaced with kernel_read and kernel_write, as the vfs_ calls are no longer exported. The kernel_read and kernel_write calls internally set the memory segment of the thread to KERNEL_DS and expect the buffer passed in to have been allocated in kernel space. In the migration code, if the read/write operation cannot be completed using the target FS (VxFS), the I/O is redirected to the source FS. In doing so, the code passes the same buffer, which is a user buffer, to the kernel call. This worked with vfs_read and vfs_write, but it does not work with kernel_read and kernel_write, causing a panic.
RESOLUTION: The fix is to use the vfs_iter_read and vfs_iter_write calls, which work with a user buffer. To use these methods, the user buffer must be passed as part of struct iovec.iov_base.

* 4049097 (Tracking ID: 4049096)

SYMPTOM: The tar command errors out with exit status 1, throwing warnings.
DESCRIPTION: This happens due to dalloc changing the ctime of the file after allocating the extents ((worklist thread) -> vx_dalloc_flush -> vx_dalloc_off) between the two fsstat calls in tar.
RESOLUTION: Avoid changing ctime while allocating delayed extents in the background.

Patch ID: VRTSvxfs-7.4.2.1600

* 4012765 (Tracking ID: 4011570)

SYMPTOM: WORM attribute replication support in VxFS.
DESCRIPTION: WORM attribute replication was not supported in VFR. The code was modified to replicate the WORM attribute during attribute processing in VFR.
RESOLUTION: Code is modified to replicate WORM attributes in VFR.

* 4014720 (Tracking ID: 4011596)

SYMPTOM: An error is thrown saying "No such file or directory present".
DESCRIPTION: A bug was observed during parallel communication between all the nodes: some required temp files were not present on other nodes.
RESOLUTION: Fixed to maintain consistency during parallel node communication, using hacp for transferring temp files.

* 4015287 (Tracking ID: 4010255)

SYMPTOM: "vfradmin promote" fails to promote the target FS with SELinux enabled.
DESCRIPTION: During the promote operation, VxFS remounts the FS at the target. When remounting the FS to remove the "protected on" flag from the target, VxFS first fetches the current mount options. With SELinux enabled (either in permissive or enforcing mode), the OS adds the default "seclabel" option to the mount. When VxFS fetched the current mount options, "seclabel" was not recognized by VxFS, so it failed to mount the FS.
RESOLUTION: Code is modified to remove the "seclabel" mount option during mount processing on the target.
* 4015835 (Tracking ID: 4015278)

SYMPTOM: System panics during vx_uiomove_by_hand.
DESCRIPTION: During uiomove, VxFS gets the pages from the OS through get_user_pages() to copy user data. Oracle uses hugetlbfs internally for performance reasons, which can allocate hugepages. Under low-memory conditions, it is possible that get_user_pages() returns compound pages to VxFS. In the case of compound pages, only the head page has a valid mapping set, and all other pages are mapped as TAIL_MAPPING. During uiomove, if VxFS gets a compound page, it tries to check the writable mapping for all pages of this compound page. This can result in dereferencing an illegal address (TAIL_MAPPING), which was causing the panic. VxFS doesn't support hugepages, but it is possible that a compound page is present on the system and VxFS gets one through get_user_pages().
RESOLUTION: Code is modified to get the head page when a tail page of a compound page is encountered while VxFS checks the writable mapping.

* 4016721 (Tracking ID: 4016927)

SYMPTOM: The remove tier command panics the system; the crash has the panic reason "BUG: unable to handle kernel NULL pointer dereference at 0000000000000150".
DESCRIPTION: When fsvoladm removes a device, not all devices are moved, and the device count also remains the same unless it is the last device in the array. So a check for a free slot is needed before trying to access a device.
RESOLUTION: In the device list, check for a free slot before accessing the device in that slot.

* 4017282 (Tracking ID: 4016801)

SYMPTOM: File system marked for full fsck.
DESCRIPTION: In a cluster environment, some operations can be performed on the primary node only. When such operations are executed from a secondary node, a message is passed to the primary node. During this, it is possible that the sender node has a transaction that has not yet reached disk. In such a scenario, if the sender node is rebooted, the primary node can see stale data.
RESOLUTION: Code is modified to make sure transactions are flushed to the log disk before sending the message to the primary.

* 4017818 (Tracking ID: 4017817)

SYMPTOM: NA
DESCRIPTION: In order to increase the overall throughput of VFR, code changes have been made to replicate files in parallel.
RESOLUTION: Code changes have been made to replicate files' data and metadata in parallel over multiple socket connections.

* 4017820 (Tracking ID: 4017819)

SYMPTOM: The cloud tier add operation fails when the user tries to add AWS GovCloud.
DESCRIPTION: Adding AWS GovCloud as a cloud tier was not supported in InfoScale. With these changes, users can add the AWS GovCloud type of cloud.
RESOLUTION: Added support for AWS GovCloud.

* 4019877 (Tracking ID: 4019876)

SYMPTOM: vxfsmisc.so is a publicly shared library for Samba and doesn't require an InfoScale license for its usage.
DESCRIPTION: vxfsmisc.so is a publicly shared library for Samba and doesn't require an InfoScale license for its usage.
RESOLUTION: Removed the license dependency in the vxfsmisc library.

* 4020055 (Tracking ID: 4012049)

SYMPTOM: "fsck" supports the "metasave" option, but it was not documented anywhere.
DESCRIPTION: "fsck" supports the "metasave" option while executing with the "-y" option, but it is not documented anywhere. Also, it tries to store the metasave in a particular location, and the user has no option to specify the location. If that location doesn't have enough space, "fsck" fails to take the metasave and continues to change the file system state.
RESOLUTION: Code changes have been made to add one new option with which the user can specify the location to store the metasave. The "metasave" and "target" options have been added to the "usage" message of the "fsck" binary.

* 4020056 (Tracking ID: 4012049)

SYMPTOM: "fsck" supports the "metasave" option, but it was not documented anywhere.
DESCRIPTION: "fsck" supports the "metasave" option while executing with the "-y" option, but it is not documented anywhere. Also, it tries to store the metasave in a particular location, and the user has no option to specify the location.
If that location doesn't have enough space, "fsck" fails to take the metasave and continues to change the file system state.
RESOLUTION: Code changes have been made to add one new option with which the user can specify the location to store the metasave. The "metasave" and "target" options have been added to the "usage" message of the "fsck" binary.

* 4020912 (Tracking ID: 4020758)

SYMPTOM: File system mount, or fsck with -y, may hang during log replay.
DESCRIPTION: The fsck utility is used to perform log replay. This log replay is performed during the mount operation, or during a file system check with the -y option, if needed. In certain cases, if there are a lot of logs that need to be replayed, it ends up consuming the entire buffer cache. This results in an out-of-buffer scenario and a hang.
RESOLUTION: Code is modified to make sure enough buffers are always available.

INSTALLING THE PATCH
--------------------
Run the installer script to automatically install the patch:
------------------------------------------------------------
Please note that the installation of this P-Patch will cause downtime.

To install the patch, perform the following steps on at least one node in the cluster:
1. Copy the patch infoscale-sles12_x86_64-Patch-7.4.2.2600.tar.gz to /tmp
2. Untar infoscale-sles12_x86_64-Patch-7.4.2.2600.tar.gz to /tmp/hf
    # mkdir /tmp/hf
    # cd /tmp/hf
    # gunzip /tmp/infoscale-sles12_x86_64-Patch-7.4.2.2600.tar.gz
    # tar xf /tmp/infoscale-sles12_x86_64-Patch-7.4.2.2600.tar
3. Install the hotfix (note again that the installation of this P-Patch will cause downtime):
    # pwd
    /tmp/hf
    # ./installVRTSinfoscale742P2600 [<host1> <host2>...]

You can also install this patch together with the 7.4.2 base release using Install Bundles:
1. Download this patch and extract it to a directory.
2. Change to the Veritas InfoScale 7.4.2 directory and invoke the installer script with the -patch_path option, where -patch_path should point to the patch directory:
    # ./installer -patch_path [<path to this patch>] [<host1> <host2>...]

Install the patch manually:
---------------------------
Manual installation is not recommended.

REMOVING THE PATCH
------------------
Manual uninstallation is not recommended.

SPECIAL INSTRUCTIONS
--------------------
Vulnerabilities fixed: the following vulnerabilities are fixed in this security SP:
CVE-2022-23852, CVE-2022-25315, CVE-2022-25235, CVE-2022-22823, CVE-2022-25236, CVE-2022-22822, CVE-2022-22824, CVE-2022-23990, CVE-2022-1292, CVE-2021-3711, CVE-2022-22826, CVE-2022-22827, CVE-2022-22825, CVE-2021-45960, CVE-2021-46143, CVE-2022-25314, CVE-2022-0778, CVE-2021-23840, CVE-2020-1967, CVE-2021-29424, CVE-2021-3712, ** CVE-2013-0340, CVE-2022-25313, BDSA-2020-2711, BDSA-2021-0399, ** CVE-2021-23841, CVE-2021-4160, CVE-2020-1971, CVE-2021-3449, BDSA-2022-1716, CVE-2019-1551, CVE-2019-1549, CVE-2019-1547, BDSA-2021-2995, CVE-2020-1968, CVE-2021-23839, CVE-2019-1563

OTHERS
------
NONE