infoscale-aix-Patch-7.4.1.1000

 Basic information
Release type: Patch
Release date: 2022-01-16
OS update support: None
Technote: None
Documentation: None
Popularity: 580 viewed
Download size: 470.12 MB
Checksum: 3777250108

 Applies to one or more of the following products:
InfoScale Availability 7.4.1 On AIX 7.1
InfoScale Availability 7.4.1 On AIX 7.2
InfoScale Enterprise 7.4.1 On AIX 7.1
InfoScale Enterprise 7.4.1 On AIX 7.2
InfoScale Foundation 7.4.1 On AIX 7.1
InfoScale Foundation 7.4.1 On AIX 7.2
InfoScale Storage 7.4.1 On AIX 7.1
InfoScale Storage 7.4.1 On AIX 7.2

 Obsolete patches, incompatibilities, superseded patches, or other requirements:

This patch supersedes the following patches:
vcsea-aix-Patch-7.4.1.1100 (obsolete) - released 2020-01-28
vcs-aix-Patch-7.4.1.1100 (obsolete) - released 2019-11-05

 Fixes the following incidents:
3982248, 3982912, 3984156, 3984162, 3999794, 4001399, 4001750, 4001752, 4001755, 4001757, 4011097, 4011105, 4016876, 4018180, 4020207, 4020438, 4021346, 4022943, 4023095, 4026389, 4039249, 4039527, 4042947, 4045494, 4049416, 4050229, 4050664, 4051040, 4051815, 4051887, 4051889, 4051896, 4053149, 4054243, 4054244, 4054264, 4054265, 4054266, 4054267, 4054269, 4054270, 4054271, 4054272, 4054273, 4054276, 4054323, 4054325, 4054387, 4054412, 4054416, 4054697, 4054724, 4054725, 4054726, 4055653, 4055660, 4055668, 4055697, 4055772, 4055858, 4055894, 4055895, 4055899, 4055905, 4055925, 4055938, 4056024, 4056107, 4056124, 4056146, 4056154, 4056567, 4056779, 4056918, 4057073, 4059638, 4061338, 4062802

 Patch ID:
VRTSvlic-04.01.0741.0300
VRTSpython-03.06.0006.0010
VRTSvxfs-07.04.0001.3400
VRTSodm-07.04.0001.3400
VRTSllt-07.04.0001.1100
VRTSgab-07.04.0001.1100
VRTSamf-07.04.0001.1100
VRTSvxfen-07.04.0001.1100
VRTSvcs-07.04.0001.1200
VRTSvcsag-07.04.0001.1100
VRTSvcsea-07.04.0001.1200
VRTSsfmh-07.04.0000.0901

Readme file
                          * * * READ ME * * *
                      * * * InfoScale 7.4.1 * * *
                         * * * Patch 1000 * * *
                         Patch Date: 2022-01-12


This document provides the following information:

   * PATCH NAME
   * OPERATING SYSTEMS SUPPORTED BY THE PATCH
   * PACKAGES AFFECTED BY THE PATCH
   * BASE PRODUCT VERSIONS FOR THE PATCH
   * SUMMARY OF INCIDENTS FIXED BY THE PATCH
   * DETAILS OF INCIDENTS FIXED BY THE PATCH
   * INSTALLATION PRE-REQUISITES
   * INSTALLING THE PATCH
   * REMOVING THE PATCH


PATCH NAME
----------
InfoScale 7.4.1 Patch 1000


OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
AIX


PACKAGES AFFECTED BY THE PATCH
------------------------------
VRTSamf
VRTSaslapm
VRTSgab
VRTSllt
VRTSodm
VRTSpython
VRTSsfmh
VRTSvcs
VRTSvcsag
VRTSvcsea
VRTSvlic
VRTSvxfen
VRTSvxfs
VRTSvxvm


BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
   * InfoScale Availability 7.4.1
   * InfoScale Enterprise 7.4.1
   * InfoScale Foundation 7.4.1
   * InfoScale Storage 7.4.1


SUMMARY OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
Patch ID: VRTSvxvm-7.4.1.3300
* 3984156 (3852146) A shared disk group (DG) fails to be imported when "-c" and "-o noreonline" are specified together.
* 3984162 (3983759) While mirroring a volume on an AIX system, a panic was observed because of an interrupt-disabled state.
* 3999794 (3978453) Reconfig hang during master takeover
* 4001399 (3995946) CVM Slave unable to join cluster - VxVM vxconfigd ERROR V-5-1-11092 cleanup_client: (Memory allocation failure) 12
* 4001750 (3976392) Memory corruption might happen in VxVM (Veritas Volume Manager) while processing Plex detach request.
* 4001752 (3969487) Data corruption observed with layered volumes when mirror of the volume is detached and attached back.
* 4001755 (3980684) Kernel panic in voldrl_hfind_an_instant while accessing agenode.
* 4001757 (3969387) VxVM(Veritas Volume Manager) caused system panic when handle received request response in FSS environment.
* 4011097 (4010794) When storage activity was going on, Veritas Dynamic Multi-Pathing (DMP) caused system panic in a cluster.
* 4011105 (3972433) IO hang might be seen while issuing heavy IO load on volumes having cache objects.
* 4018180 (3958062) After a boot LUN is migrated, enabling and disabling dmp_native_support fails.
* 4020207 (4018086) The system hangs when the RVG in DCM resync with SmartMove is set to ON.
* 4020438 (4020046) DRL log plex gets detached unexpectedly.
* 4021346 (4010207) The system panicked with a hard lockup because a spinlock was not released properly during vxstat collection.
* 4022943 (4017656) Add support for XP8 arrays in the current ASL.
* 4023095 (4007920) Control auto snapshot deletion when cache obj is full.
* 4039249 (3984240) AIX builds were failing on AIX7.2
* 4039527 (4018086) The system hangs when the RVG in DCM resync with SmartMove is set to ON.
* 4045494 (4021939) The "vradmin syncvol" command fails due to recent changes related to binding sockets without specifying IP addresses.
* 4051815 (4031597) vradmind generates a core dump in __strncpy_sse2_unaligned.
* 4051887 (3956607) A core dump occurs when you run the vxdisk reclaim command.
* 4051889 (4019182) In case of a VxDMP configuration, an InfoScale server panics when applying a patch.
* 4051896 (4010458) In a Veritas Volume Replicator (VVR) environment, the rlink might inconsistently disconnect due to unexpected transactions.
* 4055653 (4049082) An I/O read error is displayed when a remote FSS node is rebooting.
* 4055660 (4046007) The private disk region gets corrupted if the cluster name is changed in FSS environment.
* 4055668 (4045871) vxconfigd crashed at ddl_get_disk_given_path.
* 4055697 (4047793) Unable to import a disk group even when its replicated disks are in SPLIT mode.
* 4055772 (4043337) logging fixes for VVR
* 4055894 (4043495) DMP gets incorrect path information on AIX systems.
* 4055895 (4038865) The system panics due to deadlock between inode_hash_lock and DMP shared lock.
* 4055899 (3993242) vxsnap prepare command when run on vset sometimes fails.
* 4055905 (4052191) Unexpected scripts or commands are run due to an incorrect comment format in the vxvm-configure script.
* 4055925 (4031064) Master switch operation is hung in VVR secondary environment.
* 4055938 (3999073) The file system corrupts when the cfsmount group goes into offline state.
* 4056024 (4043518) Could not disable DMP Native Support on AIX.
* 4056107 (4036181) Volumes that are under a RVG (Replicated Volume Group), report an IO error.
* 4056124 (4008664) System panic when a signal is sent to a vxlogger daemon that has already ended.
* 4056146 (3983832) VxVM commands hang in CVR environment.
* 4056154 (3975081) DMP (Dynamic Multipathing) Native support fails to get enabled after reboot.
* 4056918 (4056917) Import of disk group in Flexible Storage Sharing (FSS) with missing disks can lead to data corruption.
* 4061338 (3975110) DMP native root support not working due to changes in log file location from /tmp/ to /var/VRTS/vxvm/
* 4062802 (3941037) VxVM (Veritas Volume Manager) creates some required files under the /tmp and /var/tmp directories. Non-root users can modify these directories, which can affect the functioning of Veritas Volume Manager.
Patch ID: VRTSllt-7.4.1.1100
* 4050664 (4046199) LLT configurations over UDP accept only ethernet interface names as link tag names.
* 4051040 (3989372) When the CPU load and memory consumption is high in a VMware environment, some nodes in an InfoScale cluster may get fenced out.
* 4054272 (4045607) Performance improvement of the UDP multiport feature of LLT on 1500 MTU-based networks.
* 4054697 (3985775) Sometimes, the system log may get flooded with LLT heartbeat loss messages that do not necessarily indicate any actual issues with LLT.
Patch ID: VRTSgab-7.4.1.1100
* 4054264 (4046413) After a node is added to or removed from a cluster, the GAB node count or the fencing quorum is not updated.
* 4054265 (4046418) The GAB module starts up even if LLT is not configured.
Patch ID: VRTSamf-7.4.1.1100
* 4054323 (4001565) On Solaris 11.4, IMF fails to provide notifications when Oracle processes stop.
Patch ID: VRTSvxfen-7.4.1.1100
* 4016876 (4000745) The VxFEN process fails to start due to late discovery of the VxFEN disk group.
Patch ID: VRTSvcs-7.4.1.1200
* 4054266 (4040705) When a command exceeds 4096 characters, hacli hangs indefinitely.
* 4054267 (4040656) When the ENOMEM error occurs, HAD does not shut down gracefully.
* 4054271 (4043700) In case of failover, parallel, or hybrid service groups, multiple PreOnline triggers can be executed on the same node or on different nodes in a cluster while an online operation is already in progress.
Patch ID: VRTSvcs-7.4.1.1100
* 3982912 (3981992) A potentially critical security vulnerability in VCS needs to be addressed.
Patch ID: VRTSvcsag-7.4.1.1100
* 4042947 (4042944) In a hardware replication environment, a disk group resource may fail to be imported when the HARDWARE_MIRROR flag is set.
* 4054269 (4030215) The InfoScale agents for Azure did not support credential validation methods based on the azure-identity library.
* 4054270 (4046286) The InfoScale agents for Azure do not handle generic exceptions.
* 4054273 (4044567) The HostMonitor agent faults while logging the memory usage of a system.
* 4054276 (4048164) When a cloud API that an InfoScale agent has called hangs, an unwanted failover of the associated service group may occur.
Patch ID: VRTSvcsea-7.4.1.1200
* 4054325 (4043289) In an Oracle ASM 19c environment on Solaris, the ASMInst agent fails to come online or to detect the state of the related resources.
Patch ID: VRTSvcsea-7.4.1.1100
* 3982248 (3989510) The VCS agent for Oracle does not support Oracle 19c databases.
Patch ID: VRTSpython-3.6.6.10
* 4056779 (4049771) The VRTSpython package needs to be made available on AIX and Solaris to support the InfoScale licensing component.
Patch ID: VRTScavf-7.4.1.3400
* 4056567 (4054462) In a hardware replication environment, a shared disk group resource may fail to be imported when the HARDWARE_MIRROR flag is set.
Patch ID: VRTSvxfs-7.4.1.3400
* 4026389 (4026388) Unable to unpin a file back after pinning while testing SMARTIO.
* 4050229 (4018995) Panic in AIX while doing read ahead of size greater than 4 GB.
* 4053149 (4043084) panic in vx_cbdnlc_lookup
* 4054243 (4014274) The gsed command might report a bad address error.
* 4054244 (4052449) Cluster goes in an 'unresponsive' mode while invalidating pages due to duplicate page entries in iowr structure.
* 4054387 (4054386) If systemd service fails to load vxfs module, the service still shows status as active instead of failed.
* 4054412 (4042254) A new feature has been added in vxupgrade which fails disk-layout upgrade if sufficient space is not available in the filesystem.
* 4054416 (4005620) Internal counter of inodes from Inode Allocation Unit (IAU) can be negative if IAU is marked bad.
* 4054724 (4018697) After installing InfoScale 7.4.2, the system fails to start.
* 4054725 (4051108) While storing multiple attributes for a file in an immediate area of inode, system might be unresponsive due to a wrong loop increment statement.
* 4054726 (4051026) If a file system is created with inode size 512, the file system might report inconsistencies with the bitmap after running fsck.
* 4055858 (4042925) Intermittent Performance issue on commands like df and ls.
Patch ID: VRTSodm-7.4.1.3400
* 4057073 (4057072) Unable to load the vxodm module on AIX.
Patch ID: vom-HF074901
* 4059638 (4059635) VIOM Agent for InfoScale 7.4.1 Update 6
Patch ID: VRTSvlic-4.01.741.300
* 4049416 (4049416) Migrate Licensing Collector service from Java to Python.


DETAILS OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
This patch fixes the following incidents:

Patch ID: VRTSvxvm-7.4.1.3300

* 3984156 (Tracking ID: 3852146)

SYMPTOM:
In a CVM cluster, when a shared DG is imported by specifying both, the "-c" and the "-o noreonline" options, you may encounter the following error: 
VxVM vxdg ERROR V-5-1-10978 Disk group <disk_group_name>: import failed: Disk for disk group not found.

DESCRIPTION:
The "-c" option updates the disk ID and the DG ID on the private region of the disks in the DG that is being imported. Such updated information is not yet seen by the slave because the disks have not been brought online again because the "noreonline" option was specified. As a result, the slave cannot identify the disk(s) based on the updated information sent from the master, which caused the import to fail with the error: Disk for disk group not found.

RESOLUTION:
VxVM is updated so that a shared DG import completes successfully even when the "-c" and the "-o noreonline" options are specified together.
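For illustration only (the command form is assumed from the options named above and is not quoted from this fix; refer to the vxdg(1M) manual page for the exact syntax), such a shared import may look like:
vxdg -s -c -o noreonline import <disk_group_name>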

* 3984162 (Tracking ID: 3983759)

SYMPTOM:
While mirroring a volume on AIX, a system panic was observed with the below stack:

[00360098]specfs_intsdisabled+000018 ([??])
[00360094]specfs_intsdisabled+000014 (??, ??, ??)
[00609938]rdevioctl+000138 (??, ??, ??, ??, ??, ??)
[008062F4]spec_ioctl+000074 (??, ??, ??, ??, ??, ??)
[00693F7C]vnop_ioctl+00005C (??, ??, ??, ??, ??, ??)
[0069E76C]vno_ioctl+00016C (??, ??, ??, ??, ??)
[006E6310]common_ioctl+0000F0 (??, ??, ??, ??)
[00003938]mfspurr_sc_flih01+000174 ()
[kdb_get_virtual_memory] no real storage @ 200C1928
[D011A7B4]D011A7B4 ()
[kdb_read_mem] no real storage @ FFFFFFFFFFF6100

DESCRIPTION:
In CVM (Cluster Volume Manager), an ILOCK (intent lock) is required to complete a volume mirroring operation. The mirroring code assumes that the slave requests the ilock from the master for the operation to complete. If the SIO (staged I/O) completes as part of this ilock acquisition from the slave, the associated ilock is freed. When the original thread then continues with the operation, the ilock it uses is stale because it has already been freed. This leads to the interrupt-disabled state, where the previously acquired spinlock is not released before exiting from the thread.

RESOLUTION:
Code changes have been made to base the decisions on other variables instead of the ilock, and to free the spinlock before the thread exits.

* 3999794 (Tracking ID: 3978453)

SYMPTOM:
Reconfig hang during master takeover with below stack:
volsync_wait+0xa7/0xf0 [vxio]
volsiowait+0xcb/0x110 [vxio]
vol_commit_iolock_objects+0xd3/0x270 [vxio]
vol_ktrans_commit+0x5d3/0x8f0 [vxio]
volconfig_ioctl+0x6ba/0x970 [vxio]
volsioctl_real+0x436/0x510 [vxio]
vols_ioctl+0x62/0xb0 [vxspec]
vols_unlocked_ioctl+0x21/0x30 [vxspec]
do_vfs_ioctl+0x3a0/0x5a0

DESCRIPTION:
There is a hang in the dcotoc protocol on a slave, which causes a couple of slave nodes to not respond with LEAVE_DONE to the master, hence the issue.

RESOLUTION:
Code changes have been made to handle a transaction overlapping with a shared TOC update. The TOC update SIO flag is now passed from the old object to the new object during the transaction, so that recovery can resume if required.

* 4001399 (Tracking ID: 3995946)

SYMPTOM:
The CVM slave is unable to join the cluster with the below errors:
VxVM vxconfigd ERROR V-5-1-11092 cleanup_client: (Memory allocation failure) 12
VxVM vxconfigd ERROR V-5-1-11467 kernel_fail_join(): Reconfiguration interrupted: Reason is retry to add a node failed (13, 0)

DESCRIPTION:
vol_vvr_tcp_keepalive and vol_vvr_tcp_timeout were introduced in 7.4.1 U1 for Linux only; they are not supported on other platforms such as Solaris and AIX. Due to a bug in the code, these two tunables were exposed, and CVM could not get their information from the master node. Hence the issue.

RESOLUTION:
Code change has been made to hide vol_vvr_tcp_keepalive and vol_vvr_tcp_timeout for other platforms like Solaris and AIX.

* 4001750 (Tracking ID: 3976392)

SYMPTOM:
Memory corruption might happen in VxVM (Veritas Volume Manager) while processing Plex detach request.

DESCRIPTION:
While a plex detach request is processed, the VxVM volume is operated on in a serial manner. During serialization, the current thread may queue an I/O and continue accessing it. In the meantime, the same I/O may be picked up by one of the VxVM threads for processing; the processing completes and the I/O is then deleted. The current thread is still accessing that already-deleted memory, which might lead to memory corruption.

RESOLUTION:
The fix is to not use the same I/O in the current thread once the I/O is queued as part of serialization, and to complete the processing before queuing the I/O.

* 4001752 (Tracking ID: 3969487)

SYMPTOM:
Data corruption observed with layered volumes after resynchronisation when mirror of the volume is detached and attached back.

DESCRIPTION:
In case of a layered volume, if the I/O fails at the underlying subvolume layer before the mirror detach is done, the top volume of the layered volume has to be serialized (I/Os run in a serial fashion). When the volume is serialized, I/Os on the volume are tracked directly in the detach map of the DCO (Data Change Object). During this period, if new I/Os occur on the volume, they are not tracked in the detach map inside the DCO, because detach-map tracking has not yet been enabled by the failed I/Os. The new I/Os that are not tracked in the detach map are missed when the plex resynchronization happens later, which leads to corruption.

RESOLUTION:
The fix is to delay the unserialization of the volume until the failed I/Os actually detach the plex and enable detach-map tracking. This ensures that the new I/Os are tracked in the detach map of the DCO.

* 4001755 (Tracking ID: 3980684)

SYMPTOM:
Kernel panic in voldrl_hfind_an_instant while accessing agenode with stack
[exception RIP: voldrl_hfind_an_instant+49]
#11 voldrl_find_mark_agenodes
#12 voldrl_log_internal_30
#13 voldrl_log_30
#14 volmv_log_drlfmr
#15 vol_mv_write_start
#16 volkcontext_process
#17 volkiostart
#18 vol_linux_kio_start
#19 vxiostrategy
...

DESCRIPTION:
Agenode corruption is hit when the per-file sequential hint is used. The agenode's linked list gets corrupted because a pointer was not set to NULL when the agenode was reused.

RESOLUTION:
Changes are done in VxVM code to avoid Agenode list corruption.

* 4001757 (Tracking ID: 3969387)

SYMPTOM:
In FSS(Flexible Storage Sharing) environment, system might panic with below stack:
vol_get_ioscb [vxio]
vol_ecplex_rhandle_resp [vxio]
vol_ioship_rrecv [vxio]
gab_lrrecv [gab]
vx_ioship_llt_rrecv [llt]
vx_ioship_process_frag_packets [llt]
vx_ioship_process_data [llt]
vx_ioship_recv_data [llt]

DESCRIPTION:
In certain scenarios, a request may get purged and the response may arrive after that. The system might then panic due to accessing the freed resource.

RESOLUTION:
Code changes have been made to fix the issue.

* 4011097 (Tracking ID: 4010794)

SYMPTOM:
Veritas Dynamic Multi-Pathing (DMP) caused system panic in a cluster with below stack when storage activities were going on:
dmp_start_cvm_local_failover+0x118()
dmp_start_failback+0x398()
dmp_restore_node+0x2e4()
dmp_revive_paths+0x74()
gen_update_status+0x55c()
dmp_update_status+0x14()
gendmpopen+0x4a0()

DESCRIPTION:
The system panic occurred due to an invalid current primary path of the dmpnode when disks were attached/detached in a cluster. When DMP accesses the current primary path without doing a sanity check, the system panics due to an invalid pointer.

RESOLUTION:
Code changes have been made to avoid accessing any invalid pointer.

* 4011105 (Tracking ID: 3972433)

SYMPTOM:
IO hang might be seen while issuing heavy IO load on volumes having cache objects.

DESCRIPTION:
While heavy I/O is issued on volumes that have cache objects, the I/O on the cache volumes may stall due to the locking (region lock) involved for overlapping I/O requests on the same cache object. When the appropriate locks are granted to the I/Os, all the I/Os are processed in a serial fashion through a single VxVM I/O daemon thread. This serial processing causes slowness, resulting in an I/O-hang-like situation and application timeouts.

RESOLUTION:
The code changes are done to properly perform multi-processing of the cache volume IOs.

* 4018180 (Tracking ID: 3958062)

SYMPTOM:
After a boot LUN is migrated, disabling dmp_native_support fails with the following error:

VxVM vxdmpadm ERROR V-5-1-15883 check_bosboot open failed /dev/r errno 2
VxVM vxdmpadm ERROR V-5-1-15253 bosboot would not succeed, please run  
manually to find the cause of failure
VxVM vxdmpadm ERROR V-5-1-15251 bosboot check failed
VxVM vxdmpadm INFO V-5-1-18418 restoring protofile
+ final_ret=18
+ f_exit 18
VxVM vxdmpadm ERROR V-5-1-15690 Operation failed for one or more volume 
groups

VxVM vxdmpadm ERROR V-5-1-15686 The following VG(s) could not be migrated as 
could not disable DMP support for LVM bootability -
        rootvg

DESCRIPTION:
After performing a boot LUN migration, while enabling or disabling DMP native support, VxVM performs the 'bosboot' verification with the old boot disk name instead of the name of the migrated disk. This issue occurs on AIX, where the OS command returns the old boot disk name.

RESOLUTION:
VxVM is updated to use the correct OS command to get the boot disk name after migration.

* 4020207 (Tracking ID: 4018086)

SYMPTOM:
vxiod with ID as 128 was stuck with below stack:

 #2 [] vx_svar_sleep_unlock at [vxfs]
 #3 [] vx_event_wait at [vxfs]
 #4 [] vx_async_waitmsg at [vxfs]
 #5 [] vx_msg_send at [vxfs]
 #6 [] vx_send_getemapmsg at [vxfs]
 #7 [] vx_cfs_getemap at [vxfs]
 #8 [] vx_get_freeexts_ioctl at [vxfs]
 #9 [] vxportalunlockedkioctl at [vxportal]
 #10 [] vxportalkioctl at [vxportal]
 #11 [] vxfs_free_region at [vxio]
 #12 [] vol_ru_start_replica at [vxio]
 #13 [] vol_ru_start at [vxio]
 #14 [] voliod_iohandle at [vxio]
 #15 [] voliod_loop at [vxio]

DESCRIPTION:
With the SmartMove feature set to ON, the vxiod with ID 128 starts the replication while the RVG is in DCM mode. The vxiod then awaits the filesystem's response as to whether the given region is used by the filesystem or not. The filesystem triggers MDSHIP I/O on the logowner. Due to a bug in the code, the MDSHIP I/O always gets queued in the vxiod with ID 128. Hence, a deadlock situation occurs.

RESOLUTION:
Code changes have been made to avoid handling the MDSHIP IO in vxiod whose ID is bigger than 127.

* 4020438 (Tracking ID: 4020046)

SYMPTOM:
The following I/O errors reported on VxVM sub-disks result in the DRL log being detached without any SCSI errors detected.

VxVM vxio V-5-0-1276 error on Subdisk [xxxx] while writing volume [yyyy][log] offset 0 length [zzzz]
VxVM vxio V-5-0-145 DRL volume yyyy[log] is detached

DESCRIPTION:
DRL plexes get detached because an atomic write flag (BIT_ATOMIC) was set on the BIO unexpectedly. The BIT_ATOMIC flag gets set on the bio only if the VOLSIO_BASEFLAG_ATOMIC_WRITE flag is set on the SUBDISK SIO and its parent MVWRITE SIO's sio_base_flags. When the MVWRITE SIO is generated, its sio_base_flags is copied from a gio structure; because the gio structure memory is not initialized, it may contain garbage values, hence the issue.

RESOLUTION:
Code changes have been made to fix the issue.

* 4021346 (Tracking ID: 4010207)

SYMPTOM:
System panic occurred with the below stack:

native_queued_spin_lock_slowpath()
queued_spin_lock_slowpath()
_raw_spin_lock_irqsave()
volget_rwspinlock()
volkiodone()
volfpdiskiodone()
voldiskiodone_intr()
voldmp_iodone()
bio_endio()
gendmpiodone()
dmpiodone()
bio_endio()
blk_update_request()
scsi_end_request()
scsi_io_completion()
scsi_finish_command()
scsi_softirq_done()
blk_done_softirq()
__do_softirq()
call_softirq()

DESCRIPTION:
As part of collecting the IO statistics collection, the vxstat thread acquires a spinlock and tries to copy data to the user space. During the data copy, if some page fault happens, then the thread would relinquish the CPU and provide the same to some other thread. If the thread which gets scheduled on the CPU requests the same spinlock which vxstat thread had acquired, then this results in a hard lockup situation.

RESOLUTION:
Code has been changed to properly release the spinlock before copying out the data to the user space during vxstat collection.

* 4022943 (Tracking ID: 4017656)

SYMPTOM:
XP8 is a new array, and support for claiming it needs to be added.

DESCRIPTION:
XP8 is a new array and the current ASL does not support it, so the array is not claimed. Support for this array has now been added to the ASL.

RESOLUTION:
Code changes to support XP8 array have been done.
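As an illustrative follow-up check (not part of this fix; output depends on the configuration), the enclosures claimed by DMP after applying the patch can be listed with:
vxdmpadm listenclosure all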

* 4023095 (Tracking ID: 4007920)

SYMPTOM:
The vol_snap_fail_source tunable is set, but the largest and oldest snapshot is still automatically deleted when the cache object becomes full.

DESCRIPTION:
If the vol_snap_fail_source tunable is set, the oldest snapshot should not be deleted when the cache object is full. Flex requires these snapshots for rollback.

RESOLUTION:
Added a fix to stop automatic snapshot deletion in vxcached.

* 4039249 (Tracking ID: 3984240)

SYMPTOM:
AIX builds were failing on AIX7.2 BE.

DESCRIPTION:
VxVM builds were failing on AIX7.2 BE.

RESOLUTION:
Made build environment and packaging changes so as to support VxVM builds on AIX7.2 BE.

* 4039527 (Tracking ID: 4018086)

SYMPTOM:
vxiod with ID as 128 was stuck with below stack:

 #2 [] vx_svar_sleep_unlock at [vxfs]
 #3 [] vx_event_wait at [vxfs]
 #4 [] vx_async_waitmsg at [vxfs]
 #5 [] vx_msg_send at [vxfs]
 #6 [] vx_send_getemapmsg at [vxfs]
 #7 [] vx_cfs_getemap at [vxfs]
 #8 [] vx_get_freeexts_ioctl at [vxfs]
 #9 [] vxportalunlockedkioctl at [vxportal]
 #10 [] vxportalkioctl at [vxportal]
 #11 [] vxfs_free_region at [vxio]
 #12 [] vol_ru_start_replica at [vxio]
 #13 [] vol_ru_start at [vxio]
 #14 [] voliod_iohandle at [vxio]
 #15 [] voliod_loop at [vxio]

DESCRIPTION:
With the SmartMove feature set to ON, the vxiod with ID 128 starts the replication while the RVG is in DCM mode. The vxiod then awaits the filesystem's response as to whether the given region is used by the filesystem or not. The filesystem triggers MDSHIP I/O on the logowner. Due to a bug in the code, the MDSHIP I/O always gets queued in the vxiod with ID 128. Hence, a deadlock situation occurs.

RESOLUTION:
Code changes have been made to avoid handling the MDSHIP IO in vxiod whose ID is bigger than 127.

* 4045494 (Tracking ID: 4021939)

SYMPTOM:
The "vradmin syncvol" command fails and the following message is logged: "VxVM VVR vxrsync ERROR V-5-52-10206 no server host systems specified".

DESCRIPTION:
VVR sockets now bind without specifying IP addresses. This recent change causes issues when such interfaces are used to identify whether the associated remote host is the same as the localhost. For example, in the case of the "vradmin syncvol" command, VVR incorrectly assumes that the local host has been provided as the remote host, logs the error message, and exits.

RESOLUTION:
Updated the vradmin utility to correctly identify the remote hosts that are passed to the "vradmin syncvol" command.
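For illustration only (the arguments are placeholders, not quoted from this fix; refer to the vradmin documentation for the exact syntax), the affected command has the form:
vradmin -g <disk_group> syncvol <local_volume> <remote_host>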

* 4051815 (Tracking ID: 4031597)

SYMPTOM:
vradmind generates a core dump in __strncpy_sse2_unaligned.

DESCRIPTION:
The following core dump is generated:
(gdb)bt
Thread 1 (Thread 0x7fcd140b2780 (LWP 90066)):
#0 0x00007fcd12b1d1a5 in __strncpy_sse2_unaligned () from /lib64/libc.so.6
#1 0x000000000059102e in IpmServer::accept (this=0xf21168, new_handlesp=0x0) at Ipm.C:3406
#2 0x0000000000589121 in IpmHandle::events (handlesp=0xf12088, new_eventspp=0x7ffc8e80a4e0, serversp=0xf120c8, new_handlespp=0x0, ms=100) at Ipm.C:613
#3 0x000000000058940b in IpmHandle::events (handlesp=0xfc8ab8, vlistsp=0xfc8938, ms=100) at Ipm.C:645
#4 0x000000000040ae2a in main (argc=1, argv=0x7ffc8e80e8e8) at srvmd.C:722

RESOLUTION:
vradmind is updated to properly handle getpeername(), which addresses this issue.

* 4051887 (Tracking ID: 3956607)

SYMPTOM:
When removing a VxVM disk using the vxdg-rmdisk operation, the following error occurs while requesting a disk reclaim:
VxVM vxdg ERROR V-5-1-0 Disk <device_name> is used by one or more subdisks which are pending to be reclaimed.
Use "vxdisk reclaim <device_name>" to reclaim space used by these subdisks, and retry "vxdg rmdisk" command.
Note: The reclamation operation is irreversible. However, a core dump occurs when vxdisk-reclaim is executed.

DESCRIPTION:
This issue occurs due to a memory allocation failure in the disk-reclaim code, which fails to be detected and causes an invalid address to be referenced. Consequently, a core dump occurs.

RESOLUTION:
The disk-reclaim code is updated to handle memory allocation failures properly.

* 4051889 (Tracking ID: 4019182)

SYMPTOM:
In case of a VxDMP configuration, an InfoScale server panics when applying a patch. The following stack trace is generated:
unix:panicsys+0x40()
unix:vpanic_common+0x78()
unix:panic+0x1c()
unix:mutex_enter() - frame recycled
vxdmp(unloaded text):0x108b987c(jmpl?)()
vxdmp(unloaded text):0x108ab380(jmpl?)(0)
genunix:callout_list_expire+0x5c()
genunix:callout_expire+0x34()
genunix:callout_execute+0x10()
genunix:taskq_thread+0x42c()
unix:thread_start+4()

DESCRIPTION:
Some VxDMP functions create callouts. The VxDMP module may already be unloaded when a callout expires, which may cause the server to panic. VxDMP should cancel any previous timeout function calls before it unloads itself.

RESOLUTION:
VxDMP is updated to cancel any previous timeout function calls before unloading itself.

* 4051896 (Tracking ID: 4010458)

SYMPTOM:
In a VVR environment, the rlink might inconsistently disconnect due to unexpected transactions, and the following message might get logged:
"VxVM VVR vxio V-5-0-114 Disconnecting rlink <rlink_name> to permit transaction to proceed"

DESCRIPTION:
In a VVR environment, a transaction is triggered when a change in the VxVM or the VVR objects needs to be persisted on disk. In some scenarios, a few unnecessary transactions get triggered in a loop, which causes multiple rlink disconnects, and the aforementioned message gets logged frequently. One such unexpected transaction occurs when the open/close command is issued for a volume as part of SmartIO caching. The vradmind daemon also issues some open/close commands on volumes as part of the I/O statistics collection, which triggers unnecessary transactions. Additionally, some unexpected transactions occur due to incorrect references to some temporary flags on the volumes.

RESOLUTION:
VVR is updated to first check whether SmartIO caching is configured on a system. If it is not configured, VVR disables SmartIO caching on the associated volumes. VVR is also updated to avoid the unexpected transactions that may occur due to incorrect references on certain temporary flags on the volumes.

* 4055653 (Tracking ID: 4049082)

SYMPTOM:
An I/O read error is displayed when a remote FSS node is rebooting.

DESCRIPTION:
When a remote FSS node is rebooting, I/O read requests to a mirror volume that are scheduled on the remote disk from that FSS node should be redirected to the remaining plex. However, the current VxVM does not handle this correctly. The retried I/O requests could still be sent to the offline remote disk, which causes the final I/O read failure.

RESOLUTION:
Code changes have been done to schedule the retried read request on the remaining plex.

* 4055660 (Tracking ID: 4046007)

SYMPTOM:
In an FSS environment, if the cluster name is changed, the private disk region gets corrupted.

DESCRIPTION:
Under some conditions, when vxconfigd tries to update the TOC (table of contents) blocks of disk private region, the allocation maps cannot be initialized in the memory. This could make allocation maps incorrect and lead to corruption of the private region on the disk.

RESOLUTION:
Code changes have been done to avoid corruption of private disk region.

* 4055668 (Tracking ID: 4045871)

SYMPTOM:
vxconfigd crashed at ddl_get_disk_given_path with following stacks:
ddl_get_disk_given_path
ddl_reconfigure_all
ddl_find_devices_in_system
find_devices_in_system
mode_set
setup_mode
startup
main
_start

DESCRIPTION:
Under some situations, duplicate paths can be added in one dmpnode in vxconfigd. If the duplicate paths are removed then the empty path entry can be generated for that dmpnode. Thus, later when vxconfigd accesses the empty path entry, it crashes due to NULL pointer reference.

RESOLUTION:
Code changes have been done to avoid adding duplicate paths.

* 4055697 (Tracking ID: 4047793)

SYMPTOM:
When replicated disks are in SPLIT mode, importing their disk group fails with "Device is a hardware mirror".

DESCRIPTION:
When replicated disks are in SPLIT mode, they are read-write, but importing their disk group fails with "Device is a hardware mirror". The third-party array does not expose a disk attribute that shows when a disk is in SPLIT mode. DMP now refers to the REPLICATED status to judge whether the disk group import is allowed or not. The `-o usereplicatedev=on/off` option is enhanced to achieve this.

RESOLUTION:
The code is enhanced to allow diskgroup import when replicated disks are in SPLIT mode.
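For illustration only (based on the option named in this fix; the disk group name is a placeholder), such an import may be attempted as:
vxdg -o usereplicatedev=on import <disk_group_name>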

* 4055772 (Tracking ID: 4043337)

SYMPTOM:
The rp_rv.log file consumes space for logging.

DESCRIPTION:
The rp_rv log files need to be removed, and the logger file should use 16 MB rotational log files.

RESOLUTION:
The code changes are implemented to disable logging to the rp_rv.log files.

* 4055894 (Tracking ID: 4043495)

SYMPTOM:
DMP is unable to discover all the fscsi controllers when their physical location is the same.
1. Allocate disks to 4 HBAs on the current OS.
2. Check the output of vxdmpadm getctlr; the paths are recognized only by the fscsi2 and fscsi3 controllers.
The paths mapped to fscsi4 and fscsi5 are also assigned to the fscsi2 and fscsi3 controllers.

lsdev -C output:
fcs2 Available 37-T1 Virtual Fibre Channel Client Adapter
fcs3 Available 38-T1 Virtual Fibre Channel Client Adapter
fcs4 Available 37-T1 Virtual Fibre Channel Client Adapter
fcs5 Available 38-T1 Virtual Fibre Channel Client Adapter

fscsi2 Available 37-T1-01 FC SCSI I/O Controller Protocol Device <<<<<< same for fscsi2 & fscsi4
fscsi3 Available 38-T1-01 FC SCSI I/O Controller Protocol Device <<<<<< same for fscsi3 & fscsi5
fscsi4 Available 37-T1-01 FC SCSI I/O Controller Protocol Device
fscsi5 Available 38-T1-01 FC SCSI I/O Controller Protocol Device

#vxdmpadm getctlr all output
LNAME PNAME VENDOR CTLR-ID
===========================================================
fscsi2 37-T1-01 - -
fscsi3 38-T1-01 - -

DESCRIPTION:
From AIX 7.2 TL5 onwards, the PdDvLn attribute has changed. Hence, the code is not able to detect the devices as NPIV devices. Due to this, PNAME is not shown as expected.

RESOLUTION:
Code changes have been made to detect the newer NPIV devices. After these changes, getctlr displays the proper PNAME attributes.

* 4055895 (Tracking ID: 4038865)

SYMPTOM:
In IRQ stack, the system panics at VxDMP module with the following calltrace:
native_queued_spin_lock_slowpath
queued_spin_lock_slowpath
_raw_spin_lock_irqsave7
dmp_get_shared_lock
gendmpiodone
dmpiodone
bio_endio
blk_update_request
scsi_end_request
scsi_io_completion
scsi_finish_command
scsi_softirq_done
blk_done_softirq
__do_softirq
call_softirq
do_softirq
irq_exit
do_IRQ
 <IRQ stack>

DESCRIPTION:
A deadlock occurred between inode_hash_lock and the DMP shared lock: one process was holding inode_hash_lock and acquired the DMP shared lock in IRQ context, while the other processes holding the DMP shared lock acquired inode_hash_lock.

RESOLUTION:
Code changes have been made to avoid the deadlock issue.

* 4055899 (Tracking ID: 3993242)

SYMPTOM:
vxsnap prepare on a vset might throw the error: "VxVM vxsnap ERROR V-5-1-19171 Cannot perform prepare operation on cloud volume"

DESCRIPTION:
Some wrong volume-record entries were being fetched for the VSET, due to which the required validations were failing and triggering the issue.

RESOLUTION:
Code changes have been done to resolve the issue.

* 4055905 (Tracking ID: 4052191)

SYMPTOM:
Any scripts or command files in the / directory may run unexpectedly when the system starts and vxvm volumes will not be available until those scripts or commands are complete.

DESCRIPTION:
If this issue occurs, /var/svc/log/system-vxvm-vxvm-configure:default.log indicates that a script or a command located in the / directory has been executed.
For example,
ABC Script ran!!
/lib/svc/method/vxvm-configure[241] abc.sh not found
/lib/svc/method/vxvm-configure[242] abc.sh not found
/lib/svc/method/vxvm-configure[243] abc.sh not found
/lib/svc/method/vxvm-configure[244] app/ cannot execute
In this example, abc.sh is located in the / directory and just echoes "ABC script ran !!". vxvm-configure launched abc.sh.

RESOLUTION:
The incorrect comments format in the SunOS_5.11.vxvm-configure.sh script is corrected.

* 4055925 (Tracking ID: 4031064)

SYMPTOM:
During a master switch with replication in progress, a cluster-wide hang is seen on the VVR secondary.

DESCRIPTION:
With an application running on the primary and replication set up between the VVR primary and secondary, when a master switch operation is attempted on the secondary, it hangs permanently.

RESOLUTION:
Appropriate code changes have been made to handle the scenario of a master switch operation with replication data on the secondary.

* 4055938 (Tracking ID: 3999073)

SYMPTOM:
Data corruption occurred when the fast mirror resync (FMR) was enabled and the failed plex of striped-mirror layout was attached.

DESCRIPTION:
When a plex attach operation is performed with FMR tracking enabled, the contents of the detach maps are used to determine and recover the volume regions that need resynchronization.

When the DCO region size is larger than the stripe unit of the volume, the code logic in the plex attach code path was incorrectly skipping bits in the detach maps. As a result, some regions (offset-len) of the volume did not sync with the attached plex, leading to inconsistent mirror contents.

RESOLUTION:
To resolve the data corruption issue, the code has been modified to consider all the bits for the given region (offset-len) in the plex attach code.

* 4056024 (Tracking ID: 4043518)

SYMPTOM:
vxdmpadm settune dmp_native_support=off
VxVM vxdmpadm ERROR V-5-1-18522 could not get pvid for hdisk0
hdisk1 dmpapi_sys.c:798
VxVM vxdmpadm WARNING V-5-1-16134 Unable to find dmpnode for path
VxVM vxdmpadm ERROR V-5-1-15883 check_bosboot open failed /dev/rhdisk0
hdisk1 errno 2
VxVM vxdmpadm ERROR V-5-1-15253 bosboot would not succeed, please run isk0
hdisk1 manually to find the cause of failure
VxVM vxdmpadm ERROR V-5-1-15251 bosboot check failed
VxVM vxdmpadm INFO V-5-1-18418 restoring protofile
VxVM vxdmpadm ERROR V-5-1-15690 Operation failed for one or more volume groups
VxVM vxdmpadm ERROR V-5-1-15686 The following VG(s) could not be migrated as could not disable DMP support for LVM bootability -
rootvg
#

DESCRIPTION:
DMP Native Support does not get disabled because the code expects a single hdisk entry. In the presence of a mirror partition of rootvg, "lslv -l hd5 'bootinfo -v`" gives multiple disk entries. Hence, disabling DMP native support fails.

RESOLUTION:
Code changes have been made to grep a single rootvg hdisk name. After these changes, DMP native support works fine.

* 4056107 (Tracking ID: 4036181)

SYMPTOM:
IO error has been reported when RVG is not in enabled state after boot-up.

DESCRIPTION:
When RVG is not enabled/active, the volumes under a RVG will report an IO error.
Messages logged:
systemd[1]: Starting File System Check on /dev/vx/dsk/vvrdg/vvrdata1...
systemd-fsck[4977]: UX:vxfs fsck.vxfs: ERROR: V-3-20113: Cannot open : No such device or address  
systemd-fsck[4977]: fsck failed with error code 31.
systemd-fsck: UX:vxfs fsck.vxfs: ERROR: V-3-20005: read of super-block on /dev/vx/dsk/vvrdg/vvrdata1 failed: Input/output error

RESOLUTION:
The issue is fixed by enabling the RVG using the vxrvg command if the RVG is in the disabled/recover state.
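For illustration only (the disk group and RVG names are placeholders; refer to the vxrvg manual page for the exact syntax), the equivalent manual steps have the form:
vxrvg -g <disk_group> recover <rvg_name>
vxrvg -g <disk_group> start <rvg_name>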

* 4056124 (Tracking ID: 4008664)

SYMPTOM:
System panic occurs with the following stack:

void genunix:psignal+4()
void vxio:vol_logger_signal_gen+0x40()
int vxio:vollog_logentry+0x84()
void vxio:vollog_logger+0xcc()
int vxio:voldco_update_rbufq_chunk+0x200()
int vxio:voldco_chunk_updatesio_start+0x364()
void vxio:voliod_iohandle+0x30()
void vxio:voliod_loop+0x26c((void *)0)
unix:thread_start+4()

DESCRIPTION:
vxio keeps the vxloggerd proc_t that is used to send a signal to vxloggerd. If vxloggerd has ended for some reason, the signal may be sent to an unexpected process, which may cause a panic.

RESOLUTION:
Code changes have been made to correct the problem.

* 4056146 (Tracking ID: 3983832)

SYMPTOM:
When disk groups are deleted, multiple VxVM commands hang on the CVR secondary site.

DESCRIPTION:
VxVM commands hang when a deadlock is encountered during a kmsg broadcast while a disk group deletion and an IBC unfreeze operation are in progress.

RESOLUTION:
Changes have been made in the VxVM code to perform the check either through transactions or by avoiding the deadlock.

* 4056154 (Tracking ID: 3975081)

SYMPTOM:
On AIX, after DMP Native Support is enabled on the system, the vxdmpadm native list command fails with the following error:
VxVM vxdmpadm ERROR V-5-1-15206 DMP support for LVM bootability is disabled.

DESCRIPTION:
On AIX, VxVM (Veritas Volume Manager) has a binary, vxdmpboot, which runs at boot time to enable DMP Native Support for the root filesystem.
The binary executes at a very early stage of the boot phase on AIX and performs some licensing checks during execution. Earlier, the licensing-specific library was a static library, but it has since been changed to a dynamic library. At boot time, when the vxdmpboot binary is executed, most of the filesystems are not mounted. The directory containing the dynamic library is also not mounted, so the library is not available when the vxdmpboot binary is executed. This causes the licensing checks and the vxdmpboot execution to fail, which prevents native support from getting enabled at boot time for the root filesystem.

RESOLUTION:
On AIX, the licensing checks in the vxdmpboot binary have been disabled so that DMP native support can be enabled successfully for the root filesystem.

* 4056918 (Tracking ID: 4056917)

SYMPTOM:
In Flexible Storage Sharing (FSS) environments, a disk group import operation with a few disks missing can lead to data corruption.

DESCRIPTION:
In FSS environments, importing a disk group with missing disks is not allowed. If the disk with the most recently updated configuration information is not present during the import, the import operation incorrectly increments the config TID on the remaining disks before failing. When the missing disk(s) with the latest configuration come back, the import succeeds; however, because of the earlier failed transaction, the import operation incorrectly chooses the wrong configuration to import the disk group, leading to data corruption.

RESOLUTION:
The code logic in the disk group import operation is modified to ensure that the check for failed/missing disks happens early, before any on-disk update is attempted as part of the import.

* 4061338 (Tracking ID: 3975110)

SYMPTOM:
vxdmpadm native enable vgname=rootvg
Please reboot the system to enable DMP support for LVM bootability

After rebooting, it is still disabled.

vxdmpadm native list vgname=rootvg
VxVM vxdmpadm ERROR V-5-1-15206 DMP support for LVM bootability is disabled.

DESCRIPTION:
DMP native root support does not work due to a change in the log file location from /tmp/ to /var/VRTS/vxvm/.
As the filesystem is not mounted at the time of execution, the new location is not available for the binary to create its log file.
Thus, the execution does not happen.

RESOLUTION:
Code changes have been made to change the log location from /var/VRTS/vxvm back to /tmp. After these changes, DMP native support works fine.

* 4062802 (Tracking ID: 3941037)

SYMPTOM:
VxVM (Veritas Volume Manager) creates some required files under /tmp
and /var/tmp directories.

DESCRIPTION:
VxVM (Veritas Volume Manager) creates some .lock files under /etc/vx directory. 

The non-root users have access to these .lock files, and they may accidentally
modify, move or delete those files.
Such actions may interfere with the normal functioning of the Veritas Volume
Manager.

RESOLUTION:
This fix addresses the issue by masking the write permission on these .lock files for non-root users.

Patch ID: VRTSllt-7.4.1.1100

* 4050664 (Tracking ID: 4046199)

SYMPTOM:
LLT configurations over UDP accept only ethernet interface names as link tag names.

DESCRIPTION:
The tag field in the link definition accepts only the ethernet interface name as a value.

RESOLUTION:
The LLT module is updated to accept any string as a link tag name.
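For illustration only, an /etc/llttab link directive for LLT over UDP can now use an arbitrary tag string such as "lltlink1"; the device path, port, and address fields shown below are placeholders and depend on the platform and network configuration:
link lltlink1 /dev/xti/udp - udp 50000 - 192.168.10.1 -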

* 4051040 (Tracking ID: 3989372)

SYMPTOM:
When the CPU load and memory consumption is high in a VMware environment, some nodes in an InfoScale cluster may get fenced out.

DESCRIPTION:
Occasionally, in a VMware environment, the operating system may not schedule LLT contexts on time. Consequently, heartbeats from some of the cluster nodes may be lost, and those nodes may get fenced out. This situation typically occurs when the CPU load or the memory usage is high or when the VMDK snapshot or vMotion operations are in progress.

RESOLUTION:
This fix attempts to make clusters more resilient to transient issues by heartbeating using threads bound to every vCPU.

* 4054272 (Tracking ID: 4045607)

SYMPTOM:
LLT over UDP support for transmission and reception of data over 1500 MTU networks.

DESCRIPTION:
The UDP multiport feature in LLT performs poorly in case of 1500 MTU-based networks. Data packets larger than 1500 bytes cannot be transmitted over 1500 MTU-based networks, so the IP layer fragments them appropriately for transmission. The loss of a single fragment from the set leads to a total packet (I/O) loss. LLT then retransmits the same packet repeatedly until the transmission is successful. Eventually, you may encounter issues with the Flexible Storage Sharing (FSS) feature. For example, the vxprint process or the disk group creation process may stop responding, or the I/O-shipping performance may degrade severely.

RESOLUTION:
The UDP multiport feature of LLT is updated to fragment the packets such that they can be accommodated in the 1500-byte network frame. The fragments are rearranged on the receiving node at the LLT layer. Thus, LLT can track every fragment to the destination, and in case of transmission failures, retransmit the lost fragments based on the current RTT time.

* 4054697 (Tracking ID: 3985775)

SYMPTOM:
Sometimes, the system log may get flooded with LLT heartbeat loss messages that do not necessarily indicate any actual issues with LLT.

DESCRIPTION:
LLT heartbeat loss messages can appear in the system log either due to actual heartbeat drops in the network or due to heartbeat packets arriving out of order. In either case, these messages are only informative and do not indicate any issue in the LLT functionality. Sometimes, the system log may get flooded with these messages, which are not useful.

RESOLUTION:
The LLT module is updated to lower the frequency of printing LLT heartbeat loss messages. This is achieved by increasing the number of missed sequential HB packets required to print this informative message.

Patch ID: VRTSgab-7.4.1.1100

* 4054264 (Tracking ID: 4046413)

SYMPTOM:
After a node is added to or removed from a cluster, the GAB node count or the fencing quorum is not updated.

DESCRIPTION:
The gabconfig -m <node_count> command returns an error even if the correct node count is provided.

RESOLUTION:
To address this issue, a parsing issue with the GAB module is fixed.

* 4054265 (Tracking ID: 4046418)

SYMPTOM:
The GAB module starts up even if LLT is not configured.

DESCRIPTION:
Since the GAB service depends on the LLT service, if LLT fails to start or if it is not configured, GAB should not start.

RESOLUTION:
The GAB module is updated to start only if LLT is configured.

Patch ID: VRTSamf-7.4.1.1100

* 4054323 (Tracking ID: 4001565)

SYMPTOM:
On Solaris 11.4, IMF fails to provide notifications when Oracle processes stop.

DESCRIPTION:
On Solaris 11.4, when Oracle processes stop, IMF provides notification to the Oracle agent, but the monitor is not scheduled. As a result, the agent fails intelligent monitoring.

RESOLUTION:
Oracle agent now provides notifications when Oracle processes stop.

Patch ID: VRTSvxfen-7.4.1.1100

* 4016876 (Tracking ID: 4000745)

SYMPTOM:
The VxFEN process fails to start due to late discovery of the VxFEN disk group.

DESCRIPTION:
When I/O fencing starts, the VxFEN startup script creates the /etc/vxfentab file on each node. During disk-based fencing, the VxVM module may take a longer time to discover the VxFEN disk group. Because of this delay, the 'generate disk list' operation times out. Therefore, the VxFEN process fails to start and reports the following error: 'ERROR: VxFEN cannot generate vxfentab because vxfendg does not exist'

RESOLUTION:
A new tunable, getdisks_timeout, is introduced to specify the timeout value for the VxFEN disk group discovery. The maximum and the default value for this tunable is 600 seconds. You can set the value of this tunable by adding a getdisks_timeout=<time_in_sec> entry in the /etc/vxfenmode file.
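For example, the following /etc/vxfenmode entry sets a 300-second timeout (the value shown is illustrative):
getdisks_timeout=300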

Patch ID: VRTSvcs-7.4.1.1200

* 4054266 (Tracking ID: 4040705)

SYMPTOM:
When a command exceeds 4096 characters, hacli hangs indefinitely.

DESCRIPTION:
Instead of returning the appropriate error message, hacli waits indefinitely for a reply from the VCS engine.

RESOLUTION:
The limit of the hacli '-cmd' option is increased to 7680 characters. Validations for the various hacli options are also now handled better. Thus, when the value of the '-cmd' option exceeds the new limit, hacli no longer hangs, but instead returns the appropriate error message.
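For illustration only (the command string and system name are placeholders), hacli is typically invoked in the form:
hacli -cmd "<command_string>" -sys <system_name>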

* 4054267 (Tracking ID: 4040656)

SYMPTOM:
When the ENOMEM error occurs, HAD does not shut down gracefully.

DESCRIPTION:
When HAD encounters the ENOMEM error, it reattempts the operation a few times until it reaches a predefined maximum limit, and then it exits. The hashadow daemon restarts HAD with the '-restart' option. The failover service group in the cluster is not started automatically, because it appears that one of the cluster nodes is in the 'restarting' mode.

RESOLUTION:
HAD is enhanced to exit gracefully when it encounters the ENOMEM error, and the hashadow daemon is updated to restart HAD without the '-restart' option. This enhancement ensures that in such a scenario, the autostart of a failover service group is triggered as expected.

* 4054271 (Tracking ID: 4043700)

SYMPTOM:
In case of failover, parallel, or hybrid service groups, multiple PreOnline triggers can be executed on the same node or on different nodes in a cluster while an online operation is already in progress.

DESCRIPTION:
This issue occurs because the validation of online operations did not take the ongoing execution of PreOnline triggers into consideration. Thus, subsequent online operations are accepted while a PreOnline trigger is already being executed. Consequently, multiple PreOnline trigger instances are executed.

RESOLUTION:
A check for PreOnline triggers is added to the validation of ongoing online operations, and subsequent calls for online operations are rejected. Thus, the PreOnline trigger for failover groups is executed only once.

Patch ID: VRTSvcs-7.4.1.1100

* 3982912 (Tracking ID: 3981992)

SYMPTOM:
A potentially critical security vulnerability in VCS needs to be addressed.

DESCRIPTION:
A potentially critical security vulnerability in VCS needs to be addressed.

RESOLUTION:
This hotfix addresses the security vulnerability. For details, refer to the security advisory at: https://www.veritas.com/content/support/en_US/security/VTS19-003.html

Patch ID: VRTSvcsag-7.4.1.1100

* 4042947 (Tracking ID: 4042944)

SYMPTOM:
In a hardware replication environment, a disk group resource may fail to be imported when the HARDWARE_MIRROR flag is set.

DESCRIPTION:
After the VCS hardware replication agent resource fails over control to the secondary site, the DiskGroup agent does not rescan all the required device paths in case of a multi-pathing configuration. The vxdg import operation fails, because the hardware device characteristics for all the paths are not refreshed.

RESOLUTION:
A new resource-level attribute, ScanDisks, is introduced for the DiskGroup agent. The ScanDisks attribute lets you perform a selective devices scan for all the disk paths that are associated with a VxVM disk group. Before attempting to import a hardware clone or a hardware replicated device, the VxVM and the DMP attributes of a disk are refreshed. The default value of ScanDisks is 0, which indicates that a selective device scan is not performed. Even when ScanDisks is set to 0, if the disk group fails with an error string containing HARDWARE_MIRROR during the first disk group import attempt, the DiskGroup agent performs a selective device scan to increase the chances of a successful import.
Sample resource configuration for hardware clone disk groups:
DiskGroup tc_dg (
DiskGroup = datadg
DGOptions = "-o useclonedev=on -o updateid"
ForceImport = 0
ScanDisks = 1
)
Sample resource configuration for hardware replicated disk groups:
DiskGroup tc_dg (
DiskGroup = datadg
ForceImport = 0
ScanDisks = 1
)

* 4054269 (Tracking ID: 4030215)

SYMPTOM:
The InfoScale agents for Azure did not support credential validation methods based on the azure-identity library.

DESCRIPTION:
The Microsoft Azure credential system is revamped, and the new system is available in the azure-identity library.

RESOLUTION:
The InfoScale agents for Azure have been enhanced to support credential validation methods based on the azure-identity library. Now, the agents support the following Azure Python SDK versions:
azure-common==1.1.25
azure-core==1.10.0
azure-identity==1.4.1
azure-mgmt-compute==19.0.0
azure-mgmt-core==1.2.2
azure-mgmt-dns==8.0.0
azure-mgmt-network==17.1.0
azure-storage-blob==12.8.0
msrestazure==0.6.4

* 4054270 (Tracking ID: 4046286)

SYMPTOM:
The InfoScale agents for Azure do not handle generic exceptions.

DESCRIPTION:
The InfoScale agents can handle only the CloudError exception of the Azure APIs. It cannot handle other errors that may occur during certain failure conditions.

RESOLUTION:
The InfoScale agents for Azure are enhanced to handle several API failure conditions.

* 4054273 (Tracking ID: 4044567)

SYMPTOM:
The HostMonitor agent faults while logging the memory usage of a system.

DESCRIPTION:
The HostMonitor agent runs in the background to monitor the usage of the resources of a system. It faults and terminates unexpectedly while logging the memory usage of a system and generates a core dump.

RESOLUTION:
This patch updates the HostMonitor agent to handle the issue that it encounters while logging the memory usage data of a system.

* 4054276 (Tracking ID: 4048164)

SYMPTOM:
When a cloud API that an InfoScale agent has called hangs, an unwanted failover of the associated service group may occur.

DESCRIPTION:
When a cloud SDK API or a CLI command hangs, the monitor function of an InfoScale agent that has called the API or the command may time out. Consequently, the agent may report incorrect resource states, and an unwanted failover of the associated service group may occur.

RESOLUTION:
To avoid this issue, the default value of the FaultOnMonitorTimeout attribute is set to 0 for all the InfoScale agents for cloud support.

Patch ID: VRTSvcsea-7.4.1.1200

* 4054325 (Tracking ID: 4043289)

SYMPTOM:
In an Oracle ASM 19c environment on Solaris, the ASMInst agent fails to come online or to detect the state of the related resources.

DESCRIPTION:
This issue occurs due to incorrect permissions on certain Oracle files on Solaris.

RESOLUTION:
The ASMInst agent is updated to handle the change in permissions that causes this issue.

Patch ID: VRTSvcsea-7.4.1.1100

* 3982248 (Tracking ID: 3989510)

SYMPTOM:
The VCS agent for Oracle does not support Oracle 19c databases.

DESCRIPTION:
In case of non-CDB or CDB-only databases, the VCS agent for Oracle does not recognize that an Oracle 19c resource is intentionally offline after a graceful shutdown. This functionality is never supported for PDB-type databases.

RESOLUTION:
The agent is updated to recognize the graceful shutdown of an Oracle 19c resource as intentional offline in case of non-CDB or CDB-only databases. For details, refer to the article at: https://www.veritas.com/support/en_US/article.100046803.

Patch ID: VRTSpython-3.6.6.10

* 4056779 (Tracking ID: 4049771)

SYMPTOM:
The VRTSpython package needs to be made available on AIX and Solaris to support the InfoScale licensing component.

DESCRIPTION:
Certain Python modules are required for the InfoScale Core Plus licensing component to function. To support this component, the VRTSpython package needs to be made available for the AIX and the Solaris platforms.

RESOLUTION:
This patch deploys the VRTSpython package on AIX and Solaris.

Patch ID: VRTScavf-7.4.1.3400

* 4056567 (Tracking ID: 4054462)

SYMPTOM:
In a hardware replication environment, a shared disk group resource may fail to be imported when the HARDWARE_MIRROR flag is set.

DESCRIPTION:
After the VCS hardware replication agent resource fails over control to the secondary site, the CVMVolDg agent does not rescan all the required device paths in case of a multi-pathing configuration. The vxdg import operation fails, because the hardware device characteristics for all the paths are not refreshed.

RESOLUTION:
This hotfix addresses the issue by providing two new resource-level attributes for the CVMVolDg agent.
- The ScanDisks attribute specifies whether to perform a selective devices scan for all the disk paths that are associated with a VxVM disk group. When ScanDisks is set to 1, the agent performs a selective devices scan. Before attempting to import a hardware clone or a hardware replicated device, the VxVM and the DMP attributes of a disk are refreshed. ScanDisks is set to 0 by default, which indicates that a selective devices scan is not performed. However, even when ScanDisks is set to 0, if the disk group import fails during the first attempt, the agent checks the error string. If the string contains the text HARDWARE_MIRROR, the agent performs a selective devices scan to increase the chances of a successful import.
- The DGOptions attribute specifies options to be used with the vxdg import command that is executed by the agent to bring the CVMVolDg resource online.
Sample resource configuration for hardware replicated shared disk groups:
CVMVolDg tc_dg (
    CVMDiskGroup = datadg
    CVMVolume = { vol01 }
    CVMActivation = sw
    CVMDeportOnOffline = 1
    ClearClone = 1
    ScanDisks = 1
    DGOptions = "-t -o usereplicatedev=on"
    )

Patch ID: VRTSvxfs-7.4.1.3400

* 4026389 (Tracking ID: 4026388)

SYMPTOM:
The fscache and sfcache commands exit with the following error while unpinning a file:
UX:vxfs fscache: ERROR: V-3-28059:  -o load is supported only for pin option

DESCRIPTION:
The fscache and sfcache commands report an error while unpinning a file from the VxFS cache device.

RESOLUTION:
Code changes have been introduced to fix the error.

* 4050229 (Tracking ID: 4018995)

SYMPTOM:
The system panics on AIX when a read-ahead of size greater than 4 GB is performed.

DESCRIPTION:
There is a data type mismatch (64-bit to 32-bit) in the VxFS read-ahead code path, which truncates the page list size and causes the system to panic with the following stack:
vx_getpage_cleanup
vx_do_getpage
vx_mm_getpage
vx_do_read_ahead
vx_cache_read_noinline
vx_read_common_noinline
vx_read1
vx_read
vx_rdwr_attr

RESOLUTION:
The data type mismatch has been fixed in code to handle such scenarios.
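
For illustration only, the following standalone snippet shows how a request larger than 4 GB is silently truncated when it is stored in a 32-bit field; the kernel fix simply uses a consistent 64-bit type:

    import ctypes

    size_64 = 5 * 1024 * 1024 * 1024           # a 5 GB read-ahead request
    size_32 = ctypes.c_uint32(size_64).value   # what a 32-bit field would hold

    print(hex(size_64))   # 0x140000000
    print(hex(size_32))   # 0x40000000 -- the upper bits are silently lost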

* 4053149 (Tracking ID: 4043084)

SYMPTOM:
The system panics in vx_cbdnlc_lookup().

DESCRIPTION:
Panic observed in the following stack trace:
vx_cbdnlc_lookup+000140 ()
vx_int_lookup+0002C0 ()
vx_do_lookup2+000328 ()
vx_do_lookup+0000E0 ()
vx_lookup+0000A0 ()
vnop_lookup+0001D4 (??, ??, ??, ??, ??, ??)
getFullPath+00022C (??, ??, ??, ??)
getPathComponents+0003E8 (??, ??, ??, ??, ??, ??, ??)
svcNameCheck+0002EC (??, ??, ??, ??, ??, ??, ??)
kopen+000180 (??, ??, ??)
syscall+00024C ()

RESOLUTION:
Code changes have been made to handle memory pressure while changing FC connectivity.

* 4054243 (Tracking ID: 4014274)

SYMPTOM:
The gsed command might report a bad address error when VxFS receives an ACE (access control entry) masking flag.

DESCRIPTION:
When the gsed command is run on a VxFS file system and VxFS receives the ACE_GETACLCNT and ACE_GETACL masking flags, VxFS might report a bad address error because it does not support ACE (access control entry) flags.

RESOLUTION:
Code has been added to handle the ACE flags.

* 4054244 (Tracking ID: 4052449)

SYMPTOM:
The cluster becomes unresponsive while invalidating pages, due to duplicate page entries in the iowr structure.

DESCRIPTION:
While finding pages to invalidate for an inode, VxFS traverses the radix tree under the RCU lock and fills an array in the IO structure with the dirty/writeback pages that need to be invalidated. This lock is efficient for reads but does not protect against parallel creation or deletion of nodes. Hence, when VxFS finds a page, its consistency is checked through radix_tree_exception()/radix_tree_deref_retry(). If the check fails, VxFS restarts the page search from the start offset, but it does not reset the array index, so the array in the IO structure is filled incorrectly and ends up with duplicate page entries. While destroying these pages, VxFS takes a page lock on each page. Because of the duplicate entries, VxFS tries to take the page lock more than once on the same page, leading to a self-deadlock.

RESOLUTION:
Code is modified to reset the array index correctly in case of failure to find pages.
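
A simplified illustration of this defect class follows; find_batch is a hypothetical helper, not the VxFS code. When the search is restarted, the collected entries must be reset along with the offset, otherwise duplicates accumulate:

    def collect_pages(find_batch, start_offset):
        # find_batch(offset) returns the next batch of pages at the given
        # offset and raises LookupError if the tree changed underneath us.
        pages = []
        offset = start_offset
        while True:
            try:
                batch = find_batch(offset)
            except LookupError:
                offset = start_offset    # restart the search from the beginning...
                pages.clear()            # ...and reset what was collected (the fix)
                continue
            if not batch:
                return pages
            pages.extend(batch)
            offset += len(batch)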

* 4054387 (Tracking ID: 4054386)

SYMPTOM:
VxFS systemd service may show active status despite the module not being loaded.

DESCRIPTION:
If the systemd service fails to load the vxfs module, the service still shows its status as active instead of failed.

RESOLUTION:
The script is modified to show the correct status in case of such failures.

* 4054412 (Tracking ID: 4042254)

SYMPTOM:
vxupgrade sets the fullfsck flag on the file system if it is unable to upgrade the disk layout version because of ENOSPC.

DESCRIPTION:
If the file system is 100% full and its disk layout version is upgraded by using vxupgrade, the utility starts the upgrade, later fails with ENOSPC, and ends up setting the fullfsck flag on the file system.

RESOLUTION:
Code changes have been introduced to first calculate the space required to perform the disk layout upgrade. If the required space is not available, the upgrade fails gracefully without setting the fullfsck flag.
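
The defensive pattern behind the fix, checking the required space up front and failing gracefully instead of aborting midway, can be sketched as follows; the function and parameter names are placeholders, not the vxupgrade code:

    import shutil

    def upgrade_with_space_check(mount_point, required_bytes, do_upgrade):
        # Illustrative only: pre-check free space, then proceed.
        free = shutil.disk_usage(mount_point).free
        if free < required_bytes:
            raise RuntimeError(
                "Not enough free space on %s: need %d bytes, have %d"
                % (mount_point, required_bytes, free))
        do_upgrade(mount_point)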

* 4054416 (Tracking ID: 4005620)

SYMPTOM:
Inode count maintained in the inode allocation unit (IAU) can be negative when an IAU is marked bad. An error such as the following is logged.

V-2-4: vx_mapbad - vx_inoauchk - /fs1 file system free inode bitmap in au 264 marked bad

Due to the negative inode count, errors like the following might be observed and processes might be stuck at inode allocation with a stack trace as shown.

V-2-14: vx_iget - inode table overflow

	vx_inoauchk 
	vx_inofindau 
	vx_findino 
	vx_ialloc 
	vx_dirmakeinode 
	vx_dircreate 
	vx_dircreate_tran 
	vx_pd_create 
	vx_create1_pd 
	vx_do_create 
	vx_create1 
	vx_create0 
	vx_create 
	vn_open 
	open

DESCRIPTION:
The inode count can be negative if VxFS somehow tries to allocate an inode from an IAU where the counter for regular file and directory inodes is zero. In such a situation, the inode allocation fails and the IAU map is marked bad. However, the code tries to further reduce the already-zero counters, resulting in negative counts that can cause a subsequent unresponsive situation.

RESOLUTION:
The code is modified to not reduce the inode counters in the vx_mapbad code path if the result would be negative. In such a case, a diagnostic message like the following is logged:
"vxfs: Error: Incorrect values of ias->ifree and Aus rifree detected."

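The guarded-decrement pattern behind the fix can be sketched as follows; this is a hypothetical helper, not the VxFS code:

    def decrement_free_count(counters, key):
        # Never drive a free-inode counter below zero; log a diagnostic instead.
        if counters.get(key, 0) > 0:
            counters[key] -= 1
            return True
        print("diagnostic: counter %r is already zero; not decremented" % key)
        return False
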
* 4054724 (Tracking ID: 4018697)

SYMPTOM:
After InfoScale 7.4.2 is installed, the system fails to start, and the following message is displayed:
an inconsistency in the boot archive was detected the boot archive will be updated, then the system will reboot

DESCRIPTION:
Due to a defect in the vxfs-modload service, vxfs modules get copied to the system on each reboot. As a result, the system goes into an inconsistent state and gets stuck in a reboot loop.

RESOLUTION:
The vxfs-modload script is modified to address this defect.
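
For illustration, the general idempotency pattern that avoids rewriting files on every boot looks like the following; copy_if_changed is a hypothetical helper, not the actual vxfs-modload script:

    import filecmp
    import shutil

    def copy_if_changed(src, dst):
        # Copy the module file only when it actually differs, so that
        # repeated boots leave the destination untouched.
        try:
            if filecmp.cmp(src, dst, shallow=False):
                return False             # already up to date; nothing to do
        except FileNotFoundError:
            pass                         # destination missing; fall through to copy
        shutil.copy2(src, dst)
        return True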

* 4054725 (Tracking ID: 4051108)

SYMPTOM:
While storing multiple attributes for a file in the immediate area of an inode, the system might become unresponsive due to an incorrect loop increment statement.

DESCRIPTION:
While storing attributes for a file in the immediate area of an inode, a loop is used to store multiple attributes one by one. An incorrect loop increment statement might cause the loop to execute indefinitely, and the system might become unresponsive as a result.

RESOLUTION:
The code is corrected to increment the loop variable properly on each iteration.
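
A trivial illustration of this defect class follows; store_one is a hypothetical helper, not the VxFS code:

    def store_attributes(attrs, store_one):
        # Store attributes one by one in the immediate area of the inode.
        i = 0
        while i < len(attrs):
            store_one(attrs[i])
            i += 1    # without this increment the loop never terminates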

* 4054726 (Tracking ID: 4051026)

SYMPTOM:
If the file system is created with an inode size of 512, it might report inconsistencies with the bitmap after fsck is run.

DESCRIPTION:
With an inode size of 512, while fsck runs, some of the inodes get marked as allocated even though they are free. The bitmaps are actually correct on-disk, but fsck reports them as incorrect.

RESOLUTION:
The fsck binary has been fixed to mark inodes as allocated or free correctly.

* 4055858 (Tracking ID: 4042925)

SYMPTOM:
Intermittent performance issues with commands like df and ls.

DESCRIPTION:
Commands like df and ls issue the stat system call on a node to calculate the statistics of the file system. In a CFS, when the stat system call is issued, statistics are compiled from all nodes. When multiple df or ls commands are issued within a specified time limit, VxFS is optimized to return the cached statistics instead of recalculating them from all nodes. If multiple such commands are issued in succession and one of the older callers of the stat system call takes a long time, this optimization fails and VxFS recompiles the statistics from all nodes. This can degrade the performance of the stat system call and make the df and ls commands unresponsive.

RESOLUTION:
The code is modified to protect the last-modified time of the stat system call with a sleep lock.
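
The caching pattern described above, returning cached statistics within a short time window and serializing updates of the cached value and its timestamp with a lock, can be sketched as follows; this is an illustrative user-space sketch, not the VxFS kernel code:

    import threading
    import time

    class CachedStats:
        # Cache expensive cluster-wide statistics briefly and protect the
        # cached value and its timestamp with a lock.
        def __init__(self, compute, max_age=5.0):
            self._compute = compute      # expensive call, e.g. gather stats from all nodes
            self._max_age = max_age
            self._lock = threading.Lock()
            self._cached = None
            self._stamp = 0.0

        def get(self):
            with self._lock:
                now = time.monotonic()
                if self._cached is None or now - self._stamp > self._max_age:
                    self._cached = self._compute()
                    self._stamp = now
                return self._cached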

Patch ID: VRTSodm-7.4.1.3400

* 4057073 (Tracking ID: 4057072)

SYMPTOM:
The VRTSodm module does not get loaded on AIX.

DESCRIPTION:
Due to recent changes in VRTSodm, some symbols are not being resolved, so VRTSodm needs to be recompiled.

RESOLUTION:
VRTSodm has been recompiled so that the vxodm module can be loaded.

Patch ID: vom-HF074901

* 4059638 (Tracking ID: 4059635)

SYMPTOM:
N/A

DESCRIPTION:
VIOM Agent for InfoScale 7.4.1 Update 6

RESOLUTION:
N/A

Patch ID: VRTSvlic-4.01.741.300

* 4049416 (Tracking ID: 4049416)

SYMPTOM:
Frequent security vulnerabilities are reported in JRE.

DESCRIPTION:
Many vulnerabilities are reported in JRE every quarter. To overcome this issue, the Licensing Collector service needs to be migrated from Java to Python. All other behavior of the Licensing Collector service remains the same.

RESOLUTION:
The Licensing Collector service has been migrated from Java to Python.



INSTALLING THE PATCH
--------------------
Run the Installer script to automatically install the patch:
-----------------------------------------------------------
Please note that the installation of this P-Patch will cause downtime.

To install the patch, perform the following steps on at least one node in the cluster:
1. Copy the patch infoscale-aix-Patch-7.4.1.1000.tar.gz to /tmp
2. Untar infoscale-aix-Patch-7.4.1.1000.tar.gz to /tmp/hf
    # mkdir /tmp/hf
    # cd /tmp/hf
    # gunzip /tmp/infoscale-aix-Patch-7.4.1.1000.tar.gz
    # tar xf /tmp/infoscale-aix-Patch-7.4.1.1000.tar
3. Install the hotfix. (Please note that the installation of this P-Patch will cause downtime.)
    # cd /tmp/hf
    # ./installVRTSinfoscale741P1000 [<host1> <host2>...]

You can also install this patch together with 7.4.1 base release using Install Bundles
1. Download this patch and extract it to a directory
2. Change to the Veritas InfoScale 7.4.1 directory and invoke the installer script
   with -patch_path option where -patch_path should point to the patch directory
    # ./installer -patch_path [<path to this patch>] [<host1> <host2>...]

Install the patch manually:
--------------------------
Run the Installer script to automatically install the patch:
-----------------------------------------------------------
Please note that the installation of this P-Patch will cause downtime.

To install the patch, perform the following steps on at least one node in the cluster:
1. Copy the patch vm-aix71-Patch-7.4.1.3300.tar.gz to /tmp
2. Untar vm-aix71-Patch-7.4.1.3300.tar.gz to /tmp/hf
    # mkdir /tmp/hf
    # cd /tmp/hf
    # gunzip /tmp/vm-aix71-Patch-7.4.1.3300.tar.gz
    # tar xf /tmp/vm-aix71-Patch-7.4.1.3300.tar
3. Install the hotfix. (Please note that the installation of this P-Patch will cause downtime.)
    # cd /tmp/hf
    # ./installVRTSvxvm741P3300 [<host1>   <host2>  ...]

You can also install this patch together with 7.4.1 base release using Install Bundles
1. Download this patch and extract it to a directory
2. Change to the Veritas InfoScale 7.4.1 directory and invoke the installer script
   with -patch_path option where -patch_path should point to the patch directory
    # ./installer -patch_path [<path to this patch>  ] [<host1>   <host2>  ...]

Install the patch manually:
--------------------------
Run the Installer script to automatically install the patch:
-----------------------------------------------------------
Please note that the installation of this P-Patch will cause downtime.

To install the patch, perform the following steps on at least one node in the cluster:
1. Copy the patch vm-aix-Patch-7.4.1.3300.tar.gz to /tmp
2. Untar vm-aix-Patch-7.4.1.3300.tar.gz to /tmp/hf
    # mkdir /tmp/hf
    # cd /tmp/hf
    # gunzip /tmp/vm-aix-Patch-7.4.1.3300.tar.gz
    # tar xf /tmp/vm-aix-Patch-7.4.1.3300.tar
3. Install the hotfix. (Please note that the installation of this P-Patch will cause downtime.)
    # cd /tmp/hf
    # ./installVRTSvxvm741P3300 [<host1>    <host2>   ...]

You can also install this patch together with 7.4.1 base release using Install Bundles
1. Download this patch and extract it to a directory
2. Change to the Veritas InfoScale 7.4.1 directory and invoke the installer script
   with -patch_path option where -patch_path should point to the patch directory
    # ./installer -patch_path [<path to this patch>   ] [<host1>    <host2>   ...]

Install the patch manually:
--------------------------
1. Since the patch process will configure the new kernel extensions,
        a) Stop I/Os to all the VxVM volumes.
        b) Ensure that no VxVM volumes are in use or open or mounted before starting the installation procedure.
        c) Stop applications using any VxVM volumes.
2. Check whether root support or DMP native support is enabled. If it is enabled, it will be retained after the patch upgrade.
# vxdmpadm gettune dmp_native_support
If the current value is 'on', DMP native support is enabled on this machine.
# vxdmpadm native list vgname=rootvg
If the output lists hdisks, root support is enabled on this machine.
3. Proceed with the patch installation as mentioned below:
    a. Before applying this VxVM 7.4.1.3300 patch, stop the VEA Server's vxsvc process:
        # /opt/VRTSob/bin/vxsvcctrl stop
    b. If your system has Veritas Operations Manager (VOM) configured, check whether the vxdclid daemon is running; if it is running, stop it.
       Command to check the status of the vxdclid daemon:
       # /opt/VRTSsfmh/etc/vxdcli.sh status
       Command to stop the vxdclid daemon:
       # /opt/VRTSsfmh/etc/vxdcli.sh stop
    c. To apply this patch, use the following command:
       # installp -ag -d ./VRTSvxvm.bff VRTSvxvm
    d. To apply and commit this patch, use the following command:
       # installp -acg -d ./VRTSvxvm.bff VRTSvxvm
NOTE: Refer to the installp(1M) man page for a clear understanding of the APPLY & COMMIT states of the package/patch.
    e. Reboot the system to complete the patch upgrade.
        # reboot
    f. If you stopped the vxdclid daemon before the upgrade, restart it by using the following command:
       # /opt/VRTSsfmh/etc/vxdcli.sh start


REMOVING THE PATCH
------------------
Run the Uninstaller script to automatically remove the patch:
------------------------------------------------------------
To uninstall the patch, perform the following step on at least one node in the cluster:
    # /opt/VRTS/install/uninstallVRTSinfoscale741P1000 [<host1> <host2>...]

Remove the patch manually:
-------------------------
Run the Uninstaller script to automatically remove the patch:
------------------------------------------------------------
To uninstall the patch, perform the following step on at least one node in the cluster:
    # /opt/VRTS/install/uninstallVRTSvxvm741P3300 [<host1>   <host2>  ...]

Remove the patch manually:
-------------------------
1. Check whether root support or DMP native support is enabled or not:
      # vxdmpadm gettune dmp_native_support
If the current value is "on", DMP native support is enabled on this machine.
      # vxdmpadm native list vgname=rootvg
If the output lists hdisks, root support is enabled on this machine.
If disabled, go to step 3.
If enabled, go to step 2.
2. If root support or DMP native support is enabled:
        a. It is essential to disable DMP native support.
        Run the following command to disable DMP native support as well as root support:
              # vxdmpadm settune dmp_native_support=off
        b. If only root support is enabled, run the following command to disable root support:
              # vxdmpadm native disable vgname=rootvg
        c. Reboot the system
              # reboot
3.
   a. Before backing out patch, stop the VEA server's vxsvc process:
           # /opt/VRTSob/bin/vxsvcctrl stop
   b. If your system has Veritas Operations Manager (VOM) configured, check whether the vxdclid daemon is running; if it is running, stop it.
      Command to check the status of the vxdclid daemon:
      # /opt/VRTSsfmh/etc/vxdcli.sh status
      Command to stop the vxdclid daemon:
      # /opt/VRTSsfmh/etc/vxdcli.sh stop
   c. To reject the patch if it is in the "APPLIED" state, use the following command, and then re-enable DMP native support:
      # installp -r VRTSvxvm 7.4.1.3300
   d. Reboot the system:
      # reboot
   e. If you stopped the vxdclid daemon earlier, restart it by using the following command:
      # /opt/VRTSsfmh/etc/vxdcli.sh start


SPECIAL INSTRUCTIONS
--------------------
The following vulnerabilities have been resolved in 7.4.1 Update 6 (VRTSpython 3.6.6.10):
CVE-2017-18342
CVE-2020-14343
CVE-2020-1747
Resolution: Upgraded PyYAML to version 5.4.1.
CVE-2019-10160
CVE-2019-9636
CVE-2020-27619
Resolution: Patched the Python source code itself, as referenced in the corresponding BDSA advisories, to resolve the vulnerabilities.


OTHERS
------
NONE