infoscale-sol11_sparc-Patch-7.1.0.1100

 Basic information
Release type: Patch
Release date: 2019-07-15
OS update support: None
Technote: None
Documentation: None
Popularity: 4001 viewed
Download size: 85.04 MB
Checksum: 2043231334

 Applies to one or more of the following products:
InfoScale Availability 7.1 On Solaris 11 SPARC
InfoScale Enterprise 7.1 On Solaris 11 SPARC
InfoScale Foundation 7.1 On Solaris 11 SPARC
InfoScale Storage 7.1 On Solaris 11 SPARC

 Obsolete patches, incompatibilities, superseded patches, or other requirements:
None.

 Fixes the following incidents:
3916822, 3921280, 3977787, 3977852, 3977854, 3977990, 3977992, 3977993, 3977996, 3978000, 3978001, 3978003, 3978183, 3978186, 3978188, 3978189, 3978197, 3978199, 3978200, 3978217, 3978219, 3978221, 3978222, 3978223, 3978225, 3978228, 3978233, 3978234, 3978239, 3978248, 3978306, 3978309, 3978310, 3978325, 3978328, 3978334, 3978340, 3978347, 3978357, 3978361, 3978370, 3978384, 3978602, 3978609, 3980027, 3980167, 3980168, 3980453, 3980962, 3981018, 3981436, 3981437, 3981552, 3981587

 Patch ID:
VRTSvxvm-7.1.0.4200
VRTSvxfs-7.1.0.4300
VRTSodm-7.1.0.3300
VRTSllt-7.1.0.1100
VRTSamf-7.1.0.1100

Readme file
                          * * * READ ME * * *
                       * * * InfoScale 7.1 * * *
                         * * * Patch 1100 * * *
                         Patch Date: 2019-07-12


This document provides the following information:

   * PATCH NAME
   * OPERATING SYSTEMS SUPPORTED BY THE PATCH
   * PACKAGES AFFECTED BY THE PATCH
   * BASE PRODUCT VERSIONS FOR THE PATCH
   * SUMMARY OF INCIDENTS FIXED BY THE PATCH
   * DETAILS OF INCIDENTS FIXED BY THE PATCH
   * INSTALLATION PRE-REQUISITES
   * INSTALLING THE PATCH
   * REMOVING THE PATCH


PATCH NAME
----------
InfoScale 7.1 Patch 1100


OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
Solaris 11 SPARC


PACKAGES AFFECTED BY THE PATCH
------------------------------
VRTSamf
VRTSllt
VRTSodm
VRTSvxfs
VRTSvxvm


BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
   * InfoScale Availability 7.1
   * InfoScale Enterprise 7.1
   * InfoScale Foundation 7.1
   * InfoScale Storage 7.1


SUMMARY OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
Patch ID: VRTSvxfs-7.1.0.4300
* 3978197 (3894712) ACL permissions are not inherited correctly on cluster 
file system.
* 3978228 (3928046) VxFS kernel panic BAD TRAP: type=34 in vx_assemble_hdoffset().
* 3978248 (3927419) System panic due to a race between freeze and unmount.
* 3978340 (3898565) Solaris no longer supports F_SOFTLOCK.
* 3978347 (3896920) Secondary mount could get into a deadlock.
* 3978357 (3911048) LDH corrupt and filesystem hang.
* 3978361 (3905576) CFS hang during a cluster wide freeze
* 3978370 (3941942) Unable to handle kernel NULL pointer dereference while freeing fiostats.
* 3978384 (3909583) Disable partitioning of directory if directory size is greater than upper 
threshold value.
* 3978602 (3908954) Some writes could be missed causing data loss.
* 3978609 (3921152) Performance drop caused by vx_dalloc_flush().
* 3980167 (3968785) VxFS module failed to load on Solaris 11.4.
* 3980962 (3931761) Cluster wide hang may be observed in case of high workload.
* 3981018 (3978615) VxFS filesystem is not getting mounted after OS upgrade and first reboot
* 3981552 (3919130) Failures observed while setting named attribute using
nxattrset command.
Patch ID: VRTSodm-7.1.0.3300
* 3980168 (3968788) ODM module failed to load on Solaris 11.4.
Patch ID: VRTSamf-7.1.0.1100
* 3981437 (3970679) Veritas Infoscale Availability does not support Oracle Solaris 11.4.
Patch ID: VRTSllt-7.1.0.1100
* 3981436 (3970679) Veritas Infoscale Availability does not support Oracle Solaris 11.4.
Patch ID: VRTSvxvm-7.1.0.4200
* 3916822 (3895415) Due to illegal memory access, nodes in the cluster can observe a panic.
* 3921280 (3904538) IO hang happens during slave node leave or master node switch because of a race 
between the RV (Replicate Volume) recovery SIO (Staged IO) and newly arriving IOs.
* 3977787 (3921994) Failure in the backup for a disk group; temporary files such as 
<DiskGroup>.bslist .cslist .perm are seen in the directory /var/temp.
* 3977852 (3900463) vxassist may fail to create a volume on disks having size in terabytes when '-o
ordered' clause is used along with 'col_switch' attribute while creating the volume.
* 3977854 (3895950) vxconfigd hang observed due to accessing a stale/uninitialized lock.
* 3977990 (3899198) VxDMP (Veritas Dynamic MultiPathing) causes system panic 
after a shutdown/reboot.
* 3977992 (3878153) VVR 'vradmind' daemon core dump.
* 3977993 (3932356) vxconfigd dumps core while importing a DG.
* 3977996 (3905030) System hang when installing/uninstalling VxVM (Veritas Volume Manager).
* 3978000 (3893150) VxDMP vxdmpadm native ls command sometimes doesn't report imported disks' 
pool names.
* 3978001 (3919902) vxdmpadm iopolicy switch command can fail, and standby paths are not honored 
by some iopolicies.
* 3978003 (3953523) vxdisk list does not show DMP-managed disks post reboot on a sol10 LDOM guest.
* 3978183 (3959091) Volume is in disabled state post reboot on a sol10 LDOM guest.
* 3978186 (3898069) System panic may happen in dmp_process_stats routine.
* 3978188 (3878030) Enhance VxVM DR tool to clean up OS and VxDMP device trees without user 
interaction.
* 3978189 (3879972) vxconfigd core dumps when NVME boot devices are attached to the system
* 3978199 (3891252) vxconfigd segmentation fault affecting the cluster related processes.
* 3978200 (3935974) When a client process shuts down abruptly or resets the connection while 
communicating with the vxrsyncd daemon, the vxrsyncd daemon may terminate.
* 3978217 (3873123) If a disk with a CDS EFI label is used as a remote disk on a cluster node, 
restarting the vxconfigd daemon on that node causes vxconfigd to go into the disabled state.
* 3978219 (3919559) IO hangs after pulling out all cables, when VVR is reconfigured.
* 3978221 (3892787) Enabling DMP (Dynamic Multipathing) native support should not import the exported 
zpools present on the system.
* 3978222 (3907034) The mediatype is not shown as ssd in the vxdisk -e list command for SSD 
(solid state) devices.
* 3978223 (3958878) vxrecover dumps core on system reboot.
* 3978225 (3948140) System panic can occur if size of RTPG (Report Target Port Groups) data returned
by underlying array is greater than 255.
* 3978233 (3879324) VxVM DR tool fails to handle busy device problem while LUNs are removed from  OS
* 3978234 (3795622) With Dynamic Multipathing (DMP) Native Support enabled, Logical Volume Manager
(LVM) global_filter is not updated properly in lvm.conf file.
* 3978239 (3943519) System failed to start up with error "Cannot find device number for dmpbootdev".
* 3978306 (3945411) System wasn't able to boot after enabling DMP native support for ZFS boot 
devices
* 3978309 (3906119) Failback didn't happen when the optimal path returned back in a cluster 
environment.
* 3978310 (3921668) vxrecover command with -m option fails when executed on the slave
nodes.
* 3978325 (3922159) Thin reclamation may fail on Xtremio SSD disks.
* 3978328 (3931936) VxVM (Veritas Volume Manager) command hang on master node after 
restarting slave node.
* 3978334 (3932246) vxrelayout operation fails to complete.
* 3980027 (3921572) vxconfigd dumps core during cable pull test.
* 3980453 (3964779) Changes to support Solaris 11.4 with Volume Manager
* 3981587 (3918356) zpools are imported automatically when DMP(Dynamic Multipathing) native support is set to on which may lead to zpool corruption.


DETAILS OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
This patch fixes the following incidents:

Patch ID: VRTSvxfs-7.1.0.4300

* 3978197 (Tracking ID: 3894712)

SYMPTOM:
ACL permissions are not inherited correctly on cluster file system.

DESCRIPTION:
The ACL counts stored on a directory inode get reset every time ownership of the 
directory inode is switched between the nodes. When ownership of the directory 
inode comes back to the node that previously gave it up, ACL permissions were not 
being inherited correctly for newly created files.

RESOLUTION:
Modified the source such that the ACLs are inherited correctly.

* 3978228 (Tracking ID: 3928046)

SYMPTOM:
VxFS panics with a stack like the one below, due to a misaligned memory address:
void vxfs:vx_assemble_hdoffset+0x18
void vxfs:vx_assemble_opts+0x8c
void vxfs:vx_assemble_rwdata+0xf4
void vxfs:vx_gather_rwdata+0x58
void vxfs:vx_rwlock_putdata+0x2f8
void vxfs:vx_glm_cbfunc+0xe4
void vxfs:vx_glmlist_thread+0x164
unix:thread_start+4

DESCRIPTION:
The panic happened while copying piggyback data from the inode to the data buffer 
for the rwlock under revoke processing. After some data had been copied to the 
data buffer, the destination reached a 32-bit aligned address, but the next value 
(the large directory freespace offset), which is defined as a 64-bit data type, 
was being accessed directly at that address. This caused a system panic because 
the memory address was not aligned.

RESOLUTION:
The code is changed to copy the data to the 32-bit aligned address through bcopy() 
rather than accessing it directly.
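
For illustration, here is a minimal, generic C sketch of the pattern behind this fix (it is 
not the VxFS source; the buffer layout and the function name append_hdoffset are hypothetical). 
It shows why a 64-bit value must be copied into a possibly unaligned buffer offset with 
bcopy()/memcpy() instead of a direct pointer store, which can raise the BAD TRAP shown above 
on SPARC:

#include <stdint.h>
#include <string.h>   /* memcpy(); bcopy() from <strings.h> behaves the same way on Solaris */

/* Hypothetical example: append a 64-bit offset to a packed message buffer.
 * 'pos' is only guaranteed to be 4-byte aligned, not 8-byte aligned. */
void append_hdoffset(char *buf, size_t pos, uint64_t dir_freespace_off)
{
    /* Broken on SPARC: an 8-byte store to a 4-byte-aligned address
     * raises a "memory address not aligned" trap (BAD TRAP type=34):
     *
     *   *(uint64_t *)(buf + pos) = dir_freespace_off;
     */

    /* Fixed: a byte-wise copy has no alignment requirement. */
    memcpy(buf + pos, &dir_freespace_off, sizeof(dir_freespace_off));
}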

* 3978248 (Tracking ID: 3927419)

SYMPTOM:
System panicked with this stack:
- vxg_api_range_unlockwf
- vx_glm_range_unlock
- vx_imntunlock
- vx_do_unmount
- vx_unmount
- generic_shutdown_super
- kill_block_super 
- vx_kill_sb
- amf_kill_sb 
- deactivate_super
- mntput_no_expire
- sys_umount

DESCRIPTION:
This is a race between a freeze operation and an unmount of a disabled filesystem. The 
freeze converts the GLM locks of the mount point directory inode. At the same time, the 
unmount thread may be unlocking the same lock while it is in the middle of the lock 
conversion, which causes the panic.

RESOLUTION:
The lock conversion on the mount point directory inode is unnecessary, so skip it.

* 3978340 (Tracking ID: 3898565)

SYMPTOM:
System panicked with this stack:
- panicsys
- panic_common
- panic
- segshared_fault
- as_fault
- vx_memlock
- vx_dio_zero
- vx_dio_rdwri
- vx_dio_read
- vx_read_common_inline
- vx_read_common_noinline
- vx_read1
- vx_read
- fop_read

DESCRIPTION:
Solaris no longer supports F_SOFTLOCK. The vx_memlock() uses F_SOFTLOCK to fault
in the page.

RESOLUTION:
Change vxfs code to avoid using F_SOFTLOCK.

* 3978347 (Tracking ID: 3896920)

SYMPTOM:
Secondary mount could hang if VX_GETEMAPMSG is sent around the same time.

DESCRIPTION:
This fixes the deadlock between the secondary mount and VX_GETEMAPMSG. The secondary mount 
messages (VX_FSADM_QUERY_MSG, VX_DEVICE_QUERY_MSG, VX_FDD_ADV_TBL_MSG) stay on the same 
priority level as VX_GETEMAPMSG, which can cause a deadlock: the mount messages are sent 
while the FS is frozen, so any message (such as VX_GETEMAPMSG) that depends on them can 
hang waiting for them, while the mount-time messages in turn hang waiting for the primary 
to respond, and the primary has sent a VX_GETEMAPMSG broadcast and is itself waiting for 
the node mounting the FS to respond.

RESOLUTION:
The secondary mount messages are moved to the priority-13 queue. Since the messages there 
take the mayfrz lock, they will not collide with these messages. The only exception is 
VX_EDELEMODEMSG, but it does not wait for the response, so it is safe.

* 3978357 (Tracking ID: 3911048)

SYMPTOM:
An LDH bucket validation failure message is logged and the system hangs.

DESCRIPTION:
When modifying a large directory, VxFS needs to find a new bucket in the LDH for this 
directory, and once a bucket is full it is split to obtain more buckets to use. When a 
bucket has been split the maximum number of times, an overflow bucket is allocated. 
Under some conditions, the lookup for an available slot in the overflow bucket may 
return an incorrect result and overwrite an existing bucket entry, thus corrupting the 
LDH file. Another problem is that when the bucket invalidation fails, the bucket buffer 
is released without checking whether the buffer is already part of a previous 
transaction; this may cause the transaction flush thread to hang and finally stall the 
whole filesystem.

RESOLUTION:
Corrected the LDH bucket entry change code to avoid the corruption, and released the 
bucket buffer without throwing it out of memory, to avoid blocking the transaction 
flush.

* 3978361 (Tracking ID: 3905576)

SYMPTOM:
Cluster file system hangs. On one node, all worker threads are blocked due to file
system freeze. And there's another thread blocked with stack like this:

- __schedule
- schedule
- vx_svar_sleep_unlock
- vx_event_wait
- vx_olt_iauinit
- vx_olt_iasinit
- vx_loadv23_fsetinit
- vx_loadv23_fset
- vx_fset_reget
- vx_fs_reinit
- vx_recv_cwfa
- vx_msg_process_thread
- vx_kthread_init

DESCRIPTION:
The frozen CFS won't thaw because the mentioned thread is waiting for a work item
to be processed in vx_olt_iauinit(). Since all the worker threads are blocked,
there is no free thread to process this work item.

RESOLUTION:
Change the code in vx_olt_iauinit(), so that the work item will be processed even
with all worker threads blocked.

* 3978370 (Tracking ID: 3941942)

SYMPTOM:
If a filesystem is created with fiostats enabled and odmwrites are in progress, 
forcefully unmounting the filesystem can panic the system with the stack below:

crash_kexec
oops_end
no_context
__bad_area_nosemaphore
bad_area_nosemaphore
__do_page_fault
do_page_fault
vx_fiostats_free
fdd_chain_inactive_common
fdd_chain_inactive
fdd_odm_close
odm_vx_close
odm_fcb_free
odm_fcb_rel
odm_ident_close
odm_exit
odm_tsk_daemon_deathlist
odm_tsk_daemon
odm_kthread_init
kernel_thread

DESCRIPTION:
When freeing the fiostats assigned to an inode during a forced unmount of the 
filesystem, the fs field has to be validated. Otherwise we may end up dereferencing a 
NULL pointer in the checks in this codepath, which panics the system.

RESOLUTION:
Code is modified to add checks that validate fs in such force-unmount scenarios.

* 3978384 (Tracking ID: 3909583)

SYMPTOM:
Disable partitioning of directory if directory size is greater than upper threshold value.

DESCRIPTION:
If PD (partitioned directories) is enabled during mount, the mount may take a long time 
to complete, because it tries to partition all the directories and hence looks hung. To 
avoid such hangs, a new upper threshold value for PD is added which disables 
partitioning of a directory if the directory size is above that value.

RESOLUTION:
Code is modified to disable partitioning of directory if directory size is greater than 
the upper threshold value.

* 3978602 (Tracking ID: 3908954)

SYMPTOM:
Whilst performing vectored writes using writev(), where two iovec-writes write to 
different offsets within the same 4KB page-aligned range of a file, it is possible to 
find null data at the beginning of the 4KB range when reading the data back.

DESCRIPTION:
Whilst multiple processes are performing vectored writes to a file using writev(), the 
following situation can occur.

We have two iovecs; the first is 448 bytes and the second is 30000 bytes. The first 
iovec of 448 bytes completes, however the second iovec finds that the source page is no 
longer in memory. As it cannot fault-in during uiomove, it has to undo both iovecs. It 
then faults the page back in and retries the second iovec only. However, as the undo 
operation also undid the first iovec, the first 448 bytes of the page are populated with 
nulls. When reading the file back, it seems that no data was written for the first 
iovec. Hence, we find nulls in the file.

RESOLUTION:
The code has been changed to handle the unwind of multiple iovecs correctly in 
scenarios where some data has been written from one iovec and some from another.
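
For reference, a small user-space C sketch of the kind of vectored write the description 
discusses, using the two iovec sizes (448 and 30000 bytes) given above. The file path is 
hypothetical; this only reproduces the write pattern, not the fault-in race itself:

#include <sys/uio.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* Two iovecs, 448 and 30000 bytes, as in the scenario above. */
    static char a[448], b[30000];
    memset(a, 'A', sizeof(a));
    memset(b, 'B', sizeof(b));

    struct iovec iov[2] = {
        { .iov_base = a, .iov_len = sizeof(a) },
        { .iov_base = b, .iov_len = sizeof(b) },
    };

    /* Hypothetical path on a VxFS mount. */
    int fd = open("/mnt/vxfs/testfile", O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    /* With the fix, a partial fault-in no longer leaves the first
     * 448 bytes of the page as nulls when the write is retried. */
    ssize_t n = writev(fd, iov, 2);
    if (n < 0) perror("writev");
    else printf("wrote %zd bytes\n", n);

    close(fd);
    return 0;
}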

* 3978609 (Tracking ID: 3921152)

SYMPTOM:
Performance drop. Core dump shows threads doing vx_dalloc_flush().

DESCRIPTION:
An implicit typecast error in vx_dalloc_flush() can cause this performance issue.

RESOLUTION:
The code is modified to do an explicit typecast.

* 3980167 (Tracking ID: 3968785)

SYMPTOM:
VxFS module failed to load on Solaris 11.4.

DESCRIPTION:
The VxFS module failed to load on Solaris 11.4 release, due to the kernel level changes in 11.4 kernel.

RESOLUTION:
Added VxFS support for Solaris 11.4 release.

* 3980962 (Tracking ID: 3931761)

SYMPTOM:
A cluster-wide hang may be observed in a race scenario if a freeze gets initiated while there are multiple pending lazy isize update workitems in the worklist.

DESCRIPTION:
If the lazy_isize_enable tunable is set to ON and the 'ls -l' command is executed frequently from a non-writing node of the cluster, a huge number of workitems accumulate to be processed by the worker threads. If any workitem with active level 1 is enqueued after these workitems and a cluster-wide freeze gets initiated, it leads to a deadlock: the worker threads get exhausted processing the lazy isize update workitems, and the workitem with active level 1 never gets processed. This causes the cluster to stop responding.

RESOLUTION:
VxFS has been updated to discard the blocking lazy isize update workitems if freeze is in progress.

* 3981018 (Tracking ID: 3978615)

SYMPTOM:
VxFS filesystem is not getting mounted after OS upgrade and first reboot

DESCRIPTION:
The vxfs-modload service is not getting called before the local_fs service. The vxfs-modload service is used to replace the appropriate kernel modules after an OS upgrade and reboot. The VxFS device files are also not configured properly after the system boots on Solaris 11.4. Due to this, the filesystem is not mounted after the OS upgrade and first reboot.

RESOLUTION:
Code changes are done in service file.

* 3981552 (Tracking ID: 3919130)

SYMPTOM:
The following failure may be observed while setting named attribute
using nxattrset command:
INFO: V-3-28410:  nxattrset failed for <filename> with error 48

DESCRIPTION:
On the Solaris platform, the user-land structures are 32-bit and the kernel structures 
are 64-bit. The transition from user space to kernel space reads attribute information 
from the 32-bit structures. There is a flag field in the 64-bit structure that is used 
for further processing, but due to the missing flag field in the 32-bit structure, the 
corresponding flag field of the 64-bit structure does not get initialized and hence 
contains a garbage value. This leads to the failure of the nxattrset command.

RESOLUTION:
The required flag field has been introduced in the 32-bit structure.
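
A minimal, generic C sketch of the pattern described above (the structure and field names 
are hypothetical, not the VxFS ioctl structures): when a 32-bit user-land structure is 
widened into a 64-bit kernel structure, every kernel-side field needs a defined source, 
otherwise it carries garbage:

#include <stdint.h>
#include <string.h>

/* Hypothetical 32-bit user-land layout. */
struct nxattr_args32 {
    uint32_t name_ptr;
    uint32_t value_ptr;
    uint32_t value_len;
    uint32_t flags;      /* the field the fix introduces; previously absent in the 32-bit layout */
};

/* Hypothetical 64-bit kernel layout, which always had a flags field. */
struct nxattr_args64 {
    uint64_t name_ptr;
    uint64_t value_ptr;
    uint64_t value_len;
    uint64_t flags;
};

void copyin_args(struct nxattr_args64 *k, const struct nxattr_args32 *u)
{
    /* Zero the kernel structure first so no field is left holding stack garbage. */
    memset(k, 0, sizeof(*k));
    k->name_ptr  = u->name_ptr;
    k->value_ptr = u->value_ptr;
    k->value_len = u->value_len;
    k->flags     = u->flags;   /* before the fix there was no source for this field */
}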

Patch ID: VRTSodm-7.1.0.3300

* 3980168 (Tracking ID: 3968788)

SYMPTOM:
ODM module failed to load on Solaris 11.4.

DESCRIPTION:
The ODM module failed to load on Solaris 11.4 release, due to the kernel level changes in 11.4.

RESOLUTION:
Added ODM support for Solaris 11.4 release.

Patch ID: VRTSamf-7.1.0.1100

* 3981437 (Tracking ID: 3970679)

SYMPTOM:
Veritas Infoscale Availability does not support Oracle Solaris 11.4.

DESCRIPTION:
Veritas Infoscale Availability does not support Oracle Solaris versions later 
than 11.3.

RESOLUTION:
Veritas Infoscale Availability now supports Oracle Solaris 11.4.

Patch ID: VRTSllt-7.1.0.1100

* 3981436 (Tracking ID: 3970679)

SYMPTOM:
Veritas Infoscale Availability does not support Oracle Solaris 11.4.

DESCRIPTION:
Veritas Infoscale Availability does not support Oracle Solaris versions later 
than 11.3.

RESOLUTION:
Veritas Infoscale Availability now supports Oracle Solaris 11.4.

Patch ID: VRTSvxvm-7.1.0.4200

* 3916822 (Tracking ID: 3895415)

SYMPTOM:
Panic can occur on the nodes in the cluster all of a sudden. The following stack is 
seen as part of the thread list:

bcopy()
cvm_dc_hashtable_clear_udidentry()
vol_test_connectivity()
volconfig_ioctl()
volsioctl_real()
spec_ioctl()
fop_ioctl()
ioctl()
_syscall_no_proc_exit32()

DESCRIPTION:
The code accesses memory that is not allocated to the pointer and tries to copy it to 
another pointer. Since illegal memory is being accessed, this can lead to a sudden 
panic. The issue occurs when the UDID of the device is less than 377.

RESOLUTION:
The fix is to avoid accessing illegal memory through the pointer in the code.

* 3921280 (Tracking ID: 3904538)

SYMPTOM:
RV(Replicate Volume) IO hang happens during slave node leave or master node 
switch.

DESCRIPTION:
The RV IO hang happens because the SRL (Storage Replicator Log) header is updated by 
the RV recovery SIO. After a slave node leaves or the master node is switched, RV 
recovery may be initiated. During RV recovery, all newly arriving IOs should be 
quiesced by setting the NEED RECOVERY flag on the RV to avoid racing. Due to a code 
defect, this flag was removed by a transaction commit, resulting in a conflict between 
the new IOs and the RV recovery SIO.

RESOLUTION:
Code changes have been made to fix this issue.

* 3977787 (Tracking ID: 3921994)

SYMPTOM:
Temporary files such as <DiskGroup>.bslist .cslist .perm are seen in the 
directory /var/temp.

DESCRIPTION:
When disks are added to and removed from a disk group in the interval between two 
backups, a failure in the next backup of the same disk group is observed, which is why 
the files are left behind in the directory.

RESOLUTION:
Corrected the syntax errors in the code, to handle the vxconfigbackup issue.

* 3977852 (Tracking ID: 3900463)

SYMPTOM:
vxassist may fail to create a volume on disks having size in terabytes when '-o
ordered' clause is used along with 'col_switch' attribute while creating the volume.

Following error may be reported:
VxVM vxvol ERROR V-5-1-1195 Volume <volume-name> has more than one associated
sparse plex but no complete plex

DESCRIPTION:
The problem is seen especially when a user attempts to create a volume on large-sized 
disks using the '-o ordered' option along with the 'col_switch' attribute. 
The error reports the plex as sparse because the plex length is calculated incorrectly 
in the code, due to an integer overflow of the variable which handles the col_switch 
attribute.

RESOLUTION:
The code is fixed to avoid the integer overflow.
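
A minimal C sketch of this overflow class (the numbers and variable names are hypothetical, 
not the vxassist source): with terabyte-scale sizes expressed in sectors, a length computed 
in 32-bit arithmetic wraps, which is the kind of miscalculation that makes a plex look 
sparse; doing the arithmetic in 64 bits avoids it:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Hypothetical numbers: a 2 TB column expressed in 512-byte sectors. */
    uint64_t col_len_sectors = 2ULL * 1024 * 1024 * 1024 * 1024 / 512;  /* 4294967296 sectors */
    uint32_t ncols = 4;

    /* 32-bit arithmetic wraps: the value does not fit in 32 bits. */
    uint32_t plex_len_bad  = (uint32_t)col_len_sectors * ncols;

    /* 64-bit arithmetic keeps the real value. */
    uint64_t plex_len_good = col_len_sectors * ncols;

    printf("32-bit: %u sectors, 64-bit: %llu sectors\n",
           plex_len_bad, (unsigned long long)plex_len_good);
    return 0;
}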

* 3977854 (Tracking ID: 3895950)

SYMPTOM:
vxconfigd hang may be observed all of a sudden. The following stack will be 
seen as part of threadlist:
slock()
.disable_lock()
volopen_isopen_cluster()
vol_get_dev()
volconfig_ioctl()
volsioctl_real()
volsioctl()
vols_ioctl()
rdevioctl()
spec_ioctl()
vnop_ioctl()
vno_ioctl()
common_ioctl(??, ??, ??, ??)

DESCRIPTION:
Some critical structures in the code are protected with a lock to avoid simultaneous 
modification. A particular structure that embeds a lock gets copied to local stack 
memory. The copied structure carries information about the state of the lock, and at 
the time of the copy that lock might be in an intermediate state. When a function then 
tries to operate on this copied lock structure, the result can be a panic or a hang, 
since that lock structure might be in an unknown state.

RESOLUTION:
Once a local copy of the structure is made, no one is going to modify the new local 
structure, and hence acquiring the lock is not required while accessing this copy.
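
A minimal C/pthreads sketch of the pattern the resolution describes (the structure and 
names are hypothetical, not the vxconfigd code): copy the shared structure while holding 
its lock, then read only the private copy afterwards, and never use the lock embedded in 
the copy:

#include <pthread.h>

/* Hypothetical shared record that embeds its own lock. */
struct dev_state {
    pthread_mutex_t lock;
    int             open_count;
    int             flags;
};

struct dev_state shared = { PTHREAD_MUTEX_INITIALIZER, 0, 0 };

int snapshot_open_count(void)
{
    struct dev_state copy;

    /* Take the shared lock only while copying. */
    pthread_mutex_lock(&shared.lock);
    copy = shared;                      /* also copies the lock bytes, possibly mid-state */
    pthread_mutex_unlock(&shared.lock);

    /* Do NOT call pthread_mutex_lock(&copy.lock): the copied lock may be in an
     * intermediate/unknown state. The private copy cannot change underneath us,
     * so its data fields can be read without any locking. */
    return copy.open_count;
}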

* 3977990 (Tracking ID: 3899198)

SYMPTOM:
VxDMP causes system panic after a shutdown or reboot and displays the following stack 
trace:
vpanic
volinfo_ioctl()
volsioctl_real()
ldi_ioctl()
dmp_signal_vold()
dmp_throttle_paths()
dmp_process_stats()
dmp_daemons_loop()
thread_start()

DESCRIPTION:
In a special scenario of system shutdown/reboot, the DMP
(Dynamic MultiPathing) restore daemon tries to call the ioctl functions
in VXIO module which is being unloaded and this causes system panic.

RESOLUTION:
The code is modified to stop the DMP I/O restore daemon 
before system shutdown/reboot.

* 3977992 (Tracking ID: 3878153)

SYMPTOM:
VVR (Veritas Volume Replicator) 'vradmind' daemon core dump with the following stack:

#0  __kernel_vsyscall ()
#1  raise () from /lib/libc.so.6
#2  abort () from /lib/libc.so.6
#3  __libc_message () from /lib/libc.so.6
#4  malloc_printerr () from /lib/libc.so.6
#5  _int_free () from /lib/libc.so.6
#6  free () from /lib/libc.so.6
#7  operator delete(void*) () from /usr/lib/libstdc++.so.6
#8  operator delete[](void*) () from /usr/lib/libstdc++.so.6
#9  inIpmHandle::~IpmHandle (this=0x838a1d8,
    __in_chrg=<optimized out>) at Ipm.C:2946
#10 IpmHandle::events (handlesp=0x838ee80, vlistsp=0x838e5b0,
    ms=100) at Ipm.C:644
#11 main (argc=1, argv=0xffffd3d4) at srvmd.C:703

DESCRIPTION:
Under certain circumstances the 'vradmind' daemon may dump core while freeing a 
variable allocated on the stack.

RESOLUTION:
Code change has been done to address the issue.

* 3977993 (Tracking ID: 3932356)

SYMPTOM:
In a two node cluster vxconfigd dumps core while importing the DG -

 dapriv_da_alloc ()
 in setup_remote_disks ()
 in volasym_remote_getrecs ()
 req_dg_import ()
 vold_process_request ()
 start_thread () from /lib64/libpthread.so.0
 from /lib64/libc.so.6

DESCRIPTION:
vxconfigd is dumping core due to an address alignment issue.

RESOLUTION:
The alignment issue is fixed.

* 3977996 (Tracking ID: 3905030)

SYMPTOM:
System hang when installing/uninstalling VxVM, with the stack below:

genunix:cv_wait+0x3c()
genunix:ndi_devi_enter+0x54()
genunix:devi_config_one+0x114()
genunix:ndi_devi_config_one+0xd0()
genunix:resolve_pathname_noalias+0x244()
genunix:resolve_pathname+0x10()
genunix:ldi_vp_from_name+0x100()
genunix:ldi_open_by_name+0x40()
vxio:vol_ldi_init+0x60()
vxio:vol_attach+0x5c()

Or
genunix:cv_wait+0x38
genunix:ndi_devi_enter
genunix:devi_config_one
genunix:ndi_devi_config_one
genunix:resolve_pathname_noalias
genunix:resolve_pathname
genunix:ldi_vp_from_name
genunix:ldi_open_by_name
vxdmp:dmp_setbootdev
vxdmp:dmp_attach

DESCRIPTION:
According to Oracle, ldi_open_by_name should not be called from a device's
attach, detach, 
or power entry point. This could result in a system crash or deadlock.

RESOLUTION:
Code changes have been done to avoid calling ldi_open_by_name during device
attach.

* 3978000 (Tracking ID: 3893150)

SYMPTOM:
VxDMP(Veritas Dynamic Multi-Pathing) vxdmpadm native ls command sometimes 
doesn't report imported disks' pool name

DESCRIPTION:
When a Solaris pool is imported with extra options like -d or -R, the paths in 
'zpool status <pool name>' can be full disk paths. The 'vxdmpadm native ls' command 
doesn't handle such a situation and hence fails to report the pool name.

RESOLUTION:
Code changes have been made to correctly handle full disk paths when looking up the 
pool name.

* 3978001 (Tracking ID: 3919902)

SYMPTOM:
VxVM(Veritas Volume Manager) vxdmpadm iopolicy switch command may not work. 
When issue happens, vxdmpadm setattr iopolicy finishes without any error, 
but subsequent vxdmpadm getattr command shows iopolicy is not correctly 
updated:
# vxdmpadm getattr enclosure emc-vplex0 iopolicy
ENCLR_NAME     DEFAULT        CURRENT
============================================
emc-vplex0     Balanced       Balanced
# vxdmpadm setattr arraytype VPLEX-A/A iopolicy=minimumq
# vxdmpadm getattr enclosure emc-vplex0 iopolicy
ENCLR_NAME     DEFAULT        CURRENT
============================================
emc-vplex0     Balanced       Balanced

Also, standby paths are not honored by some iopolicies (for example, the balanced 
iopolicy). Read/write IOs against standby paths can be seen with the vxdmpadm iostat 
command.

DESCRIPTION:
The array's iopolicy field becomes stale when the vxdmpadm setattr arraytype iopolicy 
command is used, so when the iopolicy is later set back to the stale value, it does not 
actually take effect. Also, when paths are evaluated for issuing IOs, the standby flag 
is not taken into consideration, so standby paths are used for read/write IOs.

RESOLUTION:
Code changes have been done to address these issues.

* 3978003 (Tracking ID: 3953523)

SYMPTOM:
vxdisk list not showing DMP(Dynamic Multi-Pathing) managed disks post reboot on 
sol10 LDOM guest.

DESCRIPTION:
Due to some dependency issues between the LDOM drivers and the DMP driver, DMP failed
to manage all the devices.

RESOLUTION:
Code changes have been done to refresh OS device tree before startup.

* 3978183 (Tracking ID: 3959091)

SYMPTOM:
Volume was in disabled state post reboot on a sol10 LDOM guest.

DESCRIPTION:
Due to some dependency issues between the LDOM drivers and the DMP (Dynamic Multi-Pathing) driver, DMP was not able to manage all devices when starting volumes during system boot. If any of those devices was under a volume, that volume might be marked disabled before startup; hence the issue.

RESOLUTION:
Code changes have been done to reattach devices and restart volume before startup.

* 3978186 (Tracking ID: 3898069)

SYMPTOM:
System panic may happen in dmp_process_stats routine with the following stack:

dmp_process_stats+0x471/0x7b0 
dmp_daemons_loop+0x247/0x550 
kthread+0xb4/0xc0
ret_from_fork+0x58/0x90

DESCRIPTION:
When aggregating the pending IOs per DMP path over all CPUs, an out-of-bounds access 
happened due to a wrong index into the statistics table, which could cause a system 
panic.

RESOLUTION:
Code changes have been done to correct the wrong index.
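
A minimal C sketch of the aggregation described above (the table layout, sizes, and names 
are hypothetical, not the DMP statistics code): the per-CPU statistics table has to be 
indexed with the CPU id in one dimension and the path index in the other, and both indices 
must stay within their declared bounds:

#include <stdint.h>

#define MAX_CPUS   256   /* hypothetical table sizes */
#define MAX_PATHS  128

/* pending_io[cpu][path]: per-CPU count of pending IOs per DMP path. */
static uint64_t pending_io[MAX_CPUS][MAX_PATHS];

/* Sum the pending IOs for one path across all CPUs. */
uint64_t pending_for_path(int ncpus, int path)
{
    uint64_t total = 0;

    if (path < 0 || path >= MAX_PATHS)      /* reject an out-of-range path index */
        return 0;
    if (ncpus > MAX_CPUS)                   /* clamp to the table's CPU dimension */
        ncpus = MAX_CPUS;

    for (int cpu = 0; cpu < ncpus; cpu++)
        total += pending_io[cpu][path];     /* swapping cpu/path here would read out of bounds */

    return total;
}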

* 3978188 (Tracking ID: 3878030)

SYMPTOM:
Enhance VxVM(Veritas Volume Manager) DR(Dynamic Reconfiguration) tool to 
clean up OS and VxDMP(Veritas Dynamic Multi-Pathing) device trees without 
user interaction.

DESCRIPTION:
When users add or remove LUNs, stale entries in the OS or VxDMP device trees can 
prevent VxVM from discovering the changed LUNs correctly. Under certain conditions it 
even causes the VxVM vxconfigd process to dump core, and users have to reboot the 
system to restart vxconfigd.
VxVM has a DR tool to help users add or remove LUNs properly, but it requires user 
input during operations.

RESOLUTION:
Enhancement has been done to VxVM DR tool. It accepts '-o refresh' option to 
clean up OS and VxDMP device trees without user interaction.

* 3978189 (Tracking ID: 3879972)

SYMPTOM:
vxconfigd core dumps when NVME boot devices are attached to the system with the following stack:
ddl_get_disk_given_path()
ddl_reconfigure_all()
ddl_find_devices_in_system()
find_devices_in_system()
mode_set()
setup_mode()
startup()
main()
_start()

DESCRIPTION:
The core dump happens because of a NULL pointer dereference while accessing the NVME boot devices during device reconfiguration. The issue occurs due to a wrong calculation of 
the device number used for opening the NVME devices, because of which the open fails on the NVME boot device. At a later stage, removal of the NVME boot device fails because of 
the earlier open failure, which causes the reconfiguration failure.

RESOLUTION:
Code changes have been made to properly calculate the device number used for opening the NVME boot device.

* 3978199 (Tracking ID: 3891252)

SYMPTOM:
vxconfigd dumps core and the following message is seen in syslog:
vxconfigd[8121]: segfault at 10 
ip 00000000004b85c1 sp 00007ffe98d64c50 error 4 in vxconfigd[400000+337000]
vxconfigd core dump is seen with following stack:
#0  0x00000000004b85c1 in dbcopy_open ()
#1  0x00000000004cad6f in dg_config_read ()
#2  0x000000000052cf15 in req_dg_get_dalist ()
#3  0x00000000005305f0 in request_loop ()
#4  0x000000000044f606 in main ()

DESCRIPTION:
The issue is seen because a NULL pointer for the configuration copy was not handled 
while opening the configuration copy, leading to a vxconfigd segmentation fault.

RESOLUTION:
The code has been modified to handle the scenario where a NULL pointer for the 
configuration copy is used for opening the copy.

* 3978200 (Tracking ID: 3935974)

SYMPTOM:
While communicating with a client process, the vxrsyncd daemon terminates; after some 
time it gets started again, or it may require a reboot to start.

DESCRIPTION:
When the client process shuts down abruptly and the vxrsyncd daemon attempts to write 
to the client socket, a SIGPIPE signal is generated. The default action for this 
signal is to terminate the process; hence vxrsyncd gets terminated.

RESOLUTION:
The SIGPIPE signal is now handled in order to prevent the termination of 
vxrsyncd.
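
A minimal C sketch of the kind of handling the resolution refers to (the function names and 
socket are hypothetical, not the vxrsyncd source): ignore SIGPIPE so that a write() to a 
socket the client has already closed fails with EPIPE instead of terminating the daemon:

#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Call once at daemon startup. */
static void ignore_sigpipe(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = SIG_IGN;            /* default action would kill the process */
    sigaction(SIGPIPE, &sa, NULL);
}

/* Write to the client socket; survive the client going away. */
static int send_reply(int client_fd, const char *buf, size_t len)
{
    ssize_t n = write(client_fd, buf, len);
    if (n < 0 && errno == EPIPE) {      /* peer closed: log and drop the connection */
        fprintf(stderr, "client closed connection\n");
        return -1;
    }
    return (n < 0) ? -1 : 0;
}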

* 3978217 (Tracking ID: 3873123)

SYMPTOM:
When the remote disk on a node is an EFI disk, vold enable fails.
The following message gets logged, eventually causing vxconfigd to go into the 
disabled state:
Kernel and on-disk configurations don't match; transactions are disabled.

DESCRIPTION:
This is because one of the cases of an EFI remote disk is not properly handled
in the disk recovery part when vxconfigd is enabled.

RESOLUTION:
Code changes have been made to set the EFI flag on the darec in the recovery code.

* 3978219 (Tracking ID: 3919559)

SYMPTOM:
IO hangs after pulling out all cables, when VVR (Veritas Volume Replicator) is 
reconfigured.

DESCRIPTION:
When VVR is configured and the SRL (Storage Replicator Log) batch feature is enabled, 
if more than one IO gets queued in VVR before a header error after all cables are 
pulled out, then due to a bug in VVR at least one IO won't be handled; hence the issue.

RESOLUTION:
Code has been modified so that every queued IO in VVR is handled properly.

* 3978221 (Tracking ID: 3892787)

SYMPTOM:
Enabling DMP native support should not import the exported zpools present on the 
system.

DESCRIPTION:
When DMP native support is enabled through "vxdmpadm settune dmp_native_support=on", 
it tries to migrate and import the zpools that are currently exported on the host. 
This is not the expected behavior; pools that were exported before enabling DMP native 
support should not be imported.

RESOLUTION:
Code changes have been made so as not to import the zpools that were exported on the 
host before enabling DMP native support.

* 3978222 (Tracking ID: 3907034)

SYMPTOM:
The mediatype is not shown as ssd in the vxdisk -e list command for SSD (solid state) 
devices.

DESCRIPTION:
Some of the SSD devices do not have an ASL (Array Support Library) to claim them and 
are claimed as JBOD (Just a Bunch Of Disks). In this case, since there is no ASL, the 
attributes of the device, such as mediatype, are not known. This is the reason the 
mediatype is not shown in the vxdisk -e list output.

RESOLUTION:
The code now checks the value stored in the file 
/sys/block/<device>/queue/rotational, which signifies whether the device is an SSD or 
not, to detect the mediatype.
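
A minimal C sketch of the check described above (the sysfs path comes from the 
description; device name and error handling are illustrative): read 
/sys/block/<device>/queue/rotational, where a value of 0 means the device reports 
itself as non-rotational (SSD):

#include <stdio.h>

/* Returns 1 if the device is an SSD (rotational == 0),
 * 0 if it is rotational, -1 if the attribute cannot be read. */
int is_ssd(const char *device)     /* e.g. "sda" */
{
    char path[256];
    snprintf(path, sizeof(path), "/sys/block/%s/queue/rotational", device);

    FILE *f = fopen(path, "r");
    if (f == NULL)
        return -1;

    int rotational = -1;
    if (fscanf(f, "%d", &rotational) != 1)
        rotational = -1;
    fclose(f);

    if (rotational < 0)
        return -1;
    return rotational == 0;
}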

* 3978223 (Tracking ID: 3958878)

SYMPTOM:
vxrecover core dumps on system reboot with following stack:
strncpy()
dg_set_current_id()
dgreconnect()
main()

DESCRIPTION:
While the system boots, vxrecover dumps core due to accessing an uninitialized disk group record.

RESOLUTION:
Changes are done in vxrecover to fix the core dump.

* 3978225 (Tracking ID: 3948140)

SYMPTOM:
System may panic if the RTPG data returned by the array is greater than 255 bytes, with
the stack below:

dmp_alua_get_owner_state()
dmp_alua_get_path_state()
dmp_get_path_state()
dmp_check_path_state()
dmp_restore_callback()
dmp_process_scsireq()
dmp_daemons_loop()

DESCRIPTION:
The size of the buffer given to the RTPG SCSI command is currently 255 bytes, but the
size of the data returned by the underlying array for RTPG can be greater than 255
bytes. As a result, incomplete data is retrieved (only the first 255 bytes), and when
trying to read the RTPG data, an invalid memory access occurs, resulting in an error
while claiming the devices. This invalid memory access may lead to a system panic.

RESOLUTION:
The RTPG buffer size has been increased to 1024 bytes for handling this.
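
A small C sketch of the sizing concern (constants are illustrative and the SCSI plumbing 
is omitted; per the SCSI spec, the RTPG parameter data begins with a 4-byte, big-endian 
return data length field): the response can legitimately exceed 255 bytes, so the receive 
buffer must be large enough and the parsed length must be clamped to what was actually 
received:

#include <stdint.h>
#include <stddef.h>

#define RTPG_BUF_SIZE 1024   /* was 255; arrays can return more than 255 bytes */

/* Parse the usable length of an RTPG response that was read into buf.
 * The first four bytes of the parameter data hold the return data length
 * (big-endian), not counting those four bytes themselves. */
size_t rtpg_usable_len(const uint8_t *buf, size_t buf_size)
{
    if (buf_size < 4)
        return 0;

    uint32_t ret_len = ((uint32_t)buf[0] << 24) | ((uint32_t)buf[1] << 16) |
                       ((uint32_t)buf[2] << 8)  |  (uint32_t)buf[3];

    size_t total = (size_t)ret_len + 4;

    /* Never walk past what the buffer can actually hold. */
    return (total > buf_size) ? buf_size : total;
}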

* 3978233 (Tracking ID: 3879324)

SYMPTOM:
VxVM(Veritas Volume Manager) DR(Dynamic Reconfiguration) tool fails to 
handle busy device problem while LUNs are removed from OS

DESCRIPTION:
OS devices may still be busy after removing them from the OS; this fails the 
'luxadm -e offline <disk>' operation and leaves stale entries in the 'vxdisk list' 
output, like:
emc0_65535   auto            -            -            error
emc0_65536   auto            -            -            error

RESOLUTION:
Code changes have been done to address busy devices issue.

* 3978234 (Tracking ID: 3795622)

SYMPTOM:
With Dynamic Multi-Pathing (DMP) Native Support enabled, LVM global_filter is
not updated properly in lvm.conf file to reject the newly added paths.

DESCRIPTION:
With DMP Native Support enabled, when new paths are added to existing LUNs, LVM
global_filter is not updated properly in lvm.conf file to reject the newly added
paths. This can lead to duplicate PV (physical volumes) found error reported by
LVM commands.

RESOLUTION:
The code is modified to properly update global_filter field in lvm.conf file
when new paths are added to existing disks.

* 3978239 (Tracking ID: 3943519)

SYMPTOM:
System failed to start up with the error "Cannot find device number for dmpbootdev". The system keeps 
panicking with the info below:
WARNING: VxVM vxio V-5-0-674 Cannot find device number for dmpbootdev
Cannot open mirrored root device, error 6
Cannot remount root on /pseudo/vxio@0:0 fstype ufs
 
panic[cpu0]/thread=180e000: vfs_mountroot: cannot remount root
genunix:vfs_mountroot+398 ()
genunix:main+11c ()

DESCRIPTION:
Due to a bug in DMP (Dynamic Multi-Pathing), after the boot disk is encapsulated the 
DMP driver doesn't get initialized completely. This causes the boot device open to 
fail, and further causes the OS to be unable to mount the file system; hence the issue.

RESOLUTION:
Code changes have been done to initialize the DMP driver completely in boot mode.

* 3978306 (Tracking ID: 3945411)

SYMPTOM:
The system kept going through a cyclic reboot after enabling DMP (Dynamic Multi-Pathing) 
native support for ZFS boot devices, with the error below:
NOTICE: VxVM vxdmp V-5-0-1990 driver version VxVM  Multipathing Driver 
installed
WARNING: VxVM vxdmp V-5-3-2103 dmp_claim_device: Boot device not found in OS 
tree
NOTICE: zfs_parse_bootfs: error 19
Cannot mount root on rpool/40 fstype zfs
panic[cpu0]/thread=20012000: vfs_mountroot: cannot mount root
Warning - stack not written to the dumpbuf
000000002000fa00 genunix:main+1dc ()

DESCRIPTION:
After DMP native support was enabled, the boot device was under DMP control, so DMP 
failed to get its device number by querying the device under OS control; hence the 
issue.

RESOLUTION:
Code changes were made to get the correct device number of the boot device.

* 3978309 (Tracking ID: 3906119)

SYMPTOM:
In a CVM (Cluster Volume Manager) environment, failback didn't happen when the 
optimal path came back.

DESCRIPTION:
An ALUA (Asymmetric Logical Unit Access) array supports implicit and explicit 
asymmetric logical unit access management methods. In a CVM environment, DMP (Dynamic 
Multi-Pathing) failed to start failback for an implicit-ALUA-only mode array; hence 
the issue.

RESOLUTION:
Code changes are added to handle this case for implicit-ALUA-only mode arrays.

* 3978310 (Tracking ID: 3921668)

SYMPTOM:
Running the vxrecover command with -m option fails when run on the
slave node with message "The command can be executed only on the master."

DESCRIPTION:
The issue occurs as currently vxrecover -g <dgname> -m command on shared
disk groups is not shipped using the command shipping framework from CVM
(Cluster Volume Manager) slave node to the master node.

RESOLUTION:
Implemented a code change to ship the vxrecover -m command to the master node when it 
is triggered from a slave node.

* 3978325 (Tracking ID: 3922159)

SYMPTOM:
Thin reclamation may fail on Xtremio SSD disks with the following error:
Reclaiming storage on:
Disk <disk_name> : Failed. Failed to reclaim <directory_name>.

DESCRIPTION:
VxVM (Veritas Volume Manager) uses the thin-reclamation method in order to reclaim the
space on Xtremio SSD disks. A few SSD arrays use the TRIM method for reclamation
instead of thin reclamation.
A condition in the code which checks whether TRIM is supported or not was incorrect,
and it was leading to reclaim failures on Xtremio disks.

RESOLUTION:
Corrected the condition in the code which checks whether the TRIM method is supported
or not for reclamation.

* 3978328 (Tracking ID: 3931936)

SYMPTOM:
In an FSS (Flexible Storage Sharing) environment, after restarting a slave node, VxVM 
commands on the master node hang, with the result that failed disks on the slave node 
could not rejoin the disk group.

DESCRIPTION:
When lost remote disks on a slave node come back, the operations to online these disks 
and add them to the disk group are performed on the master node. Disk online includes 
operations on both the master and the slave node. On the slave node these disks should 
be offlined and then re-onlined, but due to a code defect the re-online was missed, 
with the result that the disks are left in the re-onlining state. The subsequent 
add-disk-to-disk-group operation needs to issue private region IOs on the disk. These 
IOs are shipped to the slave node to complete. As the disks are in the re-online state, 
a busy error gets returned and the remote IOs keep retrying; hence the VxVM command 
hangs on the master node.

RESOLUTION:
Code changes have been made to fix the issue.

* 3978334 (Tracking ID: 3932246)

SYMPTOM:
vxrelayout operation fails to complete.

DESCRIPTION:
If we lose connectivity to the underlying storage while a volume relayout is in 
progress, some intermediate volumes for the relayout can be left in a disabled or 
otherwise undesirable state due to I/O errors. Once the storage connectivity is back, 
such intermediate volumes should be recovered by the vxrecover utility and the 
vxrelayout operation should resume automatically. But due to a bug in the vxrecover 
utility, the volumes remained in the disabled state, because of which the vxrelayout 
operation didn't complete.

RESOLUTION:
Changes are done in the vxrecover utility to enable the intermediate volumes.

* 3980027 (Tracking ID: 3921572)

SYMPTOM:
The vxconfigd dumps core when an array is disconnected.

DESCRIPTION:
In a configuration where a Disk Group has disks from more than one array, vxconfigd 
dumps core when an array is disconnected and is then followed by a command which 
attempts to get the details of all the disks of the Disk Group.

Once the array is disconnected, vxconfigd removes all the Disk Access (DA) records. 
While servicing the command which needs the details of the disks in the DG, vxconfigd 
goes through the DA list. The code which services the command has a defect, causing 
the core.

RESOLUTION:
The code is rectified to ignore the NULL records to avoid the core.

* 3980453 (Tracking ID: 3964779)

SYMPTOM:
Loading the VxVM modules, i.e. vxio and vxspec, fails on Solaris 11.4.

DESCRIPTION:
On Solaris 11.4, the function page_numtopp_nolock has been replaced and renamed pp_for_pfn_canfail, and _depends_on has been deprecated and can no longer be used. VxVM was making use of that attribute to specify the dependency between its modules.

RESOLUTION:
The changes are mainly around the way unmapped bufs are handled in the vxio driver.
The Solaris API that was being used is a private API and is no longer valid.
Replaced the hat_getpfnum() -> ppmapin()/ppmapout() calls with bp_copyin()/bp_copyout() in the I/O code path.
In IO shipping, replaced them with a miter approach and hat_kpm_paddr_mapin()/hat_kpm_paddr_mapout().

* 3981587 (Tracking ID: 3918356)

SYMPTOM:
zpools are imported automatically when DMP native support is set to on which may lead to zpool corruption.

DESCRIPTION:
When DMP native support is set to on, all zpools are imported using DMP devices so that when the import happens for the same zpool again it is 
automatically imported using the DMP device. In a clustered environment, if the import of the same zpool is triggered on two different nodes at the 
same time, it can lead to zpool corruption. A way needs to be provided so that the zpools are not imported automatically.

RESOLUTION:
Changes are made to provide a way for customers to not import the zpools if required. The way is to set the variable auto_import_exported_pools 
to off in the file /var/adm/vx/native_input, as below:
bash:~# cat /var/adm/vx/native_input
auto_import_exported_pools=off



INSTALLING THE PATCH
--------------------
Run the Installer script to automatically install the patch:
-----------------------------------------------------------
Please note that the installation of this P-Patch will cause downtime.

To install the patch perform the following steps on at least one node in the cluster:
1. Copy the patch infoscale-sol11_sparc-Patch-7.1.0.1100.tar.gz to /tmp
2. Untar infoscale-sol11_sparc-Patch-7.1.0.1100.tar.gz to /tmp/hf
    # mkdir /tmp/hf
    # cd /tmp/hf
    # gunzip /tmp/infoscale-sol11_sparc-Patch-7.1.0.1100.tar.gz
    # tar xf /tmp/infoscale-sol11_sparc-Patch-7.1.0.1100.tar
3. Install the hotfix. (Note that the installation of this P-Patch will cause downtime.)
    # cd /tmp/hf
    # ./installVRTSinfoscale710P1100 [<host1> <host2>...]

You can also install this patch together with 7.1 base release using Install Bundles
1. Download this patch and extract it to a directory
2. Change to the Veritas InfoScale 7.1 directory and invoke the installer script
   with -patch_path option where -patch_path should point to the patch directory
    # ./installer -patch_path [<path to this patch>] [<host1> <host2>...]

Install the patch manually:
--------------------------
Manual installation is not supported.


REMOVING THE PATCH
------------------
Manual uninstallation is not supported.


SPECIAL INSTRUCTIONS
--------------------
In case of a stack + OS upgrade, if ODM and vxfsckd are not online after reboot, then follow the steps below and start the services using CPI:

     # /lib/svc/method/odm restart
     # /usr/sbin/devfsadm -i vxportal
     # ./installer -start

In case of an upgrade, if the stack is already installed on the nodes before upgrading the OS to Solaris 11.4, the CPI installer might throw an error that the product licensing verification failed and licenses are not installed. This is because the deprecated packages "uucp" and "bourne" are removed in Solaris 11.4. To continue, follow the steps given below:

    1.  After upgrading the OS to Solaris 11.4, install the packages "uucp" and "bourne" manually on the cluster nodes.
    2.  Start the product through CPI.


OTHERS
------
NONE