infoscale-sol11.4_sparc-Patch-7.4.0.1200

 Basic information
Release type: Patch
Release date: 2019-02-28
OS update support: None
Technote: None
Documentation: None
Popularity: 11128 viewed
Download size: 84.55 MB
Checksum: 2872525873

 Applies to one or more of the following products:
InfoScale Availability 7.4 On Solaris 11 SPARC
InfoScale Enterprise 7.4 On Solaris 11 SPARC
InfoScale Foundation 7.4 On Solaris 11 SPARC
InfoScale Storage 7.4 On Solaris 11 SPARC

 Obsolete patches, incompatibilities, superseded patches, or other requirements:

This patch supersedes the following patches:
infoscale-sol11_sparc-Patch-7.4.0.1100 (obsolete), released 2018-07-23

 Fixes the following incidents:
3949320, 3949322, 3950578, 3950740, 3953235, 3958838, 3958884, 3958887, 3958976, 3959065, 3959302, 3959306, 3959433, 3959996, 3964083, 3966524, 3966896, 3966920, 3967002, 3967030, 3967032, 3967098, 3968555, 3968787, 3968790, 3968999, 3969121, 3969588, 3970369, 3970683, 3970684

 Patch ID:
VRTSvxvm-7.4.0.1500
VRTSvxfs-7.4.0.1500
VRTSodm-7.4.0.1500
VRTSllt-7.4.0.1100
VRTSamf-7.4.0.1100

Readme file
                          * * * READ ME * * *
                       * * * InfoScale 7.4 * * *
                         * * * Patch 1200 * * *
                         Patch Date: 2019-02-15


This document provides the following information:

   * PATCH NAME
   * OPERATING SYSTEMS SUPPORTED BY THE PATCH
   * PACKAGES AFFECTED BY THE PATCH
   * BASE PRODUCT VERSIONS FOR THE PATCH
   * SUMMARY OF INCIDENTS FIXED BY THE PATCH
   * DETAILS OF INCIDENTS FIXED BY THE PATCH
   * INSTALLATION PRE-REQUISITES
   * INSTALLING THE PATCH
   * REMOVING THE PATCH


PATCH NAME
----------
InfoScale 7.4 Patch 1200


OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
Solaris 11 SPARC


PACKAGES AFFECTED BY THE PATCH
------------------------------
VRTSamf
VRTSllt
VRTSodm
VRTSvxfs
VRTSvxvm


BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
   * InfoScale Availability 7.4
   * InfoScale Enterprise 7.4
   * InfoScale Foundation 7.4
   * InfoScale Storage 7.4


SUMMARY OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
Patch ID: VRTSvxvm-7.4.0.1500
* 3949320 (3947022) VVR: vxconfigd hang during /scripts/configuratio/assoc_datavol.tc#6
* 3958838 (3953681) Data corruption issue is seen when more than one plex of volume is detached.
* 3958884 (3954787) Data corruption may occur in GCO along with FSS environment on RHEL 7.5 Operating system.
* 3958887 (3953711) Panic observed while switching the logowner to slave while IO's are in progress
* 3958976 (3955101) Panic observed in GCO environment (cluster to cluster replication) during replication.
* 3959433 (3956134) System panic might occur when IO is in progress in VVR (veritas volume replicator) environment.
* 3967098 (3966239) IO hang observed while copying data in cloud environments.
* 3968555 (3964779) Changes to support Solaris 11.4 with Volume Manager
* 3968999 (3966872) Deport and rename clone DG changes the name of clone DG along with source DG
* 3969588 (3964337) Partition size getting set to default after running vxdisk scandisks.
* 3970369 (3970368) Error messages observed in the dmpdr -o refresh utility during DMP+DR testing on Solaris 11.4.
Patch ID: VRTSvxvm-7.4.0.1100
* 3949322 (3944259) The vradmin verifydata and vradmin ibc commands fail on private diskgroups
with Lost connection error.
* 3950578 (3953241) Messages in syslog are seen with message string "0000" for VxVM module.
Patch ID: VRTSamf-7.4.0.1100
* 3970684 (3970679) Veritas Infoscale Availability does not support Oracle Solaris 11.4.
Patch ID: VRTSllt-7.4.0.1100
* 3970683 (3970679) Veritas Infoscale Availability does not support Oracle Solaris 11.4.
Patch ID: VRTSodm-7.4.0.1500
* 3968790 (3968788) ODM module failed to load on Solaris 11.4.
Patch ID: VRTSodm-7.4.0.1100
* 3953235 (3953233) After installing the 7.4.0.1100 (on AIX and Solaris) and 7.4.0.1200 (on Linux)
patch, the Oracle Disk Management (ODM) module fails to load.
Patch ID: VRTSvxfs-7.4.0.1500
* 3959065 (3957285) job promote operation from replication target node fails.
* 3959302 (3959299) Improve file creation time on systems with Selinux enabled.
* 3959306 (3959305) Fix a bug in security attribute initialisation of files with named attributes.
* 3959996 (3938256) When checking file size through seek_hole, it will return incorrect offset/size 
when delayed allocation is enabled on the file.
* 3964083 (3947421) DLV upgrade operation fails while upgrading Filesystem from DLV 9 to DLV 10.
* 3966524 (3966287) During the multiple mounts of Shared CFS mount points, more than 100MB is consumed per mount
* 3966896 (3957092) System panic with spin_lock_irqsave thru splunkd.
* 3966920 (3925281) Hexdump the incore inode data and piggyback data when inode 
revalidation fails.
* 3967002 (3955766) CFS hung when doing extent allocating.
* 3967030 (3947433) While adding a volume (part of vset) in already mounted filesystem, fsvoladm
displays error.
* 3967032 (3947648) Mistuning of vxfs_ninode and vx_bc_bufhwm to very small 
value.
* 3968787 (3968785) VxFS module failed to load on Solaris 11.4.
* 3969121 (3894712) ACL permissions are not inherited correctly on cluster 
file system.
Patch ID: VRTSvxfs-7.4.0.1100
* 3950740 (3953165) Messages in syslog are seen with message string "0000" for VxFS module.


DETAILS OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
This patch fixes the following incidents:

Patch ID: VRTSvxvm-7.4.0.1500

* 3949320 (Tracking ID: 3947022)

SYMPTOM:
vxconfigd hang.

DESCRIPTION:
There was a window between NIOs being added to rp_port->pt_waitq and the rlink being disconnected in which NIOs were left in pt_waitq, so their parent (the ack SIO) was never done. The ack SIO held an IO count, which led to the vxconfigd hang.

RESOLUTION:
Don't add an NIO to rp_port->pt_waitq if rp_port->pt_closing is set; instead, call done on the NIO with error ENC_CLOSING. Before deleting the port, call done on the NIOs in pt_waitq with error ENC_CLOSING.

* 3958838 (Tracking ID: 3953681)

SYMPTOM:
Data corruption issue is seen when more than one plex of volume is detached.

DESCRIPTION:
When a plex of volume gets detached, DETACH map gets enabled in the DCO (Data Change Object). The incoming IO's are tracked in DRL (Dirty Region Log) and then asynchronously copied to DETACH map for tracking.
If one more plex gets detached then it might happen that some of the new incoming regions are missed in the DETACH map of the previously detached plex.
This leads to corruption when the disk comes back and plex resync happens using corrupted DETACH map.

RESOLUTION:
Code changes are done to correctly track the IO's in the DETACH map of previously detached plex and avoid corruption.

* 3958884 (Tracking ID: 3954787)

SYMPTOM:
On a RHEL 7.5 FSS environment with GCO configured having NVMe devices and Infiniband network, data corruption might occur when sending the IO from Master to slave node.

DESCRIPTION:
In the recent RHEL 7.5 release, Linux stopped allowing IO on an underlying NVMe device that has gaps between BIO vectors. In the case of VVR, an SRL header of 3 blocks is added to the BIO. When the BIO is sent through LLT to the other node, the LLT limitation of 32 fragments can leave the BIO vectors unaligned. When this unaligned BIO is sent to the underlying NVMe device, the last 3 blocks of the BIO are skipped and not written to the disk on the slave node. This results in incomplete data on the slave node, which leads to data corruption.

RESOLUTION:
Code changes have been done to handle this case and send the BIO aligned to the underlying NVMe device.

* 3958887 (Tracking ID: 3953711)

SYMPTOM:
System might encounter panic while switching the logowner to slave while IO's are in progress with the following stack:

vol_rv_service_message_free()
vol_rv_replica_reconfigure()
sched_clock_cpu()
vol_rv_error_handle()
vol_rv_errorhandler_callback()
vol_klog_start()
voliod_iohandle()
voliod_loop()
voliod_kiohandle()
kthread()
insert_kthread_work()
ret_from_fork_nospec_begin()
insert_kthread_work()
vol_rv_service_message_free()

DESCRIPTION:
While processing a transaction, we leave the IO count on the RV object to let the transaction proceed, and in that case the RV object in the SIO is set to NULL. However, while freeing the message, the object is de-referenced without considering that it can be NULL, which can lead to a panic due to a NULL pointer de-reference.

RESOLUTION:
Code changes have been made to handle NULL value of the RV object.

* 3958976 (Tracking ID: 3955101)

SYMPTOM:
Server might panic in a GCO environment with the following stack:

nmcom_server_main_tcp()
ttwu_do_wakeup()
ttwu_do_activate.constprop.90()
try_to_wake_up()
update_curr()
update_curr()
account_entity_dequeue()
 __schedule()
nmcom_server_proc_tcp()
kthread()
kthread_create_on_node()
ret_from_fork()
kthread_create_on_node()

DESCRIPTION:
Recent code changes allow dynamic port changes, i.e., ports can now be deleted and added dynamically. It can happen that a port being accessed was deleted in the background by another thread, which leads to a panic because the port being accessed has already been deleted.

RESOLUTION:
Code changes have been done to take care of this situation and check if the port is available before accessing it.

* 3959433 (Tracking ID: 3956134)

SYMPTOM:
System panic might occur when IO is in progress in VVR (veritas volume replicator) environment with below stack:

page_fault()
voliomem_grab_special()
volrv_seclog_wsio_start()
voliod_iohandle()
voliod_loop()
kthread()
ret_from_fork()

DESCRIPTION:
In a memory-crunch scenario, the memory reservation for an SIO (staged IO) in a VVR configuration might fail. In that case, the SIO is retried later when memory becomes available again, but while doing so, some fields of the SIO are passed NULL values, which leads to a panic in the VVR code.

RESOLUTION:
Code changes have been done to pass proper values to the IO when it is retried in a VVR environment.

* 3967098 (Tracking ID: 3966239)

SYMPTOM:
IO hang observed while copying data to VxVM (Veritas Volume Manager) volumes in cloud environments.

DESCRIPTION:
In a cloud environment with a mirrored volume that has a DCO (Data Change Object) attached, IOs issued on the volume have to be processed through the DRL (Dirty Region Log), which is used for faster recovery on reboot. During this DRL processing, there is a condition in the code that can leave IOs not being driven through the DRL, resulting in a hang. Subsequent IOs keep queuing up, waiting for the IOs on the DRL to complete, leading to a hang situation.

RESOLUTION:
Code changes have been done to resolve the condition which leads to IO's not being driven through DRL.

* 3968555 (Tracking ID: 3964779)

SYMPTOM:
Loading of the VxVM modules, i.e., vxio and vxspec, fails on Solaris 11.4.

DESCRIPTION:
In Solaris 11.4, the function page_numtopp_nolock has been replaced and renamed as pp_for_pfn_canfail, and the _depends_on attribute has been deprecated and can no longer be used. VxVM was using this attribute to specify dependencies between the modules.

RESOLUTION:
The changes are mainly around the way unmapped buffers are handled in the vxio driver, since the Solaris API previously used is a private API and is no longer valid. In the I/O code path, the hat_getpfnum() and ppmapin()/ppmapout() calls are replaced with bp_copyin()/bp_copyout(). In IO shipping, they are replaced with a miter approach using hat_kpm_paddr_mapin()/hat_kpm_paddr_mapout().

* 3968999 (Tracking ID: 3966872)

SYMPTOM:
Deporting and renaming a cloned DG renames both the source and the cloned DG.

DESCRIPTION:
On a DR site, both the source and the cloned DG can co-exist, with the source DG deported while the cloned DG is imported. When the user attempts to deport and rename the cloned DG, the names of both the source and the cloned DG are changed.

RESOLUTION:
The deport code is fixed so that only the cloned DG is renamed.

* 3969588 (Tracking ID: 3964337)

SYMPTOM:
After running vxdisk scandisks, the partition size gets set to default value of 512.

DESCRIPTION:
During device discovery, VxVM (Veritas Volume Manager) compares the original partition size with the newly reported partition size. While reading the partition size from kernel memory, the buffer in userland memory is not initialized and contains a garbage value. Because of this, a spurious difference between the old and new partition sizes is detected, which leads to the partition size being set to a default value.

RESOLUTION:
Code changes have been done to properly initialize the buffer in userland which is used to read data from kernel.

* 3970369 (Tracking ID: 3970368)

SYMPTOM:
While performing the DMP + DR test case, error messages are observed when running the dmpdr -o refresh utility:
/usr/lib/vxvm/voladm.d/bin/dmpdr -o refresh
WARN: Please Do not Run any Device Discovery Operations outside the Tool during Reconfiguration operations
INFO: The logs of current operation can be found at location /var/adm/vx/dmpdr_20181128_1638.log
INFO: Collecting OS Version Info
ERROR: Collecting LeadVille Version - Failed..Because, command [modinfo | grep "SunFC FCP"] failed with the Error:[]
INFO: Collecting SF Product version Info
INFO: Checking if MPXIO is enabled

DESCRIPTION:
In Solaris 11.4 the module "fcp (SunFC FCP)" has been renamed to "fcp (Fibre Channel SCSI ULP)". The module is used during DMPDR testing for refreshing/checking the FC devices. Because the name of the module has changed, failure is observed while running "dmpdr -o refresh" command.

RESOLUTION:
The code has been changed to take care of name change between Solaris 11.3 and 11.4.
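A check of this kind can be made version-agnostic by matching on the module name rather than its description string. The following is an illustrative sketch only, not the shipped dmpdr code, and the modinfo field positions are an assumption:

```shell
# Illustrative only: detect the fcp module from captured `modinfo` output by
# module name (assumed to be field 6) instead of the description text, which
# changed from "SunFC FCP" (11.3) to "Fibre Channel SCSI ULP" (11.4).
fcp_loaded() {
    # $1 = captured `modinfo` output
    printf '%s\n' "$1" | awk '$6 == "fcp" { found = 1 } END { exit !found }'
}
```

With this approach, the same check would match the module listing on both Solaris 11.3 and 11.4.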

Patch ID: VRTSvxvm-7.4.0.1100

* 3949322 (Tracking ID: 3944259)

SYMPTOM:
The vradmin verifydata and vradmin ibc commands fail on private diskgroups
with Lost connection error.

DESCRIPTION:
This issue occurs because of a deadlock between the IBC mechanism and the
ongoing I/Os on the secondary RVG. IBC mechanism expects I/O transfer to
secondary in a sequential order, however to improve performance I/Os are now
written in parallel. The mismatch in IBC behavior causes a deadlock and the
vradmin verifydata and vradmin ibc fail due to time out error.

RESOLUTION:
As a part of this fix, IBC behavior is now improved such that it now considers
parallel and possible out-of-sequence I/O writes to the secondary.

* 3950578 (Tracking ID: 3953241)

SYMPTOM:
Customers may see a generic message or warning in syslog with the string "vxvm:0000:<msg>" instead of a uniquely numbered message ID for the VxVM module.

DESCRIPTION:
A few syslog messages introduced in the InfoScale 7.4 release were not given a unique message number to identify where in the product they originate. Instead, they are marked with the common message identification number "0000".

RESOLUTION:
This patch fixes the syslog messages generated by the VxVM module that contain "0000" as the message string and provides them with unique numbering.

Patch ID: VRTSamf-7.4.0.1100

* 3970684 (Tracking ID: 3970679)

SYMPTOM:
Veritas Infoscale Availability does not support Oracle Solaris 11.4.

DESCRIPTION:
Veritas Infoscale Availability does not support Oracle Solaris versions later 
than 11.3.

RESOLUTION:
Veritas Infoscale Availability now supports Oracle Solaris 11.4.

Patch ID: VRTSllt-7.4.0.1100

* 3970683 (Tracking ID: 3970679)

SYMPTOM:
Veritas Infoscale Availability does not support Oracle Solaris 11.4.

DESCRIPTION:
Veritas Infoscale Availability does not support Oracle Solaris versions later 
than 11.3.

RESOLUTION:
Veritas Infoscale Availability now supports Oracle Solaris 11.4.

Patch ID: VRTSodm-7.4.0.1500

* 3968790 (Tracking ID: 3968788)

SYMPTOM:
ODM module failed to load on Solaris 11.4.

DESCRIPTION:
The ODM module failed to load on Solaris 11.4 release, due to the kernel level changes in 11.4.

RESOLUTION:
Added ODM support for Solaris 11.4 release.

Patch ID: VRTSodm-7.4.0.1100

* 3953235 (Tracking ID: 3953233)

SYMPTOM:
After installing the 7.4.0.1100 (on AIX and Solaris) and 7.4.0.1200 (on Linux)
patch, the Oracle Disk Management (ODM) module fails to load.

DESCRIPTION:
As part of the 7.4.0.1100 (on AIX and Solaris) and 7.4.0.1200 (on Linux) patch,
the VxFS version has been updated to 7.4.0.1200.
Because of the VxFS version update, the ODM module needs to be repackaged due to an
internal dependency on VxFS version.

RESOLUTION:
As part of this fix, the ODM module has been repackaged to support the updated
VxFS version.

Patch ID: VRTSvxfs-7.4.0.1500

* 3959065 (Tracking ID: 3957285)

SYMPTOM:
The job promote operation executed on the replication target node fails with an error message like the following:

# /opt/VRTS/bin/vfradmin job promote myjob1 /mnt2
UX:vxfs vfradmin: INFO: V-3-28111: Current replication direction:
<machine1>:/mnt1 -> <machine2>:/mnt2
UX:vxfs vfradmin: INFO: V-3-28112: If you continue this command, replication direction will change to:
<machine2>:/mnt2 -> <machine1>:/mnt1
UX:vxfs vfradmin: QUESTION: V-3-28101: Do you want to continue? [ynq]y
UX:vxfs vfradmin: INFO: V-3-28090: Performing final sync for job myjob1 before promoting...
UX:vxfs vfradmin: INFO: V-3-28099: Job promotion failed. If you continue, replication will be stopped and the filesystem will be made available on this host for 
use. To resume replication when <machine1> returns, use the vfradmin job recover command.
UX:vxfs vfradmin: INFO: V-3-28100: Continuing may result in data loss.
UX:vxfs vfradmin: QUESTION: V-3-28101: Do you want to continue? [ynq]y
UX:vxfs vfradmin: INFO: V-3-28227: Unable to unprotect filesystem.

DESCRIPTION:
A job promote from the target node sends a promote message to the source node. After this message is processed on the source side, the 'seqno' file is updated. However, the 'seqno' file is created on the target side and is not present on the source side, so the 'seqno' file update returns an error and the promote fails.

RESOLUTION:
The 'seqno' file write is not required as part of the promote message. The SKIP_SEQNO_UPDATE flag is now passed in the promote message so that the 'seqno' file write is skipped on the source side during promote processing.
Note: The job should be stopped on the source node before promoting from the target node.

* 3959302 (Tracking ID: 3959299)

SYMPTOM:
When a large number of files are created at once on a system with SELinux enabled, file creation may take longer than on a system with SELinux disabled.

DESCRIPTION:
On an SELinux-enabled system, SELinux security labels need to be stored as extended attributes during file creation. This requires allocation of an attribute inode and its data extent. The contents of the extent are read synchronously into the buffer. If this is a newly allocated extent, its contents are garbage anyway and will be overwritten with the attribute data containing the SELinux security labels. Thus, for newly allocated attribute extents, the read operation is redundant.

RESOLUTION:
As a fix, for a newly allocated attribute extent, reading the data from that extent is skipped. However, if the allocated extent gets merged with a previously allocated extent, the extent returned by the allocator could be a combined extent. In such cases, a read of the entire extent is performed to ensure that previously written data is correctly loaded in-core.

* 3959306 (Tracking ID: 3959305)

SYMPTOM:
When a large number of files with named attributes are created, written to, and deleted in a loop, along with other operations on an SELinux-enabled system, some files may end up without security attributes. This may later lead to access to such files being denied.

DESCRIPTION:
On an SELinux-enabled system, security initialization happens during file creation and security attributes are stored. However, when there are parallel create/write/delete operations on multiple files with named attributes, or on the same files multiple times, a race condition can cause the security attribute initialization to be skipped for some files. Since these files don't have security attributes set, the SELinux security module will later prevent access to them for other operations, and those operations fail with an access-denied error.

RESOLUTION:
In a file-creation context, security initialization of the file is now also attempted while writing named attributes, by explicitly calling the security initialization routine. This is an additional provision (on top of the security initialization in the default file-create code) to ensure that security initialization always happens in the named-attribute write codepath, notwithstanding race conditions.

* 3959996 (Tracking ID: 3938256)

SYMPTOM:
When checking file size through seek_hole, it will return incorrect offset/size when 
delayed allocation is enabled on the file.

DESCRIPTION:
In recent versions of RHEL 7 onwards, the grep command uses the seek_hole feature to check the current file size and then reads data based on that size. In VxFS, when dalloc is enabled, the extent is allocated to the file later, but the file size is incremented as soon as the write completes. When checking the file size through seek_hole, VxFS did not fully consider the dalloc case and returned a stale size based on the extent allocated to the file instead of the actual file size, which resulted in reading less data than expected.

RESOLUTION:
The code is modified so that VxFS returns the correct size when dalloc is enabled on a file and seek_hole is called on it.

* 3964083 (Tracking ID: 3947421)

SYMPTOM:
DLV upgrade operation fails while upgrading Filesystem from DLV 9 to DLV 10 with
following error message:
ERROR: V-3-22567: cannot upgrade /dev/vx/rdsk/metadg/metavol - Invalid argument

DESCRIPTION:
If the filesystem was created with DLV 5 or earlier and later successfully upgraded step by step (5 to 6, 6 to 7, 7 to 8, 8 to 9), the newly written code tries to find the "mkfs" logging in the history log. Because there was no concept of logging the mkfs operation in the history log for DLV 5 or earlier, the upgrade operation fails while upgrading from DLV 9 to 10.

RESOLUTION:
Code changes have been done to complete the upgrade operation even when the mkfs logging is not found.

* 3966524 (Tracking ID: 3966287)

SYMPTOM:
During the multiple mounts of Shared CFS mount points, more than 100MB is consumed per mount

DESCRIPTION:
During initialization of GLM, memory usage for the message queues is scaled. The original design provisions memory per the maximum inode number, which does not scale well for multiple CFS mount points: memory rises linearly with the number of mount points while the maximum inode number stays constant. Moreover, the current memory usage per inode is about 1/4N, where N is the maximum number of inodes, which translates to about 130MB for a standard Linux mount, an aggressive strategy. This fix provides an additional parameter to scale this usage per local scope (shared mount point), which makes it easy to reduce the memory footprint.

RESOLUTION:
A kernel parameter is introduced to scale the memory usage. To use it, the parameter needs to be set in the module configuration. For example, to reduce usage from 100MB per FS to 100KB, provide a scope factor of 1000 (a recommended value when there are more than 100 CFS mounts):

options vxfs vxfs_max_scopes_possible=1000
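On Linux, a module option line like this is typically persisted in a modprobe configuration file. A minimal sketch of an idempotent helper follows; the /etc/modprobe.d path and mechanism are conventions assumed here, not taken from this document:

```shell
# Append a module option line to a config file only if it is not already
# present, so repeated runs stay idempotent.
add_module_option() {
    conf="$1"; line="$2"
    grep -qxF "$line" "$conf" 2>/dev/null || echo "$line" >> "$conf"
}
# Example (illustrative path):
# add_module_option /etc/modprobe.d/vxfs.conf \
#     'options vxfs vxfs_max_scopes_possible=1000'
```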

* 3966896 (Tracking ID: 3957092)

SYMPTOM:
System panic in spin_lock_irqsave through splunkd in the rddirahead path.

DESCRIPTION:
From the current state, it appears that a spinlock is getting re-initialized in the rddirahead path, which is causing this deadlock.

RESOLUTION:
Code changes have been made to avoid this situation.

* 3966920 (Tracking ID: 3925281)

SYMPTOM:
Hexdump the incore inode data and piggyback data when inode revalidation fails.

DESCRIPTION:
While assuming inode ownership, if inode revalidation failed with piggyback data, the piggyback and in-core inode data were not hexdumped, so the current state of the inode was lost. An inode revalidation failure message has been added, and the in-core inode data and piggyback data are now hexdumped.

RESOLUTION:
Code is modified to print hexdump of incore inode and piggyback data when 
revalidation of inode fails.

* 3967002 (Tracking ID: 3955766)

SYMPTOM:
CFS hangs while allocating extents; a thread like the following loops forever doing extent allocation:

#0 [ffff883fe490fb30] schedule at ffffffff81552d9a
#1 [ffff883fe490fc18] schedule_timeout at ffffffff81553db2
#2 [ffff883fe490fcc8] vx_delay at ffffffffa054e4ee [vxfs]
#3 [ffff883fe490fcd8] vx_searchau at ffffffffa036efc6 [vxfs]
#4 [ffff883fe490fdf8] vx_extentalloc_device at ffffffffa036f945 [vxfs]
#5 [ffff883fe490fea8] vx_extentalloc_device_proxy at ffffffffa054c68f [vxfs]
#6 [ffff883fe490fec8] vx_worklist_process_high_pri_locked at ffffffffa054b0ef [vxfs]
#7 [ffff883fe490fee8] vx_worklist_dedithread at ffffffffa0551b9e [vxfs]
#8 [ffff883fe490ff28] vx_kthread_init at ffffffffa055105d [vxfs]
#9 [ffff883fe490ff48] kernel_thread at ffffffff8155f7d0

DESCRIPTION:
In the current code of emtran_process_commit(), it is possible that the EAU summary got updated without delegation of the corresponding EAU, because we clear the VX_AU_SMAPFREE flag before updating EAU summary, which could lead to possible hang. Also, some improper error handling in case of bad map can also cause some hang situations.

RESOLUTION:
To avoid potential hang, modify the code to clear the VX_AU_SMAPFREE flag after updating the EAU summary, and improve some error handling in emtran_commit/undo.

* 3967030 (Tracking ID: 3947433)

SYMPTOM:
While adding a volume (part of vset) in already mounted filesystem, fsvoladm
displays following error:
UX:vxfs fsvoladm: ERROR: V-3-28487: Could not find the volume <volume name> in vset

DESCRIPTION:
The code to find the volume in the vset requires the file descriptor of character
special device but in the concerned code path, the file descriptor that is being
passed is of block device.

RESOLUTION:
Code changes have been done to pass the file descriptor of character special device.

* 3967032 (Tracking ID: 3947648)

SYMPTOM:
Due to wrong auto-tuning of vxfs_ninode/the inode cache, a hang could be observed under heavy memory pressure.

DESCRIPTION:
If kernel heap memory is very large (particularly observed on Solaris T7 servers), an overflow can occur due to a smaller-sized data type.

RESOLUTION:
Changed the code to handle overflow.

* 3968787 (Tracking ID: 3968785)

SYMPTOM:
VxFS module failed to load on Solaris 11.4.

DESCRIPTION:
The VxFS module failed to load on Solaris 11.4 release, due to the kernel level changes in 11.4 kernel.

RESOLUTION:
Added VxFS support for Solaris 11.4 release.

* 3969121 (Tracking ID: 3894712)

SYMPTOM:
ACL permissions are not inherited correctly on cluster file system.

DESCRIPTION:
The ACL counts stored on a directory inode get reset every time the directory inode's ownership is switched between nodes. When ownership of the directory inode comes back to the node that previously abdicated it, ACL permissions were not inherited correctly for newly created files.

RESOLUTION:
Modified the source such that the ACLs are inherited correctly.

Patch ID: VRTSvxfs-7.4.0.1100

* 3950740 (Tracking ID: 3953165)

SYMPTOM:
Customers may see a generic message or warning in syslog with the string "vxfs:0000:<msg>" instead of a uniquely numbered message ID for the VxFS module.

DESCRIPTION:
A few syslog messages introduced in the InfoScale 7.4 release were not given a unique message number to identify where in the product they originate. Instead, they are marked with the common message identification number "0000".

RESOLUTION:
This patch fixes the syslog messages generated by the VxFS module that contain "0000" as the message string and provides them with unique numbering.



INSTALLING THE PATCH
--------------------
Run the Installer script to automatically install the patch:
-----------------------------------------------------------
Please note that the installation of this P-Patch will cause downtime.

To install the patch perform the following steps on at least one node in the cluster:
1. Copy the patch infoscale-sol11_sparc-Patch-7.4.0.1200.tar.gz to /tmp
2. Untar infoscale-sol11_sparc-Patch-7.4.0.1200.tar.gz to /tmp/hf
    # mkdir /tmp/hf
    # cd /tmp/hf
    # gunzip /tmp/infoscale-sol11_sparc-Patch-7.4.0.1200.tar.gz
    # tar xf /tmp/infoscale-sol11_sparc-Patch-7.4.0.1200.tar
3. Install the hotfix (note that the installation of this P-Patch will cause downtime):
    # cd /tmp/hf
    # ./installVRTSinfoscale740P1200 [<host1> <host2>...]
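Before extracting, the downloaded archive can be checked against the checksum published in the Basic information section above. A minimal sketch, assuming that published value is the CRC from POSIX cksum output:

```shell
# Compare a file's cksum CRC against an expected value.
verify_cksum() {
    # $1 = file, $2 = expected CRC (first field of `cksum` output)
    actual=$(cksum "$1" | awk '{print $1}')
    [ "$actual" = "$2" ]
}
# Example usage against the patch archive from step 1:
# verify_cksum /tmp/infoscale-sol11_sparc-Patch-7.4.0.1200.tar.gz 2872525873 \
#     && echo "checksum OK" || echo "checksum MISMATCH"
```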

You can also install this patch together with the 7.4 base release using Install Bundles:
1. Download this patch and extract it to a directory.
2. Change to the Veritas InfoScale 7.4 directory and invoke the installer script
   with the -patch_path option, where -patch_path points to the patch directory:
    # ./installer -patch_path [<path to this patch>] [<host1> <host2>...]

Install the patch manually:
--------------------------
Manual installation is not recommended.


REMOVING THE PATCH
------------------
Manual uninstallation is not recommended.


SPECIAL INSTRUCTIONS
--------------------
After the InfoScale stack + OS upgrade, if the cluster is not configured properly, run the scripts below and reboot the systems.
Script 1:
rm /etc/vx/reconfig.d/state.d/.vxvm-configured
svcadm refresh vxvm-configure
vxinstall

Script 2:
  add_drv -m '* 0640 root sys' vxfs  > /dev/null 2>&1
  add_drv -m '* 0640 root sys' fdd   > /dev/null 2>&1
  add_drv -m '* 0640 root sys' vxportal  > /dev/null 2>&1
  add_drv -m '* 0640 root sys' vxcafs  > /dev/null 2>&1
  add_drv -m '* 0640 root sys' odm  > /dev/null 2>&1
  add_drv -m '* 0640 root sys' vxglm  > /dev/null 2>&1
  add_drv -m '* 0640 root sys' vxgms  > /dev/null 2>&1
  /lib/svc/method/vxfs-modload  > /dev/null 2>&1
  /opt/VRTS/bin/cfscluster start > /dev/null 2>&1


OTHERS
------
NONE