This page lists publicly released patches for Veritas Enterprise Products.
For the product GA build, see the Veritas Entitlement Management System (VEMS), available through the Veritas Support 'Licensing' option.
For information on private patches, contact Veritas Technical Support.
For NetBackup Enterprise Server and NetBackup Server patches, see the NetBackup Downloads.
Patches for your product can have a variety of names. These names are based on product, component, or package names. For more information on patch naming conventions and the relationship between products, components, and packages, see the SORT online help.
access-rhel7_x86_64-Patch-7.3.1.300

 Basic information
Release type: Patch
Release date: 2018-07-06
OS update support: None
Technote: None
Documentation: None
Popularity: 249 viewed    8 downloaded
Download size: 2.5 GB
Checksum: 270058634

 Applies to one or more of the following products:
Access 7.3.1VA On RHEL7 x86-64

 Obsolete patches, incompatibilities, superseded patches, or other requirements:
None.

 Fixes the following incidents:
12228, 12280, 12336, 12424, 12918, 12932, 12966, 13041

 Patch ID:
None.

 Readme file
README VERSION               : 1.1
README CREATION DATE         : 2018-07-06
PATCH-ID                     : 7.3.1.300
PATCH NAME                   : VA-7.3.1.300
REQUIRED PATCHES             : NONE
INCOMPATIBLE PATCHES         : NONE
SUPPORTED PADV               : rhel7.3_x86_64, rhel7.4_x86_64, OL7.3_x86_64, OL7.4_x86_64
(P-PLATFORM , A-ARCHITECTURE , D-DISTRIBUTION , V-VERSION)
PATCH CRITICALITY            : Optional
HAS KERNEL COMPONENT         : YES
ID                           : NONE

PATCH INSTALLATION INSTRUCTIONS:
-----------------------------------------

For detailed installation instructions, refer to:
https://origin-www.veritas.com/content/support/en_US/doc/130196629-130196633-1

For detailed instructions on upgrading Veritas Access, refer to
"Chapter 10: Upgrading Veritas Access using a rolling upgrade".

SPECIAL INSTRUCTIONS:
-----------------------------------------

1. Extract the tarball

	# tar -xvvf <patch_tarball>

2. The rolling upgrade can be started using the command below:

	# ./installaccess -rolling_upgrade

3. This patch can be applied only from the VA-7.3.1, VA-7.3.1.001, or VA-7.3.1.200 releases.
	
4. Make sure that the upgrade is performed one node at a time, even though the installer attempts to upgrade multiple nodes at a time.

For example,

	# ./installaccess -rolling_upgrade

											   Veritas Access 7.3.1.300 Rolling Upgrade Program

	Copyright (c) 2018 Veritas Technologies LLC.  All rights reserved.  Veritas and the Veritas Logo are trademarks or registered
	trademarks of Veritas Technologies LLC or its affiliates in the U.S. and other countries. Other names may be trademarks of their
	respective owners.

	The Licensed Software and Documentation are deemed to be "commercial computer software" and "commercial computer software
	documentation" as defined in FAR Sections 12.212 and DFARS Section 227.7202.

	Logs are being written to /var/tmp/installaccess-201804030721cXT while installaccess is in progress.

	Enter the system name of the cluster on which you would like to perform rolling upgrade [q,?] (fss7310_01)

		Checking communication on fss7310_01 .................................................................................... Done
		Checking rolling upgrade prerequisites on fss7310_01 .................................................................... Done

											   Veritas Access 7.3.1.300 Rolling Upgrade Program

	Cluster information verification:

			Cluster Name: fss7310

			Cluster ID Number: 61886

			Systems: fss7310_01 fss7310_02 fss7310_03 fss7310_04

	Would you like to perform rolling upgrade on the cluster? [y, n, q] (y)

	Rolling upgrade phase 1 upgrades all VRTS product packages except non-kernel packages.
	Rolling upgrade phase 2 upgrades all non-kernel packages including: VRTSvcs VRTScavf VRTSvcsag VRTSvcsea VRTSvbs VRTSnas

		Checking communication on fss7310_02 .................................................................................... Done
		Checking rolling upgrade prerequisites on fss7310_02 .................................................................... Done
		Checking communication on fss7310_03 .................................................................................... Done
		Checking rolling upgrade prerequisites on fss7310_03 .................................................................... Done
		Checking communication on fss7310_04 .................................................................................... Done
		Checking rolling upgrade prerequisites on fss7310_04 .................................................................... Done

		Checking the product compatibility of the nodes in the cluster .......................................................... Done

	Rolling upgrade phase 1 is performed on the system(s) fss7310_03. It is recommended to perform rolling upgrade phase 1 on the
	remaining system(s) fss7310_01 fss7310_02 fss7310_04.

	Would you like to perform rolling upgrade phase 1 on the recommended system(s)? [y, n, q] (y) n

	Do you want to quit without phase 1 performed on all systems? [y, n, q] (n) n

	Enter the system names separated by spaces on which you want to perform rolling upgrade: [q,?] fss7310_02
	
5. If file systems are online during the upgrade, make sure that recovery has finished before starting the
   upgrade of the next node.
   
	To check the recovery progress, run the command below:
		# vxtask list
	
	Recovery will be triggered ~3-5 minutes after the node has joined the cluster.

6. Before starting the upgrade, make sure that none of the services are in the "FAILED/FAULTED/W_ONLINE" state.
	
7. A fresh installation can also be performed using this patch. For more details, refer to
		https://origin-www.veritas.com/content/support/en_US/doc/130196629-130196633-1
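Steps 5 and 6 above can be scripted. The sketch below is a generic helper, not part of the Access installer: it polls a task-listing command until the command prints nothing, which (per step 5) is how `vxtask list` behaves once recovery tasks have drained.

```shell
#!/bin/sh
# wait_for_recovery POLL_INTERVAL CMD [ARGS...]
# Loops until CMD prints no output, i.e. until no active tasks remain.
# Assumption from step 5 above: `vxtask list` lists active recovery tasks
# and prints nothing once recovery is finished.
wait_for_recovery() {
    interval="$1"; shift
    while [ -n "$("$@" 2>/dev/null)" ]; do
        sleep "$interval"
    done
}

# Between nodes of the rolling upgrade, one would run:
#   wait_for_recovery 30 vxtask list
```

This only automates the waiting; the service-state check in step 6 still has to be done before starting the next node.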


SUMMARY OF FIXED ISSUES:
-----------------------------------------
Patch ID: 7.3.1.300

IA-13041		storage fs list fs_name does not display list of pools for mirrored fs
IA-12966		RESTgroup failed to online during fresh install
IA-12932		Filesystem corruption - every other attribute inode is corrupted because of encryption
IA-12918		Password is getting written to logs
IA-12424		Debuginfo hangs on collection of files in /sys
IA-12336		VVR iptable rules are not reboot persistent causing "iptable flush" again and again
IA-12280		Storage fs list detail displays incorrect value of encrypt attribute
IA-12228		Fix dependency issues for TFS


Patch ID: 7.3.1.200

IA-12090		Mirrors information not getting displayed in "storage fs list <fs_name>" for mirrored-stripe fs
IA-12046		storage fs checkmirror command takes a long time when a node is down
IA-12029		"storage fs list" is not listing file systems
IA-12020		Samba ports not opened
IA-11983		VxFS dedup of zero files not working
IA-11985		"tier listfiles" does not list filenames with spaces
IA-11976		Creation of Erasure coded volumes fails with 7.3.1 RU1
IA-11975		'storage fs list' displays status offline for fs_name more than 11 characters
IA-11968		Replication enable fails due to 'fs list <fsname>' error
IA-11869		command log feature completion
IA-11853		largefs couldn't be grown for large number of disks.
IA-11792		Unable to create FS by giving disks as input


Patch ID: 7.3.1.001

IA-9843			"storage fs create" taking lot of time to create file systems.
IA-9839			"storage fs list" taking lot of time to list all the file systems.
IA-9838			vxprint/vxdisk commands running slowly
IA-11243		"Storage fs checkmirror"  taking longer in large environments.
IA-10216		GUI discovery taking longer affecting other system operations
IA-10973		CLISH commands hang when private NIC fails
IA-10942		Linux network OS tunables not persistent across reboots 
IA-11338		File system creation failing if the number of Volume objects getting created are very high.
IA-9840			Cluster reboot all leaving FSS cluster in inconsistent state
IA-11405		Some of the Plexes in the volumes may remain in IOFAIL state after reboot.
IA-10946		NIC failure event was not recorded in the event monitoring.
IA-11237		Inconsistent event monitoring in case NODE offline/online events.
IA-10375		Unable to online IP address on newly added node, if any filesystem has quota set on it
IA-11058		Recursive empty directories created in /shared/knfsv4 after a node reboots multiple times
IA-11034		striped-mirrored volumes are created with DCO by default
IA-11051		User not be able to set WORM retention.
IA-11072		Volume recoveries started after cluster stop operations.
IA-11307		User not able to destroy FS in an isolated pool.
IA-10379		sosreport is not collected in evidences
IA-11402		Display events related to disk/plex similar to GUI in clish also.
IA-9847			vxddladm addjbod was leading to random devices having udid_mismatch
IA-11502		Fix Corruption issue for Erasure coded volume after cluster restart


DETAILS OF INCIDENTS FIXED BY THE PATCH
-----------------------------------------

Patch ID: 7.3.1.300

* TRACKING ID: IA-13041

ONE_LINE_ABSTRACT:
storage fs list fs_name does not display list of pools for mirrored fs

SYMPTOM:
storage fs list fs_name does not display list of pools for mirrored fs

DESCRIPTION:
The 'list of pools' field in the output of the "storage fs list <fs_name>" command displays nothing for a mirrored CFS.

RESOLUTION:
Code changes done to resolve this

* TRACKING ID: IA-12966

ONE_LINE_ABSTRACT: 
RESTgroup VCS group faulted after fresh Access installation.

SYMPTOM: 
RESTgroup VCS group faulted after fresh Access installation.

DESCRIPTION:
Because of a race condition, the RESTgroup configuration file was getting updated twice. This caused duplicate entries in the RESTgroup configuration, resulting in the RESTgroup going into the FAULTED state.

RESOLUTION:
Changes done to fix the race and avoid duplicate entries of RESTgroup configuration in the config file.

* TRACKING ID: IA-12932

ONE_LINE_ABSTRACT:
In a scenario where volume encryption at rest is enabled, data corruption may occur if the file system size exceeds 1TB.

SYMPTOM:
In a scenario where volume data encryption at rest is enabled, data corruption may occur if the file system size exceeds 1TB and the data is located in a file extent which has an extent size bigger than 256KB.

DESCRIPTION:
In a scenario where data encryption at rest is enabled, data corruption may occur when both of the following conditions are satisfied:
- File system size is over 1TB
- The data is located in a file extent which has an extent size bigger than 256KB
This issue occurs due to a bug which causes an integer overflow for the offset.

RESOLUTION:
As a part of this fix, appropriate code changes have been made to improve data encryption behavior such that the data corruption does not occur.
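The description above points to an integer-overflow class of bug in an offset computation. As a generic, hypothetical illustration (the actual VxFS field and its width are not documented here), the arithmetic below shows how a byte offset past 4 GiB silently wraps when truncated to 32 bits, so reads and writes land at the wrong location:

```shell
#!/bin/sh
# Hypothetical illustration: a file offset beyond 4 GiB does not fit in 32 bits.
off=$(( (1 << 40) + 786432 ))     # offset ~1 TiB into the file, plus 768 KiB
trunc=$(( off & 0xFFFFFFFF ))     # what a 32-bit variable would retain
echo "64-bit offset: $off"
echo "truncated:     $trunc"      # wraps to 786432 -- far below the real offset
```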

* TRACKING ID: IA-12918

ONE_LINE_ABSTRACT:
Password getting logged as plaintext

SYMPTOM:
Password related operations from GUI results in password getting written in multiple logs

DESCRIPTION:
While performing password-related operations, the commands were being logged verbatim. This resulted in the password being written to the logs.

RESOLUTION:
Added proper ways to hide the password

* TRACKING ID: IA-12424

ONE_LINE_ABSTRACT:
Debuginfo hangs on collection of files in /sys

SYMPTOM:
Debuginfo hangs on collection of files in /sys

DESCRIPTION:
While collecting files in /sys, a hang is observed in the debuginfo command.

RESOLUTION:
Code changes done to skip collecting files under /sys.

* TRACKING ID: IA-12336

ONE_LINE_ABSTRACT:
Continuous replication related iptable rules were not reboot persistent. 

SYMPTOM: 
On reboot of a single node or of all the nodes of a cluster (primary or secondary), the iptable rules related to continuous replication get removed.

DESCRIPTION:
Because the iptable rules related to continuous replication are not reboot persistent, there is no proper communication between the primary and the secondary when the system comes up after a reboot. This leads to the replication going into a disconnected state.

RESOLUTION:
The continuous replication iptable rules are now set correctly to make them reboot persistent.

* TRACKING ID: IA-12280

ONE_LINE_ABSTRACT:
Storage fs list detail displays incorrect value of encrypt attribute

SYMPTOM:
Storage fs list detail displays incorrect value of encrypt attribute

DESCRIPTION:
Storage fs list detail displays an incorrect value for the encrypt attribute. For a CFS, even if the FS is encrypted, the storage fs list command displays the FS as not encrypted.

RESOLUTION:
Code changes done to display correct value of encrypt attribute

* TRACKING ID: IA-12228

ONE_LINE_ABSTRACT:
The TFS dependencies have not been set correctly with the underlying volume resources.

SYMPTOM:
On reboot of a single node, TFS resources get into FAULTED state.

DESCRIPTION:
As there is no dependency of the TFS resources on the associated volumes, there is no proper sequencing of the resources coming online when the system comes up after a reboot. This leads to the TFS resources going into the FAULTED state.

RESOLUTION:
The dependencies are now set correctly between the volumes and the TFS resources.

Patch ID: 7.3.1.200

* TRACKING ID: IA-12090

ONE_LINE_ABSTRACT: 'fs list <fsname>' displays incorrect number of mirrors

SYMPTOM : 'fs list <fsname>' displays incorrect number of mirrors

DESCRIPTION :
'fs list <fsname>' displays an incorrect number of mirrors for mirrored-stripe filesystems.

RESOLUTION:
Code changes done to resolve this issue.

* TRACKING ID: IA-12046

ONE_LINE_ABSTRACT: 'storage fs checkmirror' takes a long time in environments where one of the nodes is down.

SYMPTOM : 'storage fs checkmirror' takes a long time in environments where one of the nodes is down.

DESCRIPTION: 
The 'storage fs checkmirror' command uses vxlist to get the data of the objects present on the system. The vxlist command takes a lot of time to complete when one node of the cluster is down, increasing the overall time of the checkmirror command.

RESOLUTION:
Code is optimized to run 'storage fs checkmirror' using single vxprint command.


* TRACKING ID: IA-12029

ONE_LINE_ABSTRACT: "storage fs list" command shows no output.

SYMPTOM : "storage fs list" command is showing blank output even when fs is present.

DESCRIPTION :
This issue is caused by the SFMH package not being upgraded along with the new VxVM build.
As a result, VIOM commands such as vxlist show no output, leading to blank output from commands in CLISH.

RESOLUTION:
The SFMH package has been refreshed and included in the RTI; CLISH commands now work correctly.


* TRACKING ID: IA-12020

ONE_LINE_ABSTRACT: Samba ports not opened

SYMPTOM :  Port 445 was not opened by smbd process, due to which SMB shares could not be used by client.

DESCRIPTION :
Due to a missing separator in the Samba configuration file, not all of the required ports were opened by the smbd process.

RESOLUTION:
Added the required comma separator to the port list used by the smbd process so that it opens the required ports.


* TRACKING ID: IA-11983

ONE_LINE_ABSTRACT: VxFS deduplication of zero filled files does not work.

SYMPTOM :  Data files which have only zeroes as content were not being deduplicated by VxFS.

DESCRIPTION :
VxFS deduplication has an optimization to handle sparse files: the holes in a sparse file are not processed
by the deduplication engine because they consume no disk space. Files completely filled with zeros were erroneously treated as holes
and skipped, so those blocks were not deduplicated.

RESOLUTION:
Corrected the VxFS dedup behavior to handle zero filled files and process the zero filled blocks.
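The distinction the fix relies on can be seen with standard tools: a hole in a sparse file occupies no blocks, while a file of explicitly written zeros does, so the latter is real data that a block-based dedup engine should process. A minimal generic Linux demonstration (not the VxFS dedup code itself):

```shell
#!/bin/sh
tmpdir=$(mktemp -d)
truncate -s 1M "$tmpdir/sparse"                               # a hole: no data blocks
dd if=/dev/zero of="$tmpdir/zeros" bs=1M count=1 2>/dev/null  # real zero-filled blocks
du -k "$tmpdir/sparse" "$tmpdir/zeros"                        # sparse ~0 KB, zeros ~1024 KB
rm -rf "$tmpdir"
```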


* TRACKING ID: IA-11985

ONE_LINE_ABSTRACT: "tier listfiles" does not list filenames with spaces

SYMPTOM : "tier listfiles" skips the filenames with spaces in them 

DESCRIPTION : 
listfiles treated a filename containing a space as two separate entries, resulting in those entries being skipped.

RESOLUTION:
Code changes to allow multiple spaces in filenames
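The underlying pitfall is generic shell word splitting. The sketch below (illustrative only, not the actual Access code) contrasts the broken iteration with a line-oriented one that preserves embedded spaces:

```shell
#!/bin/sh
list='plain.txt
with space.txt'

# Broken: unquoted expansion splits "with space.txt" into two words
broken=0
for f in $list; do broken=$((broken + 1)); done

# Fixed: read one line at a time so embedded spaces survive
fixed=0
while IFS= read -r f; do fixed=$((fixed + 1)); done <<EOF
$list
EOF

echo "broken=$broken fixed=$fixed"   # broken=3 fixed=2
```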


* TRACKING ID: IA-11976

ONE_LINE_ABSTRACT: Creation of erasure coded FileSystem fails with 7.3.1 RU1 release.

SYMPTOM: Creation of erasure coded FileSystem fails with 7.3.1 RU1 release.

DESCRIPTION:
With the recent changes done in 7.3.1 RU1, a regression was introduced that added an unsupported extra parameter (encryption) to the file system creation command.
Because of this extra parameter, erasure coded file system creation failed with 7.3.1 RU1.

RESOLUTION:
Code changes have been done to remove the extra parameter (encryption) during file system creation.


* TRACKING ID: IA-11975

ONE_LINE_ABSTRACT: 'storage fs list' displays status offline for fs_name more than 11 characters

SYMPTOM : 'storage fs list' displays status offline for fs_name more than 11 characters

DESCRIPTION :
If fs_name is longer than 11 characters, the status is displayed as offline in the output of the 'storage fs list' command even if the filesystem is online.

RESOLUTION:
Code changes done to resolve this issue.


* TRACKING ID: IA-11968

ONE_LINE_ABSTRACT: Replication enable fails due to 'fs list <fsname>' error

SYMPTOM : Replication enable fails

DESCRIPTION : 
This is a regression introduced in RU1. If a filesystem is being used for continuous replication, its size is returned as zero, which causes the 'replication enable' command to fail.

RESOLUTION:
Code changes to return correct filesystem size


* TRACKING ID: IA-11869  

ONE_LINE_ABSTRACT: Commands are not logged in command.log file

SYMPTOM :  Some commands executed on CLISH are not logged in command.log file

DESCRIPTION :
Commands executed in CLISH are logged, along with their return status, in the command.log file. Some commands were not logging these messages.

RESOLUTION:
Log messages are now written to the command.log file after command execution.


* TRACKING ID: IA-11853

ONE_LINE_ABSTRACT: A largefs could not be grown when the number of disks was large.

SYMPTOM : File system grow from Access fails with error "Failed to grow file system"

DESCRIPTION : 
The disk list was getting truncated, resulting in fs growto failure.

RESOLUTION:
Code changes to prevent disk list truncation


* TRACKING ID: IA-11792

ONE_LINE_ABSTRACT: User is unable to create FS by giving disks as input.

SYMPTOM : User will be unable to create FS when using disks in "storage fs create" command in CLISH.

DESCRIPTION :
The CLISH command "storage fs create", if given disks as input, fails with an error that no such disks are present.

RESOLUTION:
This was a regression in RU1 and has been fixed now.



Patch ID: 7.3.1.001
	

* TRACKING ID: IA-9843

ONE_LINE_ABSTRACT: "storage fs create" taking lot of time to create file systems.

SYMPTOM : "storage fs create" taking lot of time to create file systems.

DESCRIPTION :
The time taken by the “storage fs create” command was increasing with the number of file systems
because of redundant code.

RESOLUTION:
Optimized the "storage fs create" operation to reduce time taken by storage fs create.


* TRACKING ID: IA-9839

ONE_LINE_ABSTRACT: "storage fs list" taking lot of time to list all the file systems.

SYMPTOM: "storage fs list" taking lot of time to list the file systems.

DESCRIPTION: 
There was a lot of redundant code invoking many back-end commands to fetch the data.
This was causing "storage fs list" to take a long time.

RESOLUTION:
Code optimized to make "storage fs list" command to run faster.

* TRACKING ID: IA-9838

ONE_LINE_ABSTRACT : vxprint/vxdisk commands running slowly

SYMPTOM : vxprint/vxdisk commands running slowly

DESCRIPTION : 
These are internal commands which were taking longer to run as they were fetching unnecessary records.

RESOLUTION:
Optimized the commands to fetch required records only.

* TRACKING ID: IA-11243

SYMPTOM : "Storage fs checkmirror"  taking longer in large environments.

DESCRIPTION:
There was a lot of redundant code invoking many back-end commands to fetch the data.
This was causing "storage fs checkmirror" to take a long time.

RESOLUTION:
Code is optimized to run "storage fs checkmirror" faster.

* TRACKING ID: IA-10216

ONE_LINE_ABSTRACT : GUI discovery taking longer affecting other system operations

SYMPTOM : GUI discovery taking longer affecting other system operations

DESCRIPTION : 
GUI operations were running for a very long time, causing other CLISH commands to run slowly.

RESOLUTION :
Improved the GUI discovery performance and optimizations done to reduce time taken by GUI operations.


* TRACKING ID: IA-10973

ONE_LINE_ABSTRACT : CLISH commands hang when private NIC fails

SYMPTOM : CLISH commands hang when private NIC fails

DESCRIPTION :
In CLISH commands, the connectivity of the nodes was checked using the status of the nodes in the cluster. If the private NIC on which the IP address is plumbed is down,
communication between the nodes is lost, leading to the commands hanging.

RESOLUTION:
Changed the logic to check the status of the private NIC rather than the node status.
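On Linux, the link state of a NIC can be read directly from sysfs rather than inferred from cluster node status. A generic sketch (the interface name is an example, and this is not the Access-internal check):

```shell
#!/bin/sh
# Report the operational state of an interface straight from the kernel.
nic="${1:-eth1}"                               # example interface name
state=$(cat "/sys/class/net/$nic/operstate" 2>/dev/null || echo unknown)
if [ "$state" = "up" ]; then
    echo "$nic: link up"
else
    echo "$nic: link down or unknown ($state)"
fi
```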


* TRACKING ID: IA-10942

ONE_LINE_ABSTRACT : Linux network OS tunables not persistent across reboots 

SYMPTOM : Linux network OS tunables not persistent across reboots

DESCRIPTION :
The network tunables on the cluster need to be changed for better performance, but they were not persistent across reboots.
The Access init scripts were resetting the tunables to their default state.

RESOLUTION:
Modified the code so that the Access init scripts set the tunables to the values recommended for better performance in an FSS environment.


* TRACKING ID: IA-11338

ONE_LINE_ABSTRACT : File system creation failing if the number of Volume objects getting created are very high.

SYMPTOM : File System creation failed with “memory allocation” error

DESCRIPTION :
If the number of volume objects created in the Access environment is very high, an internal memory limit may be reached, leading to a memory allocation failure for the file system being created.

RESOLUTION :
Modified the internal memory limit to make sure that file system creation does not fail with a memory allocation failure.



* TRACKING ID: IA-9840

ONE_LINE_ABSTRACT : Cluster reboot all leaving FSS cluster in inconsistent state

SYMPTOM : Cluster reboot all leaving cluster in inconsistent state

DESCRIPTION :
When all the nodes in the environment are rebooted, cluster services were not coming online in the proper order
because of issues with the startup scripts and the service group dependencies, leaving the cluster in an inconsistent state.
Many of the services would end up in the W_ONLINE/FAILED/FAULTED state after "cluster reboot all".

RESOLUTION :
Fixed the service group dependencies and startup scripts to bring cluster services online in the proper order.

* TRACKING ID: IA-11405

ONE_LINE_ABSTRACT : Some of the Plexes in the volumes may remain in IOFAIL state after reboot.

SYMPTOM : Some of the Plexes in the volumes may remain in IOFAIL state after reboot.

DESCRIPTION:
When one of the nodes in the cluster is rebooted in an FSS environment, plexes need to be synced up when the node comes back up.
Because of a bug in recovery, some of the plexes would remain in the IOFAIL state.

RESOLUTION :
Fixed the issue by triggering recovery for the failed plexes correctly.


* TRACKING ID: IA-10946

ONE_LINE_ABSTRACT : NIC failure event was not recorded in the event monitoring.

SYMPTOM : NIC failure event was not recorded in the event monitoring.

DESCRIPTION : The NIC failure event was not displayed in GUI as well as CLISH.

RESOLUTION:
Code changes were done to display the NIC failure event both in GUI as well as CLISH.

* TRACKING ID: IA-11237
ONE_LINE_ABSTRACT : Inconsistent event monitoring in case NODE offline/online events.

SYMPTOM : NODE offline/online events are not displayed in CLISH but shown in GUI.

DESCRIPTION:
Event reporting was missing from the CLISH event monitoring framework.

RESOLUTION:
Code modified to make event monitoring framework consistent across GUI and CLISH

* TRACKING ID: IA-10375

ONE_LINE_ABSTRACT : Unable to online IP address on newly added node, if any filesystem has quota set on it

SYMPTOM:
Unable to online IP address on newly added node, if any filesystem has quota set on it

DESCRIPTION :
IP address was not coming online on the newly added node if any filesystem had user and group quota set before adding the node. 

RESOLUTION:
Code changes done to update VCS configuration file while adding the new node

* TRACKING ID: IA-11329

ONE_LINE_ABSTRACT : 
Add node failing if the existing node has VLAN and bond configured.

SYMPTOM:
Add node may fail if the existing cluster has bond and VLAN configured.

DESCRIPTION:
During the addnode operation, networking was not getting configured correctly. Because of this, addnode may fail or
networking might not be configured correctly on the newly added node.

RESOLUTION:
Fix done to perform network configuration correctly during addnode.

* TRACKING ID: IA-11058 

ONE_LINE_ABSTRACT :Recursive empty directories created in /shared/knfsv4 after a node reboots multiple times

SYMPTOM: 
Recursive empty directories created in /shared/knfsv4 after a node reboots multiple times

DESCRIPTION: 
Directories were being force copied without checking whether the destination already existed.
If the destination directory already exists while force copying (cp -rf src_dir dest_dir), the entire src_dir is
copied inside dest_dir, i.e. dest_dir then contains its original contents as well as src_dir and all its subdirectories.
This results in a nested subdirectory structure.

RESOLUTION:
Modified the code to check whether the destination directory exists before copying.
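The pitfall and the fix can be reproduced with plain cp (a sketch only; the directory names are illustrative):

```shell
#!/bin/sh
work=$(mktemp -d)
mkdir -p "$work/src/sub"

# Pitfall: when the destination already exists, cp copies src INSIDE it
# instead of replacing it, producing a nested copy.
cp -r "$work/src" "$work/dst"        # dst created as a copy of src
cp -r "$work/src" "$work/dst"        # dst exists -> creates dst/src
test -d "$work/dst/src" && echo "nested copy created"

# Fix, as in the resolution above: copy only if the destination is absent.
safe_copy() { [ -e "$2" ] || cp -r "$1" "$2"; }
safe_copy "$work/src" "$work/dst2"
safe_copy "$work/src" "$work/dst2"   # skipped: dst2 already exists
test -d "$work/dst2/src" || echo "no nesting"

rm -rf "$work"
```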


* TRACKING ID: IA-11034

ONE_LINE_ABSTRACT: striped-mirrored volumes are created with DCO by default

SYMPTOM: striped-mirrored volumes are created with DCO by default

DESCRIPTION: When creating an FS with mirrored configurations, volumes are created with a DCO and the detach map activated.

RESOLUTION: Volumes are now created with logtype=none so that no DCO is created.

* TRACKING ID: IA-11051

ONE_LINE_ABSTRACT: User is not able to set WORM retention.

SYMPTOM: User is not able to set WORM retention.

DESCRIPTION: When setting WORM retention for a particular directory via CLISH, the operation fails with a stack trace.

RESOLUTION: An undefined variable caused the failure; it has been fixed so that WORM retention can be set.

* TRACKING ID: IA-11072

ONE_LINE_ABSTRACT: Volume recoveries started after cluster stop operations.

SYMPTOM: Volume recoveries started after cluster stop operations.

DESCRIPTION: In cases where kernel packages must be updated, a clean way is needed to bring the cluster to a stop and perform the maintenance activity.

RESOLUTION: A command has been added to CLISH that can stop either the entire cluster or just a node: “cluster stop nodename|all”.


* TRACKING ID: IA-11307

ONE_LINE_ABSTRACT: User not able to destroy FS in an isolated pool.

SYMPTOM: User not able to destroy FS in an isolated pool.

DESCRIPTION: An unknown error is displayed when trying to destroy an FS in an isolated pool.

RESOLUTION: Code has been fixed to allow destroying an FS in an isolated pool.

* TRACKING ID: IA-10379

ONE_LINE_ABSTRACT : sosreport is not collected in evidences

SYMPTOM: sosreport is not collected in evidences

DESCRIPTION: When collecting debuginfo, the sosreport was not getting collected as part of the evidence.

RESOLUTION: Fixed the code to collect the sosreport.

* TRACKING ID: IA-11402

ONE_LINE_ABSTRACT : Display events related to disk/plex similar to GUI in clish also.

SYMPTOM:
The events for Disk/Plex were seen only in GUI and not in clish.

DESCRIPTION:
CLISH did not have events related to disk offline/online and plex failure, while similar events could be seen in the GUI, which was inconsistent behaviour.

RESOLUTION:
Code changes done to display events related to Disk/Plex similar to GUI in Clish also.


* TRACKING ID: IA-9847

ONE_LINE_ABSTRACT : vxddladm addjbod was leading to random devices  having udid_mismatch

SYMPTOM:
After executing vxddladm addjbod command, random devices were having false udid_mismatch.

DESCRIPTION:
Because of new changes added to support the option “localdisks=yes”, a garbage value was getting added to the UDID of the device. This led to the disk having inconsistent on-disk and ASL UDIDs, resulting in the udid_mismatch flag.

RESOLUTION:
Code changes have been made to avoid addition of garbage value to UDID.

* TRACKING ID: IA-11502	

ONE_LINE_ABSTRACT : Fix Corruption issue for Erasure coded volume after cluster restart

SYMPTOM: Data on Erasure Coded volume may get corrupted after restarting the cluster.

DESCRIPTION:
After a cluster restart, an invalid log might get replayed during the recovery operation, resulting in
data corruption.

RESOLUTION:
Code changes done to fix the recovery operation to avoid corruption.

KNOWN ISSUES
-----------------------------------------
* TRACKING ID: IA-12125
SYMPTOM : Intermittently not able to mount an NFS share on the client when using the NFS GANESHA server


* TRACKING ID: IA-11427
SYMPTOM : GUI is not displaying any data after upgrade operation with error "License is not installed"

WORKAROUND:
To resolve this issue, execute the following command from the node where the ManagementConsole service group is online:
/opt/VRTSnas/pysnas/bin/isaconfig


