The latest patch(es): vcs-sles12_x86_64-Patch-7.3.1.1200
Release type:       Patch
Release date:       2019-11-05
OS update support:  None
Technote:           None
Documentation:      None
Popularity:         763 views / downloads
Download size:      71.68 MB
Checksum:           305943239
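To check the downloaded archive against the published checksum, a minimal shell sketch can be used. This assumes the published value is a POSIX `cksum` CRC, which this page does not state; the helper name `verify_cksum` is illustrative only.

```shell
# Hypothetical verification sketch; the checksum algorithm (POSIX cksum)
# is an assumption, not confirmed by this download page.
verify_cksum() {
    # $1 = file path, $2 = expected CRC value
    actual=$(cksum "$1" | awk '{print $1}')
    if [ "$actual" = "$2" ]; then
        echo "checksum OK"
    else
        echo "checksum MISMATCH: got $actual, expected $2"
    fi
}

# Example invocation against the patch archive named in the README:
# verify_cksum /tmp/vcs-sles12_x86_64-Patch-7.3.1.1100.tar.gz 305943239
```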
Applies to:
* InfoScale Availability 7.3.1 on SLES12 x86-64
* InfoScale Enterprise 7.3.1 on SLES12 x86-64
* InfoScale Storage 7.3.1 on SLES12 x86-64
3933946, 3964064, 3966701, 3970171, 3981993
VRTSvcs-7.3.1.1100-SLES12
* * * READ ME * * *
* * * Veritas Cluster Server 7.3.1 * * *
* * * Patch 1100 * * *
Patch Date: 2019-11-26

This document provides the following information:

* PATCH NAME
* OPERATING SYSTEMS SUPPORTED BY THE PATCH
* PACKAGES AFFECTED BY THE PATCH
* BASE PRODUCT VERSIONS FOR THE PATCH
* SUMMARY OF INCIDENTS FIXED BY THE PATCH
* DETAILS OF INCIDENTS FIXED BY THE PATCH
* INSTALLATION PRE-REQUISITES
* INSTALLING THE PATCH
* REMOVING THE PATCH

PATCH NAME
----------
Veritas Cluster Server 7.3.1 Patch 1100

OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
SLES12 x86-64

PACKAGES AFFECTED BY THE PATCH
------------------------------
VRTSvcs

BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
* InfoScale Availability 7.3.1
* InfoScale Enterprise 7.3.1
* InfoScale Storage 7.3.1

SUMMARY OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
Patch ID: 7.3.1.1100
* 3970171 (3970170) The Cluster Server component creates some required files in the /tmp and /var/tmp directories.
* 3981993 (3981992) A potentially critical security vulnerability in VCS needs to be addressed.

Patch ID: 7.3.1.003
* 3966701 (3931460) The Cluster Server component creates some required files in the /tmp and /var/tmp directories.

Patch ID: 7.3.1.002
* 3964064 (3951561) A service group fails to come online on a peer node when any of the core VCS modules fails on the node on which it is currently online.

Patch ID: 7.3.1.001
* 3933946 (3866087) The local node goes to ADMIN_WAIT when the peer node leaves while the local node is in REMOTE_BUILD.

DETAILS OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
This patch fixes the following incidents:

Patch ID: 7.3.1.1100

* 3970171 (Tracking ID: 3970170)

SYMPTOM: The Cluster Server component creates some required files in the /tmp and /var/tmp directories.

DESCRIPTION: The Cluster Server component creates some required files in the /tmp and /var/tmp directories. Non-root users have access to these folders, and they may accidentally modify, move, or delete these files. Such actions may interfere with the normal functioning of Cluster Server.

RESOLUTION: This hotfix addresses the issue by moving the required Cluster Server files to secure locations.

* 3981993 (Tracking ID: 3981992)

SYMPTOM: A potentially critical security vulnerability in VCS needs to be addressed.

DESCRIPTION: A potentially critical security vulnerability in VCS needs to be addressed.

RESOLUTION: This hotfix addresses the security vulnerability. For details, refer to the security advisory at:
https://www.veritas.com/content/support/en_US/security/VTS19-003.html

Patch ID: 7.3.1.003

* 3966701 (Tracking ID: 3931460)

SYMPTOM: The Cluster Server component creates some required files in the /tmp and /var/tmp directories.

DESCRIPTION: The Cluster Server component creates some required files in the /tmp and /var/tmp directories. Non-root users have access to these folders, and they may accidentally modify, move, or delete these files. Such actions may interfere with the normal functioning of Cluster Server.

RESOLUTION: This hotfix addresses the issue by moving the required Cluster Server files to secure locations.

Patch ID: 7.3.1.002

* 3964064 (Tracking ID: 3951561)

SYMPTOM: A service group fails to come online on a peer node when any of the core VCS modules fails on the node on which it is currently online.

DESCRIPTION: When VxFEN, GAB, or LLT is stopped on a peer node, HAD clears all the auto-disabled service groups and also the timer (DelayAutoStart attribute) that is set to bring an application online. This causes the application to remain offline on the peer node.

RESOLUTION: This hotfix updates HAD to check the number of nodes that participate in the cluster when the AutoDisabled attribute of a service group is cleared. If one of the nodes has not yet joined the cluster, HAD continues to wait for the duration mentioned in FORCE_AUTOSTART_TIMEOUT. The application comes online after this time has elapsed.

Patch ID: 7.3.1.001

* 3933946 (Tracking ID: 3866087)

SYMPTOM: When the peer node leaves the cluster before completing a snapshot, HAD gets stuck in ADMIN_WAIT.

DESCRIPTION: HAD receives the cluster configuration either by a local build or by a snapshot from a peer. If it receives the configuration from a peer, and the peer leaves before it broadcasts "End of Snapshot", HAD gets stuck in ADMIN_WAIT. This state indicates that HAD has received a configuration but cannot determine whether it is good enough to start the cluster.

RESOLUTION: HAD now starts the cluster if the environment variable VCS_NOADMIN_WAIT is exported. This environment variable can be added to the /opt/VRTSvcs/bin/vcsenv file on all the cluster nodes.

INSTALLING THE PATCH
--------------------

Run the installer script to automatically install the patch:
------------------------------------------------------------
Please note that the installation of this P-Patch will cause downtime.

To install the patch, perform the following steps on at least one node in the cluster:
1. Copy the patch vcs-sles12_x86_64-Patch-7.3.1.1100.tar.gz to /tmp.
2. Untar vcs-sles12_x86_64-Patch-7.3.1.1100.tar.gz to /tmp/hf:
   # mkdir /tmp/hf
   # cd /tmp/hf
   # gunzip /tmp/vcs-sles12_x86_64-Patch-7.3.1.1100.tar.gz
   # tar xf /tmp/vcs-sles12_x86_64-Patch-7.3.1.1100.tar
3. Install the hotfix (note that the installation of this P-Patch will cause downtime):
   # pwd
   /tmp/hf
   # ./installVRTSvcs731P1100 [<host1> <host2>...]

You can also install this patch together with the 7.3.1 base release using Install Bundles:
1. Download this patch and extract it to a directory.
2. Change to the Veritas InfoScale 7.3.1 directory and invoke the installer script with the -patch_path option, where -patch_path points to the patch directory:
   # ./installer -patch_path [<path to this patch>] [<host1> <host2>...]

Install the patch manually:
---------------------------
To install the patch, perform the following steps on all nodes in the VCS cluster:
1. Stop VCS on the cluster node.
2. Install the patch.
3. Restart VCS on the node.

Stopping VCS on the cluster node
--------------------------------
To stop VCS on the cluster node:
1. Ensure that the "/opt/VRTSvcs/bin" directory is included in your PATH environment variable so that you can execute all the VCS commands. For more information, refer to the Veritas Cluster Server Installation Guide.
2. Verify that the base version of VRTSvcs is 7.3.1.
3. Persistently freeze all the service groups:
   # haconf -makerw
   # hagrp -freeze [group] -persistent
   # haconf -dump -makero
4. Stop the cluster on all nodes. If the cluster is writable, you may close the configuration before stopping the cluster. On any node, run the following command to stop the cluster:
   # hastop -all -force
5. Verify that the cluster is stopped on all nodes:
   # hasys -state
6. On all nodes, make sure that both the had and hashadow processes are stopped.
7. Stop the VCS CmdServer on all nodes:
   # /opt/VRTSvcs/bin/CmdServer -stop
8. Copy the /etc/VRTSvcs/conf/config/types.cf file to /etc/VRTSvcs/conf/config/types.cf.orig.
9. Copy the /etc/VRTSvcs/conf/config/main.cf file to /etc/VRTSvcs/conf/config/main.cf.orig.

Installing the patch
--------------------
To install the patch:
1. Log in as superuser on the system where you are installing the patch.
2. Uncompress the patch that you downloaded from Veritas.
3. Change the directory to the uncompressed patch location.
4. Install the patch:
   # rpm -Uvh VRTSvcs-7.3.1.1100-SLES12.x86_64.rpm
5. After the installation completes, verify that the patch is installed:
   # rpm -q VRTSvcs
   If the patch is installed properly, the command displays the following output:
   VRTSvcs-7.3.1.1100-SLES12.x86_64

Restarting VCS on the cluster node
----------------------------------
To restart VCS on the cluster node:
1. Verify the configuration:
   # hacf -verify config
2. Start the cluster services on all cluster nodes. First start VCS on one node:
   # hastart
   On all the other nodes, start VCS by issuing the hastart command after the first node's state changes to LOCAL_BUILD or RUNNING.
3. Unfreeze all the service groups:
   # haconf -makerw
   # hagrp -unfreeze [group] -persistent
   # haconf -dump -makero
4. Start the VCS CmdServer on all nodes:
   # /opt/VRTSvcs/bin/CmdServer

REMOVING THE PATCH
------------------
To uninstall the patch, perform the following steps:
1. Stop VCS on the node by following the steps in the section "Stopping VCS on the cluster node".
2. Stop the VCS CmdServer:
   # /opt/VRTSvcs/bin/CmdServer -stop
3. Remove the patch:
   # rpm -e VRTSvcs
4. After the removal completes, verify that the patch has been removed from all the systems in the cluster. On each system, type:
   # rpm -qa | grep VRTSvcs
   The package VRTSvcs should not be displayed, which confirms that the package is removed.
5. Install the previous VCS version rpm.
6. Copy the /etc/VRTSvcs/conf/config/types.cf.orig file to /etc/VRTSvcs/conf/config/types.cf.
7. Copy the /etc/VRTSvcs/conf/config/main.cf.orig file to /etc/VRTSvcs/conf/config/main.cf.
8. Start the cluster services on all cluster nodes. First start VCS on one node:
   # hastart
   On all the other nodes, start VCS by issuing the hastart command after the first node's state changes to LOCAL_BUILD or RUNNING.
9. Unfreeze all the service groups:
   # haconf -makerw
   # hagrp -unfreeze [group] -persistent
   # haconf -dump -makero
10. Start the VCS CmdServer on all nodes:
    # /opt/VRTSvcs/bin/CmdServer

SPECIAL INSTRUCTIONS
--------------------
NONE

OTHERS
------
NONE
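The manual installation and removal procedures above both end by checking the installed package with `rpm -q VRTSvcs`. A small shell helper can classify that output; the expected version strings come from this README, but the function itself (`check_vcs_patch`) is an illustrative sketch, not part of the patch.

```shell
# Illustrative helper: classify the string reported by `rpm -q VRTSvcs`.
# VRTSvcs-7.3.1.1100-SLES12.x86_64 is the expected output per this README.
check_vcs_patch() {
    case "$1" in
        VRTSvcs-7.3.1.1100-*) echo "patch 7.3.1.1100 installed" ;;
        VRTSvcs-7.3.1*)       echo "base 7.3.1 present, patch not installed" ;;
        *)                    echo "unexpected package version: $1" ;;
    esac
}

# On a cluster node you would run (guarded so the sketch is harmless elsewhere):
if command -v rpm >/dev/null 2>&1; then
    check_vcs_patch "$(rpm -q VRTSvcs)"
fi
```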