vcsea-rhel7_x86_64-Patch-7.3.1.1100

 Basic information
Release type: Patch
Release date: 2020-01-06
OS update support: None
Technote: None
Documentation: None
Popularity: 241 viewed    23 downloaded
Download size: 1.85 MB
Checksum: 2104554124
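The checksum above can be used to verify the downloaded archive before
installation. The following is a minimal sketch that assumes the published
value is a POSIX 'cksum' CRC; the checksum format is not stated on this
page, so confirm it before relying on the comparison. The helper name is
invented for the example.

```shell
# Hypothetical helper: compare a file's CRC against the published checksum.
# Assumes the published value is POSIX 'cksum' output (CRC, not MD5/SHA).
verify_cksum() {
    # $1: path to the downloaded file, $2: expected checksum value
    actual=$(cksum "$1" | awk '{print $1}')
    [ "$actual" = "$2" ]
}

# On a real download:
#   verify_cksum /tmp/vcsea-rhel7_x86_64-Patch-7.3.1.1100.tar.gz 2104554124
```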

 Applies to one or more of the following products:
InfoScale Availability 7.3.1 On RHEL7 x86-64
InfoScale Enterprise 7.3.1 On RHEL7 x86-64

 Obsolete patches, incompatibilities, superseded patches, or other requirements:

This patch supersedes the following patches:
Patch name                                      Release date
vcsea-rhel7_x86_64-Patch-7.3.1.100 (obsolete)   2019-01-14

 Fixes the following incidents:
3959872, 3960354, 3960502, 3965684, 3989511

 Patch ID:
VRTSvcsea-7.3.1.1100-RHEL7

Readme file
                          * * * READ ME * * *
     * * * Veritas High Availability Enterprise Agents 7.3.1 * * *
                         * * * Patch 1100 * * *
                         Patch Date: 2020-01-20


This document provides the following information:

   * PATCH NAME
   * OPERATING SYSTEMS SUPPORTED BY THE PATCH
   * PACKAGES AFFECTED BY THE PATCH
   * BASE PRODUCT VERSIONS FOR THE PATCH
   * SUMMARY OF INCIDENTS FIXED BY THE PATCH
   * DETAILS OF INCIDENTS FIXED BY THE PATCH
   * INSTALLATION PRE-REQUISITES
   * INSTALLING THE PATCH
   * REMOVING THE PATCH


PATCH NAME
----------
Veritas High Availability Enterprise Agents 7.3.1 Patch 1100


OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
RHEL7 x86-64
SLES12 x86-64


PACKAGES AFFECTED BY THE PATCH
------------------------------
VRTSvcsea


BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
   * InfoScale Availability 7.3.1
   * InfoScale Enterprise 7.3.1


SUMMARY OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
Patch ID: 7.3.1.1100
* 3965684 (3931460) The Cluster Server component creates some required files in 
the /tmp and /var/tmp directories.
* 3989511 (3989510) The VCS agent for Oracle does not support Oracle 19c databases.
Patch ID: 7.3.1.100
* 3959872 (3959871) The VCS agents for Oracle do not work as expected with the changes in commands and in behavior from Oracle 12.1.0.2 onwards.
* 3960354 (3960353) An ASMInst resource fails to come online when the ocssd.bin process is not running.
* 3960502 (3960499) Support for Oracle 18c
* 3965684 (3931460) The Cluster Server component creates some required files in 
the /tmp and /var/tmp directories.


DETAILS OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
This patch fixes the following incidents:

Patch ID: 7.3.1.1100

* 3965684 (Tracking ID: 3931460)

SYMPTOM:
The Cluster Server component creates some required files in the /tmp 
and /var/tmp directories.

DESCRIPTION:
The Cluster Server component creates some required files in the 
/tmp and /var/tmp directories. Non-root users have access to these folders, 
and they may accidentally modify, move, or delete these files. Such actions 
may interfere with the normal functioning of Cluster Server.

RESOLUTION:
This hotfix addresses the issue by moving the required Cluster 
Server files to secure locations.

* 3989511 (Tracking ID: 3989510)

SYMPTOM:
The VCS agent for Oracle does not support Oracle 19c databases.

DESCRIPTION:
For non-CDB or CDB-only databases, the VCS agent for Oracle does not recognize that an Oracle 19c resource is intentionally offline after a graceful shutdown. This functionality is not supported for PDB-type databases.

RESOLUTION:
The agent is updated to recognize the graceful shutdown of an Oracle 19c resource as an intentional offline for non-CDB or CDB-only databases. For details, refer to the article at: https://www.veritas.com/support/en_US/article.100046803.

Patch ID: 7.3.1.100

* 3959872 (Tracking ID: 3959871)

SYMPTOM:
The VCS agents for Oracle do not work as expected with the changes in commands and in behavior from Oracle 12.1.0.2 onwards.

DESCRIPTION:
From 12.1.0.2 onwards, Oracle does not let you modify ora.* resources by using the crsctl command. Instead, you can only use the srvctl command to modify ora.* resources. (For details, refer to the Oracle Doc ID 2016160.1.)

By default, AUTO_START is set to "restore", which does not fit the requirements for ASMInst monitoring. To manage ASM instances through the ASMInst resource type, the AUTO_START attribute must be set to "never". This lets VCS independently manage ASM resources and avoid conflicts with Oracle settings.

$ crsctl stat res ora.asm -p |grep AUTO_START
AUTO_START=restore

Also, the command to modify the AUTO_START attribute fails and displays the following error:

$ crsctl modify resource "ora.asm" -attr "AUTO_START=never"
CRS-4995: The command 'Modify resource' is invalid in crsctl. Use srvctl for this command.

Instead, Oracle now mandates the use of "srvctl disable asm".

RESOLUTION:
The hotfix updates the agent to support these changes in Oracle 12.1.0.2 and later. As a prerequisite, use the following commands instead of the 'crsctl modify resource "ora.asm" -attr "AUTO_START=never"' command that is mentioned in the "Cluster Server Agent for Oracle Installation and Configuration Guide".

$ srvctl disable asm
$ srvctl status asm -detail
ASM is not running.
ASM is disabled.

The agent enables or disables the ora.asm resource internally with each function of the ASMInst and the ASMDG resource types.

* 3960354 (Tracking ID: 3960353)

SYMPTOM:
An ASMInst resource fails to come online when the ocssd.bin process is not running. The following error is logged: "VCS ERROR V-16-20002-244 ASMInst:asminst:online:Cluster Synchronization Service process is not running"

DESCRIPTION:
The ASMInst agent monitors the ora.asm resource and its related processes. During the online operation of the ASMInst agent, the ora.asm resource is brought online by using one of the following options, as defined in the StartUpOpt attribute:
- SQL startup options: STARTUP, STARTUP_MOUNT, or STARTUP_OPEN
- srvctl startup options: SRVCTLSTART, SRVCTLSTART_MOUNT, or SRVCTLSTART_OPEN
When the Oracle High Availability Services daemon (ohasd) is stopped by using the 'crsctl stop has' command, it stops all the parent resources, including ora.cssd. However, when ohasd is started by using the 'crsctl start has' command, it does not start ora.cssd. For the ASMInst resource to come online, ora.cssd must be started; otherwise, the agent reports the ASMInst resource status as offline.

RESOLUTION:
The ASMInst agent is modified to bring ora.cssd online as part of its online function. The SQL query to start ASM returns an error if ocssd.bin is not running. The only way to start ASM along with ora.cssd (ocssd.bin) is to use the 'srvctl start asm' command. If ora.cssd is not running and an SQL option (STARTUP, STARTUP_MOUNT, or STARTUP_OPEN) is provided as part of StartUpOpt attribute, the modified agent maps the StartUpOpt option to its srvctl alternative. Thus, if ora.cssd is not running, the agent always uses the srvctl options internally to start the process.
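The option mapping described above can be sketched as follows. This is a
hypothetical illustration of the mapping, not the agent's actual code, and
the function name is invented for the example.

```shell
# Hypothetical sketch of the mapping the modified agent applies when
# ora.cssd is not running: each SQL startup option in StartUpOpt is
# translated to its srvctl alternative, and srvctl options pass through.
map_startup_opt() {
    case "$1" in
        STARTUP)       echo "SRVCTLSTART" ;;
        STARTUP_MOUNT) echo "SRVCTLSTART_MOUNT" ;;
        STARTUP_OPEN)  echo "SRVCTLSTART_OPEN" ;;
        *)             echo "$1" ;;  # already a srvctl option; unchanged
    esac
}

map_startup_opt STARTUP_MOUNT   # prints SRVCTLSTART_MOUNT
```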

* 3960502 (Tracking ID: 3960499)

SYMPTOM:
InfoScale does not support Oracle 18c.

DESCRIPTION:
Support for Oracle 18c needed to be added to InfoScale.

RESOLUTION:
This patch provides support for Oracle 18c with InfoScale.

* 3965684 (Tracking ID: 3931460)

SYMPTOM:
The Cluster Server component creates some required files in the /tmp 
and /var/tmp directories.

DESCRIPTION:
The Cluster Server component creates some required files in the 
/tmp and /var/tmp directories. Non-root users have access to these folders, 
and they may accidentally modify, move, or delete these files. Such actions 
may interfere with the normal functioning of Cluster Server.

RESOLUTION:
This hotfix addresses the issue by moving the required Cluster 
Server files to secure locations.



INSTALLING THE PATCH
--------------------
Run the Installer script to automatically install the patch:
-----------------------------------------------------------
Note that the installation of this P-Patch causes downtime.

To install the patch, perform the following steps on at least one node in the cluster:
1. Copy the patch vcsea-rhel7_x86_64-Patch-7.3.1.1100.tar.gz to /tmp
2. Untar vcsea-rhel7_x86_64-Patch-7.3.1.1100.tar.gz to /tmp/hf
    # mkdir /tmp/hf
    # cd /tmp/hf
    # gunzip /tmp/vcsea-rhel7_x86_64-Patch-7.3.1.1100.tar.gz
    # tar xf /tmp/vcsea-rhel7_x86_64-Patch-7.3.1.1100.tar
3. Install the hotfix. (Note that the installation of this P-Patch causes downtime.)
    # cd /tmp/hf
    # ./installVRTSvcsea731P1100 [<host1> <host2>...]

You can also install this patch together with the 7.3.1 base release by using Install Bundles:
1. Download this patch and extract it to a directory.
2. Change to the Veritas InfoScale 7.3.1 directory and invoke the installer script
   with the -patch_path option, where -patch_path points to the patch directory:
    # ./installer -patch_path [<path to this patch>] [<host1> <host2>...]

Install the patch manually:
--------------------------
Perform the following steps on all nodes in the VCS cluster:
1. Stop VCS on the cluster node.
2. Install the patch.
3. Restart VCS on the node.

Stopping VCS on the cluster node
--------------------------------
To stop VCS on the cluster node:
1. Ensure that the "/opt/VRTS/bin" directory is included in your PATH
   environment variable so that you can execute all the VCS commands.
   For more information, refer to the Veritas Cluster Server
   Installation Guide.
2. Verify that the version of VRTSvcs is 7.3.1 for Linux.
3. Freeze all the service groups persistently.
    # haconf -makerw
    # hagrp -freeze [group] -persistent
    # haconf -dump -makero
4. Stop the cluster on all nodes. If the cluster is writable, you may
   close the configuration before stopping the cluster.
    # haconf -dump -makero
   From any node, execute one of the following commands:
    # hastop -all
    or
    # hastop -all -force
   Verify that the cluster is stopped on all nodes by running the
   following command:
    # hasys -state
   On all nodes, make sure that both had and hashadow processes are
   stopped.
5. Log in as the super user into the system where the patch is to be
   installed.
6. Run the preceding steps on all nodes in the VCS cluster.
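Step 3 above freezes service groups one at a time. The loop below is a
hedged sketch for freezing every group in one pass; it assumes that
'hagrp -list' prints one '<group> <system>' pair per line, and the helper
name is invented for the example.

```shell
# Hypothetical helper: extract the unique group names from 'hagrp -list'
# output, which is assumed to print "<group> <system>" pairs, one per line.
list_groups() {
    echo "$1" | awk 'NF { print $1 }' | sort -u
}

# On a live cluster (not run here):
#   haconf -makerw
#   for grp in $(list_groups "$(hagrp -list)"); do
#       hagrp -freeze "$grp" -persistent
#   done
#   haconf -dump -makero
```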

Installing the Patch
--------------------
Perform the following steps:
1. Uncompress the downloaded patch from Veritas and change to the
   directory that contains the uncompressed patch. Then, install the
   VRTSvcsea patch by using the following command:
    # rpm -Uvh VRTSvcsea-7.3.1.1100-RHEL7.x86_64.rpm
2. Run the following command to verify whether the new patch is installed:
    # rpm -q VRTSvcsea
   If the patch is installed properly, the following output is displayed:
    VRTSvcsea-7.3.1.1100-RHEL7.x86_64
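For scripted verification, the check in step 2 can be sketched as follows.
The function name is invented for the example, and the pattern matches the
package string that this patch installs.

```shell
# Hypothetical check of the 'rpm -q VRTSvcsea' output against the
# patch level this readme installs; adjust the pattern for other patches.
is_patched() {
    case "$1" in
        VRTSvcsea-7.3.1.1100-*) return 0 ;;
        *)                      return 1 ;;
    esac
}

# On a patched node:
#   is_patched "$(rpm -q VRTSvcsea)" && echo "patch level OK"
is_patched "VRTSvcsea-7.3.1.1100-RHEL7.x86_64" && echo "patch level OK"
```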

Re-starting VCS on the cluster node
-----------------------------------
1. Start the cluster services on all cluster nodes. Execute the
   following command first on one node:
    # hastart
   On all the other nodes, start VCS by executing the hastart command after
   the first node goes to the LOCAL_BUILD or RUNNING state.
2. Unfreeze all the groups.
     # haconf -makerw
     # hagrp -unfreeze [group] -persistent
     # haconf -dump -makero


REMOVING THE PATCH
------------------
Removing the patch removes the entire VRTSvcsea package from the node.
To go back to a previously installed version of the package, you may
need to re-install the package.
Perform the following steps on all the VCS cluster nodes:

To remove the patch from a cluster node:
---------------------------------------------
1. Freeze all the service groups persistently.
    # haconf -makerw
    # hagrp -freeze [group] -persistent

2. Stop VCS on the node by following the steps provided in the section 
   "Stopping VCS on the cluster node".

3. Remove the patch by using the following command:
    # rpm -e VRTSvcsea

4. Verify that the patch has been removed from the system:
    # rpm -qa | grep VRTSvcsea
   Ensure that the VRTSvcsea package is not displayed. This confirms
   that the package is removed.

5. Install the VRTSvcsea 7.3.1 package from the installation media.

6. To start the cluster services on all cluster 
   nodes, execute the following command first on one node:
    # hastart

   On all the other nodes, start VCS by executing hastart after the 
   first node goes to LOCAL_BUILD or RUNNING state.

7. Unfreeze all the groups.
     # hagrp -unfreeze [group] -persistent
     # haconf -dump -makero


SPECIAL INSTRUCTIONS
--------------------
NONE


OTHERS
------
NONE

