vom-Patch-7.0.0.103
Obsolete
The latest patch(es): vom-Patch-7.0.0.500

 Basic information
Release type: Patch
Release date: 2016-02-02
OS update support: None
Technote: None
Documentation: None
Popularity: 786 viewed
Download size: 485.05 MB
Checksum: 1804278779

 Applies to one or more of the following products:
Operations Manager 7.0.0.0 On AIX
Operations Manager 7.0.0.0 On HP-UX 11i v2 (11.23)
Operations Manager 7.0.0.0 On HP-UX 11i v3 (11.31)
Operations Manager 7.0.0.0 On Linux
Operations Manager 7.0.0.0 On Solaris 10 SPARC
Operations Manager 7.0.0.0 On Solaris 11 SPARC
Operations Manager 7.0.0.0 On Windows x64

 Obsolete patches, incompatibilities, superseded patches, or other requirements:

This patch is obsolete. It is superseded by:        Release date
vom-Patch-7.0.0.500                                 2016-10-24
vom-Patch-7.0.0.400 (obsolete)                      2016-09-08
vom-Patch-7.0.0.300a (obsolete)                     2016-04-11

This patch supersedes the following patches:        Release date
vom-Patch-7.0.0.101 (obsolete)                      2015-11-05

 Fixes the following incidents:
3817304, 3818167, 3818904, 3819686, 3827035, 3827406, 3831391, 3832603, 3835254, 3835530, 3837073, 3842055, 3842074, 3844522, 3844885, 3844890, 3853183, 3854463, 3855612, 3856325, 3856347, 3856828, 3857701, 3864325, 3864583, 3865133

 Patch ID:
None.

Readme file
                          * * * README * * *
               * * * Veritas Operations Manager 7.0 * * *
                         * * * Patch 103 * * *
                         Patch Date: 2016-02-01


This document provides the following information:

   * PATCH NAME
   * OPERATING SYSTEMS SUPPORTED BY THE PATCH
   * PACKAGES AFFECTED BY THE PATCH
   * BASE PRODUCT VERSIONS FOR THE PATCH
   * SUMMARY OF INCIDENTS FIXED BY THE PATCH
   * DETAILS OF INCIDENTS FIXED BY THE PATCH
   * INSTALLATION PRE-REQUISITES
   * INSTALLING THE PATCH
   * REMOVING THE PATCH


PATCH NAME
----------
Veritas Operations Manager 7.0 Patch 103


OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
AIX 6.1 ppc
AIX 7.1 ppc
HP-UX 11i v2 (11.23)
HP-UX 11i v3 (11.31)
RHEL5 i686
RHEL5 x86-64
RHEL6 x86-64
RHEL7 x86-64
SLES10 i586
SLES10 x86-64
SLES11 x86-64
SLES12 x86-64
Solaris 10 SPARC
Solaris 11 SPARC
Solaris 11 X86
Windows Server 2012 R2 X64
Windows 2012 X64
Windows Server 2008 R2 X64
Windows 2008 X64



BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
   * Veritas Operations Manager 7.0.0.0


SUMMARY OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
Patch ID: vom-HF0700103
* 3817304 (3816297) VOM rule execution does not work properly if a rule is scoped to multiple Organizations.
* 3818167 (3818163) Fault counter is not decreased when faults are suppressed on VOM 6.1.
* 3818904 (3790928) VOMGather collects .sfa files, leading to a very large output file.
* 3819686 (3822310) If there is no connectivity from the LDOM control domain to the LDOM guest, SPVU are not visible.
* 3827035 (3826824) VIOM Server does not ignore the last field of the MH version when blocking the addition of a host with a higher version than the MS.
* 3827406 (3856780) Added InfoScale support for UNIX.
* 3831391 (3831390) Login error 'VOM server not reachable' occurs on a VOM CMS HA setup.
* 3832603 (3832611) vssatbin dumps core during VCS discovery cleanup on a Solaris MH.
* 3835254 (3856874) The resource dependency graph does not display properly on the VIOM 7.0 Windows platform; resources are shown without any connecting lines between them.
* 3835530 (3835536) On Solaris 11, the xprtld SMF service sporadically remains in the disabled state after a reboot.
* 3837073 (3819232) The patch provided to fix the POODLE vulnerability reported by Qualys scans did not work.
* 3842055 (3856982) Refresh resource/SG state in VOM as soon as an online/offline resource operation is performed.
* 3842074 (3832538) logrotate failed with permission denied errors under SELinux.
* 3844522 (3856978) Enable Monitor Capacity for non-VxFS and root filesystems.
* 3844885 (3856997) If a resource faults and the fault is then cleared, VOM still reports the resource as faulted.
* 3844890 (3856823) Resource and system state incorrectly show as stale on all nodes if any cluster node is faulted.
* 3853183 (3856838) Resource online/offline operations take time to reflect state in VOM.
* 3854463 (3854465) Display the correct message for the list option of the AT Migration script.
* 3855612 (3855610) The "host is missing critical update" alert email does not include the affected host name.
* 3856325 (3856832) The correct icon should be shown for a system where HAD is faulted.
* 3856347 (3864610) VIOM does not show correct information for SFW patches.
* 3856828 (3844882) Service group status is not updated for a faulted host in the Availability perspective.
* 3857701 (3860347) Cannot add a Volume to a Business Application Group if the Volume name has more than 128 characters.
* 3864325 (3864324) Windows hotfixes shown as not installed.
* 3864583 (3857213) Host Settings view does not show the correct VOM Active state in an HA or HA-DR environment.
* 3865133 (3866437) VRTSsfmh/bin/perl runaway processes leading to server hang.
* 3856112 (3860009) VOM HA/DR: DB resource fails to start on a node where the SFM_services service group failed earlier.


DETAILS OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
This patch fixes the following Symantec incidents:

Patch ID: vom-HF0700103

* 3817304 (Tracking ID: 3816297)

SYMPTOM:
If a rule is created on multiple Organizations, notification of a fault/risk is
received by the recipient only if the faulted object belongs to the first
Organization (when Organization names are sorted alphabetically).

DESCRIPTION:
The parsing logic for the list of Organizations scoped for the rule does not
consider any Organization after the first one (when Organization names are
sorted alphabetically).

RESOLUTION:
Changed the parsing logic to take into account all Organizations scoped for the rule.
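The class of bug in the description can be illustrated as follows (a hedged Python sketch of the technique, not the actual VOM code; the comma delimiter and function name are assumptions):

```python
def parse_scoped_orgs(scope: str):
    """Parse a delimiter-separated list of Organizations scoped for a rule.

    The buggy logic effectively kept only the first entry of the sorted
    list; the fix is to return every scoped Organization.
    """
    orgs = sorted(name.strip() for name in scope.split(",") if name.strip())
    return orgs  # fixed: all Organizations, not just orgs[:1]
```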

* 3818167 (Tracking ID: 3818163)

SYMPTOM:
Fault count on the overview page remains the same even after suppressing the
fault.

DESCRIPTION:
When you suppress a fault in the Faults tab, the fault count in the Overview
tab is not affected; ideally it should decrease.

RESOLUTION:
Modified the database view that was responsible for the fault count issue in
the Overview tab.

* 3818904 (Tracking ID: 3790928)

SYMPTOM:
VOMGather collects .sfa files, leading to a very large output file.

DESCRIPTION:
The uploaded add-on (.sfa) file location was not correct and, in case of an
exception, the file was not removed.

RESOLUTION:
Changed the location of the uploaded add-on (.sfa) on the CMS, and the
directory /var/opt/VRTSsfmcs/tmp/<unique id dir> is now removed when an
exception occurs.

* 3819686 (Tracking ID: 3822310)

SYMPTOM:
SPVU are not visible for an LDOM guest even though the LDOM control domain host
is added in VOM MS.

DESCRIPTION:
If there is no connectivity from the LDOM control domain to the LDOM guest,
SPVU are not visible because the LDOM guest cannot be associated with the LDOM
control domain.

RESOLUTION:
VOM discovery logic assumed that the LDOM guest would always be accessible from
the LDOM control domain. The discovery logic that computes applicable SPVU for
an LDOM guest has been enhanced to handle the case where the LDOM control
domain has no connectivity to the LDOM guest.

* 3827035 (Tracking ID: 3826824)

SYMPTOM:
Adding a Managed Host fails if the last field of its version is greater than
the last field of the Management Server version.

DESCRIPTION:
Adding a Managed Host fails if the last field of its version is greater than
the last field of the Management Server version. For example, adding an MH
with version 7.0.0.100 to an MS with version 7.0.0.0 fails. It should succeed,
because the last field should be ignored when comparing versions.

RESOLUTION:
The last field of the MH version is now ignored when blocking the addition of
a host with a higher version than the MS.
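The comparison described above can be sketched as follows (a minimal Python illustration of the technique, assuming dotted four-field versions; not the actual VOM implementation):

```python
def blocks_add_host(mh_version: str, ms_version: str) -> bool:
    """Return True if the MH version should block adding the host.

    Only the fields before the last one are compared, so an MH at
    7.0.0.100 may still join a 7.0.0.0 Management Server.
    """
    mh = [int(x) for x in mh_version.split(".")]
    ms = [int(x) for x in ms_version.split(".")]
    # Drop the last field (hotfix number) from both before comparing.
    return mh[:-1] > ms[:-1]
```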

* 3827406 (Tracking ID: 3856780)

SYMPTOM:
VOM 7.0 does not have InfoScale support for UNIX; it has to be enhanced.

DESCRIPTION:
VOM 7.0 added InfoScale support for Linux and Windows. Since InfoScale for
UNIX is GA, VOM needs to extend support to UNIX as well for:
1) The what-if analysis report
2) The per-core license report

RESOLUTION:
Changes were made in the above-mentioned reports to show records for
InfoScale for UNIX.

* 3831391 (Tracking ID: 3831390)

SYMPTOM:
Login to the CMS fails with the error 'VOM server not reachable' when VOM CMS
is active on the primary node.

DESCRIPTION:
This may occur if the user has manually executed the 'mh_ctl --rescan'
command on the active node.

RESOLUTION:
The primary node record in the DB is now updated while configuring VOM CMS HA.

This fix works only for a fresh VOM CMS HA configuration. Apply this patch on
the primary node before starting the CMS HA configuration.

If VOM CMS HA is already configured and you are seeing this problem, execute
the steps below to resolve it.

On the active node (Linux CMS):
1) Execute:
      /opt/VRTSsfmh/bin/xdbadm -x "select * from P_HOST_CONFIG_MAPPING" -u
habdbsync -c /var/opt/VRTSsfmcs/conf/

2) Look for the 'object_name' attribute in the above output. Check whether
there are records for both the VIP and the physical name of the primary node.

3) If yes, create a file '/tmp/del_dup_cms_entry.sql' and copy the SQL query
below into it. Replace <VIP-Name> with the actual VIP name, then save and close
the file.
   delete from P_HOST_CONFIG_MAPPING where object_name = '<VIP-Name>'

4) Execute:
   /opt/VRTSsfmh/bin/xdbadm -f /tmp/del_dup_cms_entry.sql -u habdbsync -c
/var/opt/VRTSsfmcs/conf/





On the active node (Windows CMS):
1) Execute:
      "C:\Program Files\Veritas\VRTSsfmh\bin\xdbadm.exe" -x "select * from
P_HOST_CONFIG_MAPPING" -u habdbsync -c C:\ProgramData\Symantec\VRTSsfmcs\conf

2) Look for the 'object_name' attribute in the above output. Check whether
there are records for both the VIP and the physical name of the primary node.

3) If yes, create a file 'C:\del_dup_cms_entry.sql' and copy the SQL query
below into it. Replace <VIP-Name> with the actual VIP name, then save and close
the file.
   delete from P_HOST_CONFIG_MAPPING where object_name = '<VIP-Name>'

4) Execute:
   "C:\Program Files\Veritas\VRTSsfmh\bin\xdbadm.exe" -f
C:\del_dup_cms_entry.sql -u habdbsync -c C:\ProgramData\Symantec\VRTSsfmcs\conf

* 3832603 (Tracking ID: 3832611)

SYMPTOM:
The vssatbin binary dumps core when running the 'vssat deletecred' command.

DESCRIPTION:
When the xprtld scheduler runs the VCS config cleanup script, the vssatbin CLI
invoked by the vssat script dumps core due to an invalid reference.

RESOLUTION:
The AT code was fixed to handle the invalid reference, and the vssatbin CLI no
longer dumps core.

* 3835254 (Tracking ID: 3856874)

SYMPTOM:
If the user creates dependencies between resources and clicks the resource
dependency tab, the dependencies between the resources are not displayed.

DESCRIPTION:
On VIOM 7.0 on the Windows platform, the resource dependency graph does not
display because the underlying atleastN dependency feature introduced in
VIOM 7.0 is not supported on Windows VCS 6.2, and hence the dependencies fail
to display.

RESOLUTION:
Removed support for the atleastN dependency feature on the Windows platform.

* 3835530 (Tracking ID: 3835536)

SYMPTOM:
After a reboot of a Solaris 11 host, the xprtld SMF service sporadically
remains in the disabled state and xprtld is not running.

DESCRIPTION:
After a reboot of a Solaris 11 host, the xprtld SMF service sporadically
remains in the disabled state and xprtld is not running.

RESOLUTION:
On Solaris 11, a retry was added to import the xprtld SMF service after
VRTSsfmh package installation, which ensures the service gets imported and
remains enabled on subsequent reboots.

* 3837073 (Tracking ID: 3819232)

SYMPTOM:
A Qualys scan still reports xprtld as vulnerable to POODLE even after applying
the fix delivered as part of 6.1HF6.

DESCRIPTION:
VOM 6.1HF6 updates OpenSSL to 0.9.8zc for the POODLE fix. However, since
xprtld still supports the SSLv3 protocol, scanners report it as vulnerable to
POODLE.

RESOLUTION:
SSLv3 has now been disabled in 6.1HF7 for this issue.

* 3842055 (Tracking ID: 3856982)

SYMPTOM:
After you perform an online/offline operation on a service group or resource
in VCS, it takes around 15 seconds for the updated state to show up on the UI
in the tree view, the service group dependency graph view, and the resource
dependency graph view.

DESCRIPTION:
When an operation is performed on an SG or a resource, the UI components that
show its data (tree view, resource dependency graph view, SG dependency graph
view) wait until the next refresh cycle before updating its state.

RESOLUTION:
Added an explicit refresh call that refreshes the tree view, resource
dependency view, SG dependency view, and all other UI components showing state
information for the operated SG or resource as soon as the operation finishes
processing. As soon as you perform an operation on an SG/resource, its state is
immediately updated in every UI component where its details are shown.

* 3842074 (Tracking ID: 3832538)

SYMPTOM:
The logrotate command failed with permission denied errors under SELinux.

DESCRIPTION:
The logrotate command that rotates the xprtld logs fails under SELinux with
permission errors.

RESOLUTION:
xprtld uses its own internal log rotation and does not require the logrotate
Linux CLI to rotate its log files. Hence, the xprtld logrotate configuration
has been removed as it is no longer required.
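Internal log rotation of this kind can be sketched with Python's standard library (illustrative only; xprtld is a native daemon and does not use this code, and the file names here are invented for the demo):

```python
import logging
import logging.handlers
import os
import tempfile

# Rotate the log file whenever it exceeds maxBytes, keeping 3 backups,
# so no external logrotate job is needed.
log_dir = tempfile.mkdtemp()
log_path = os.path.join(log_dir, "xprtld-demo.log")

handler = logging.handlers.RotatingFileHandler(
    log_path, maxBytes=200, backupCount=3)
logger = logging.getLogger("xprtld-demo")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

for i in range(20):
    logger.info("simulated log line %d with some padding text", i)

handler.close()
```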

* 3844522 (Tracking ID: 3856978)

SYMPTOM:
If you try to set a usage threshold for the root filesystem or a non-VxFS
filesystem through the Monitor Capacity functionality, capacity monitoring is
not allowed.

DESCRIPTION:
The Monitor Capacity functionality had been removed for non-VxFS and root
filesystems, due to which faults and alerts were not generated for the storage
usage of root and non-VxFS filesystems.

RESOLUTION:
Enabled the Monitor Capacity functionality for root and non-VxFS filesystems
as well. After setting a threshold, faults and alerts are generated for root
and non-VxFS filesystems along with VxFS filesystems.
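The underlying check is a simple usage-against-threshold comparison, sketched here (illustrative only; the function name is hypothetical and this is not the VOM code):

```python
import shutil

def over_threshold(used_bytes: int, total_bytes: int, threshold_pct: float) -> bool:
    """Return True if used capacity exceeds the configured threshold."""
    if total_bytes == 0:
        return False
    return (used_bytes / total_bytes) * 100.0 > threshold_pct

# Real usage would read live figures, e.g. for the root filesystem:
usage = shutil.disk_usage("/")
root_over_80 = over_threshold(usage.used, usage.total, 80.0)
```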

* 3844885 (Tracking ID: 3856997)

SYMPTOM:
If a resource faults and the fault is then cleared, VOM still reports the
resource as faulted.

DESCRIPTION:
If a resource faults and clearing the resource fault does not result in a
service group state change, VOM still reports the resource as faulted.

RESOLUTION:
When the fault on a faulted resource is cleared from VOM, a refresh operation
is now run on the node where the fault was cleared.

* 3844890 (Tracking ID: 3856823)

SYMPTOM:
Resource and system state incorrectly show as stale on all nodes if any
cluster node is faulted.

DESCRIPTION:
Resources incorrectly show [Stale]Online on all nodes if any host is faulted,
and systems incorrectly show [Stale]Running on all nodes if any host is
faulted.

RESOLUTION:
Fixed the database view to show the correct state for systems and resources
when any node is faulted.

* 3853183 (Tracking ID: 3856838)

SYMPTOM:
Resource online/offline operations take time to reflect state in VOM.

DESCRIPTION:
VOM includes a fix that refreshes the page when the OK button is clicked, but
if an application takes time to go online/offline, the updated state is not
available by the time the OK button is clicked; in that case the state is only
updated on the next default UI refresh interval.

RESOLUTION:
Added a fix to wait for the application state change so that the updated state
is available by the time the OK button is clicked.

* 3854463 (Tracking ID: 3854465)

SYMPTOM:
The AT Migration script did not show the correct message when the list option
was executed without performing the migrate operation first.

DESCRIPTION:
The AT Migration script showed incorrect results when it was used without
performing the migrate operation. Ideally, the list option must only be used
after the migrate operation has been performed.

RESOLUTION:
The AT Migration script now bails out with the correct message: 'List option
can be used only after AT migration is performed'.

* 3855612 (Tracking ID: 3855610)

SYMPTOM:
The "host is missing critical update" alert email does not include the
affected host name.

DESCRIPTION:
When a rule is configured to send email on a missing critical update, the
alert does not include the host name, so it is not possible to identify which
host is missing the update.

RESOLUTION:
The alert message was changed to include the host name that is missing the
critical update. The same alert message is also part of the email message.

* 3856325 (Tracking ID: 3856832)

SYMPTOM:
VOM shows the same icon for running and faulted nodes.

DESCRIPTION:
VOM shows the same icon for running and faulted nodes. The issue is that the
hadstatus column in the VCSHOST view shows an incorrect value: it reports
hadstatus as RUNNING even when HAD is faulted.

RESOLUTION:
Fixed the database view to show the correct state in the hadstatus column.

* 3856347 (Tracking ID: 3864610)

SYMPTOM:
VIOM 7.0 does not show installed SFW/SFW-HA CPs and does not allow them to be
deployed even when they show as "VIOM Deployable".

DESCRIPTION:
VIOM does not discover the SFW patches on the managed host correctly; even
when they are installed, it shows them as not installed.

RESOLUTION:
VIOM patch discovery for Windows was not reporting the SORT ID for the patch,
so the correlation was failing. This was fixed, and the patches are now
discovered as expected.

* 3856828 (Tracking ID: 3844882)

SYMPTOM:
Service group status is not updated for a faulted host in the Availability
perspective.

DESCRIPTION:
Service group status is not updated for a faulted host in the Availability
perspective; the service group state icon shows as online on the faulted host.

RESOLUTION:
Fixed the database view to show the correct state for a service group when any
node is faulted.

* 3857701 (Tracking ID: 3860347)

SYMPTOM:
Business Application Group creation/modification fails if you try to add a
Volume whose name has more than 128 characters.

DESCRIPTION:
A Volume name with more than 128 characters cannot be added to a Business
Application Group.

RESOLUTION:
Increased the character limit from 128 to 256 characters.

* 3864325 (Tracking ID: 3864324)

SYMPTOM:
Windows hotfixes and CPs are shown as not installed in VOM.

DESCRIPTION:
Windows hotfixes and CPs are shown as not installed in the VOM UI even though
they are installed on the managed host.

RESOLUTION:
The Windows patch discovery was fixed to report the SORT release identifier,
which is used to correlate HFs/CPs in VOM. After this fix, the patches are
correctly shown as installed/not installed on the managed host.

* 3864583 (Tracking ID: 3857213)

SYMPTOM:
The Host Settings view does not show the correct VOM Active state in an HA or
HA-DR environment.

DESCRIPTION:
When the VOM CMS services fail over from the active node to the passive node,
the settings still show the active node as passive, and vice versa.

RESOLUTION:
The CMS node state was not being reported by VCS discovery. The code has been
fixed to report it, and the correct node state is now seen after failover.

* 3865133 (Tracking ID: 3866437)

SYMPTOM:
A large number of VRTSsfmh/bin/perl processes run on the server, leading to a
server hang.

DESCRIPTION:
With the current design, if discovery for any VOM family is running,
subsequent discoveries for that family take a wait lock until the first one
completes. If any discovery process hangs for any reason, the result is a
large number of subsequent discovery processes holding wait locks.

RESOLUTION:
Made changes in the mh_driver.pl script so that only one process takes the
wait lock if a discovery process is already running.

* 3856112 (Tracking ID: 3860009)

SYMPTOM:
In VOM HA/DR, the DB resource fails to start on a node where the SFM_services
service group failed earlier.

DESCRIPTION:
If the filesystem used for the VOM DB was force-unmounted, the postgres
commands show the DB state as not running, but the postgres processes are
still running.

RESOLUTION:
The postgres processes are now killed if the postgres commands show the DB
state as not running.



INSTALLING THE PATCH
--------------------
IMPORTANT NOTE: Please take a backup of the database using the instructions given in the Admin guide before installing this hotfix.

This hotfix is applicable to VOM 7.0 Managed Hosts as well as the VOM 7.0 Management Server.

1. Download the file vom-7.0.0.103.sfa.
2. Launch a browser and log in to the VOM Management Server.
3. Navigate to Settings -> Deployment.
4. Upload the hotfix to the VOM CMS using the "Upload Solutions" button.
   The HF vom-7.0.0.103 should be visible in the Hot fixes tree node.
5. Install this hotfix on the CS first using the following instructions:
    - Go to Settings -> Deployment -> Hot fixes -> Veritas Operations Manager.
    - Click the Hot fixes tab, then click the Applicable Hosts tab.
    - Right-click the CS name and click Install.
6. To install this hotfix on Managed Hosts, follow the instructions below:
    - Go to Settings -> Deployment -> Hot fixes -> Veritas Operations Manager.
    - Click the Hot fixes tab, then right-click the HF vom-7.0.0.103 and select Install.
    - Select the 7.0 hosts on which you want to apply the HF and click the Finish button.


REMOVING THE PATCH
------------------
Un-installation and rollback of this Hotfix is supported only on Solaris 10, HP-UX and AIX.


SPECIAL INSTRUCTIONS
--------------------
Approximately 3 GB of disk space is required to upload vom-7.0.0.103.sfa. Please ensure that the Management Server has at least 3 GB of free disk space where the CMS is installed.
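A quick precheck for the free-space requirement above can be sketched as follows (illustrative; the 3 GB figure comes from the note above, and the path to check is an assumption standing in for the actual CMS install location):

```python
import shutil

REQUIRED_BYTES = 3 * 1024 ** 3   # 3 GB, per the special instructions

def has_enough_space(path: str, required_bytes: int = REQUIRED_BYTES) -> bool:
    """Return True if the filesystem holding `path` has enough free space."""
    return shutil.disk_usage(path).free >= required_bytes

# Example: check the root filesystem (substitute the real CMS install path).
enough = has_enough_space("/")
```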


OTHERS
------
NONE