                         * * * READ ME * * *
    * * * VERITAS APPLICATION DIRECTOR 1.1 AND 1.1 PLATFORM EXPANSION * * *
                      * * * ROLLING PATCH 3 * * *

Patch Date: November 03, 2008

Etrack Incidents: 1173153, 1166272, 1160478, 1176440, 1176421, 1175482,
1195866, 1212366, 1089135, 1212711, 1220000, 1157673, 918655, 1195971,
1178108, 1223659, 1219192, 1224264, 1196539, 1228321, 1234891, 1236994,
1238584, 1244106, 1228331, 1265188, 1286126, 1287978, 1208380, 1378159,
1380019, 1394082, 1406320, 1397542, 1406377, 1417890

This document provides the following information:

* PATCH NAME
* PACKAGES AFFECTED BY PATCH
* PATCH'S BASE VERSION
* PATCH'S SUPPORTED OPERATING SYSTEMS
* LIST OF INCIDENTS FIXED BY PATCH
* BEFORE YOU INSTALL THIS PATCH
* EXTRACTING THE PATCH
* INSTALLING THE PATCH ON CLIENT SYSTEMS
* INSTALLING THE PATCH ON POLICY MASTER SYSTEMS
* BEFORE YOU USE IPMP ON THE SOLARIS POLICY MASTER
* MAKING PROCESS AND APPLICATION AGENTS ZONE-AWARE

PATCH NAME
----------
Name: Veritas Application Director 1.1 and 1.1 Platform Expansion
Rolling Patch 3

PACKAGES AFFECTED BY PATCH
--------------------------
VRTSvadpm
VRTSatClient
VRTSatServer
VRTSvadw
VRTSvadcd
VRTSvadc
VRTSvcsor
VRTSvadag
VRTSpmmag

PATCH'S BASE VERSION
--------------------
The required installed base product version for this patch must be one of
the following:

* Veritas Application Director 1.1 (VAD 1.1)
* Veritas Application Director 1.1 Platform Expansion (VAD 1.1 PE)
* Veritas Application Director 1.1 Platform Expansion Rolling Patch 1
  (VAD 1.1 PE RP1)
* Veritas Application Director 1.1 Platform Expansion Rolling Patch 2
  (VAD 1.1 PE RP2 (CP1, HF2, CP3))

This patch is cumulative; it includes and supersedes Rolling Patch 2 (RP2).

PATCH'S SUPPORTED OPERATING SYSTEMS
-----------------------------------
AIX, ESX, HP-UX, Linux, Solaris, Windows

LIST OF INCIDENTS FIXED BY PATCH
--------------------------------
This patch fixes the following escalations and associated incidents:

* 281-244-631: VAD clients failed to connect during DNS outage.
  e1173153: When the DNS is down, VAD clients and agents fail.

* 290-882-435: LDAP users limited to a maximum of three groups.
  e1166272: The VxAT PAM plugin is not cluster aware.
  e1160478: When an LDAP user belongs to several user groups, they do not
  all appear in the credentials.

* 320-074-885: PAM issues with VxAT and VAD.
  e1166272: The VxAT PAM plugin is not cluster aware.
  e1160478: When an LDAP user belongs to several user groups, they do not
  all appear in the credentials.

* 290-891-042: The Oracle resource will not run in a Solaris local zone.
  e1176440: VCSAgGetUID returns the UID from the global zone even if the
  resource is configured in a local zone. AGFW should not expose VAD_AGFW
  to the local zone.
  e1176421: The Oracle/Netlsnr resource fails with unexpected messages.

* 290-896-428: Misconfigured NIC resources swamp the Policy Master.
  e1175482: GTQ thrashing due to attempts to bring intent-online entries
  online.

* 290-931-439: Command dump of configuration database produces incorrect
  privileges.
  e1195866: Issues with CMD conversion of user roles.
  e1212366: Command dump of configuration database produces incorrect
  privileges for customers.
  e1089135: Invalid role definitions are getting dumped to CMD format.

* 311-823-100: NetAppExport agent does not work with multipathing for
  export ACL.
  e1212711: Support for IPMP in NetAppExport.
  e1220000: Ping check on AIX is incorrect.
* 281-261-442: NetAppExport resources do not recover after the NetApp
  filer recovers from a significant fault.
  e1157673: Unable to OFFLINE IPMultiNICB resource.
  e918655: hagrp -flush is ineffective in certain cases when an offhost
  resource is used.

* 281-261-439: NetAppExport resource flag attribute value state is unknown
  after net and folder group recover from a fault.
  e1157673: Unable to OFFLINE IPMultiNICB resource.
  e918655: hagrp -flush is ineffective in certain cases when an offhost
  resource is used.

* 290-979-813: The hares -list command consumes 4 gigabytes of memory and
  then has a segmentation fault.
  e1265188: hares -list Type="whatever" consumes significant memory.

* 320-111-064: The VAD client populates the SystemIPAddrs attribute using
  all IP addresses.
  e1286126: Make the SystemIPAddrs attribute configurable by the user.
  e1287978: Need a way to transition a system from the "domain dead, node
  alive" (DDNA) state to FAULTED.

* 311-957-024: Issues with the Mount resource in a Solaris zone.
  e1208380: In VCS 5.0 MP1, the Mount agent is unable to monitor NFS
  mounts in local zones because the zoneadm output in recent Solaris 10
  versions appends extra fields.

* 281-396-581: The VAD client is dumping core.
  e1378159: Issuing the hagrp -modify command from the Policy Master to
  update the SystemList for the service group causes the client node to
  dump core on Assert Failure (AF).
  e1380019: trn: are not localized in some scenarios.

* 320-126-961: Exporting the configuration causes haconf.exe to crash.
  e1394082: Exporting the configuration causes haconf.exe to crash.

* 320-128-546: The hares -list VolumeGroup=vadtestvg command produces an
  error.
  e1406320: Running the hares -list command with an argument that is not
  valid for all agents produces an error.

* 320-129-718: The "Advanced" option in the GUI under "Modify Service"
  changes ResFaultPolicy incorrectly.
  e1397542: The "Advanced" option in the GUI under "Modify Service"
  changes ResFaultPolicy incorrectly.

* 320-133-728: The Application agent is not zone aware when using
  MonitorProcesses.
  e1406377: The Process and Application agents fail in the global zone
  when local zones are configured.

* 320-135-169: The Policy Master does not support IPMP.
  e1417890: Changes are needed for the Policy Master agent and haadmin
  script so that VAD can support IPMP.

This patch also fixes the following incidents:

* e1195971: Repeatable Policy Master crash due to incorrect handling of
  soft global dependencies for online -propagate.
* e1178108: Able to offline the ControlGroup while the dependent groups
  are online.
* e1223659: vadd crash is repeatable and occurs when attempting to online
  a DNS resource.
* e1219192: After NetApp filer failure and recovery, NetAppExport
  resources do not recover.
* e1224264: Auto test case cli_haea_neg.tc failed.
* e1196539: Policy Master crashed when attempting to update the
  ControlGroup attribute.
* e1228321: Need a non-interactive version of vadencrypt.
* e1234891: GroupAdministrators cannot see their groups.
* e1236994: The GUI and CLI do not display objects consistently.
* e1238584: Issues with roles of type "user" for usergroup.
* e1244106: A core dump was observed while offlining service groups with
  an off-host resource.
* e1228331: VxAT is assigning improper group membership to users.
* e1290429: Core files were observed for the vadd process.

BEFORE YOU INSTALL THIS PATCH
-----------------------------
Before you install this patch, perform these steps:
1. Verify that each Policy Master server is able to successfully resolve
   its own fully qualified host name (FQHN) by configuring local host name
   resolution and/or verifying proper DNS resolution as shown here:

   a. Edit /etc/hosts to include the FQHN and IP address.
   b. Edit /etc/nsswitch.conf to ensure that files are queried before DNS.
   c. Edit /etc/resolv.conf to include the name server.
   d. Ensure that the configured name server can resolve the FQHN:

      /usr/sbin/nslookup <FQHN>
      /usr/sbin/ping <FQHN>

2. Verify that all clients are connected to the Policy Master and in the
   RUNNING state:

   # hasys -state

EXTRACTING THE PATCH
--------------------
To extract the patch:

1. Download the patch from the Symantec Support website and copy the
   .tar.gz file for your platform(s) to the desired directory location:

   # cp <patch_file.tar.gz> <patch_directory>

2. Unzip the compressed patch files:

   # gunzip *.gz

3. Extract the compressed patch files from the tar files:

   # tar -xvf <patch_file.tar>

   For example, for patch vad11pe_RP3.aix.tar, enter:

   # tar -xvf vad11pe_RP3.aix.tar

NOTE: The VxAT packages needed for upgrading VxAT are part of the Solaris
and Linux tar files. The two VxAT versions needed are located under these
two directories: AT_4.4.19.0 and AT_4.4.19.5.

INSTALLING THE PATCH ON CLIENT SYSTEMS
--------------------------------------
To install the patch on client systems:

1. Log on as the superuser to the Veritas Application Director (VAD)
   client system where the patch will be installed.

2. Freeze all the service groups running on that system persistently:

   a. List all the service groups configured on that system and store the
      list for later use:

      # /opt/VRTSvad/bin/hagrp -list SystemList=~<system_name>

   b. For each service group listed in the output, enter:

      # /opt/VRTSvad/bin/hagrp -freeze <service_group> -persistent

3. Bring down the VAD client on the system:

   # /opt/VRTSvad/bin/hastop -client -local -force

4. Use the ps command to ensure that the VAD client daemon and all agents
   have exited.

5. Remove the following packages using the appropriate package remove
   command for the operating system: VRTSvadag, VRTSvadcd, VRTSvadc,
   VRTSvcsor

   On Linux, use:
   # rpm -e VRTSvadag --noscripts
   # rpm -e VRTSvadcd --noscripts
   # rpm -e VRTSvadc
   # rpm -e VRTSvcsor --nodeps

   On Solaris, use:
   # pkgrm VRTSvadag VRTSvadcd VRTSvadc VRTSvcsor

   On AIX, use:
   # installp -u VRTSvadag
   # installp -u VRTSvadcd
   # installp -u VRTSvadc
   # installp -u VRTSvcsor

   On HP-UX, use:
   # swremove -x enforce_dependencies=false -x autoreboot=true VRTSvadag
   # swremove -x enforce_dependencies=false -x autoreboot=true VRTSvadcd
   # swremove -x enforce_dependencies=false -x autoreboot=true VRTSvadc

   NOTE: Set autoreboot=false if you do not want to reboot the system
   after removing the packages.

6. Add the same packages from this patch using the appropriate package add
   command for the operating system: VRTSvcsor, VRTSvadc, VRTSvadcd,
   VRTSvadag

   On Linux, use:
   # rpm -ihv VRTSvcsor --nodeps --force
   # rpm -ihv VRTSvadc
   # rpm -ihv VRTSvadcd --nodeps --force
   # rpm -ihv VRTSvadag

   On Solaris, use:
   # pkgadd -d . VRTSvcsor VRTSvadc VRTSvadcd VRTSvadag

   On AIX, use:
   # installp -aXd VRTSvcsor.rte.bff VRTSvcsor
   # installp -aXd VRTSvadc.rte.bff VRTSvadc
   # installp -aXd VRTSvadcd.rte.bff VRTSvadcd
   # installp -aXd VRTSvadag.rte.bff VRTSvadag

   On HP-UX, use:
   # swinstall -s /path/to/depot/ VRTSvadc
   # swinstall -s /path/to/depot/ VRTSvadcd
   # swinstall -s /path/to/depot/ VRTSvadag

7. Bring up the VAD client on the system:

   # /opt/VRTSvad/bin/hastart -client
8. After the VAD client has started, unfreeze the groups that you froze in
   step 2:

   # /opt/VRTSvad/bin/hagrp -unfreeze <service_group> -persistent
   # /opt/VRTSvad/bin/hagrp -value <service_group> Frozen

   The value of the Frozen attribute should be 0, not 1.

9. Ensure that the output from hastatus does not show any frozen groups on
   this system:

   # /opt/VRTSvad/bin/hastatus -summary

INSTALLING THE PATCH ON POLICY MASTER SYSTEMS
---------------------------------------------
To install the patch on Policy Master systems:

1. Log on as the superuser to the Policy Master system where you will
   install the patch.

2. Take a backup of the Policy Master configuration:

   # haconf -dbtoxml

3. Take the Policy Master service group offline and bring it up on the
   second/failover Policy Master system:

   # hastop -web
   # hastop -pm
   # hastop -db

4. If you are running VAD 1.1 PE RP2, skip this step and proceed to
   step 5.

   If you are running VAD 1.1 PE RP1, perform the following steps to
   upgrade VxAT to version 4.4.19.5:

   a. Under the directory AT_4.4.19.5, find the tar.gz file for your
      platform and unzip and untar it.
   b. Back up your VxSS configuration:
      # haadmin -backup -vss
   c. Perform the VxAT upgrade. For Solaris, run "installvp". For Linux,
      run "installat".
   d. Restore the VxAT configuration from the "backup_dir":
      # haadmin -restore -vss
   e. Start the VxAT daemon:
      # /opt/VRTSat/bin/vxatd

   If you are running VAD 1.1 or VAD 1.1 PE, perform the following steps
   to upgrade VxAT to version 4.4.19.0 and then to version 4.4.19.5:

   a. Under the directory AT_4.4.19.0, find the tar.gz file for your
      platform and unzip and untar it.
   b. Back up your VxSS configuration:
      # haadmin -backup -vss
   c. Perform the VxAT upgrade: run "installat".
   d. Choose "i" for install/upgrade.
   e. Choose "all" to process all packages.
   f. Under the directory AT_4.4.19.5, find the tar.gz file for your
      platform and unzip and untar it.
   g. Perform the VxAT upgrade. For Solaris, run "installvp". For Linux,
      run "installat".
   h. Restore the VxAT configuration from the "backup_dir":
      # haadmin -restore -vss
   i. Start the VxAT daemon:
      # /opt/VRTSat/bin/vxatd

5. Remove the following packages from the first system using the
   appropriate package remove commands for the operating system:
   VRTSvadpm, VRTSvadc, VRTSvadw

   On Linux, use:
   # rpm -e VRTSvadpm --nodeps
   # rpm -e VRTSvadc
   # rpm -e VRTSvadw --nodeps

   On Solaris, use:
   # pkgrm VRTSvadpm VRTSvadc VRTSvadw

6. Remove the following directory: /opt/VRTSweb/VERITAS/vad

7. Add the same packages from this patch to the first system using the
   appropriate package add commands for the operating system: VRTSvadpm,
   VRTSvadc, VRTSvadw

   On Linux, use:
   # rpm -ivh VRTSvadpm --nodeps
   # rpm -ivh VRTSvadc
   # rpm -ivh VRTSvadw --nodeps

   On Solaris, use:
   # pkgadd -d . VRTSvadpm VRTSvadc VRTSvadw

8. Start the database:

   # hastart -db -sys <system_name>

9. Clean the database:

   # haconf -cleandb

10. Load the configuration again from the backup directory:

    # haconf -loaddb

11. Bring the Policy Master service group online on the first system:

    # hastart -pm
    # hastart -web

12. Perform steps 4 through 6 on all the remaining Policy Master systems.
    (An optional post-installation check is shown after this procedure.)
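NOTE: Optionally, after completing the procedure above, you can repeat the
same verification described in BEFORE YOU INSTALL THIS PATCH to confirm
that all clients have reconnected to the upgraded Policy Master and are in
the RUNNING state:

   # hasys -state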
BEFORE YOU USE IPMP ON THE SOLARIS POLICY MASTER
------------------------------------------------
Before you use IP multipathing (IPMP) on the Solaris Policy Master
cluster, perform these steps:

1. Bring down the Policy Master Service Group (PMSG) and the Policy Master
   Manager (PMM):

   # /opt/VRTSvcs/bin/hastop -all

2. Remove the existing VRTSpmmag package:

   # pkgrm VRTSpmmag

3. Install the new VRTSpmmag package:

   # pkgadd -d . VRTSpmmag

4. Modify the PMSG configuration in the PMM configuration file located at
   /etc/VRTSvcs/conf/config/main.cf as follows:

   a. Replace the IP resource with IPMultiNICB.
   b. Replace the NIC resource with MultiNICB.
   c. To enable IPMP for the MultiNICB resource, set the UseMpathd
      attribute to 1 and modify the MpathdCommand if needed. Then, set the
      Device attribute as follows:

      Device = { <device_name> = 0 }

      For example, if your device name is hme0, set the Device attribute
      to:

      Device = { hme0 = 0 }

5. Modify the IPMultiNICB resource so that it refers to its corresponding
   MultiNICB resource. To do so, set the BaseResName attribute to the name
   of the MultiNICB resource.

6. Set up the appropriate resource-level dependencies between the PMSG
   resources.

7. Bring up the PMM by running the following command on both of the Policy
   Master nodes:

   # /opt/VRTSvcs/bin/hastart

MAKING PROCESS AND APPLICATION AGENTS ZONE-AWARE
------------------------------------------------
If you need to run or monitor a Process or an Application resource in the
global zone, set the ContainerInfo attribute of the service group as
follows:

   # hagrp -modify <service_group> ContainerInfo Type Zone Name Global Enabled 1
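For example, assuming a service group named procsg (a hypothetical name
used here only for illustration), the command would be entered as:

   # hagrp -modify procsg ContainerInfo Type Zone Name Global Enabled 1

To display the resulting attribute value, you can use the same
hagrp -value form shown earlier in this document:

   # hagrp -value procsg ContainerInfo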