Release type:      Patch
Release date:      2020-07-31
OS update support: Solaris 11 SPARC Update 4
Technote:          None
Documentation:     None
Popularity:        832 viewed / downloaded
Download size:     68.61 MB
Checksum:          3293803881
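Before staging the patch, the download can be checked against the size and checksum listed above. A minimal sketch, assuming the page's checksum is a POSIX `cksum`-style value (the helper name is hypothetical):

```shell
#!/bin/sh
# verify_cksum -- compare a downloaded file's cksum value against the
# expected value from the download page (hypothetical helper; the page's
# checksum is assumed to be POSIX cksum output).
verify_cksum() {
    file=$1
    expected=$2
    actual=$(cksum "$file" | awk '{print $1}')
    if [ "$actual" = "$expected" ]; then
        echo "OK: $file matches $expected"
        return 0
    else
        echo "MISMATCH: $file has $actual, expected $expected" >&2
        return 1
    fi
}

# Example (values from the download page above):
# verify_cksum /tmp/vm-sol11_sparc-Patch-7.4.2.1100.tar.gz 3293803881
```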
InfoScale Enterprise 7.4.2 on Solaris 11 SPARC

None.

Fixed incidents: 4006461, 4006947

Package: VRTSvxvm-7.4.2.1100
* * * READ ME * * *
* * * Veritas Volume Manager 7.4.2 * * *
* * * Patch 1100 * * *

Patch Date: 2020-07-16

This document provides the following information:

* PATCH NAME
* OPERATING SYSTEMS SUPPORTED BY THE PATCH
* PACKAGES AFFECTED BY THE PATCH
* BASE PRODUCT VERSIONS FOR THE PATCH
* SUMMARY OF INCIDENTS FIXED BY THE PATCH
* DETAILS OF INCIDENTS FIXED BY THE PATCH
* INSTALLATION PRE-REQUISITES
* INSTALLING THE PATCH
* REMOVING THE PATCH

PATCH NAME
----------
Veritas Volume Manager 7.4.2 Patch 1100

OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
Solaris 11 SPARC

PACKAGES AFFECTED BY THE PATCH
------------------------------
VRTSvxvm

BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
* InfoScale Enterprise 7.4.2

SUMMARY OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
Patch ID: 7.4.2.1100

* 4006461 (3998475) An unmapped PHYS read I/O split across stripes returns incorrect data, leading to data corruption.
* 4006947 (4006539) zpool corruption occurs when DMP (Dynamic Multipathing) Native Support is enabled on two servers at the same time.

DETAILS OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
This patch fixes the following incidents:

Patch ID: 7.4.2.1100

* 4006461 (Tracking ID: 3998475)

SYMPTOM:
Data corruption was observed, and service groups went into a partial state.

DESCRIPTION:
In VxVM, an fsck log replay initiated a read of 64 blocks that was split across two stripes of a stripe-mirror volume, producing two read I/Os: one of 48 blocks (the first split I/O) and one of 16 blocks (the second split I/O). Because the volume was in RWBK mode, the read I/O was stabilized. When the read I/O completed at the subvolume level, it was unstabilized and the contents of the stable I/O (stablekio) were copied to the original I/O (origkio).
It was observed that the data was always correct up to the subvolume level, but incorrect at the top-level plex and volume levels (verified by printing checksums in vxtrace output). The reason was that during unstabilization, volkio_to_kio_copy() copies the contents from the stable kio to the orig kio (since this is a read). Because the orig kio was an unmapped PHYS I/O, on Solaris 11.4 the contents are copied out using bp_copyout() from volkiomem_kunmap(). The volkiomem_seek() and volkiomem_next_segment() routines allocate a pagesize (8K) kernel buffer, zeroed out, into which the contents are copied.

When the first split I/O completed unstabilization before the second split I/O, the issue was not seen. However, when the second split I/O completed before the first, the issue appeared: in the last iteration of volkio_to_kio_copy(), the data copied was less than the allocated region size. An 8K region was allocated, whereas the data copied from the stablekio was less than 8K. Later, during kunmap(), bp_copyout() was issued for the allocated size, i.e. 8K. This copied out the extra regions that had been zeroed, hence the data corruption.

RESOLUTION:
bp_copyout() is now issued for the correct length, i.e. the copied size, instead of the allocated region size.

* 4006947 (Tracking ID: 4006539)

SYMPTOM:
zpool corruption occurs when DMP (Dynamic Multipathing) Native Support is enabled on two servers at the same time.

DESCRIPTION:
When DMP Native Support is enabled, all the zpools present on the system are migrated to DMP devices. During migration, even exported zpools are imported on top of DMP devices. If the same operation is carried out on two servers that share storage, the zpools are imported on top of DMP devices on both servers at approximately the same time, which leads to zpool corruption.

RESOLUTION:
Changes have been made so that exported zpools are not imported on top of DMP devices; the customer decides which zpools to import on which server.
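The copy-length defect described for incident 4006461 can be mimicked outside the kernel. The following is an illustrative sketch only, not VxVM code: `dd` stands in for bp_copyout(), and temporary files stand in for the destination buffer and the 8K zero-filled kernel region that received less than 8K of real data.

```shell
#!/bin/sh
# Illustrative sketch of the bp_copyout() length bug: an 8K zero-filled
# region is allocated, only 4K of real data is copied in, and copying out
# the full allocated size clobbers the destination with the zeroed tail.
alloc=8192
copied=4096

dest=$(mktemp)   # stands in for the original (orig kio) buffer
buf=$(mktemp)    # stands in for the allocated 8K kernel region

# Destination holds 8K of valid data ('A' bytes).
head -c $alloc /dev/zero | tr '\0' 'A' > "$dest"

# Allocated region: 8K of zeros, with only the first 4K of real data ('B').
head -c $alloc /dev/zero > "$buf"
head -c $copied /dev/zero | tr '\0' 'B' | dd of="$buf" conv=notrunc 2>/dev/null

# BUGGY copy-out: full allocated size -- the zeroed tail overwrites good data.
dd if="$buf" of="$dest" bs=$alloc count=1 conv=notrunc 2>/dev/null
buggy_last=$(tail -c 1 "$dest" | od -An -tx1 | tr -d ' \n')
echo "after buggy copy-out, last byte: 0x$buggy_last"   # 0x00: corrupted

# FIXED copy-out: only the copied size is written back.
head -c $alloc /dev/zero | tr '\0' 'A' > "$dest"
dd if="$buf" of="$dest" bs=$copied count=1 conv=notrunc 2>/dev/null
fixed_last=$(tail -c 1 "$dest" | od -An -tx1 | tr -d ' \n')
echo "after fixed copy-out, last byte: 0x$fixed_last"   # 0x41 ('A'): intact

rm -f "$dest" "$buf"
```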
INSTALLING THE PATCH
--------------------
Run the Installer script to automatically install the patch:
-----------------------------------------------------------
Note that installing this P-Patch will cause downtime.

To install the patch, perform the following steps on at least one node in the cluster:

1. Copy the patch vm-sol11_sparc-Patch-7.4.2.1100.tar.gz to /tmp
2. Untar vm-sol11_sparc-Patch-7.4.2.1100.tar.gz to /tmp/hf
   # mkdir /tmp/hf
   # cd /tmp/hf
   # gunzip /tmp/vm-sol11_sparc-Patch-7.4.2.1100.tar.gz
   # tar xf /tmp/vm-sol11_sparc-Patch-7.4.2.1100.tar
3. Install the hotfix (note again that installing this P-Patch will cause downtime):
   # pwd
   /tmp/hf
   # ./installVRTSvxvm742P1100 [<host1> <host2>...]

You can also install this patch together with the 7.4.2 maintenance release using Install Bundles:

1. Download this patch and extract it to a directory.
2. Change to the Veritas InfoScale 7.4.2 directory and invoke the installmr script with the -patch_path option, where -patch_path points to the patch directory:
   # ./installmr -patch_path [<path to this patch>] [<host1> <host2>...]

Alternatively, this patch can be installed together with the InfoScale 7.4.2 base release using the installer script's Install Bundles feature:
1. Download this patch and extract it to a directory of your choosing.
2. Change to the directory hosting the Veritas InfoScale 7.4.2 base product software and invoke the installer script with the -patch_path option, where -patch_path points to the patch directory you created in step 1:
   # ./installer -patch_path [<path to this patch>] [<host1> <host2> ...]

Install the patch manually:
--------------------------
o Before applying the patch:
  (a) Stop applications that access VxVM volumes.
  (b) Stop I/Os to all the VxVM volumes.
  (c) Unmount any file systems residing on VxVM volumes.
  (d) In case of multiple boot environments, boot using the BE (Boot Environment) you wish to install the patch on.

For Solaris 11, refer to the man pages for specific instructions on using the 'pkg' command to install the patch provided. Any other special or non-generic installation instructions are described below as special instructions. The following example installs the updated VRTSvxvm patch on a standalone machine:

Example# pkg install --accept -g /patch_location/VRTSvxvm.p5p VRTSvxvm

After 'pkg install', follow the mandatory configuration steps mentioned in the special instructions below.

REMOVING THE PATCH
------------------
For Solaris 11.1 or later, if DMP native support is enabled, DMP controls the ZFS root pool. Turn off native support before removing the patch.

*** If DMP native support is enabled:
a. Disable DMP native support by running the following command:
   # vxdmpadm settune dmp_native_support=off
b. Reboot the system:
   # reboot

NOTE: If you do not disable native support prior to removing the VxVM patch, the system cannot be restarted after you remove DMP.

Ensure you have access to the base 7.4.2 Veritas software prior to removing the updated VRTSvxvm package.

NOTE: Uninstalling the patch will remove the entire package.
The following example removes the patch from a standalone system. The VRTSvxvm package cannot be removed unless you also remove the VRTSaslapm package; therefore, the pkg uninstall command fails as follows:

# pkg uninstall VRTSvxvm
Creating Plan (Solver setup): -
pkg uninstall: Unable to remove 'VRTSvxvm@7.4.2.1100' due to the following packages that depend on it:
        VRTSaslapm@7.4.2.0

You will also need to uninstall the VRTSaslapm package:

# pkg uninstall VRTSvxvm VRTSaslapm

NOTE: You will need access to the base software of the VRTSvxvm package (original source media) to reinstall the uninstalled packages.

SPECIAL INSTRUCTIONS
--------------------
NONE

OTHERS
------
NONE
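The tarball staging steps from INSTALLING THE PATCH above can be wrapped in a small helper. This is a sketch only: the function name is hypothetical, and `gzip -dc` replaces the separate gunzip step so the original download is preserved.

```shell
#!/bin/sh
# stage_patch -- stage a patch tarball into a scratch directory, following
# the manual steps from INSTALLING THE PATCH (hypothetical helper).
stage_patch() {
    tarball=$1    # e.g. /tmp/vm-sol11_sparc-Patch-7.4.2.1100.tar.gz
    destdir=$2    # e.g. /tmp/hf
    mkdir -p "$destdir" || return 1
    # Decompress to a copy so the original download stays intact, then extract.
    gzip -dc "$tarball" > "$destdir/patch.tar" || return 1
    ( cd "$destdir" && tar xf patch.tar ) || return 1
    echo "Patch staged in $destdir"
}

# Example:
# stage_patch /tmp/vm-sol11_sparc-Patch-7.4.2.1100.tar.gz /tmp/hf
# cd /tmp/hf && ./installVRTSvxvm742P1100
```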
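The removal ordering in REMOVING THE PATCH (disable DMP native support, reboot, then uninstall VRTSvxvm together with VRTSaslapm) can be captured as a dry-run script that only prints the commands, since vxdmpadm and pkg exist only on the target Solaris host; the wrapper name is hypothetical.

```shell
#!/bin/sh
# remove_vxvm_patch -- dry-run sketch of the removal order described above
# (hypothetical wrapper; prints the commands instead of running them).
remove_vxvm_patch() {
    # DMP native support must be disabled first, then the system rebooted,
    # and VRTSvxvm must be uninstalled together with VRTSaslapm.
    echo "vxdmpadm settune dmp_native_support=off"
    echo "reboot"
    echo "pkg uninstall VRTSvxvm VRTSaslapm"
}

remove_vxvm_patch
```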