* * * READ ME * * *
* * * Symantec Storage Foundation HA 6.1.1 * * *
* * * Patch 6.1.1.300 * * *
Patch Date: 2015-09-25


This document provides the following information:

* PATCH NAME
* OPERATING SYSTEMS SUPPORTED BY THE PATCH
* PACKAGES AFFECTED BY THE PATCH
* BASE PRODUCT VERSIONS FOR THE PATCH
* SUMMARY OF INCIDENTS FIXED BY THE PATCH
* DETAILS OF INCIDENTS FIXED BY THE PATCH
* INSTALLATION PRE-REQUISITES
* INSTALLING THE PATCH
* REMOVING THE PATCH
* KNOWN ISSUES


PATCH NAME
----------
Symantec Storage Foundation HA 6.1.1 Patch 6.1.1.300 (Adds RHEL6.7 support)


OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
RHEL6 x86-64


PACKAGES AFFECTED BY THE PATCH
------------------------------
VRTSamf
VRTSaslapm
VRTSdbac
VRTSgab
VRTSllt
VRTSvxfen
VRTSvxfs
VRTSvxvm


BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
* Symantec Cluster Server 6.1
* Symantec Dynamic Multi-Pathing 6.1
* Symantec File System 6.1
* Symantec Storage Foundation 6.1
* Symantec Storage Foundation Cluster File System HA 6.1
* Symantec Storage Foundation for Oracle RAC 6.1
* Symantec Storage Foundation HA 6.1
* Symantec Volume Manager 6.1


SUMMARY OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
Patch ID: VRTSvxfs-6.1.1.200-RHEL6

* 3660421 (3660422) On RHEL 6.6, the umount(8) system call hangs if an application is watching for inode events using the inotify(7) APIs.

Patch ID: VRTSvxfs-6.1.1.100-RHEL6

* 3520113 (3451284) Internal testing hits the assert "vx_sum_upd_efree1".
* 3521945 (3530435) Panic in internal test with SSD cache enabled.
* 3529243 (3616722) System panics because of a race between the writeback cache offline thread and the writeback data flush thread.
* 3536233 (3457803) File system gets disabled intermittently with a metadata IO error.
* 3583963 (3583930) When the external quota file is restored or overwritten, the old quota records are preserved.
* 3617774 (3475194) The Veritas File System (VxFS) fscdsconv(1M) command fails with metadata overflow.
* 3617776 (3473390) Multiple stack overflows with Veritas File System (VxFS) on RHEL6 lead to panics or system crashes.
* 3617781 (3557009) After the fallocate() function reserves allocation space, the resulting file size is wrong.
* 3617788 (3604071) High CPU usage consumed by the vxfs thread process.
* 3617790 (3574404) Stack overflow during a rename operation.
* 3617793 (3564076) The MongoDB noSQL database creation fails with an ENOTSUP error.
* 3617877 (3615850) A write system call hangs with an invalid buffer length.
* 3620279 (3558087) The ls -l command hangs while the system takes a backup.
* 3620284 (3596378) Copying a large number of small files is slower on VxFS than on ext4.
* 3620288 (3469644) The system panics in the vx_logbuf_clean() function.
* 3621420 (3621423) VxVM caching shouldn't be disabled while mounting a file system in a situation where the VxFS cache area is not present.
* 3628867 (3595896) While creating an Oracle RAC 12.1.0.2 database, the node panics.
* 3636210 (3633067) While converting from an ext3 file system to VxFS using vxfsconvert, many inodes are missing.
* 3644006 (3451686) During internal stress testing on cluster file system (CFS), a debug assert is hit due to an invalid cache generation count on the incore inode.
* 3645825 (3622326) File system is marked with the fullfsck flag as an inode is marked bad during checkpoint promote.

Patch ID: VRTSvxvm-6.1.1.100-RHEL6

* 3632970 (3631230) VRTSvxvm patch versions 6.0.5 and 6.1.1 (and earlier) do not work with the RHEL6.6 update.

Patch ID: VRTSaslapm-6.1.1.200-RHEL6

* 3659363 (3665727) Array I/O policy is set to Single-active for SF 6.1.1 with RHEL6.6.

Patch ID: VRTSllt-6.1.1.200-RHEL6

* 3794198 (3794154) Veritas Cluster Server (VCS) does not support Red Hat Enterprise Linux 6 Update 7 (RHEL6.7).
Patch ID: VRTSllt-6.1.1.100-RHEL6

* 3646467 (3642131) VCS support for RHEL 6.6.

Patch ID: VRTSamf-6.1.1.100-RHEL6

* 3794198 (3794154) Veritas Cluster Server (VCS) does not support Red Hat Enterprise Linux 6 Update 7 (RHEL6.7).

Patch ID: VRTSgab-6.1.0.200-Linux_RHEL6

* 3794198 (3794154) Veritas Cluster Server (VCS) does not support Red Hat Enterprise Linux 6 Update 7 (RHEL6.7).

Patch ID: VRTSgab-6.1.0.100-GA_RHEL6

* 3728108 (3728106) On Linux, the value corresponding to the 15-minute CPU load average as shown in the /proc/loadavg file wrongly increases to about 4.

Patch ID: VRTSvxfen-6.1.1.100-RHEL6

* 3794198 (3794154) Veritas Cluster Server (VCS) does not support Red Hat Enterprise Linux 6 Update 7 (RHEL6.7).

Patch ID: VRTSdbac-6.1.1.100-RHEL6

* 3831498 (3850806) The 6.1.1 vcsmm module does not load with RHEL6.7 (2.6.32-573.el6.x86_64 kernel).


DETAILS OF INCIDENTS FIXED BY THE PATCH
---------------------------------------
This patch fixes the following Symantec incidents:

Patch ID: VRTSvxfs-6.1.1.200-RHEL6

* 3660421 (Tracking ID: 3660422)

SYMPTOM:
On RHEL 6.6, the umount(8) system call hangs if an application is watching for inode events using the inotify(7) APIs.

DESCRIPTION:
On RHEL 6.6, additional counters were added in the super block to track inotify watches. These new counters were not implemented in VxFS. Hence, during umount, the operation hangs until the counter in the superblock drops to zero, which never happens because the counters are not handled in VxFS.

RESOLUTION:
The code is modified to handle the additional counters added in RHEL6.6.

Patch ID: VRTSvxfs-6.1.1.100-RHEL6

* 3520113 (Tracking ID: 3451284)

SYMPTOM:
While allocating an extent during a write operation, if the summary and bitmap data for the file system allocation unit are mismatched, the assert is hit.

DESCRIPTION:
If an extent was allocated using the SMAP on a deleted inode, part of the allocation unit (AU) space is moved from the deleted inode to the new inode. At this point the SMAP state is set to VX_EAU_ALLOCATED and the EMAP is not initialized.
When more space is needed for the new inode, VxFS tries to allocate from the same AU using the EMAP and can hit the "f:vx_sum_upd_efree1:2a" assert, as the EMAP is not initialized.

RESOLUTION:
The code has been modified to expand the AU while moving partial AU space from one inode to another.

* 3521945 (Tracking ID: 3530435)

SYMPTOM:
Panic in internal test with SSD cache enabled.

DESCRIPTION:
The record end of the writeback log record was wrongly modified while adding a skip list node in the punch-hole case where the expunge flag is set, in which insertion of the new node should be skipped.

RESOLUTION:
The code is modified to skip modification of the writeback log record when the expunge flag is set and the left end of the record is smaller than or equal to the end offset of the next punch-hole request.

* 3529243 (Tracking ID: 3616722)

SYMPTOM:
A race between the writeback cache offline thread and the writeback data flush thread causes a null pointer dereference, resulting in a system panic.

DESCRIPTION:
While disabling writeback, the writeback cache information is deinitialized from each inode, which removes the writeback bmap lock pointer. If writeback flushing is still in progress in another thread that holds the writeback bmap lock, then removing the writeback bmap lock hits a null pointer dereference, since the lock was already removed by the first thread.

RESOLUTION:
The code is modified to handle such race conditions.

* 3536233 (Tracking ID: 3457803)

SYMPTOM:
The file system gets disabled with the following message in the system log:
WARNING: V-2-37: vx_metaioerr - vx_iasync_wait - /dev/vx/dsk/testdg/test file system meta data write error in dev/block

DESCRIPTION:
The inode's incore information becomes inconsistent because one of its fields is modified without locking protection.

RESOLUTION:
The code is modified to protect the inode's field with the proper lock.
* 3583963 (Tracking ID: 3583930)

SYMPTOM:
When the external quota file is overwritten or restored from a backup, new settings which were added after the backup still remain.

DESCRIPTION:
The internal quota file is not always updated with correct limits, so the quotaon operation copies the quota limits from the external to the internal quota file. To complete the copy operation, the extent of the external file is compared to the extent of the internal file at the corresponding offset. If the external quota file is overwritten (or restored to its original copy) and the size of the internal file is larger than that of the external file, the quotaon operation does not clear the additional (stale) quota records in the internal file. Later, the sync operation (part of quotaon) copies these stale records from the internal to the external file. Hence, both internal and external files contain stale records.

RESOLUTION:
The code has been modified to remove the stale records in the internal file at the time of quotaon.

* 3617774 (Tracking ID: 3475194)

SYMPTOM:
The Veritas File System (VxFS) fscdsconv(1M) command fails with the following error message:
...
UX:vxfs fscdsconv: INFO: V-3-26130: There are no files violating the CDS limits for this target.
UX:vxfs fscdsconv: INFO: V-3-26047: Byteswapping in progress ...
UX:vxfs fscdsconv: ERROR: V-3-25656: Overflow detected
UX:vxfs fscdsconv: ERROR: V-3-24418: fscdsconv: error processing primary inode list for fset 999
UX:vxfs fscdsconv: ERROR: V-3-24430: fscdsconv: failed to copy metadata
UX:vxfs fscdsconv: ERROR: V-3-24426: fscdsconv: Failed to migrate.

DESCRIPTION:
The fscdsconv(1M) command takes a filename argument for a recovery file, which is used to restore the original file system in case of failure while the conversion is in progress. This file has two parts: a control part and a data part. The control part stores information about all the metadata, such as inodes and extents.
In this instance, the length of the control part is underestimated for some file systems where there are few inodes but the average number of extents per file is very large (this can be seen in the fsadm -E report).

RESOLUTION:
The recovery file is now made sparse: the data part starts after a 1TB offset, so the control part can do allocating writes into the hole from the beginning of the file.

* 3617776 (Tracking ID: 3473390)

SYMPTOM:
In memory pressure scenarios, you see panics or system crashes due to stack overflows.

DESCRIPTION:
Specifically on RHEL6, the memory allocation routines consume much more stack than on other distributions like SLES, or even RHEL5. Due to this, multiple overflows are reported for the RHEL6 platform. Most of these overflows occur when VxFS tries to allocate memory under memory pressure.

RESOLUTION:
The code is modified to fix multiple overflows by adding handoff code paths, adjusting handoff limits, removing on-stack structures, and reducing the number of function frames on the stack wherever possible.

* 3617781 (Tracking ID: 3557009)

SYMPTOM:
Run the fallocate command with the -l option to specify the length of the reserve allocation. The resulting file size is not the requested length, but a multiple of the file system block size. For example, if block size = 8K:
# fallocate -l 8860 testfile1
# ls -l
total 16
drwxr-xr-x. 2 root root 96 Jul 1 11:40 lost+found/
-rw-r--r--. 1 root root 16384 Jul 1 11:41 testfile1
The file size should be 8860, but it is 16384 (which is 2*8192).

DESCRIPTION:
The vx_fallocate() function on Veritas File System (VxFS) creates a larger file than specified because it allocates the extent in blocks. So the reserved file size is a multiple of the block size, instead of what the fallocate command specifies.

RESOLUTION:
The code is modified so that the vx_fallocate() function on VxFS sets the reserved file size to what the command specifies, instead of a multiple of the block size.
* 3617788 (Tracking ID: 3604071)

SYMPTOM:
With the thin reclaim feature turned on, you can observe high CPU usage on the vxfs thread process. The backtrace of such threads usually looks like this:
- vx_dalist_getau
- vx_recv_bcastgetemapmsg
- vx_recvdele
- vx_msg_recvreq
- vx_msg_process_thread
- vx_kthread_init

DESCRIPTION:
In the routine that gets the broadcast information of a node containing maps of Allocation Units (AUs) for which the node holds the delegations, the locking mechanism is inefficient. Every time this routine is called, it performs a series of down-up operations on a certain semaphore. This can result in a huge CPU cost when many threads call the routine in parallel.

RESOLUTION:
The code is modified to optimize the locking mechanism in this routine, so that it performs the down-up operation on the semaphore only once.

* 3617790 (Tracking ID: 3574404)

SYMPTOM:
The system panics because of a stack overflow during a rename operation.
The following stack trace can be seen during the panic:
machine_kexec
crash_kexec
oops_end
no_context
__bad_area_nosemaphore
bad_area_nosemaphore
__do_page_fault
do_page_fault
page_fault
task_tick_fair
scheduler_tick
update_process_times
tick_sched_timer
__run_hrtimer
hrtimer_interrupt
local_apic_timer_interrupt
smp_apic_timer_interrupt
apic_timer_interrupt
--- <IRQ stack> ---
apic_timer_interrupt
mempool_free_slab
mempool_free
vx_pgbh_free
vx_pgbh_detach
vx_releasepage
try_to_release_page
shrink_page_list.clone.3
shrink_inactive_list
shrink_mem_cgroup_zone
shrink_zone
zone_reclaim
get_page_from_freelist
__alloc_pages_nodemask
alloc_pages_current
__get_free_pages
vx_getpages
vx_alloc
vx_bc_getfreebufs
vx_bc_getblk
vx_getblk_bp
vx_getblk_cmn
vx_getblk
vx_getmap
vx_getemap
vx_extfind
vx_searchau_downlevel
vx_searchau_downlevel
vx_searchau_downlevel
vx_searchau_downlevel
vx_searchau_uplevel
vx_searchau
vx_extentalloc_device
vx_extentalloc
vx_bmap_ext4
vx_bmap_alloc_ext4
vx_bmap_alloc
vx_write_alloc3
vx_tran_write_alloc
vx_idalloc_off1
vx_idalloc_off
vx_int_rename
vx_do_rename
vx_rename1
vx_rename
vfs_rename
sys_renameat
sys_rename
system_call_fastpath

DESCRIPTION:
The stack is overflown by 88 bytes in the rename code path. The thread_info structure is corrupted with VxFS page buffer head addresses.

RESOLUTION:
Local structures in vx_write_alloc3() and vx_int_rename() are now allocated dynamically. This saves 256 bytes and gives enough stack room.

* 3617793 (Tracking ID: 3564076)

SYMPTOM:
The MongoDB noSQL database creation fails with an ENOTSUP error. MongoDB uses posix_fallocate() to create a file first. When it writes at an offset which is not aligned with the file system block boundary, an ENOTSUP error comes up.

DESCRIPTION:
On a file system with an 8K block size and a 4K page size, the application creates a file using posix_fallocate(), and then writes at some offset which is not aligned with the file system block boundary.
In this case, the pre-allocated extent is split at the unaligned offset into two parts for the write. However, the alignment requirement of the split fails the operation.

RESOLUTION:
The code is modified to split the extent down to the block boundary.

* 3617877 (Tracking ID: 3615850)

SYMPTOM:
The write system call writes up to count bytes from the buffer pointed to by buf to the file referred to by the file descriptor fd:
ssize_t write(int fd, const void *buf, size_t count);
When the count parameter is invalid, it can sometimes cause write() to hang on a VxFS file system. For example, with a 10000-byte buffer but count mistakenly set to 30000, you may encounter this problem.

DESCRIPTION:
On recent Linux kernels, a page fault cannot be taken while a page is held locked, so as to avoid a deadlock. This means uiomove can copy less than requested, and any partially populated pages created in the routine that establishes a virtual mapping for the page are destroyed. This can cause an infinite loop in the write code path when the given user buffer is not aligned with a page boundary and the length given to write() causes an EFAULT: uiomove() does a partial copy, segmap_release destroys the partially populated pages and unwinds the uio, and the operation is then repeated.

RESOLUTION:
The code is modified to move the pre-faulting to the buffered IO write-loops: the system either shortens the length of the copy if not all of the requested pages can be faulted, or fails with EFAULT if no pages are pre-faulted. This prevents the infinite loop.

* 3620279 (Tracking ID: 3558087)

SYMPTOM:
Run simultaneous dd threads on a mount point and start the ls -l command on the same mount point. The system hangs.

DESCRIPTION:
When the delayed allocation (dalloc) feature is turned on, the flushing process takes a long time. The flushing process keeps the glock held, while writers keep the irwlock held. The ls -l command runs stat internally, which keeps waiting on the irwlock to read ACLs.
RESOLUTION:
Dalloc is redesigned to keep the glock unlocked while flushing.

* 3620284 (Tracking ID: 3596378)

SYMPTOM:
Copying a large number of small files is slower on Veritas File System (VxFS) than on ext4.

DESCRIPTION:
VxFS implements the fsetxattr() system call in a synchronous way. Hence, before returning from the system call, VxFS takes some time to flush the data to disk. In this way, VxFS guarantees file system consistency in case of a file system crash. However, this implementation has the side effect of serializing the whole processing, which takes more time.

RESOLUTION:
The code is modified so that the transaction flushes the data in a delayed way.

* 3620288 (Tracking ID: 3469644)

SYMPTOM:
The system panics in the vx_logbuf_clean() function when it traverses the chain of transactions off the intent log buffer. The stack trace is as follows:
vx_logbuf_clean()
vx_logadd()
vx_log()
vx_trancommit()
vx_exh_hashinit()
vx_dexh_create()
vx_dexh_init()
vx_pd_rename()
vx_rename1_pd()
vx_do_rename()
vx_rename1()
vx_rename()
vx_rename_skey()

DESCRIPTION:
The system panics as the vx_logbuf_clean() function tries to access an already freed transaction from the transaction chain while flushing it to the log.

RESOLUTION:
The code has been modified to make sure that the transaction gets flushed to the log before it is freed.

* 3621420 (Tracking ID: 3621423)

SYMPTOM:
Veritas Volume Manager (VxVM) caching is disabled or stopped after mounting a file system in a situation where the Veritas File System (VxFS) cache area is not present.

DESCRIPTION:
When the VxFS cache area is not present and the VxVM cache area is present and in the ENABLED state, mounting a file system on any of the volumes stops VxVM caching for that volume, which is not the expected behavior.

RESOLUTION:
The code is modified not to disable VxVM caching for any mounted file system when the VxFS cache area is not present.
* 3628867 (Tracking ID: 3595896)

SYMPTOM:
While creating an Oracle RAC 12.1.0.2 database, the node panics with the following stack:
aio_complete()
vx_naio_do_work()
vx_naio_worker()
vx_kthread_init()

DESCRIPTION:
For a zero-size request (with a correctly aligned buffer), Veritas File System (VxFS) wrongly queues the work internally and returns -EIOCBQUEUED. The kernel calls the aio_complete() function for this zero-size request. However, while VxFS is performing the queued work internally, the aio_complete() function gets called again. The double call of the aio_complete() function results in the panic.

RESOLUTION:
The code is modified so that zero-size requests do not queue elements inside the VxFS work queue.

* 3636210 (Tracking ID: 3633067)

SYMPTOM:
While converting from an ext3 file system to VxFS using vxfsconvert, many inodes are missing.

DESCRIPTION:
When vxfsconvert(1M) is run on an ext3 file system, it misses an entire block group of inodes. This happens because of an incorrect calculation of the block group number of a given inode in a border case. The inode which is the last inode of a given block group is calculated to have the correct inode offset, but is calculated to be in the next block group. This causes the entire next block group to be skipped when the code attempts to find the next consecutive inode.

RESOLUTION:
The code is modified to correct the calculation of the block group number.

* 3644006 (Tracking ID: 3451686)

SYMPTOM:
During internal stress testing on cluster file system (CFS), a debug assert is hit due to an invalid cache generation count on the incore inode.

DESCRIPTION:
The reset of the cache generation count in the incore inode, used in Disk Layout Version (DLV) 10, was missed during inode reuse, causing the debug assert.

RESOLUTION:
The code is modified to reset the cache generation count in the incore inode during inode reuse.
* 3645825 (Tracking ID: 3622326)

SYMPTOM:
The file system is marked with the fullfsck flag as an inode is marked bad during checkpoint promote.

DESCRIPTION:
VxFS incorrectly skipped pushing data to the clone inode, due to which the inode is marked bad during checkpoint promote, which in turn resulted in the file system being marked with the fullfsck flag.

RESOLUTION:
The code is modified to push the proper data to the clone inode.

Patch ID: VRTSvxvm-6.1.1.100-RHEL6

* 3632970 (Tracking ID: 3631230)

SYMPTOM:
VRTSvxvm patch versions 6.0.5 and 6.1.1 do not work with the RHEL6.6 update.
# rpm -ivh VRTSvxvm-6.1.1.000-GA_RHEL6.x86_64.rpm
Preparing... ########################################### [100%]
1:VRTSvxvm ########################################### [100%]
Installing file /etc/init.d/vxvm-boot
creating VxVM device nodes under /dev
WARNING: No modules found for 2.6.32-494.el6.x86_64, using compatible modules for 2.6.32-71.el6.x86_64.
FATAL: Error inserting vxio (/lib/modules/2.6.32-494.el6.x86_64/veritas/vxvm/vxio.ko): Unknown symbol in module, or unknown parameter (see dmesg)
ERROR: modprobe error for vxio. See documentation.
warning: %post(VRTSvxvm-6.1.1.000-GA_RHEL6.x86_64) scriptlet failed, exit status 1
#
Or, after the OS update, the system log file will have the following messages logged:
vxio: disagrees about version of symbol poll_freewait
vxio: Unknown symbol poll_freewait
vxio: disagrees about version of symbol poll_initwait
vxio: Unknown symbol poll_initwait

DESCRIPTION:
Installation of VRTSvxvm patch versions 6.0.5 and 6.1.1 fails on RHEL6.6 due to changes in the poll_initwait() and poll_freewait() interfaces.

RESOLUTION:
The VxVM package has been re-compiled in the RHEL6.6 build environment.
Patch ID: VRTSaslapm-6.1.1.200-RHEL6

* 3659363 (Tracking ID: 3665727)

SYMPTOM:
The vxdmpadm listapm output does not list any APMs except the default ones:
[root@rpms]# vxdmpadm listapm
Filename      APM Name    APM Version    Array Types     State
================================================================================
dmpjbod.ko    dmpjbod     1              Disk            Active
dmpjbod.ko    dmpjbod     1              APdisk          Active
dmpalua.ko    dmpalua     1              ALUA            Not-Active
dmpaaa.ko     dmpaaa      1              A/A-A           Not-Active
dmpapg.ko     dmpapg      1              A/PG            Not-Active
dmpapg.ko     dmpapg      1              A/PG-C          Not-Active
dmpaa.ko      dmpaa       1              A/A             Active
dmpap.ko      dmpap       1              A/P             Active
dmpap.ko      dmpap       1              A/P-C           Active
dmpapf.ko     dmpapf      1              A/PF-VERITAS    Not-Active
dmpapf.ko     dmpapf      1              A/PF-T3PLUS     Not-Active
[root@rpms]#

DESCRIPTION:
To support the RHEL6.6 update, the dmp module is recompiled with the latest RHEL6.6 kernel version. During post-install of the package, the APM modules fail to load due to a kernel-version mismatch between the DMP module and the additional APM modules.

RESOLUTION:
The ASLAPM package is recompiled with the RHEL6.6 kernel.

Patch ID: VRTSllt-6.1.1.200-RHEL6

* 3794198 (Tracking ID: 3794154)

SYMPTOM:
Veritas Cluster Server (VCS) does not support Red Hat Enterprise Linux 6 Update 7 (RHEL6.7).

DESCRIPTION:
VCS did not support RHEL versions released after RHEL6 Update 6.

RESOLUTION:
VCS support for Red Hat Enterprise Linux 6 Update 7 (RHEL6.7) is now introduced.

Patch ID: VRTSllt-6.1.1.100-RHEL6

* 3646467 (Tracking ID: 3642131)

SYMPTOM:
Low Latency Transport (LLT) fails to start on Red Hat Enterprise Linux (RHEL) 6 Update 6.

DESCRIPTION:
On RHEL 6.6, LLT fails to start due to kABI incompatibility. The following error appears:
# rpm -ivh VRTSllt-6.1.1.000-RHEL6.x86_64.rpm
Preparing... ########################################### [100%]
1:VRTSllt ########################################### [100%]
# /etc/init.d/llt start
Starting LLT:
LLT: loading module...
ERROR: No appropriate modules found.
Error in loading module "llt". See documentation.
LLT:Error: cannot find compatible module binary
Or, after the OS update, the following messages will be logged in the system log file:
kernel: llt: disagrees about version of symbol ib_create_cq
kernel: llt: Unknown symbol ib_create_cq
kernel: llt: disagrees about version of symbol rdma_resolve_addr
kernel: llt: Unknown symbol rdma_resolve_addr
kernel: llt: disagrees about version of symbol ib_dereg_mr
kernel: llt: Unknown symbol ib_dereg_mr
kernel: llt: disagrees about version of symbol rdma_reject
kernel: llt: Unknown symbol rdma_reject
kernel: llt: disagrees about version of symbol rdma_disconnect
kernel: llt: Unknown symbol rdma_disconnect
kernel: llt: disagrees about version of symbol rdma_resolve_route
kernel: llt: Unknown symbol rdma_resolve_route
kernel: llt: disagrees about version of symbol rdma_bind_addr
kernel: llt: Unknown symbol rdma_bind_addr
kernel: llt: disagrees about version of symbol rdma_create_qp
kernel: llt: Unknown symbol rdma_create_qp

RESOLUTION:
The VRTSllt package now includes an RHEL 6.6 compatible kernel module.

Patch ID: VRTSamf-6.1.1.100-RHEL6

* 3794198 (Tracking ID: 3794154)

SYMPTOM:
Veritas Cluster Server (VCS) does not support Red Hat Enterprise Linux 6 Update 7 (RHEL6.7).

DESCRIPTION:
VCS did not support RHEL versions released after RHEL6 Update 6.

RESOLUTION:
VCS support for Red Hat Enterprise Linux 6 Update 7 (RHEL6.7) is now introduced.

Patch ID: VRTSgab-6.1.0.200-Linux_RHEL6

* 3794198 (Tracking ID: 3794154)

SYMPTOM:
Veritas Cluster Server (VCS) does not support Red Hat Enterprise Linux 6 Update 7 (RHEL6.7).

DESCRIPTION:
VCS did not support RHEL versions released after RHEL6 Update 6.

RESOLUTION:
VCS support for Red Hat Enterprise Linux 6 Update 7 (RHEL6.7) is now introduced.

Patch ID: VRTSgab-6.1.0.100-GA_RHEL6

* 3728108 (Tracking ID: 3728106)

SYMPTOM:
On Linux, the value corresponding to the 15-minute CPU load average increases to about 4 even when clients of GAB are not running and the CPU usage is relatively low.
DESCRIPTION:
GAB reads the count of currently online CPUs to correctly adapt the client heartbeat timeout to the system load. Due to an issue, it accidentally overwrites the kernel's load average value. As a result, even though the actual CPU usage does not increase, the value observed in /proc/loadavg for the 15-minute CPU load average is increased.

RESOLUTION:
The code is modified so that the GAB module does not overwrite the kernel's load average value.

Patch ID: VRTSvxfen-6.1.1.100-RHEL6

* 3794198 (Tracking ID: 3794154)

SYMPTOM:
Veritas Cluster Server (VCS) does not support Red Hat Enterprise Linux 6 Update 7 (RHEL6.7).

DESCRIPTION:
VCS did not support RHEL versions released after RHEL6 Update 6.

RESOLUTION:
VCS support for Red Hat Enterprise Linux 6 Update 7 (RHEL6.7) is now introduced.

Patch ID: VRTSdbac-6.1.1.100-RHEL6

* 3831498 (Tracking ID: 3850806)

SYMPTOM:
The VRTSdbac patch does not work with RHEL6.7 (2.6.32-573.el6.x86_64 kernel) and is unable to load the vcsmm module on RHEL6.7.

DESCRIPTION:
Installation of VRTSdbac patch version 6.1.1 fails on RHEL6.7 as the VCSMM module is not available for the RHEL6.7 kernel 2.6.32-573.el6.x86_64. The system log file logs the following messages:
Starting VCSMM:
ERROR: No appropriate modules found.
Error in loading module "vcsmm". See documentation.
Error : VCSMM driver could not be loaded.
Error : VCSMM could not be started.
Error : VCSMM could not be started.

RESOLUTION:
The VRTSdbac package is re-compiled with the RHEL6.7 kernel in the build environment to resolve the failure.


INSTALLING THE PATCH
--------------------
Run the Installer script to automatically install the patch:
------------------------------------------------------------
To install the patch, perform the following steps on at least one node in the cluster:
1. Copy the hotfix sfha-rhel6.7_x86_64-Patch-6.1.1.300.tar.gz to /tmp
2.
Untar sfha-rhel6.7_x86_64-Patch-6.1.1.300.tar.gz to /tmp/hf
# mkdir /tmp/hf
# cd /tmp/hf
# gunzip /tmp/sfha-rhel6.7_x86_64-Patch-6.1.1.300.tar.gz
# tar xf /tmp/sfha-rhel6.7_x86_64-Patch-6.1.1.300.tar
3. Install the hotfix
# pwd
/tmp/hf
# ./installSFHA611P300 [ ...]

You can also install this patch together with the 6.1 GA release and the 6.1.1 patch release using Install Bundles:
1. Download Storage Foundation and High Availability Solutions 6.1.
2. Extract the tar ball into the /tmp/sfha6.1/ directory.
3. Download SFHA Solutions 6.1.1 from https://sort.veritas.com/patches.
4. Extract it to the /tmp/sfha6.1.1 directory.
5. Change to the /tmp/sfha6.1.1 directory by entering:
# cd /tmp/sfha6.1.1
6. Invoke the installmr script with the -base_path and -hotfix_path options, where -base_path should point to the 6.1 image directory and -hotfix_path to the 6.1.1.300 directory.
# ./installmr -base_path [<61 path>] -hotfix_path [] [ ...]

Install the patch manually:
---------------------------
Before the upgrade:
(a) Stop I/Os to all the VxVM volumes.
(b) Unmount any file systems with VxVM volumes.
(c) Stop applications using any VxVM volumes.
Select the appropriate RPMs for your system, and upgrade to the new patch:
# rpm -Uhv


REMOVING THE PATCH
------------------
# rpm -e


KNOWN ISSUES
------------
* Tracking ID: 3690067

SYMPTOM:
The 'delayed allocation' (i.e. 'dalloc') feature on the VxFS 6.1.1.100 p-patch can cause data loss or stale data. The dalloc feature is enabled by default for locally mounted file systems and is not supported for cluster mounted file systems. Dalloc with sequential extending buffered writes can possibly cause data loss or stale data. This issue is seen only with the 6.1.1.100 p-patch.

WORKAROUND:
Disable the 'delayed allocation' ('dalloc') feature on the VxFS file systems. The following commands are used to disable dalloc:
1) For a file system which is already mounted:
# vxtunefs -s -o dalloc_enable=0 $MOUNT_POINT
2) To make the value persistent across system reboots, add an entry to /etc/vx/tunefstab:
/dev/vx/dsk/$DISKGROUP/$VOLUME dalloc_enable=0


SPECIAL INSTRUCTIONS
--------------------
NONE


OTHERS
------
NONE