README VERSION : 1.1
README CREATION DATE : 2013-10-01
PATCH-ID : 6.0.400.000
PATCH NAME : VRTSvxfs 6.0.400.000
BASE PACKAGE NAME : VRTSvxfs
BASE PACKAGE VERSION : 6.0.100.000
SUPERSEDED PATCHES : 6.0.300.000
REQUIRED PATCHES : NONE
INCOMPATIBLE PATCHES : NONE
SUPPORTED PADV : sles11_x86_64 (P-PLATFORM, A-ARCHITECTURE, D-DISTRIBUTION, V-VERSION)
PATCH CATEGORY : CORRUPTION, HANG, PANIC, PERFORMANCE
PATCH CRITICALITY : CRITICAL
HAS KERNEL COMPONENT : YES
ID : NONE
REBOOT REQUIRED : YES
REQUIRE APPLICATION DOWNTIME : Yes

PATCH INSTALLATION INSTRUCTIONS:
--------------------------------
Please refer to the Release Notes for installation instructions.

PATCH UNINSTALLATION INSTRUCTIONS:
----------------------------------
Please refer to the Release Notes for uninstallation instructions.

SPECIAL INSTALL INSTRUCTIONS:
-----------------------------
NONE

SUMMARY OF FIXED ISSUES:
-----------------------------------------
PATCH ID:6.0.400.000
2933290 (2756779) The code is modified to improve read and write performance on Cluster File System (CFS) for applications that rely on POSIX file-record locking through fcntl.
2933301 (2908391) Removing checkpoints from a VxFS file system takes a long time when a large number of files are present.
2947029 (2926684) In rare cases, the system may panic while performing a logged write.
2959557 (2834192) You are unable to mount the file system after the full fsck(1M) utility is run.
2978236 (2977828) The file system is marked bad after an inode table overflow error occurs.
2983249 (2983248) The vxrepquota(1M) command dumps core.
3059010 (3011959) The system may panic during file system locking or unlocking with the fsadm(1M) or vxumount(1M) command.
3131798 (2839871) On a system with DELICACHE enabled, several file system operations may hang.
3131826 (2966277) High file system activity such as read/write/open/lookup may panic the system.
3131830 (2991880) Under low memory conditions on VxFS, certain file system activities may become unresponsive.
3248037 (2956195) The mmap() operation in a CFS environment takes a long time to complete.
3248051 (3121933) The pwrite(2) function fails with the EOPNOTSUPP error.
3248053 (3140990) The ability to turn off VxFS's invalidation of pages is required for some Network File System (NFS) workloads.
3248077 (3089210) The message "V-2-17: vx_iread_1 - <file system> file system inode <inode number> marked bad incore" is displayed in the system log.
3248084 (2107152) The system panics when you unmount a mntlock-protected VxFS file system whose device is mounted on multiple directories.
3248089 (3003679) The file system becomes unresponsive when the fsppadm(1M) command runs while a file with named stream attributes (nattr) is being removed at the same time.
3248090 (2963763) When the thin_friendly_alloc and delicache_enable tunables are enabled, VxFS may enter a deadlock.
3248094 (3192985) Checkpoint quota usage on Cluster File System (CFS) can become negative.
3248096 (3214816) With the DELICACHE feature enabled, frequent creation and deletion of a user's inodes may corrupt the user quota file.
3248099 (3189562) Oracle daemons hang in the vx_growfile() kernel function.
3248103 (3068902) statfs() calls on non-VxFS file systems may cause the df command to hang when stale NFS mounts are present.
3248990 (3270210) On SUSE Linux Enterprise Server 11 (SLES 11) SP3, no support is provided for VRTSfsadv, VRTSfssdk, VRTSvxfs, VRTScavf, and VRTSsvs.
3257474 (3257467) Support for SLES 11 SP3 is added.
3270203 (3270200) Support for SLES 11 SP3 is added.
3283244 (3047134) GAB should not invoke callback functions in interrupt context.
3284764 (3042485) During internal stress testing, the "f:vx_purge_nattr:1" assert fails.
3299685 (2999493) During internal testing, file system check validation fails after a successful full fsck with the following message: run_fsck : First full fsck pass failed, exiting.
3306410 (2495673) During communication between the nodes in a cluster, CIO-related data in an inode mismatches.
3306442 (3312030) The default quota support on Veritas File System version 6.0.4 is changed to 32-bit.
3317208 (3312030) The default quota support on Veritas File System version 6.0.4 is changed to 32-bit.
3321131 (3247752) During internal stress tests, the system panics with the following message: Unable to handle kernel paging request at
3321730 (3214328) There is a mismatch between the GLM grant-level state and the GLM data in a CFS inode.
3323912 (3259634) A cluster file system with more than 4G blocks gets corrupted because blocks containing file system metadata are incorrectly zeroed out.

PATCH ID:6.0.300.000
2928921 (2843635) Internal testing encounters some failures.
2933291 (2806466) "fsadm -R" results in a panic at the LVM layer because vx_ts.ts_length is set to 2GB.
2933292 (2895743) Accessing named attributes for some files seems to be slow.
2933294 (2750860) Performance degrades due to fragmentation in a CFS cluster.
2933296 (2923105) The VRTSvxfs5.0MP4HFaf upgrade hangs in the VxFS preinstall scripts.
2933309 (2858683) Reserve extent attributes change after vxrestore for files larger than 8192 bytes.
2933313 (2841059) A full fsck fails to clear the corruption in attribute inode 15.
2933330 (2773383) Read/write operations on memory-mapped files appear to hang.
2933571 (2417858) VxFS quotas do not support 64-bit limits.
2933729 (2611279) A file system with shared extents may panic.
2933751 (2916691) Deduplication operations hang.
2933822 (2624262) FileStore dedup: fsdedup.bin hits an oops at vx_bc_do_brelse.
2937367 (2923867) An internal test hits the assert "f:xted_set_msg_pri1:1".
2978227 (2857751) Internal testing hits the assert "f:vx_cbdnlc_enter:1a".
3008450 (3004466) Installation of 5.1SP1RP3 fails on RHEL 6.3.

PATCH ID:6.0.100.200
2912412 (2857629) File system corruption can occur, requiring a full fsck of the system.
2912435 (2885592) vxdump of a file system compressed with vxcompress aborts.
2923805 (2590918) Freeing unshared extents is delayed after a primary switchover.

PATCH ID:6.0.100.100
2907912 (2907908) The gms rpm installation on SLES11 SP2 fails with an error that the modules are not suitable for the kernel.
2907921 (2907919) The VxFS rpm installation on SLES11 SP2 fails with an error that the modules are not suitable for the kernel.
2907924 (2907923) The odm rpm installation on SLES11 SP2 fails with an error that the modules are not suitable for the kernel.
2907932 (2907930) The glm rpm installation on SLES11 SP2 fails with an error that the modules are not suitable for the kernel.
2910648 (2905579) VxVM (Veritas Volume Manager) rpm installation on SLES11 SP2 fails.

SUMMARY OF KNOWN ISSUES:
-----------------------------------------
NONE

KNOWN ISSUES:
--------------
NONE

FIXED INCIDENTS:
----------------
PATCH ID:6.0.400.000

* INCIDENT NO:2933290 TRACKING ID:2756779

SYMPTOM: Write and read performance concerns on Cluster File System (CFS) when running applications that rely on POSIX file-record locking (fcntl).

DESCRIPTION: The use of fcntl locks on CFS generates heavy messaging traffic across the cluster nodes, which reduces the performance of readers and writers.

RESOLUTION: The code is modified to cache the file-record locked ranges on the node whenever possible, so that messages do not have to be broadcast across the nodes in the cluster. The kind of locking pattern this helps is sketched below.
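The following is a minimal, illustrative userland sketch (not part of this patch) of the POSIX record-locking pattern such applications use; the file path, region offset, and sizes are placeholders.

    /* Lock a 4 KB record with fcntl(F_SETLKW), use it, then unlock it. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/mnt/cfs/datafile", O_RDWR | O_CREAT, 0644); /* placeholder path */
        struct flock fl;

        if (fd < 0) {
            perror("open");
            return 1;
        }
        memset(&fl, 0, sizeof(fl));
        fl.l_type   = F_WRLCK;   /* exclusive record lock */
        fl.l_whence = SEEK_SET;
        fl.l_start  = 0;         /* lock bytes 0..4095 */
        fl.l_len    = 4096;
        if (fcntl(fd, F_SETLKW, &fl) < 0) {  /* block until granted */
            perror("fcntl(F_SETLKW)");
            return 1;
        }
        /* ... read/modify/write the locked record ... */
        fl.l_type = F_UNLCK;
        (void)fcntl(fd, F_SETLK, &fl);       /* release the record */
        close(fd);
        return 0;
    }

With the fix, repeated locks on ranges already cached on the local node no longer require cluster-wide messaging.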
* INCIDENT NO:2933301 TRACKING ID:2908391

SYMPTOM: Checkpoint removal takes too long if the Veritas File System (VxFS) contains a large number of files. The cfsumount(1M) command can hang if the removal of multiple checkpoints is in progress on such a file system.

DESCRIPTION: When removing a checkpoint, VxFS traverses every inode to determine whether a pull or push is needed for the upstream or downstream checkpoint in its chain. This is time consuming when the file system has a large number of files, which makes the checkpoint removal slow. The "cfsumount -c fsname" command forces the unmount of a VxFS file system when an asynchronous checkpoint removal job is in progress, which it detects by checking whether the VxFS statistic "vxi_clonerm_jobs" is greater than zero. However, the statistic does not count the jobs that have been placed on the checkpoint removal work queue. Because "vxi_clonerm_jobs" incorrectly reads zero, the force unmount does not happen even when checkpoint removal jobs are pending.

RESOLUTION: For the slow checkpoint removal: the code is modified to create multiple threads that work on different Inode Allocation Units (IAUs) in parallel, to sort the checkpoint removal jobs by creation time in ascending order (which reduces the inode push work), and to enlarge the checkpoint push size. For the cfsumount(1M) command hang: the code is modified to include the jobs on the work queue in the "vxi_clonerm_jobs" statistic.

* INCIDENT NO:2947029 TRACKING ID:2926684

SYMPTOM: On systems with a heavy transaction workload, such as the creation and deletion of files, the system may panic with the following stack trace:
...
vxfs:vx_traninit+0x10
vxfs:vx_dircreate_tran+0x420
vxfs:vx_pd_create+0x980
vxfs:vx_create1_pd+0x1d0
vxfs:vx_do_create+0x80
vxfs:vx_create1+0xd4
vxfs:vx_create+0x158
...

DESCRIPTION: With delayed logging, a transaction commit can complete before the log write completes. The transaction memory is freed before the transaction is logged, which corrupts the transaction freelist and panics the system.

RESOLUTION: The code is modified so that the transaction is not freed until the log is written. The ordering rule is sketched below.
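A minimal userland sketch (not VxFS code) of the ordering the fix enforces: an object may be returned to its freelist only after its log write has completed. The structure and the writer thread are stand-ins.

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct tran {
        char payload[64];            /* stand-in for transaction state */
    };

    /* Simulated asynchronous log writer. */
    static void *log_write(void *arg)
    {
        struct tran *tp = arg;
        /* ... write tp->payload to the intent log ... */
        printf("log write done for %p\n", (void *)tp);
        free(tp);                    /* free ONLY after the log I/O is done */
        return NULL;
    }

    int main(void)
    {
        struct tran *tp = calloc(1, sizeof(*tp));
        pthread_t thr;

        pthread_create(&thr, NULL, log_write, tp);
        /* The defect was the equivalent of calling free(tp) here, before
         * log_write finished: the freelist would be corrupted. */
        pthread_join(thr, NULL);
        return 0;
    }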
* INCIDENT NO:2959557 TRACKING ID:2834192

SYMPTOM: The mount operation fails after the full fsck(1M) utility is run, and the following error message is displayed on the console:
'UX:vxfs mount.vxfs: ERROR: V-3-26881 : Cannot be mounted until it has been cleaned by fsck. Please run "fsck -t vxfs -y MNTPNT" before mounting'.

DESCRIPTION: When a cluster file system is mounted, VxFS validates the in-core per-node-cut (PNCUT) entries against their on-disk counterparts; if this validation fails, the mount is unsuccessful. The full fsck, in its fourth pass, checks the free inode/extent maps, merges the dirty in-core PNCUT files, and validates them against the corresponding on-disk values. However, if any PNCUT entry is corrupted, the fsck(1M) utility simply ignores it, which leads to the subsequent mount failure.

RESOLUTION: The code is modified to enhance the fsck(1M) utility to handle any delinquent PNCUT entries and rebuild them as required.

* INCIDENT NO:2978236 TRACKING ID:2977828

SYMPTOM: The file system is marked bad after an inode table overflow error:
kernel: vxfs: msgcnt 7911 mesg 014: V-2-14: vx_iget - inode table overflow
kernel: vxfs: msgcnt 7912 mesg 063: V-2-63: vx_fset_markbad - /dev/vx/dsk/sfsdg/project2 file system fileset (index 1019) marked bad
kernel: V-2-96: vx_setfsflags - /dev/vx/dsk/sfsdg/project2 file system fullfsck flag set - vx_fset_markbad

DESCRIPTION: vx_clone_dispose() needs to read every inode in the fileset being removed in order to truncate its allocated data extents. For very large filesets this means reading millions of inodes into the VxFS inode cache, which can cause heavy contention on the cache, and vx_iget() may return ENFILE when memory runs short. In that case vx_clone_dispose() sets the full fsck flag for the whole file system, but because ENFILE is not a critical error, the flag should not be set.

RESOLUTION: The code is modified so that vx_clone_dispose() exits directly instead of setting the full fsck flag on an ENFILE error. With this fix, a vx_iget() ENFILE error interrupts the clone removal, and the removal resumes on the next mount operation. The same logic is already used for ENOSPC errors during clone removal.

* INCIDENT NO:2983249 TRACKING ID:2983248

SYMPTOM: The vxrepquota(1M) command dumps core.

DESCRIPTION: vxrepquota and vxquotaon allocate an array of 50 pointers for the vfstab entries, so at most 50 entries can be held. The customer had 69 VxFS file systems in /etc/vfstab, which overflowed the array.

RESOLUTION: The listbuf array is extended to 1024 entries. An overflow can still occur when more than 1024 VxFS entries exist in /etc/vfstab, but 1024 is considered large enough for practical use. The bounds check involved is sketched below.
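A small userland sketch of the defect class and its guard. The constant LISTBUF_MAX, the buffer sizes, and the line parsing are illustrative, not the vxrepquota source.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define LISTBUF_MAX 1024   /* was effectively 50 before the fix */

    int main(void)
    {
        char *listbuf[LISTBUF_MAX];
        char line[512];
        int n = 0;
        FILE *fp = fopen("/etc/vfstab", "r");

        if (fp == NULL) {
            perror("fopen");
            return 1;
        }
        while (fgets(line, sizeof(line), fp) != NULL) {
            if (n >= LISTBUF_MAX) {            /* without this check, entry  */
                fprintf(stderr, "too many entries, ignoring rest\n");
                break;                          /* 51+ overran the array      */
            }
            listbuf[n++] = strdup(line);
        }
        fclose(fp);
        printf("%d entries\n", n);
        while (n > 0)
            free(listbuf[--n]);
        return 0;
    }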
* INCIDENT NO:3059010 TRACKING ID:3011959

SYMPTOM: The system may panic during file system locking or unlocking with the fsadm(1M) or vxumount(1M) command, with the following stack trace:
vx_show_options
show_vfsmnt
seq_read
vfs_read
sys_read
system_call_fastpath

DESCRIPTION: The file system lock and unlock operations, and reads of the mount lock key, are not serialized by a lock. As a result, while one user resets the file system lock, another user reading it can access an entry that has already been freed. This race causes a NULL pointer dereference, resulting in the panic mentioned above.

RESOLUTION: The code is modified to serialize the file system locking operations with the CLONEOPS lock.

* INCIDENT NO:3131798 TRACKING ID:2839871

SYMPTOM: On a system with DELICACHE enabled, several file system operations may hang with the following stack trace:
vx_delicache_inactive
vx_delicache_inactive_wp
vx_workitem_process
vx_worklist_process
vx_worklist_thread
vx_kthread_init

DESCRIPTION: The DELICACHE lock synchronizes access to the DELICACHE list and should be held only while the list is updated. In some cases, however, it is held longer and released only after an issued I/O completes, causing other threads to hang.

RESOLUTION: The code is modified to release the spinlock before issuing a blocking I/O request. The pattern is sketched below.
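A userland sketch of the rule the fix applies, with a pthread spinlock standing in for the kernel spinlock: never hold a spinlock across an operation that can block. The list length and the sleep() are stand-ins.

    #include <pthread.h>
    #include <unistd.h>

    static pthread_spinlock_t list_lock;
    static int delicache_len;            /* stand-in for the DELICACHE list */

    static void remove_one_and_flush(void)
    {
        pthread_spin_lock(&list_lock);
        delicache_len--;                 /* short, non-blocking list update */
        pthread_spin_unlock(&list_lock); /* release BEFORE any blocking work */

        sleep(1);                        /* stand-in for the blocking I/O */
    }

    int main(void)
    {
        pthread_spin_init(&list_lock, PTHREAD_PROCESS_PRIVATE);
        delicache_len = 1;
        remove_one_and_flush();
        pthread_spin_destroy(&list_lock);
        return 0;
    }

Holding the spinlock across the blocking call is what stalled every other thread that needed the DELICACHE list.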
* INCIDENT NO:3131826 TRACKING ID:2966277

SYMPTOM: Systems with high file system activity such as read/write/open/lookup may panic due to a rare race condition, with the following stack trace:
spinlock+0x21 ( )
-> vx_rwsleep_unlock()
vx_ipunlock+0x40()
vx_inactive_remove+0x530()
vx_inactive_tran+0x450()
vx_local_inactive_list+0x30()
vx_inactive_list+0x420()
-> vx_workitem_process()
-> vx_worklist_process()
vx_worklist_thread+0x2f0()
kthread_daemon_startup+0x90()

DESCRIPTION: The ILOCK is released before the IPUNLOCK, which opens a race condition. The panic occurs when an inode that has already been freed is accessed.

RESOLUTION: The code is modified so that the ILOCK protects the inode's memory from being freed while that memory is being accessed.

* INCIDENT NO:3131830 TRACKING ID:2991880

SYMPTOM: Under low memory conditions on a Veritas File System (VxFS), certain file system activities may become unresponsive, with the following stack trace seen in crash dumps:
vx_diput
d_kill
prune_one_dentry
__shrink_dcache_sb
prune_dcache
shrink_dcache_memory
shrink_slab
do_try_to_free_pages
try_to_free_pages
__alloc_pages_slowpath
__alloc_pages_nodemask
kmem_getpages
fallback_alloc
kmem_cache_alloc
do_tune_cpucache
enable_cpucache
kmem_cache_create

DESCRIPTION: Under low memory conditions, a cache-create operation from the VxFS kernel can trigger a cache-shrink operation via vx_diput because there is not enough free memory to allocate. Both operations require the cache_chain_mutex lock, resulting in a deadlock.

RESOLUTION: The code is modified to remove the cache-shrink operation from the vx_diput context, thus avoiding the deadlock.

* INCIDENT NO:3248037 TRACKING ID:2956195

SYMPTOM: The mmap() operation in a CFS environment takes a long time to complete.

DESCRIPTION: During an mmap operation, read-write activity across the CFS nodes invalidates the cache whenever a cached file is accessed from a different node, and this cache invalidation slows the mmap operation down.

RESOLUTION: The code is modified to add a new module parameter, vx_cfs_mmap_perf_tune, for Linux kernel versions later than 2.6.27. When the value is set to 1, the mmap operation over CFS takes an optimized path. A sketch of how a module parameter like this is typically exposed follows.
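The following is standard Linux module-parameter boilerplate, shown only to illustrate how a parameter such as vx_cfs_mmap_perf_tune is usually declared and toggled; it is not the VxFS source. A writable parameter of this kind normally also appears under /sys/module/<module>/parameters/.

    #include <linux/kernel.h>
    #include <linux/module.h>
    #include <linux/moduleparam.h>

    static int vx_cfs_mmap_perf_tune = 0;           /* 0 = default path */
    module_param(vx_cfs_mmap_perf_tune, int, 0644);  /* root may toggle  */
    MODULE_PARM_DESC(vx_cfs_mmap_perf_tune,
                     "Set to 1 to enable the optimized CFS mmap path");

    static int __init demo_init(void)
    {
        pr_info("vx_cfs_mmap_perf_tune=%d\n", vx_cfs_mmap_perf_tune);
        return 0;
    }

    static void __exit demo_exit(void)
    {
    }

    module_init(demo_init);
    module_exit(demo_exit);
    MODULE_LICENSE("GPL");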
* INCIDENT NO:3248051 TRACKING ID:3121933

SYMPTOM: The pwrite() function fails with EOPNOTSUPP when the write range lies in two indirect extents.

DESCRIPTION: When the range of a pwrite() call falls across two indirect extents (one ZFOD extent belonging to a DB2 pre-allocated file created with the setext( , VX_GROWFILE, ) ioctl, and a DATA extent belonging to the adjacent INDIR), the write fails with EOPNOTSUPP. VxFS tries to coalesce extents that belong to different indirect address extents as part of the transaction; such a metadata change consumes more transaction resources than the VxFS transaction engine can support in the current implementation.

RESOLUTION: The code is modified to retry the transaction without coalescing the extents, since the coalescing is only an optimization and should not fail the write. The failing call pattern is sketched below.
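An illustrative userland call of the shape that used to fail. The path, offset, and buffer size are placeholders, chosen only to suggest a write straddling an extent boundary.

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[8192];
        int fd = open("/mnt/vxfs/dbfile", O_WRONLY);  /* placeholder path */
        ssize_t n;

        if (fd < 0) {
            perror("open");
            return 1;
        }
        memset(buf, 0xab, sizeof(buf));
        /* A write whose range may straddle an extent boundary
         * (the offset is illustrative). */
        n = pwrite(fd, buf, sizeof(buf), 1024 * 1024 - 4096);
        if (n < 0 && errno == EOPNOTSUPP)
            fprintf(stderr, "hit the EOPNOTSUPP case this patch fixes\n");
        close(fd);
        return n < 0 ? 1 : 0;
    }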
* INCIDENT NO:3248053 TRACKING ID:3140990

SYMPTOM: The ability to turn off VxFS's invalidation of pages is required for some Network File System (NFS) workloads.

DESCRIPTION: If pages age out under memory pressure, they are reused, and subsequent reads may have to repopulate them from disk. In addition, flush-behind can reclaim pages that future reads would otherwise be served from. This behavior can now be changed through tunables.

RESOLUTION: A few global variables are exposed as tunables so that they can be adjusted for better performance:
1) vx_read_flush_disabled - disables the read flush.
2) vx_write_flush_disabled - disables the write flush.
3) vx_nfs_mem_pressure_ignore - ignores memory pressure and disables write flush-behind while I/O arrives through NFS.
4) vx_iclean_timelag - the minimum time inodes must stay on the freelist before being reclaimed.

* INCIDENT NO:3248077 TRACKING ID:3089210

SYMPTOM: After vx_maxlink (the maximum number of sub-directories in a directory, or the hard link count) is increased from 32K to 64K, you may receive the following error message:
vxfs: msgcnt 2545 mesg 017: V-2-17: vx_iread_1 - file system inode marked bad incore
A full fsck operation on such a file system may move the affected directories to /lost+found.

DESCRIPTION: When the vx_maxlink value is increased from 32K to 64K, the new limit is saved in a .conf file. However, the upgrade and re-install operations do not read that file; the module initialization scripts roll the limit back to the default 32K, which leads to the message above. The file system check utility (fsck) also still verifies file system consistency against the outdated 32K limit and moves all directories larger than 32K to the /lost+found directory.

RESOLUTION: The code is modified to read the configuration file during the upgrade and re-install operations, and fsck is modified to check against the new limit.

* INCIDENT NO:3248084 TRACKING ID:2107152

SYMPTOM: The system panics when you unmount a mntlock-protected VxFS file system whose device is mounted on multiple directories.

DESCRIPTION: Linux supports mounting the same device on different directories multiple times, and vx_vfsmnt_updatecnt() calls mntget()/mntput() for every mounted vfsmount of the same device. This means that none of the device's mount points can be unmounted before "fsadm -o mntunlock" is run. However, new vfsmounts of the device may be added after the first vx_mntlock_hold(). These new vfsmounts miss the mntget() from vx_mntlock_hold() but are caught by the mntput() of the following vx_mntlock_release(). The unbalanced mntput() causes the system panic.

RESOLUTION: vx_get_sb_impl() is changed to take a hold on every new mount point when the device already has mntlock set. vx_aioctl_setmntlock()/vx_aioctl_unsetmntlock() are also changed to take the VX_CLONEOPS_LOCK.

* INCIDENT NO:3248089 TRACKING ID:3003679

SYMPTOM: The file system becomes unresponsive when the fsppadm(1M) command runs while a file with named stream attributes (nattr) is being removed at the same time. The following two typical threads are involved:
T1, COMMAND: "fsppadm":
schedule at
vxg_svar_sleep_unlock
vxg_grant_sleep
vxg_cmn_lock
vxg_api_lock
vx_glm_lock
vx_ihlock
vx_cfs_iread
vx_iget
vx_traverse_tree
vx_dir_lookup
vx_rev_namelookup
vx_aioctl_common
vx_ioctl
vx_compat_ioctl
compat_sys_ioctl
T2, COMMAND: "vx_worklist_thr":
schedule
vxg_svar_sleep_unlock
vxg_grant_sleep
vxg_cmn_lock
vxg_api_lock
vx_glm_lock
vx_genglm_lock
vx_dirlock
vx_do_remove
vx_purge_nattr
vx_nattr_dirremove
vx_inactive_tran
vx_cfs_inactive_list
vx_inactive_list
vx_workitem_process
vx_worklist_process
vx_worklist_thread
vx_kthread_init
kernel_thread

DESCRIPTION: The file system hangs due to a deadlock between the two threads. T1, initiated by fsppadm, calls vx_traverse_tree to obtain the path name for a given inode number. T2 removes the inode as well as its affiliated nattr inodes. The reverse name lookup (T1) holds the global dirlock in vx_dir_lookup and traverses the entire path from bottom to top in vx_traverse_tree to resolve the inode number. During the lookup, VxFS must hold the hlock of each inode while reading it, and drops it after reading. The file removal (T2) proceeds through vx_inactive_tran, which takes the hlock of the inode being removed, and then removes all of its named attribute inodes in vx_do_remove, where the global dirlock is sometimes needed. Eventually, each thread waits for the lock held by the other, which results in the deadlock.

RESOLUTION: The code is modified so that the dirlock is not acquired during the reverse name lookup. A generic illustration of this deadlock shape, and of the lock-ordering discipline that avoids it, follows.
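A userland sketch (not VxFS code): two threads that would deadlock by taking a directory lock and an inode lock in opposite orders stay safe when both honor one global order. The actual fix removes T1's need for the dirlock; the fixed order below is only the textbook cure for this shape of deadlock.

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t dirlock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t ihlock  = PTHREAD_MUTEX_INITIALIZER;

    /* Both threads honor the same order: dirlock first, then ihlock. */
    static void *worker(void *name)
    {
        pthread_mutex_lock(&dirlock);
        pthread_mutex_lock(&ihlock);
        printf("%s: holds both locks\n", (const char *)name);
        pthread_mutex_unlock(&ihlock);
        pthread_mutex_unlock(&dirlock);
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;

        pthread_create(&t1, NULL, worker, "lookup");
        pthread_create(&t2, NULL, worker, "remove");
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }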
* INCIDENT NO:3248090 TRACKING ID:2963763

SYMPTOM: When the thin_friendly_alloc and delicache_enable tunables are enabled, Veritas File System (VxFS) may hit a deadlock. The thread involved in the deadlock can have the following stack trace:
vx_rwsleep_lock()
vx_tflush_inode()
vx_fsq_flush()
vx_tranflush()
vx_traninit()
vx_remove_tran()
vx_pd_remove()
vx_remove1_pd()
vx_do_remove()
vx_remove1()
vx_remove_vp()
vx_remove()
vfs_unlink()
do_unlinkat
The threads waiting in vx_traninit() for transaction space display the following stack trace:
vx_delay2()
vx_traninit()
vx_idelxwri_done()
vx_idelxwri_flush()
vx_common_inactive_tran()
vx_inactive_tran()
vx_local_inactive_list()
vx_inactive_list+0x530()
vx_worklist_process()
vx_worklist_thread()

DESCRIPTION: In the extent allocation code paths, VxFS sets the IEXTALLOC flag on the inode without taking the ILOCK. When an overlapping transaction picks the same inode off the delicache list, the transaction-done code paths miss the IUNLOCK call.

RESOLUTION: The code is modified so that the corresponding code paths set the IEXTALLOC flag under proper protection.

* INCIDENT NO:3248094 TRACKING ID:3192985

SYMPTOM: Checkpoint quota usage on CFS can become negative, for example:
Filesystem  hardlimit  softlimit  usage                 action_flag
/sofs1      51200      51200      18446744073709490176  << negative

DESCRIPTION: In CFS, a holding object referred to as a per-node-object-location table (PNOLT) is created to manage the intent logs and the other extra objects that CFS requires. The quota usage is calculated by reading the per-node cut (current usage table) files, which are members of the PNOLT, and summing up the quota usage for each clone chain. However, when the quotaoff and quotaon operations are issued on a CFS checkpoint, the usage shows "0" after the two operations are executed, because the quota usage calculation is skipped. If a delete operation is subsequently performed, the usage becomes negative, since the blocks allocated for the deleted file are subtracted from zero.

RESOLUTION: The code is modified so that the quota usage calculation is not skipped when the quotaon operation is performed. The unsigned arithmetic behind the displayed value is shown below.
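A tiny demonstration of where 18446744073709490176 comes from: the usage counter is a 64-bit unsigned quantity, so subtracting freed blocks from a usage of zero wraps around. The freed-block count below is chosen to reproduce the value shown above.

    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t usage = 0;      /* usage after the skipped recalculation */
        uint64_t freed = 61440;  /* blocks freed by a subsequent delete   */

        usage -= freed;          /* wraps: uint64_t has no negative values */
        printf("%" PRIu64 "\n", usage);  /* prints 18446744073709490176 */
        return 0;
    }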
* INCIDENT NO:3248096 TRACKING ID:3214816

SYMPTOM: When the inodes of a user are frequently created and deleted with the DELICACHE feature enabled, the user quota file becomes corrupt.

DESCRIPTION: The inode DELICACHE feature causes this issue. This feature optimizes the updates on the inode map during file creation and deletion operations; it is enabled by default and can be disabled with the vxtunefs(1M) command. When DELICACHE is enabled and a quota is set for Veritas File System (VxFS), VxFS updates the quota for an inode both before the inode goes on the DELICACHE list and again after it goes on the inactive list during the removal process. As a result, VxFS decrements the user's current file count twice, which corrupts the quota file.

RESOLUTION: The code is modified to flag the inodes moved from the DELICACHE list to the inactive list. The flag prevents the quota from being decremented again during the removal process.

* INCIDENT NO:3248099 TRACKING ID:3189562

SYMPTOM: Oracle daemons hang in the vx_growfile() kernel function. You may see a stack trace similar to the following:
vx_growfile+0004D4 ()
vx_doreserve+000118 ()
vx_tran_extset+0005DC ()
vx_extset_msg+0006E8 ()
vx_cfs_extset+000040 ()
vx_extset+0002D4 ()
vx_setext+000190 ()
vx_uioctl+0004AC ()
vx_ioctl+0000D0 ()
vx_ioctl_skey+00004C ()
vnop_ioctl+000050 (??, ??, ??, ??, ??, ??)
kernel_add_gate_cstack+000030 ()
vx_vop_ioctl+00001C ()
vx_odm_resize@AF15_6+00015C ()
vx_odm_resize+000030 ()
odm_vx_resize+000040 ()
odm_resize+0000E8 ()
vxodmioctl+00018C ()
hkey_legacy_gate+00004C ()
vnop_ioctl+000050 (??, ??, ??, ??, ??, ??)
vno_ioctl+000178 (??, ??, ??, ??, ??)

DESCRIPTION: The vx_growfile() kernel function may run into a loop on a highly fragmented file system, which causes multiple processes to hang. The vx_growfile() routine is invoked through the setext(1) command or its Application Programming Interface (API). When vx_growfile() requires more extents than the typed extent buffer can spare, a VX_EBMAPLOCK error may occur. To handle the error, VxFS cancels the transaction and repeats the same operation, which creates the loop.

RESOLUTION: The code is modified to make VxFS commit the available extents to advance the growfile transaction, repeating as many times as needed until the transaction is completed.

* INCIDENT NO:3248103 TRACKING ID:3068902

SYMPTOM:
- VxFS df issues statfs() calls on non-VxFS file systems, which can hang df when stale NFS mounts are present.
- With the "-t vxfs" option, VxFS df should display only VxFS file systems.

DESCRIPTION: Same as the symptom.

RESOLUTION: With the "-t vxfs" option, VxFS df only probes and displays information about VxFS file systems. The filtering idea is sketched below.
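A sketch of the fixed behavior, not the actual df source: consult the mount table first and call statvfs() only on vxfs entries, so a stale NFS mount is never touched. It uses the standard getmntent(3) interface.

    #include <mntent.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/statvfs.h>

    int main(void)
    {
        FILE *mtab = setmntent("/etc/mtab", "r");
        struct mntent *me;
        struct statvfs sv;

        if (mtab == NULL)
            return 1;
        while ((me = getmntent(mtab)) != NULL) {
            if (strcmp(me->mnt_type, "vxfs") != 0)
                continue;                    /* skip NFS and everything else */
            if (statvfs(me->mnt_dir, &sv) == 0)
                printf("%-20s %lu blocks free\n", me->mnt_dir,
                       (unsigned long)sv.f_bfree);
        }
        endmntent(mtab);
        return 0;
    }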
* INCIDENT NO:3248990 TRACKING ID:3270210

SYMPTOM: There was no support on SLES 11 SP3 for VRTSfsadv, VRTSfssdk, VRTSvxfs, VRTScavf, and VRTSsvs.

DESCRIPTION: VRTSfsadv, VRTSfssdk, VRTSvxfs, VRTScavf, and VRTSsvs were not supported on SLES 11 SP3.

RESOLUTION: Support for VRTSfsadv, VRTSfssdk, VRTSvxfs, VRTScavf, and VRTSsvs is added on SLES 11 SP3.

* INCIDENT NO:3257474 TRACKING ID:3257467

SYMPTOM: Module load fails on SLES 11 SP2/SP3.

DESCRIPTION: Some SLES 11 SP2 kernels carry a higher version number than the SP3 kernels. The installer script looks only for the module version nearest to the highest kernel version on the target machine, and gives up if it cannot load that module.

RESOLUTION: If the installer cannot load the module matching the highest (incompatible) kernel version for the target, it now also tries lower versions until a module installs or the list of candidate modules is exhausted.

* INCIDENT NO:3270203 TRACKING ID:3270200

SYMPTOM: d_alloc() references the superblock pointer, which can be NULL in the case of disconnected dentries; this leads to a panic.

DESCRIPTION: Because d_alloc() references the superblock pointer, which can be NULL in the case of disconnected dentries, d_alloc() must be replaced with d_alloc_pseudo().

RESOLUTION: d_alloc() is replaced with d_alloc_pseudo().

* INCIDENT NO:3283244 TRACKING ID:3047134

SYMPTOM: Kernel panic and assert during internal testing, because GAB invokes callback functions in interrupt context.

DESCRIPTION: During internal testing, kernel panics and asserts were found because GAB invokes callback functions in interrupt context.

RESOLUTION: VxFS introduces a new flag that it passes to GAB at registration, so that GAB does not invoke the callback functions in interrupt context.

* INCIDENT NO:3284764 TRACKING ID:3042485

SYMPTOM: During internal stress testing, the assert "f:vx_purge_nattr:1" is hit.

DESCRIPTION: When corruption is found, the file system check utility is run and the inodes are picked up serially to be checked and, if corrupted, fixed. In some cases, however, the order in which the inodes are processed can change, leaving inconsistent metadata and causing the assertion failure mentioned above.

RESOLUTION: The code is modified to handle named attribute inodes in an earlier pass of the full fsck operation.

* INCIDENT NO:3299685 TRACKING ID:2999493

SYMPTOM: During internal testing, file system check validation fails even after a successful full fsck.

DESCRIPTION: Even after a successful full fsck, the validation fails because of incorrect entries in a structural file (IFRCT) that maintains the reference counts of shared extents. While this information is processed for indirect extents, the modified data is not flushed to disk because the buffer is not marked dirty after its contents are modified.

RESOLUTION: The code is modified to mark the buffer dirty when the buffer is modified.

* INCIDENT NO:3306410 TRACKING ID:2495673

SYMPTOM: An in-core inode is marked bad and the internal test assertion "f:xted_check_fdd:4" fails.

DESCRIPTION: In a CFS environment, when two nodes communicate (for a grant on an inode, for example), some data is piggybacked to the initiating node. If the piggybacked data disagrees between the two nodes, the inode is marked bad. In this case the in-core inode's value did not match. It was a corner case in which the file system was disabled during the communication, causing stale CIO data to be sent to the initiating node.

RESOLUTION: The source code is modified so that a disabled file system does not delegate false information when asked for the CIO count by other nodes, and also invalidates its CIO count state held on the other nodes.

* INCIDENT NO:3306442 TRACKING ID:3312030

SYMPTOM: On systems with VxFS 6.0.3 installed, the default quota is implicitly changed to 64-bit.

DESCRIPTION: The quota conversion to 64-bit happens implicitly during the mount operation, which leaves no option for a 32-bit quota.

RESOLUTION: The code is modified to change the default quota to 32-bit. File systems created and mounted with quotas under the 6.0.3 release continue with 64-bit quota support when upgraded to 6.0.4; however, any newly created file systems default to a 32-bit quota. Please contact Symantec Customer Support if you need 64-bit quota support for these newer file systems.

* INCIDENT NO:3317208 TRACKING ID:3312030

SYMPTOM: On systems with VxFS 6.0.3 installed, the default quota is implicitly changed to 64-bit.

DESCRIPTION: The quota conversion to 64-bit happens implicitly during the mount operation, which leaves no option for a 32-bit quota.

RESOLUTION: The code is modified to change the default quota to 32-bit. File systems created and mounted with quotas under the 6.0.3 release continue with 64-bit quota support when upgraded to 6.0.4; however, any newly created file systems default to a 32-bit quota. Please contact Symantec Customer Support if you need 64-bit quota support for these newer file systems.

* INCIDENT NO:3321131 TRACKING ID:3247752

SYMPTOM: During internal stress tests, the system panics with the following message: kernel panic - "unable to handle kernel paging request at "

DESCRIPTION: Under low memory conditions, the stack space required to serve a page request can exceed the ordinary amount, because special shrink functions are called in the same context to free some memory; this leads to a stack overflow with the message above.

RESOLUTION: The code is modified to perform an early hand-off so that the shrink functions execute in a different context.

* INCIDENT NO:3321730 TRACKING ID:3214328

SYMPTOM: There is a mismatch between the GLM grant-level state and the GLM data in a CFS inode.

DESCRIPTION: If the glock grant-level state of a CFS inode is not null, the inode should have valid GLM data. In this case the inode had no valid GLM data even though the state indicated that it should. This occurs when the file system is disabled in an error scenario: a thread that started executing before the file system was disabled completes its job regardless, changing the GLM state of the inode but missing updates to other flags such as inode->i_cflags, which causes the mismatch.

RESOLUTION: The source code is modified to skip updating the GLM state when a specific flag is set in inode->i_cflags and the file system is being disabled.

* INCIDENT NO:3323912 TRACKING ID:3259634

SYMPTOM: In CFS, each node that has the file system cluster-mounted has its own intent log in the file system. A cluster file system with more than 4,294,967,296 file system blocks can zero out an incorrect location due to incorrect typecasting. For example, with an 8 KB file system block size and an intent log of 65536 file system blocks, 65536 blocks at a block offset of 1,537,474,560 [fs blocks] can be incorrectly zeroed out. This issue can occur only if an intent log is located above an offset of 4,294,967,296 file system blocks. This situation can arise when a new node is added to the cluster and an additional CFS secondary is mounted for the first time, which must create and zero a new intent log. It can also be triggered when the file system or the intent log is resized and an intent log needs to be cleared. The problem occurs only with the following combinations of file system block size and file system size:
1 KB block size and FS size > 4 TB
2 KB block size and FS size > 8 TB
4 KB block size and FS size > 16 TB
8 KB block size and FS size > 32 TB
The message log can contain messages of the following type, with the full fsck flag set on the file system:
2013 Apr 17 14:52:22 sfsys kernel: vxfs: msgcnt 5 mesg 096: V-2-96: vx_setfsflags - /dev/vx/dsk/sfsdg/vol1 file system fullfsck flag set - vx_ierror
2013 Apr 17 14:52:22 sfsys kernel: vxfs: msgcnt 6 mesg 017: V-2-17: vx_attr_iget - /dev/vx/dsk/sfsdg/vol1 file system inode 13675215 marked bad incore
2013 Jul 17 07:41:22 sfsys kernel: vxfs: msgcnt 47 mesg 096: V-2-96: vx_setfsflags - /dev/vx/dsk/sfsdg/vol1 file system fullfsck flag set - vx_ierror
2013 Jul 17 07:41:22 sfsys kernel: vxfs: msgcnt 48 mesg 017: V-2-17: vx_dirbread - /dev/vx/dsk/sfsdg/vol1 file system inode 55010476 marked bad incore

DESCRIPTION: In CFS, each node that has the file system cluster-mounted has its own intent log in the file system; an intent log is created when an additional node mounts the file system as a CFS secondary. Intent logs are never removed; they are reused. While an intent log is being cleared, an incorrect block number is passed to the log clearing routine, which zeroes out an incorrect location. The incorrect location might point to file data or file system metadata, or it might be part of the file system's available free space. This is silent corruption. If file system metadata is corrupted, the corruption is detected when the metadata is subsequently accessed, and the file system is then marked for full fsck.

RESOLUTION: The code is modified so that the correct block number is passed to the log clearing routine. The typecast truncation involved is illustrated below.
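A tiny demonstration of the defect class: a 64-bit block number truncated to 32 bits addresses the wrong location once the file system has more than 4,294,967,296 blocks. The intended block number below is constructed so that its low 32 bits equal the offset cited above; it is illustrative, not taken from the actual code.

    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t blkno = 4294967296ULL + 1537474560ULL; /* above the 4G-block mark */
        uint32_t truncated = (uint32_t)blkno;           /* the bad typecast */

        printf("intended block %" PRIu64 ", zeroed block %" PRIu32 "\n",
               blkno, truncated);  /* 5832441856 vs 1537474560 */
        return 0;
    }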
PATCH ID:6.0.300.000

* INCIDENT NO:2928921 TRACKING ID:2843635

SYMPTOM: During VxFS internal testing, there are some failures during the reorg operation on structural files.

DESCRIPTION: While a reorg is in progress, a certain ioctl overwrites the error value that is to be returned, producing an incorrect error value and the test failures.

RESOLUTION: The code is modified so that the error value is not overwritten.

* INCIDENT NO:2933291 TRACKING ID:2806466

SYMPTOM: A reclaim operation on a file system mounted on a Logical Volume Manager (LVM) volume, using the fsadm(1M) command with the '-R' option, may panic the system with the following stack trace:
vx_dev_strategy+0xc0()
vx_dummy_fsvm_strategy+0x30()
vx_ts_reclaim+0x2c0()
vx_aioctl_common+0xfd0()
vx_aioctl+0x2d0()
vx_ioctl+0x180()

DESCRIPTION: Thin reclamation is supported only on file systems mounted on a Veritas Volume Manager (VxVM) volume.

RESOLUTION: The code is modified to return an error gracefully if the underlying volume is LVM.

* INCIDENT NO:2933292 TRACKING ID:2895743

SYMPTOM: It takes too long for 50 Windows 7 clients to log off in parallel when their user profiles are stored on CFS.

DESCRIPTION: VxFS keeps file creation time and full ACL data for Samba clients in an extended attribute that is implemented via named streams, and it reads the named stream for every ACL object. Reading a named stream is a costly operation: it results in an open, an opendir, a lookup, and another open to obtain the file descriptor. The VxFS function vx_nattr_open() holds an exclusive rwlock to read an ACL object stored as an extended attribute. This can cause heavy lock contention: many threads wanting the same lock block until one nattr_open releases it, and nattr_open itself is slow.

RESOLUTION: The rwlock is taken in shared mode instead of exclusive mode in the Linux getxattr code path.

* INCIDENT NO:2933294 TRACKING ID:2750860

SYMPTOM: On a large file system (4 TB or greater), the performance of write(1) operations with many small request sizes may degrade, and many threads may be found sleeping with the following stack trace:
real_sleep
sleep_one
vx_sleep_lock
vx_lockmap
vx_getemap
vx_extfind
vx_searchau_downlevel
vx_searchau_downlevel
vx_searchau_downlevel
vx_searchau_downlevel
vx_searchau_uplevel
vx_searchau
vx_extentalloc_device
vx_extentalloc
vx_te_bmap_alloc
vx_bmap_alloc_typed
vx_bmap_alloc
vx_write_alloc3
vx_recv_prealloc
vx_recv_rpc
vx_msg_recvreq
vx_msg_process_thread
kthread_daemon_startup

DESCRIPTION: For a cluster-mounted file system, the free-extent-search algorithm is not optimized for a large file system (4 TB or greater) where the number of free Allocation Units (AUs) can be very large.

RESOLUTION: The code is modified to optimize the free-extent-search algorithm by skipping certain AUs, which reduces the overall search time.

* INCIDENT NO:2933296 TRACKING ID:2923105

SYMPTOM: Removing the VxFS module from the kernel takes a long time.

DESCRIPTION: When a huge number of buffers have been allocated from the buffer cache, freeing them at module removal takes a long time; the algorithm for this process can be improved.

RESOLUTION: The algorithm is modified so that it does not keep traversing the freelists after the free chunk has been found; it breaks out of the search and frees that buffer.

* INCIDENT NO:2933309 TRACKING ID:2858683

SYMPTOM: The reserve-extent attributes change after a vxrestore(1M) operation for files larger than 8192 bytes.

DESCRIPTION: A local variable holds the number of reserve bytes that are reused during the vxrestore(1M) operation for a subsequent VX_SETEXT ioctl call on files larger than 8K. As a result, the attribute information changes.

RESOLUTION: The code is modified to preserve the original variable value until the end of the function.

* INCIDENT NO:2933313 TRACKING ID:2841059

SYMPTOM: The file system is marked for a full fsck operation and the following message is displayed in the system log:
V-2-96: vx_setfsflags file system fullfsck flag set - vx_ierror
vx_setfsflags+0xee/0x120
vx_ierror+0x64/0x1d0 [vxfs]
vx_iremove+0x14d/0xce0
vx_attr_iremove+0x11f/0x3e0
vx_fset_pnlct_merge+0x482/0x930
vx_lct_merge_fs+0xd1/0x120
vx_lct_merge_fs+0x0/0x120
vx_walk_fslist+0x11e/0x1d0
vx_lct_merge+0x24/0x30
vx_workitem_process+0x18/0x30
vx_worklist_process+0x125/0x290
vx_worklist_thread+0x0/0xc0
vx_worklist_thread+0x6d/0xc0
vx_kthread_init+0x9b/0xb0
V-2-17: vx_iremove_2 : file system inode 15 marked bad incore

DESCRIPTION: Due to a race condition, a thread tries to remove an attribute inode that has already been removed by another thread. Hence, the file system is marked for a full fsck operation and the attribute inode is marked as bad on disk.

RESOLUTION: The code is modified to check whether the attribute inode that a thread is trying to remove has already been removed.

* INCIDENT NO:2933330 TRACKING ID:2773383

SYMPTOM: A deadlock involves two threads: one holds the mmap_sem and waits for the IRWLOCK, while the other holds the IRWLOCK and waits for the mmap_sem.

DESCRIPTION: The hang in down_read is due to waiting for the mmap_sem. The thread holding the mmap_sem is waiting for the RWLOCK, which is held by one of the threads wanting the mmap_sem; hence the deadlock. An earlier enhancement avoided taking the mmap_sem for CIO and mmap, but it was incomplete and still allowed the mmap_sem to be taken for native asynchronous I/O calls when the nommapcio option was used.

RESOLUTION: The mmap_sem is no longer taken for native I/O when the file has the CIO advisory set.

* INCIDENT NO:2933571 TRACKING ID:2417858

SYMPTOM: When a quota hard or soft limit above 1 TB is specified, the command fails with an error.

DESCRIPTION: The quota records corresponding to users are stored in the external and internal quota files. In the external quota file, the records are structures with 32-bit fields, so block limits can only be specified up to a 32-bit value (1 TB). This limit is insufficient in many cases.

RESOLUTION: 64-bit structures and 64-bit limit macros are used to let users have usage and limits greater than 1 TB. The difference in field width is sketched below.
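A tiny sketch of why the 32-bit on-disk record caps the limit: a value that needs more than 32 bits is rejected or silently truncated, while a 64-bit field holds it. The structures and field name are illustrative, not the VxFS on-disk format.

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    struct dqblk32 { uint32_t dqb_bhardlimit; };  /* old 32-bit record */
    struct dqblk64 { uint64_t dqb_bhardlimit; };  /* widened record    */

    int main(void)
    {
        uint64_t limit = 5ULL << 30;   /* 5G blocks, above the 32-bit range */
        struct dqblk32 oldrec;
        struct dqblk64 newrec;

        if (limit > UINT32_MAX)
            printf("old format: limit rejected (does not fit)\n");
        oldrec.dqb_bhardlimit = (uint32_t)limit;  /* would truncate */
        newrec.dqb_bhardlimit = limit;            /* fits */
        printf("old=%" PRIu32 " new=%" PRIu64 "\n",
               oldrec.dqb_bhardlimit, newrec.dqb_bhardlimit);
        return 0;
    }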
* INCIDENT NO:2933729 TRACKING ID:2611279

SYMPTOM: A file system with shared extents may panic with the following stack trace:
page_fault
vx_overlay_bmap
vx_bmap_lookup
vx_bmap
vx_local_get_sharedblkcnt
vx_get_sharedblkcnt
vx_aioctl_get_sharedblkcnt
vx_aioctl_common
mntput_no_expire
vx_aioctl
vx_ioctl

DESCRIPTION: The mechanism that manages shared extents uses a special file in which a HOLE is never expected. If a HOLE is present, a panic may occur while working on this file.

RESOLUTION: The code is modified to check whether a HOLE is present in the special file. If so, the processing is skipped and the panic is avoided.

* INCIDENT NO:2933751 TRACKING ID:2916691

SYMPTOM: fsdedup enters an infinite loop with the following stack:
#5 [ffff88011a24b650] vx_dioread_compare at ffffffffa05416c4
#6 [ffff88011a24b720] vx_read_compare at ffffffffa05437a2
#7 [ffff88011a24b760] vx_dedup_extents at ffffffffa03e9e9b
#11 [ffff88011a24bb90] vx_do_dedup at ffffffffa03f5a41
#12 [ffff88011a24bc40] vx_aioctl_dedup at ffffffffa03b5163

DESCRIPTION: vx_dedup_extents() dedups two files as follows:
1. Compare the data extents of the two files to be deduped.
2. Split both files' bmaps so that they share the first file's common data extent.
3. Free the duplicate data extents of the second file.
In step 2, during the bmap split, vx_bmap_split() might need to allocate space for the inode's bmap to add new bmap entries, which adds an emap to the transaction. (This condition is more likely when the dedup runs on two large files with interleaved duplicate and differing data extents, because the files' bmaps must then be split more.) In step 3, vx_extfree1() does not support a multi-AU extent free when an emap is already in the same transaction, and it returns VX_ETRUNCMAX in that case (see incident e569695 for the history of this limitation). VX_ETRUNCMAX is a retriable error, so vx_dedup_extents() undoes everything in the transaction and retries from the beginning, only to hit the same error again: an infinite loop.

RESOLUTION: vx_te_bmap_split() is changed to always register a transaction preamble for the bmap split operation in dedup, and vx_dedup_extents() performs the preamble in a separate transaction before it retries the dedup operation. The make-progress-before-retry principle is sketched below.
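A sketch (not VxFS code) of the retry hazard: retrying a retriable error without changing anything loops forever, so the fix makes the retry differ from the failed attempt by running the preamble first. The functions and flag are stand-ins.

    #include <stdbool.h>
    #include <stdio.h>

    static bool preamble_done;

    /* Stand-in for the dedup transaction: fails until the preamble ran. */
    static int try_dedup(void)
    {
        return preamble_done ? 0 : -1;   /* -1 plays the role of VX_ETRUNCMAX */
    }

    int main(void)
    {
        int err;

        while ((err = try_dedup()) != 0) {
            /* Make progress before retrying; without this line the loop
             * would spin forever, as in the reported hang. */
            preamble_done = true;        /* separate "preamble" transaction */
        }
        printf("dedup completed\n");
        return err;
    }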
* INCIDENT NO:2933822 TRACKING ID:2624262

SYMPTOM: A panic is hit in the vx_bc_do_brelse() function while executing dedup functionality, with the following backtrace:
vx_bc_do_brelse()
vx_mixread_compare()
vx_dedup_extents()
enqueue_entity()
__alloc_pages_slowpath()
__get_free_pages()
vx_getpages()
vx_do_dedup()
vx_aioctl_dedup()
vx_aioctl_common()
vx_rwunlock()
vx_aioctl()
vx_ioctl()
vfs_ioctl()
do_vfs_ioctl()
sys_ioctl()

DESCRIPTION: While executing the vx_mixread_compare() function in the dedup code path, an error occurs that leaves an allocated data structure uninitialized. The panic occurs when this uninitialized data structure is written to in vx_mixread_compare().

RESOLUTION: The code is changed to free the memory allocated to the data structure on the error exit path.

* INCIDENT NO:2937367 TRACKING ID:2923867

SYMPTOM: An assert is hit because VX_RCQ_PROCESS_MSG has a (numerically) lower priority than VX_IUPDATE_MSG.

DESCRIPTION: When the primary is about to send a VX_IUPDATE_MSG message to the owner of an inode about a change to the inode's non-transactional fields, it compares the current messaging priority (for VX_RCQ_PROCESS_MSG) with the priority of the message being sent (VX_IUPDATE_MSG) to avoid a possible deadlock. In this case VX_RCQ_PROCESS_MSG had a numerically lower priority than VX_IUPDATE_MSG, so the assert was hit.

RESOLUTION: The VX_RCQ_PROCESS_MSG priority is changed to be numerically higher than VX_IUPDATE_MSG, avoiding the assert.

* INCIDENT NO:2978227 TRACKING ID:2857751

SYMPTOM: Internal testing hits the assert "f:vx_cbdnlc_enter:1a" while an upgrade is in progress.

DESCRIPTION: A clone/fileset should be mounted before an entry for it is added to the DNLC; attempting to add an entry for an unmounted clone/fileset is not valid.

RESOLUTION: A check is added to verify that the fileset is mounted before an entry is added to the DNLC.

* INCIDENT NO:3008450 TRACKING ID:3004466

SYMPTOM: Installation of 5.1SP1RP3 fails on RHEL 6.3.

DESCRIPTION: Installation of 5.1SP1RP3 fails on RHEL 6.3.

RESOLUTION: The install script is updated to handle the installation failure.

PATCH ID:6.0.100.200

* INCIDENT NO:2912412 TRACKING ID:2857629

SYMPTOM: When a new node takes over as primary for the file system, it can process stale shared-extent records from a per-node queue. The primary detects a bad record, sets the full fsck flag, and disables the file system to prevent further corruption.

DESCRIPTION: Every node in the cluster that adds or removes references to shared extents appends shared-extent records to a per-node queue. The primary node in the cluster processes the records in the per-node queues and maintains reference counts in a global shared-extent device. In certain cases the primary node might process bad or stale records in a per-node queue. Two situations under which this can happen are:
1. Clone creation initiated from a secondary node immediately after primary migration to a different node.
2. Queue wraparound on any node, followed immediately by a primary takeover by a new node.
Full fsck might not be able to rectify the resulting file system corruption.

RESOLUTION: The per-node shared-extent queue head and tail pointers are set to correct values on the primary before the processing of shared-extent records begins.

* INCIDENT NO:2912435 TRACKING ID:2885592

SYMPTOM: vxdump of a file system compressed with vxcompress aborts.

DESCRIPTION: vxdump aborts because of a malloc() failure, which is caused by a memory leak in the vxdump command's handling of compressed extents.

RESOLUTION: The memory leak is fixed.

* INCIDENT NO:2923805 TRACKING ID:2590918

SYMPTOM: When a new node in the cluster takes over as primary of the file system, there might be a significant delay in freeing unshared extents. This problem can occur only when shared-extent additions or deletions happen immediately after the primary switches over to a different node.

DESCRIPTION: When a new node in the cluster takes over as primary for the file system, a file system thread on the new primary performs a full scan of the shared-extent device file to free any shared extents that have become completely unshared. If heavy shared-extent activity, such as additional sharing or unsharing of extents, occurs anywhere in the cluster while the full scan is being performed, the scan can be interrupted. Due to a bug, the interrupted full scan is marked as completed, and further scheduled scans of the shared-extent device are only partial scans. This causes a substantial delay in freeing some of the unshared extents in the device file.

RESOLUTION: If the first full scan of the shared-extent device after a primary takeover is interrupted, the scan is no longer marked as complete.
PATCH ID:6.0.100.100

* INCIDENT NO:2907912 TRACKING ID:2907908

SYMPTOM: VxFS and VxVM components fail to install on post-GA SLES11 SP2 kernel versions (the kernel versions listed below):
3.0.26-0.7.6
3.0.31-0.9.1
3.0.34-0.7.9
3.0.38-0.5.1

DESCRIPTION: Starting with the 3.0 kernel, version strings have three numbers, whereas 2.6 kernel versions had four. The logic that takes the first three numbers as the 'kern_major' needs to be updated to work for 3.0.* kernel versions, and consequently for the new SLES11 SP2 kernels.

RESOLUTION: The code is changed to take only the first two numbers as the 'kern_major' for 3.0.* kernel versions.

* INCIDENT NO:2907921 TRACKING ID:2907919

SYMPTOM: VxFS and VxVM components fail to install on post-GA SLES11 SP2 kernel versions (the kernel versions listed below):
3.0.26-0.7.6
3.0.31-0.9.1
3.0.34-0.7.9
3.0.38-0.5.1

DESCRIPTION: Starting with the 3.0 kernel, version strings have three numbers, whereas 2.6 kernel versions had four. The logic that takes the first three numbers as the 'kern_major' needs to be updated to work for 3.0.* kernel versions, and consequently for the new SLES11 SP2 kernels.

RESOLUTION: The code is changed to take only the first two numbers as the 'kern_major' for 3.0.* kernel versions.

* INCIDENT NO:2907924 TRACKING ID:2907923

SYMPTOM: VxFS and VxVM components fail to install on post-GA SLES11 SP2 kernel versions (the kernel versions listed below):
3.0.26-0.7.6
3.0.31-0.9.1
3.0.34-0.7.9
3.0.38-0.5.1

DESCRIPTION: Starting with the 3.0 kernel, version strings have three numbers, whereas 2.6 kernel versions had four. The logic that takes the first three numbers as the 'kern_major' needs to be updated to work for 3.0.* kernel versions, and consequently for the new SLES11 SP2 kernels.

RESOLUTION: The code is changed to take only the first two numbers as the 'kern_major' for 3.0.* kernel versions.

* INCIDENT NO:2907932 TRACKING ID:2907930

SYMPTOM: VxFS and VxVM components fail to install on post-GA SLES11 SP2 kernel versions (the kernel versions listed below):
3.0.26-0.7.6
3.0.31-0.9.1
3.0.34-0.7.9
3.0.38-0.5.1

DESCRIPTION: Starting with the 3.0 kernel, version strings have three numbers, whereas 2.6 kernel versions had four. The logic that takes the first three numbers as the 'kern_major' needs to be updated to work for 3.0.* kernel versions, and consequently for the new SLES11 SP2 kernels.

RESOLUTION: The code is changed to take only the first two numbers as the 'kern_major' for 3.0.* kernel versions.

* INCIDENT NO:2910648 TRACKING ID:2905579

SYMPTOM: VxVM rpm installation on SLES11 SP2 fails for kernel version 3.0.26-0.7.6 and above with the following error message:
"This release of vxdmp does not contain any modules which are suitable for your 3.0.31-0.9-default kernel.
error: %post(VRTSvxvm-6.0.100.000-GA_SLES11.x86_64) scriptlet failed, exit status 1"

DESCRIPTION: SUSE released kernel version updates for SLES11 SP2 (3.0.26-0.7.6, 3.0.31-0.9.1, 3.0.34-0.7.9, 3.0.38-0.5.1) following its earlier kernel version 3.0.13-0.27.1. The VxVM kernel modules are built against 3.0.13-0.27.1 and are compatible with the later kernel updates as well. During installation, however, the installation scripts treated the first three numbers of the version (for example, 3.0.13) as the short kernel version, which must match the machine's kernel version. For the updated kernel versions the short kernel version does not match, and installation fails with an error saying the modules are not suitable for the kernel.

RESOLUTION: The installation scripts are updated so that, for the 3.0 kernel series, the first two numbers of the kernel version are treated as the short kernel version. The matching rule is sketched below.
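A small C sketch of the version-matching rule the fixed scripts apply: for a 2.6.* kernel the short version is the first three numbers, for 3.* kernels only the first two. This is not the actual installer code, which is a script.

    #include <stddef.h>
    #include <stdio.h>

    static void short_version(const char *release, char *out, size_t outsz)
    {
        int a = 0, b = 0, c = 0;

        sscanf(release, "%d.%d.%d", &a, &b, &c);
        if (a >= 3)
            snprintf(out, outsz, "%d.%d", a, b);       /* e.g. "3.0"    */
        else
            snprintf(out, outsz, "%d.%d.%d", a, b, c); /* e.g. "2.6.32" */
    }

    int main(void)
    {
        char buf[32];

        short_version("3.0.26-0.7.6", buf, sizeof(buf));
        printf("%s\n", buf);   /* prints 3.0, so modules built on 3.0.13 match */
        short_version("2.6.32-279.el6", buf, sizeof(buf));
        printf("%s\n", buf);   /* prints 2.6.32 */
        return 0;
    }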
INCIDENTS FROM OLD PATCHES:
---------------------------
NONE