sfs-sles10_x86_64-5.6RP1P4HF6

 Basic information
Release type: Hot Fix
Release date: 2013-09-17
OS update support: None
Technote: None
Documentation: None
Popularity: 953 viewed
Download size: 804.79 MB
Checksum: 1295226054

 Applies to one or more of the following products:
FileStore 5.6 On SLES10 x86-64

 Obsolete patches, incompatibilities, superseded patches, or other requirements:

This patch supersedes the following patches:

Patch                                       Release date
sfs-sles10_x86_64-5.6RP1P4HF5 (obsolete)    2013-06-17
sfs-sles10_x86_64-5.6RP1P4HF4 (obsolete)    2013-03-17
sfs-sles10_x86_64-5.6RP1P4HF1 (obsolete)    2012-07-09
sfs-sles10_x86_64-5.6RP1P4 (obsolete)       2012-04-03
sfs-sles10_x86_64-5.6RP1P2 (obsolete)       2011-11-07
sfs-sles10_x86_64-5.6RP1P1 (obsolete)       2011-08-11
sfs-sles10_x86_64-5.6RP1 (obsolete)         2011-06-07
sfs-sles10_x86_64-5.6P2 (obsolete)          2011-04-07
sfs-sles10_x86_64-5.6P1 (obsolete)          2011-02-06

 Fixes the following incidents:
3021863, 3071059, 3132737, 3139904, 3142747, 3210003, 3210874, 3221252, 3226136, 3228847, 3247712, 3253887, 3254380, 3259734, 3259736, 3269900

 Patch ID:
None.

Readme file
Date: 2013-09-18
OS: SLES
OS Version: 10 SP3
Symantec FileStore 
5.6 RP1 P4 HF6 Patch Upgrade README

CONTENTS
I.   OVERVIEW
II.  UPGRADE PROCEDURE
III. FIXES IN THE NEW PATCH
IV.  KNOWN ISSUES 
V.   NEW FEATURES
VI.  APPENDIX

PATCH ID                        : N/A
PATCH NAME                      : SFSFS-patch-5.6RP1P4HF6_rc4_2013_07_30.tar.gz
BASE PACKAGE NAME               : Symantec FileStore
BASE PACKAGE VERSION            : 5.6
OBSOLETE PATCHES                : N/A
SUPERSEDED PATCHES              : N/A
INCOMPATIBLE PATCHES            : N/A
SUPPORTED OS                    : SLES
SUPPORTED OS VERSION            : SLES 10 SP3
CREATION DATE                   : 2013-09-18
CATEGORY                        : enhancement, bug fix
REBOOT REQUIRED                 : Yes
SUPPORTS ROLLBACK               : No


I. OVERVIEW:
------------
Symantec FileStore provides a scalable clustered storage solution. This document describes the release information for this patch.


II. UPGRADE PROCEDURE:
----------------------
After you install or synchronize a new Symantec FileStore patch into your cluster, the list of available commands may change. Log in to the CLI again to access the updated features.

IMPORTANT: Services experience downtime during an upgrade. The actual downtime is slightly longer than the time it takes to reboot the system. To avoid data loss, Symantec FileStore recommends that customers stop I/O processing completely during a patch upgrade.

After you apply this patch, you cannot uninstall it. The 5.6RP1P4HF6 patch can only be installed on 5.6, 5.6P1, 5.6P2, 5.6P3, 5.6RP1, 5.6RP1P1,  5.6RP1P2, 5.6RP1P3, 5.6RP1P4, 5.6RP1P4HF1, 5.6RP1P4HF2, 5.6RP1P4HF3, 5.6RP1P4HF4 or 5.6RP1P4HF5.

If you are upgrading on a replication source cluster:
	1)	Pause running jobs. 
	2)	Upgrade the cluster.
	3)	Resume all paused replication jobs.
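For example, from the FileStore CLI, steps 1 and 3 of this sequence might look like the following. The pause/resume verbs and the job name job1 are illustrative assumptions; the `replication job` namespace appears elsewhere in this readme, but check the CLI help for the exact verbs in your release:

   cli> replication job pause job1
   (... upgrade the cluster ...)
   cli> replication job resume job1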
If you are upgrading on a replication target cluster:
	1)	On the target cluster, check "echo listfset | fsdb -t vxfs /dev/vx/rdsk/sfsdg/<replication target fs> | grep -w \"\" | wc -l".
		If the command returns 0:
		a.	If the job is in the running state, pause the job from the source console.
		b.	Go to step 3.
 
	2)	If the above command returns non-zero:
		a.	If the job is in a running state, pause the job from the source console.
		b.	Unmount the checkpoint first using the command "cfsumount /vx/<destination fs checkpoint>".
		c.	Find the file system primary node by running "fsclustadm -v showprimary <destination fs>".
		d.	On every node except the file system primary node, run "hagrp -offline <vrts_destination fs group> -sys <non primary node(s)>".
		e.	Once you have run "hagrp -offline" on the non-primary nodes, run "vxumount -f -o mntunlock=VCS <destination fs>" on the primary node.
		
	3)	Upgrade the cluster. After the upgrade, bring the file systems that were taken offline in steps 2(b) and 2(d) back online:
		a.	Run "hagrp -online <vrts_destination fs group> -sys <nodename>" for each node.
		b.	Run "cfsmount /vx/<destination fs checkpoint>".
		c.	Resume all paused replication jobs.
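The target-cluster sequence above can be scripted for a single destination file system. The following is a minimal sketch under stated assumptions, not a supported tool: the file system name fs1, the service group vrts_fs1, the checkpoint name, and the node names node_01/node_02 are placeholders; the disk group sfsdg and the grep pattern are reproduced from step 1 as shown.

   #!/bin/bash
   # Pre-upgrade checkpoint handling on a replication target cluster.
   # Pause the replication job from the source console first (steps 1-2a).
   FS=fs1                 # placeholder destination file system
   GROUP=vrts_${FS}       # placeholder VCS service group for the fs

   # Step 1: count filesets on the replication target file system.
   # The grep pattern ("") is reproduced from step 1 above; substitute
   # the checkpoint-name pattern appropriate to your configuration.
   NSETS=$(echo listfset | fsdb -t vxfs /dev/vx/rdsk/sfsdg/${FS} | grep -w "" | wc -l)

   if [ "${NSETS}" -ne 0 ]; then
       # Step 2b: unmount the checkpoint (checkpoint name is a placeholder).
       cfsumount /vx/${FS}_ckpt

       # Step 2c: find the file system primary node.
       PRIMARY=$(fsclustadm -v showprimary ${FS})

       # Step 2d: offline the group on every node except the primary.
       for NODE in node_01 node_02; do
           if [ "${NODE}" != "${PRIMARY}" ]; then
               hagrp -offline ${GROUP} -sys ${NODE}
           fi
       done

       # Step 2e: force-unmount the file system on the primary node
       # (run this command on the primary node itself).
       vxumount -f -o mntunlock=VCS ${FS}
   fi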

To install the patch:
1. Log in as master:
   su - master
2. Start the patch install:
   upgrade patch install

IMPORTANT: Full upgrade instructions are included in the Symantec FileStore 5.6 Release Notes. Note the following revisions when you upgrade:

Symantec FileStore recommends that you remove I/O fencing before upgrading any cluster node or exporting your current configuration. Run the Storage> fencing off command first, followed by the Storage> fencing destroy command, to remove I/O fencing.
This step is not required, but it is recommended for a clean upgrade.
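For example, from the FileStore CLI (cluster output omitted):

   Storage> fencing off
   Storage> fencing destroy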

III. FIXES IN THE NEW PATCH:
----------------------------

Etrack Incidents: 3021863, 3071059, 3132737, 3139904, 3142747, 3210003, 3210874, 3221252, 3226136, 3228847, 3247712, 3253887, 3254380, 3259734, 3259736, 3269900
  
Errors/Problems Fixed:

        3021863   The message "smbd[29175]: disk_free: sys_popen() failed" is logged in /var/log/messages many times.
        3071059   `fsckptadm create` hangs in vx_msg_send() on the master, and the `vx_msg_thread` hangs in vx_recv_closeset()->vx_iget()->vx_ireuse() on a slave.
        3132737   `cli> storage snapshot quota on|off` may reset the usage value incorrectly; the value can easily underflow to a huge number such as 18446744073709521067.
        3139904   "storage fs shrinkto" does not finish even after running for 24 hours or more.
        3142747   An NFS lock (F_WRLCK) loses effect when the VIP fails over to another node; the lock should be preserved.
        3210003   `storage snapshot schedule delete/create` mishandles file system names that contain a hyphen ("-"); this is one of the hyphen-handling issues.
        3210874   A file was saved as a null (empty) file on a quota-enabled file system.
        3221252   `replication job enable job1` deletes the crontab line for "job10"; this is one of the name-mismatching issues.
        3226136   If the system has snapshots named "snap1" and "snap1_01", `cli> storage snapshot list` cannot show the right size for "snap1".
        3228847   File names with special characters cause logging failures.
        3247712   Data replication failed.
        3253887   A READ issue after rebooting one of the two controllers on an ETERNUS array caused the "fullfsck" flag to be set.
        3254380   After `support debuginfo upload` on a slave node, sfsfs_debuginfo*.tar.gz remains under the working directory on the slave node; it does not remain on the master node.
        3259734   The LLT peerinact value remains set to 18000 when debuginfo is performed simultaneously.
        3259736   An LLT peer was not expired when LLT was disconnected for more than 16 seconds.
        3269900   The "fullfsck" flag was set in vx_attr_iget().
	
		
IV. KNOWN ISSUES:
-----------------

Etrack: 3263117

Symptom: After `fs destroy FS`, fs_alert_conf still contains the fullspace line for the FS; this line should be removed by `fs destroy`.

Description: The file system's full-space alert information is not removed even after the file system is destroyed.

Resolution: Ensure that the alert is turned off before you destroy the file system; otherwise, the alert information is not cleared.
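A minimal example of the workaround, assuming the file system is named fs1; the alert "unset" form shown here is an assumption, so verify it against the Storage> fs alert help for your release:

   Storage> fs alert unset fullspace fs1   (assumed syntax)
   Storage> fs destroy fs1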

Etrack: 3244325

Symptom: A scheduled relocation job failed with "No such file or directory" on the destination server.

Description: This issue may occur if a repunit is defined as a subdirectory of a file system and directory operations (add, delete, move, and so on) happen inside the repunit.
This issue does not occur if the repunit is defined as the entire file system.

Resolution: The workaround is to define the entire file system as the repunit.
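For example, to define the entire file system fs1 as the repunit (the repunit create syntax and the names ru1 and fs1 are assumptions; consult the replication help for your release):

   cli> replication repunit create ru1 fs1   (assumed syntax)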

Etrack: 3285662

Symptom: The replication process "rep_rm_src_star" dumps core when FileStore has an excessive number of metadata changes, for example, over 10 million.

Description: This is an out-of-memory limitation by design. Incremental replication cannot handle more than approximately 20 million non-rename FCL changes or 10 million rename changes because of the 4 GB virtual memory limit of the 32-bit library.

Resolution: The workaround is to set the replication schedule to trigger the job more frequently, so that an incremental replication session does not accumulate too many changes at one time.
If the failure happens, perform a "resync" on the job, and then trigger the job again at shorter intervals.
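For example, after such a failure (the "resync" verb comes from the resolution above, but the exact command form and the job name job1 are assumptions; `replication job enable` appears elsewhere in this readme):

   cli> replication job resync job1   (assumed command form)
   cli> replication job enable job1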


V. NEW FEATURES:
------------------
N/A

VI. APPENDIX:
------------------
N/A