Configuring VCS/SF in a branded zone environment

You must perform the following steps on the Solaris 11 systems.

To configure VCS/SF in a branded zone environment

  1. Install VCS, SF, or SFHA as required in the global zone.

    See the Cluster Server Configuration and Upgrade Guide.

    See the Storage Foundation and High Availability Configuration and Upgrade Guide.

  2. Configure a solaris10 branded zone. For example, the following substeps configure a branded zone named sol10-zone.

    • Run the following command in the global zone as the global administrator:

      # zonecfg -z sol10-zone
      sol10-zone: No such zone configured
      Use 'create' to begin configuring a new zone.
    • Create the solaris10 branded zone using the SYSsolaris10 template.

      zonecfg:sol10-zone> create -t SYSsolaris10
    • Set the zone path. For example:

      zonecfg:sol10-zone> set zonepath=/zones/sol10-zone

      Note that the zone root for the branded zone can reside either on local storage or on shared storage (VxFS, UFS, or ZFS).

    • Add a virtual network interface.

      zonecfg:sol10-zone> add net
      zonecfg:sol10-zone:net> set physical=net1
      zonecfg:sol10-zone:net> set address=192.168.1.20
      zonecfg:sol10-zone:net> end
    • Verify the zone configuration and exit the zonecfg command prompt.

      zonecfg:sol10-zone> verify
      zonecfg:sol10-zone> exit

      The zone configuration is committed.

  3. Verify the zone information for the solaris10 zone you configured.
    # zonecfg -z sol10-zone info

    Review the output to make sure the configuration is correct.
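
    For the example zone configured above, the output should resemble the following trimmed listing (the exact set of properties varies with the Solaris update and the zone defaults):

    zonename: sol10-zone
    zonepath: /zones/sol10-zone
    brand: solaris10
    autoboot: false
    net:
            address: 192.168.1.20
            physical: net1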

  4. Install the solaris10 branded zone by using the flash archive that you created previously.

    See Preparing to migrate a VCS cluster.

    # zoneadm -z sol10-zone install -p -a /tmp/sol10image.flar

    After the zone installation is complete, run the following command to list the installed zones and verify their status.

    # zoneadm list -iv
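
    In this listing, the new zone should appear in the installed state, similar to the following (the ID column shows "-" until the zone is booted):

      ID NAME            STATUS     PATH                   BRAND      IP
       0 global          running    /                      solaris    shared
       - sol10-zone      installed  /zones/sol10-zone      solaris10  shared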
  5. Boot the solaris10 branded zone.
    # /usr/lib/brand/solaris10/p2v sol10-zone 
    # zoneadm -z sol10-zone boot

    After the zone has booted, run the following command to verify the status of the zones.

    # zoneadm list -v
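
    After a successful boot, the zone's status changes to running and the zone is assigned a numeric ID, for example:

      ID NAME            STATUS     PATH                   BRAND      IP
       1 sol10-zone      running    /zones/sol10-zone      solaris10  shared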
  6. Configure the zone with the following command. This connects you to the zone console so that you can complete the initial system configuration; type ~. to disconnect from the console when you are done:
    # zlogin -C sol10-zone
  7. Install VCS in the branded zone:

    • Install only the following VCS 7.0 packages; a sample installation sequence follows the package list:

      • VRTSperl

      • VRTSvlic

      • VRTSvcs

      • VRTSvcsag
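
      Because the branded zone runs Solaris 10, the packages are installed with the SVR4 pkgadd command. The following sketch assumes that the Solaris 10 versions of the package files have been copied into the zone, for example under /var/tmp/pkgs (this path is only an illustration; use the location of your VCS 7.0 installation media):

      # zlogin sol10-zone
      # cd /var/tmp/pkgs
      # pkgadd -d . VRTSperl VRTSvlic VRTSvcs VRTSvcsag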

  8. If you configured Oracle to run in the branded zone, then install the VCS agent for Oracle package (VRTSvcsea) and the corresponding patch in the branded zone.

    See the Cluster Server Agent for Oracle Configuration and Upgrade Guide for installation instructions.

  9. For ODM support, install the following additional packages and patches in the branded zone:

    • Install the following 7.0 packages; a command to confirm the installation follows the list:

      • VRTSvlic

      • VRTSvxfs

      • VRTSodm
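
      After the packages are added, you can confirm that they are registered inside the zone with the standard pkginfo command, for example:

      # pkginfo VRTSvlic VRTSvxfs VRTSodm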

  10. If you use ODM support, relink the Oracle ODM library in the solaris10 branded zone:

    • Log in to the Oracle instance.

    • Relink the Oracle ODM library.

      If you are running Oracle 10gR2:

      $ rm $ORACLE_HOME/lib/libodm10.so
      $ ln -s /opt/VRTSodm/lib/sparcv9/libodm.so \
      $ORACLE_HOME/lib/libodm10.so

      If you are running Oracle 11gR1:

      $ rm $ORACLE_HOME/lib/libodm11.so
      $ ln -s /opt/VRTSodm/lib/sparcv9/libodm.so \
      $ORACLE_HOME/lib/libodm11.so
      
    • To enable ODM inside the branded zone, first enable ODM in the global zone by enabling the following SMF services:

      global# svcadm enable vxfsldlic
      global# svcadm enable vxodm

      To use ODM inside the branded zone, export the /dev/odm, /dev/fdd, and /dev/vxportal devices and the /etc/vx/licenses/lic directory to the zone:

      global# zoneadm -z myzone halt 
      global# zonecfg -z myzone 
      zonecfg:myzone> add device 
      zonecfg:myzone:device> set match=/dev/vxportal 
      zonecfg:myzone:device> end 
      zonecfg:myzone> add device 
      zonecfg:myzone:device> set match=/dev/fdd 
      zonecfg:myzone:device> end 
      zonecfg:myzone> add device 
      zonecfg:myzone:device> set match=/dev/odm 
      zonecfg:myzone:device> end 
      zonecfg:myzone> add device 
      zonecfg:myzone:device> set match=/dev/vx/rdsk/dg_name/vol_name 
      zonecfg:myzone:device> end 
      zonecfg:myzone> add device 
      zonecfg:myzone:device> set match=/dev/vx/dsk/dg_name/vol_name 
      zonecfg:myzone:device> end 
      zonecfg:myzone> add fs 
      zonecfg:myzone:fs> set dir=/etc/vx/licenses/lic 
      zonecfg:myzone:fs> set special=/etc/vx/licenses/lic 
      zonecfg:myzone:fs> set type=lofs 
      zonecfg:myzone:fs> end 
      zonecfg:myzone> set fs-allowed=vxfs,odm 
      zonecfg:myzone> verify 
      zonecfg:myzone> commit 
      zonecfg:myzone> exit 
      global# zoneadm -z myzone boot
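
      After the zone boots, you can confirm from the global zone that the exported devices are visible inside the branded zone, for example:

      global# zlogin myzone ls -l /dev/odm /dev/fdd /dev/vxportal

      Once the Oracle instance is restarted inside the zone, its alert log should report that the instance is running with the Veritas ODM library.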
  11. Configure the resources in the VCS configuration file (main.cf) in the global zone. The following example shows the VCS configuration when VxVM volumes are exported to the zone through the zone configuration file:
    group zone-grp (
        SystemList = { vcs_sol1 = 0, vcs_sol2 = 1 }
        ContainerInfo@vcs_sol1 = { Name = sol10-zone, Type = Zone, Enabled = 1 }
        ContainerInfo@vcs_sol2 = { Name = sol10-zone, Type = Zone, Enabled = 1 }
        AutoStartList = { vcs_sol1 }
        Administrators = { "z_z1@vcs_lzs@vcs_sol2.symantecexample.com" }
        )

        DiskGroup zone-oracle-dg (
            DiskGroup = zone_ora_dg
            )

        Volume zone-oracle-vol (
            Volume = zone_ora_vol
            DiskGroup = zone_ora_dg
            )

        Netlsnr zone-listener (
            Owner = oracle
            Home = "/u01/oraHome"
            )

        Oracle zone-oracle (
            Owner = oracle
            Home = "/u01/oraHome"
            Sid = test1
            )

        Zone zone-res (
            )

        zone-res requires zone-oracle-vol
        zone-oracle-vol requires zone-oracle-dg
        zone-oracle requires zone-res
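
    After you edit the VCS configuration, you can check its syntax with the hacf utility and, once the cluster is running, confirm the group's state. The path below assumes the default configuration directory:

    # hacf -verify /etc/VRTSvcs/conf/config
    # hagrp -state zone-grp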

    See the Cluster Server Bundled Agents Reference Guide for VCS Zone agent details.