While troubleshooting some Live Upgrade issues in my test environment, I came across a list of important known issues that cause Live Upgrade to fail.
In the past I have noticed that many Solaris folks complained about Live Upgrade precisely because of these known issues, so I thought it would be a good resource to share here.
Configurations unsupported by Solaris Live Upgrade Software
1. Executing lucreate in single-user mode.
Reference Sun CR: 7076785/Bug # 15734117 (fixed in 121430-84 for SPARC and 121431-85 for x86).
Not all services and utilities are available in single-user mode, so an alternate boot environment created in single-user mode will also lack those services and utilities and would not be a complete BE. The results of running lucreate in single-user mode are therefore unpredictable.
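As a quick pre-check, confirm the system is at the normal multi-user run level before running lucreate (a minimal sketch of my own; newBE is just an example BE name):
# who -r
   .       run-level 3  Jun 10 09:15     3      0  S
# lucreate -n newBE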
2. The mount point of a filesystem configured inside a non-global zone is a descendant of the zonepath mount point.
Reference Sun CR: 7073468/Bug # 15732329.
Consider a PBE with the following sample zone configuration:
zfs create rootpool/ds1
zfs set mountpoint=/test1 rootpool/ds1
zfs create rootpool/ds2
zfs set mountpoint=/test1/test2 rootpool/ds2
zonecfg -z zone1
> create
> set zonepath=/test1
> add fs
> set dir=/soft
> set special=/test1/test2
> set type=lofs
> end
> exit
zoneadm -z zone1 install
zoneadm -z zone1 boot
lucreate will fail because the special value for zone1 is /test1/test2, which is a descendant of the zonepath /test1.
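A workaround, as a sketch of my own rather than something from the article, is to give the second dataset a mountpoint outside the zonepath and point the fs resource at that path instead:
zfs set mountpoint=/export/test2 rootpool/ds2
zonecfg -z zone1
> select fs dir=/soft
> set special=/export/test2
> end
> exit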
3. The zfs dataset of a filesystem configured inside a non-global zone is a descendant of the zonepath dataset.
Reference Sun CR: 7116952/Bug # 15758334.
Consider a PBE with the following sample zone configuration:
zfs create zonepool/zone2
zfs create -o mountpoint=legacy zonepool/zone2/data
zonecfg -z zone2
> create
> set zonepath=/zonepool/zone2
> add fs
> set dir=/data
> set special=zonepool/zone2/data
> set type=zfs
> end
> exit
zoneadm -z zone2 install
zoneadm -z zone2 boot
lucreate will fail because the special value for zone2 is zonepool/zone2/data, which is a descendant of the zonepath dataset zonepool/zone2.
4. All subdirectories of the root file system that are part of the OS image, with the exception of /var, must be in the same dataset as the root file system.
Example:
rpool/ROOT/BE /
rpool/ROOT/BE/opt /opt
rpool/ROOT/BE/usr /usr
The above configuration is NOT supported, since /opt and /usr are part of the OS image; creating an ABE from such a PBE configuration will fail.
Whereas
rpool/ROOT/BE /
rpool/ROOT/BE/var /var
is supported, as /var is the permitted exception.
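To check how a BE is laid out before creating the ABE, listing the datasets under rpool/ROOT (a quick sketch; substitute your own BE name) shows whether anything other than /var lives in its own dataset:
# zfs list -r -o name,mountpoint rpool/ROOT
NAME               MOUNTPOINT
rpool/ROOT         legacy
rpool/ROOT/BE      /
rpool/ROOT/BE/var  /var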
5. No descendant file systems are allowed in /opt.
Reference Sun CR: 7153257/Bug # 15778555 (fixed in 121430-84 for SPARC and 121431-85 for x86).
Live Upgrade does not allow non-OS components to be descendants of /opt and /usr. For example, if you try to create a new dataset with -D /opt/ems, it fails with:
/opt/ems is not allowed to be a separate file system in ZFS
luconfig: ERROR: altrpool/ROOT/envB/opt/ems cannot be used for file system /opt/ems in boot environment envB.
/opt is considered an OS-critical file system from the LU perspective and must reside in the root pool; there cannot be a separate dataset for /opt or its descendants.
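Before creating the ABE you can also scan for any dataset that would mount under /opt or /usr (a minimal check of my own, not from the article); any output here points at a dataset LU will reject:
# zfs list -o name,mountpoint | egrep '/opt|/usr'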
6. Changing the mountpoints of rpool and rpool/ROOT from default values where rpool is a zpool containing BEs.
Reference Sun CR: 7119104/Bug # 15759701
The default values are:
rpool mountpoint is set to /rpool
rpool/ROOT mountpoint is set to legacy.
If the root pool mountpoints are changed from these default values and an ABE is created, the behavior of the ABE is unpredictable.
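If the mountpoints have drifted from the defaults, standard zfs commands put them back before you run lucreate (shown here as a sketch):
# zfs get -o name,value mountpoint rpool rpool/ROOT
# zfs set mountpoint=/rpool rpool
# zfs set mountpoint=legacy rpool/ROOT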
7. Zones residing on the top-level dataset of a pool.
Reference Sun CR: 6867013/Bug # 15579467
If the ZFS root pool is on one pool (say rpool) and a zone resides on the top-level dataset of a different pool (say newpool) mounted at /newpool, i.e. zonepath=/newpool, lucreate will fail.
In other words, Live Upgrade cannot be used to create an alternate BE when the source BE has a non-global zone whose zone path is set to the mount point of a top-level pool file system. For example, if the zonepool pool has a file system mounted as /zonepool, you cannot have a non-global zone with a zone path set to /zonepool.
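A common way around this, as a sketch of my own, is to create a child dataset of the pool and use that as the zonepath instead of the pool's top-level file system:
# zfs create newpool/zone1
# zonecfg -z zone1
> create
> set zonepath=/newpool/zone1
> exit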
8. Adding a filesystem to a non-global zone through /etc/vfstab.
Adding a filesystem to an NGZ in the following way is not supported, i.e. entering lines like the following in the global zone's /etc/vfstab:
/dev/dsk/c1t0d0s4 /dev/rdsk/c1t0d0s4 /export/zones ufs 1 yes -
/dev/dsk/c1t1d0s0 /dev/rdsk/c1t1d0s0 /export/zones/zone1/root/data ufs 1 no -
where:
/export/zones/zone1 is the zone root for NGZ zone1, and
/export/zones/zone1/root/data would be a data filesystem in NGZ zone1.
lucreate will fail.
Instead, use the zonecfg "add fs" feature, which is supported by LU:
zonecfg -z zone1
create
set zonepath=/export/zones/zone1
add fs
set dir=/data
set special=/dev/dsk/c1t1d0s0
set raw=/dev/rdsk/c1t1d0s0
set type=ufs
end
exit
9. Using a separate /var file system for non-global zones.
Reference Sun CR: 6813647/Bug # 15546791.
Using a separate /var file system for non-global zones is not supported in Solaris 10. (Solaris 11, by contrast, uses separate /var file systems for non-global zones by default.)
10. Creating an alternate boot environment on an SVM soft partition.
With an SVM soft partition as below:
# metastat d100
d100: Soft Partition
Device: c0t3d0s0
State: Okay
Size: 8388608 blocks (4.0 GB)
Device Start Block Dbase Reloc
c0t3d0s0 0 No Yes
Extent Start Block Block count
0 1 8388608
Device Relocation Information:
Device Reloc Device ID
c0t3d0 Yes id1,sd@THITACHI_DK32EJ-36NC_____432G8732
Creating an ABE on this soft partition d100 will fail, i.e.:
lucreate -n svmBE -m /:/dev/md/dsk/d100:ufs
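To confirm up front whether a candidate metadevice is a soft partition, and therefore not a valid ABE target, a one-line check like this is enough (using the d100 device from the example):
# metastat d100 | head -1
d100: Soft Partition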
11. Using SVM disksets with non-global zones.
Reference Sun CR: 7167449/Bug # 15790545.
Using Solaris Volume Manager disksets with non-global zones is not supported in Solaris 10, e.g.:
# metastat -s datads -p
datads/d60 -m datads/d160 datads/d260 1
datads/d160 1 2 c0t1d0s1 c0t3d0s1 -i 256b
datads/d260 1 2 c0t1d0s0 c0t3d0s0 -i 256b
The diskset metadevice "datads/d60" is used for the non-global zones in /etc/vfstab:
cat /etc/vfstab
…
…
/dev/md/datads/dsk/d60 /dev/md/datads/rdsk/d60 /zones ufs 1
Trying to create a new boot environment with the submirror "datads/d260" fails:
# lucreate -n svmBE -m /:/dev/md/dsk/d10:ufs,mirror -m /:/dev/dsk/c0t1d0s0:detach,attach,preserve\
-m /zones:/dev/md/datads/dsk/d66:ufs,mirror -m /zones:/dev/md/datads/dsk/d260:detach,attach,preserve
with error messages:
ERROR: cannot check device name for device path abbreviation
ERROR: cannot determine if device name is abbreviated device path
or
ERROR: option metadevice not a component of a metadevice:
ERROR: cannot validate file system option devices
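Before attempting such a lucreate, it can help to check whether any vfstab entry for the zone filesystems refers to a diskset metadevice (a minimal check of my own; diskset devices carry the set name in their path):
# grep '^/dev/md/[^/]*/dsk/' /etc/vfstab
Any matching line, such as the datads/d60 entry above, indicates the unsupported configuration.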
12. Excluding ufs/vxfs based zones with zfs root pool.
Reference Sun CR: 7141482/Bug # 15769912.
When you have a ZFS root pool with UFS/VxFS file system based zones and a ZFS ABE is created using Live Upgrade, the zones get merged into the ZFS root pool.
The -m option is not supported when migrating from ZFS to ZFS; it is only supported when migrating from UFS to ZFS.
Even when migrating from UFS to ZFS, Live Upgrade cannot preserve the UFS/VxFS file systems of the PBE's zones; these file systems get merged into the ZFS root pool.
13. If there are NGZs on the system with either of the following embedded within them:
1. ufs/zfs/lofs filesystems
2. zfs datasets
then:
1. During lucreate, excluding or including the zonepath is not supported.
2. During lucreate, excluding or including the filesystems embedded within the zones is not supported.
E.g., if there is a zone called zone1 configured as:
zonecfg -z zone1
> create
> set zonepath=/zones/zone1
> add fs
> set dir=/soft
> set special=/test1/test2
> set type=lofs
> end
> exit
then lucreate -n abe -c pbe -x /zones/zone1 -x /test1 -x /test1/test2 is not supported. This also applies to the -f, -x, -y, -Y and -z options of the lucreate command.
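To see up front which filesystems a zone embeds, and therefore which paths cannot be passed to these options, you can list its fs resources, for example:
# zonecfg -z zone1 info fs
fs:
        dir: /soft
        special: /test1/test2
        raw not specified
        type: lofs
        options: []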
14. Zones on a system with a Solaris clustered environment and Ops Center.
If there are zones on a system running in a Solaris clustered environment with Ops Center, the zones might fail to boot while booting into the alternate boot environment. This cannot be fixed by LU, but there is a workaround: after running luactivate and before "init 6", run:
# cacaoadm stop -I scn-agent
# svcadm disable -st zones
If any zones are in the "mounted" state, also run:
# zoneadm -z <zonename> unmount
# init 6
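Putting the workaround together, the full sequence looks like this (newBE and zone1 are placeholder names of mine; run the zoneadm unmount only for zones left in the "mounted" state):
# luactivate newBE
# cacaoadm stop -I scn-agent
# svcadm disable -st zones
# zoneadm -z zone1 unmount
# init 6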
15. Zones do not shut down while booting into an ABE.
When a user boots into an ABE that has zones, the zones are halted and then booted; they are not shut down. Therefore the shutdown scripts within the zones do not get executed while booting from the PBE into the ABE.
16. lucreate fails if the canmount property of a zfs dataset in the root hierarchy is not set to "noauto".
On a system with a ZFS root, if the canmount property of a dataset in the root hierarchy is not "noauto", e.g.:
bash-3.2# zfs get canmount
NAME PROPERTY VALUE SOURCE
rpool canmount on local
rpool/ROOT canmount on default
rpool/ROOT/pbe canmount noauto local
rpool/ROOT/pbe/ds1 canmount on default
Here the canmount property of the zfs dataset rpool/ROOT/pbe/ds1 is "on", therefore lucreate will fail.
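One straightforward remedy, my own suggestion rather than something from the article, is to set the property back to noauto on the offending dataset before running lucreate:
# zfs set canmount=noauto rpool/ROOT/pbe/ds1
# zfs get canmount rpool/ROOT/pbe/ds1
NAME                PROPERTY  VALUE   SOURCE
rpool/ROOT/pbe/ds1  canmount  noauto  local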
17. LU operations within the non-global zones fail.
Execution of any LU command within a non-global zone is unsupported.
18. All subdirectories of an NGZ zonepath that are part of the OS must be in the same dataset as the zonepath.
- from zonecfg export:
fs:
dir: /opt    <- NOT SUPPORTED: /opt in a separate dataset
special: zone1/opt
raw not specified
type: zfs
options: []
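If an OS directory such as /opt has been delegated this way, one option (my own sketch) is to drop the fs resource so that /opt stays inside the zonepath dataset, then reverify:
# zonecfg -z zone1 "remove fs dir=/opt"
# zonecfg -z zone1 info fs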
Source: Oracle Technology Network.