Wednesday, 2 October 2013

How to do MIRRORING under SVM (Solaris Volume Manager) in Solaris.

In this post I will show how to do mirroring (RAID 1) of the OS disks in Solaris.
Server availability is one of the major requirements in any environment. To achieve it, we mirror the OS disks; the mirror protects against a single disk failure and thereby avoids server downtime.
Here in my server I have two disks c1t3d0 & c1t1d0.
c1t3d0 —> Root disk
c1t1d0 —> Root-mirror disk
Submirrors for Root disk —> d10, d20, d50
Submirrors for Root-mirror disk —> d11, d21, d51
Mirror name would be —> d0, d1, d5
Note: Make sure the mirror disk is of equal or greater size than your root disk; preferably both disks should be the same size.
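As a quick sanity check, you can compare the whole-disk (slice 2) sector counts of both disks. A minimal sketch, using the sector counts taken from this server's prtvtoc output rather than live command output:

```shell
# Slice 2 (the backup slice) spans the whole disk, so comparing its
# sector count on both disks confirms the mirror disk is big enough.
# On a live system the counts can be extracted with e.g.:
#   prtvtoc /dev/rdsk/c1t3d0s2 | awk '!/^\*/ && $1==2 {print $5}'
root_sectors=143349312     # c1t3d0 slice 2 sector count
mirror_sectors=143349312   # c1t1d0 slice 2 sector count
if [ "$mirror_sectors" -ge "$root_sectors" ]; then
    echo "mirror disk is large enough"
else
    echo "mirror disk is too small" >&2
fi
```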

yogesh-test#echo | format
Searching for disks…done
AVAILABLE DISK SELECTIONS:
0. c1t1d0 <SEAGATE-ST722402SSUN72G-0400 cyl 14087 alt 2 hd 24 sec 424>
/pci@1e,600000/pci@0/pci@a/pci@0/pci@8/scsi@1/sd@1,0
1. c1t3d0 <FUJITSU-JAN4173RCSUN72G-0501 cyl 14087 alt 2 hd 24 sec 424>
/pci@1e,600000/pci@0/pci@a/pci@0/pci@8/scsi@1/sd@3,0
Specify disk (enter its number): Specify disk (enter its number):
My server is booted from c1t3d0, the root disk, using the native (raw) device paths.
yogesh-test#prtconf -pv | grep -i bootpath
bootpath: '/pci@1e,600000/pci@0/pci@a/pci@0/pci@8/scsi@1/disk@3,0:a'
Current /etc/vfstab while the server is not yet mirrored and is booted from c1t3d0 on native devices:
yogesh-test#more /etc/vfstab
#device device mount FS fsck mount mount
#to mount to fsck point type pass at boot options
#
fd - /dev/fd fd - no -
/proc - /proc proc - no -
/dev/dsk/c1t3d0s1 - - swap - no -
/dev/dsk/c1t3d0s0 /dev/rdsk/c1t3d0s0 / ufs 1 no -
/dev/dsk/c1t3d0s5 /dev/rdsk/c1t3d0s5 /var/crash ufs 2 yes -
/devices - /devices devfs - no -
ctfs - /system/contract ctfs - no -
objfs - /system/object objfs - no -
swap - /tmp tmpfs - yes -
yogesh-test#df -k
Filesystem kbytes used avail capacity Mounted on
/dev/dsk/c1t3d0s0 8262869 7439956 740285 91% /
/devices 0 0 0 0% /devices
ctfs 0 0 0 0% /system/contract
proc 0 0 0 0% /proc
mnttab 0 0 0 0% /etc/mnttab
swap 23813344 1632 23811712 1% /etc/svc/volatile
objfs 0 0 0 0% /system/object
sharefs 0 0 0 0% /etc/dfs/sharetab
/platform/sun4u-us3/lib/libc_psr/libc_psr_hwcap1.so.1
8262869 7439956 740285 91% /platform/sun4u-us3/lib/libc_psr.so.1
/platform/sun4u-us3/lib/sparcv9/libc_psr/libc_psr_hwcap1.so.1
8262869 7439956 740285 91% /platform/sun4u-us3/lib/sparcv9/libc_psr.so.1
fd 0 0 0 0% /dev/fd
swap 23811720 8 23811712 1% /tmp
swap 23811776 64 23811712 1% /var/run
/dev/dsk/c1t3d0s5 2059015 91363 1905882 5% /var/crash
Below are the volume tables of contents (VTOCs) for both disks in the server.
yogesh-test#prtvtoc /dev/rdsk/c1t3d0s2 ———> Root-disk
* /dev/rdsk/c1t3d0s2 partition map
*
* Dimensions:
* 512 bytes/sector
* 424 sectors/track
* 24 tracks/cylinder
* 10176 sectors/cylinder
* 14089 cylinders
* 14087 accessible cylinders
*
* Flags:
* 1: unmountable
* 10: read-only
*
* Unallocated space:
* First Sector Last
* Sector Count Sector
* 0 20352 20351
* 16800576 4202688 21003263
* 59946816 54523008 114469823
* 131321280 12028032 143349311
*
* First Sector Last
* Partition Tag Flags Sector Count Sector Mount Directory
0 2 00 20352 16780224 16800575 /
1 3 01 21003264 34740864 55744127
2 5 00 0 143349312 143349311
5 0 00 16800576 4202688 21003263 /var/crash
7 0 00 97689600 61056 97750655
yogesh-test#prtvtoc /dev/rdsk/c1t1d0s2 ———> Rootmirror-disk
* /dev/rdsk/c1t1d0s2 partition map
*
* Dimensions:
* 512 bytes/sector
* 424 sectors/track
* 24 tracks/cylinder
* 10176 sectors/cylinder
* 14089 cylinders
* 14087 accessible cylinders
*
* Flags:
* 1: unmountable
* 10: read-only
*
* Unallocated space:
* First Sector Last
* Sector Count Sector
* 0 34740864 34740863
*
* First Sector Last
* Partition Tag Flags Sector Count Sector Mount Directory
0 2 00 34740864 16780224 51521087
1 3 01 0 34740864 34740863
2 5 00 0 143349312 143349311
3 14 01 0 143349312 143349311
4 15 01 143278080 71232 143349311
5 0 00 122061120 4202688 126263807
As the mirror disk (which we are going to mirror with the root disk) has a different VTOC, we copy the root disk's VTOC onto it so that the partition tables are identical on both disks.
yogesh-test#prtvtoc /dev/rdsk/c1t3d0s2 | fmthard -s - /dev/rdsk/c1t1d0s2
fmthard: New volume table of contents now in place.
yogesh-test#prtvtoc /dev/rdsk/c1t3d0s2
* /dev/rdsk/c1t3d0s2 partition map
*
* Dimensions:
* 512 bytes/sector
* 424 sectors/track
* 24 tracks/cylinder
* 10176 sectors/cylinder
* 14089 cylinders
* 14087 accessible cylinders
*
* Flags:
* 1: unmountable
* 10: read-only
*
* Unallocated space:
* First Sector Last
* Sector Count Sector
* 0 20352 20351
* 16800576 4202688 21003263
* 59946816 54523008 114469823
* 131321280 12028032 143349311
*
* First Sector Last
* Partition Tag Flags Sector Count Sector Mount Directory
0 2 00 20352 16780224 16800575 /
1 3 01 21003264 34740864 55744127
2 5 00 0 143349312 143349311
5 0 00 16800576 4202688 21003263 /var/crash
7 0 00 97689600 61056 97750655
yogesh-test#prtvtoc /dev/rdsk/c1t1d0s2
* /dev/rdsk/c1t1d0s2 partition map
*
* Dimensions:
* 512 bytes/sector
* 424 sectors/track
* 24 tracks/cylinder
* 10176 sectors/cylinder
* 14089 cylinders
* 14087 accessible cylinders
*
* Flags:
* 1: unmountable
* 10: read-only
*
* Unallocated space:
* First Sector Last
* Sector Count Sector
* 0 20352 20351
* 16800576 4202688 21003263
* 59946816 54523008 114469823
* 131321280 12028032 143349311
*
* First Sector Last
* Partition Tag Flags Sector Count Sector Mount Directory
0 2 00 20352 16780224 16800575
1 3 01 21003264 34740864 55744127
2 5 00 0 143349312 143349311
5 0 00 16800576 4202688 21003263
7 0 00 97689600 61056 97750655
We need to create state-database replicas on a slice (any slice; 4 MB per replica). Slice 7 is usually chosen as a best practice. Make sure slice 7 is not in use and has at least 12 MB allocated. On my server s7 is not used for any filesystem, and approximately 30 MB is allocated to it for the database replicas.
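The slice 7 size can be read straight off the prtvtoc output. A sketch, fed with this server's slice 7 line (on a live system, pipe prtvtoc /dev/rdsk/c1t3d0s2 into the awk filter instead of the echo):

```shell
# Convert slice 7's sector count (column 5, 512-byte sectors) to MB.
# The echoed line is slice 7 from the prtvtoc output shown earlier.
echo '       7      0    00   97689600     61056  97750655' |
    awk '!/^\*/ && $1 == 7 {printf "%.1f MB\n", $5 * 512 / 1048576}'
# 61056 sectors * 512 bytes is about 29.8 MB, enough for three 4 MB replicas
```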
yogesh-test#format
Searching for disks…done
AVAILABLE DISK SELECTIONS:
0. c1t1d0 <SEAGATE-ST722402SSUN72G-0400 cyl 14087 alt 2 hd 24 sec 424>
/pci@1e,600000/pci@0/pci@a/pci@0/pci@8/scsi@1/sd@1,0
1. c1t3d0 <FUJITSU-JAN4173RCSUN72G-0501 cyl 14087 alt 2 hd 24 sec 424>
/pci@1e,600000/pci@0/pci@a/pci@0/pci@8/scsi@1/sd@3,0
Specify disk (enter its number): 1
selecting c1t3d0
[disk formatted]
Warning: Current Disk has mounted partitions.
/dev/dsk/c1t3d0s0 is currently mounted on /. Please see umount(1M).
/dev/dsk/c1t3d0s1 is currently used by swap. Please see swap(1M).
/dev/dsk/c1t3d0s5 is currently mounted on /var/crash. Please see umount(1M).
FORMAT MENU:
disk - select a disk
type - select (define) a disk type
partition - select (define) a partition table
current - describe the current disk
format - format and analyze the disk
repair - repair a defective sector
label - write label to the disk
analyze - surface analysis
defect - defect list management
backup - search for backup labels
verify - read and display labels
save - save new disk/partition definitions
inquiry - show vendor, product and revision
volname - set 8-character volume name
!<cmd> - execute <cmd>, then return
quit
format> p
PARTITION MENU:
0 - change `0' partition
1 - change `1' partition
2 - change `2' partition
3 - change `3' partition
4 - change `4' partition
5 - change `5' partition
6 - change `6' partition
7 - change `7' partition
select - select a predefined table
modify - modify a predefined partition table
name - name the current table
print - display the current table
label - write partition map and label to the disk
!<cmd> - execute <cmd>, then return
quit
partition> p
Current partition table (original):
Total disk cylinders available: 14087 + 2 (reserved cylinders)
Part Tag Flag Cylinders Size Blocks
0 root wm 2 - 1650 8.00GB (1649/0/0) 16780224
1 swap wu 2064 - 5477 16.57GB (3414/0/0) 34740864
2 backup wm 0 - 14086 68.35GB (14087/0/0) 143349312
3 unassigned wm 0 0 (0/0/0) 0
4 unassigned wm 0 0 (0/0/0) 0
5 unassigned wm 1651 - 2063 2.00GB (413/0/0) 4202688
6 unassigned wm 0 0 (0/0/0) 0
7 unassigned wm 9600 - 9605 29.81MB (6/0/0) 61056
partition> q
FORMAT MENU:
disk - select a disk
type - select (define) a disk type
partition - select (define) a partition table
current - describe the current disk
format - format and analyze the disk
repair - repair a defective sector
label - write label to the disk
analyze - surface analysis
defect - defect list management
backup - search for backup labels
verify - read and display labels
save - save new disk/partition definitions
inquiry - show vendor, product and revision
volname - set 8-character volume name
!<cmd> - execute <cmd>, then return
quit
format> q
Now we will create state-database replicas on s7 as shown below. We always create a minimum of 3 replicas to maintain quorum (the server will crash if quorum is lost, i.e. if fewer than 51% of the replica copies are valid).
state-database replicas:
========================
The state database is a collection of multiple, replicated database copies. Each copy, called a state database replica, ensures that the data in the database is always valid. Having copies of the state database protects against data loss from single points-of-failure. The state database tracks the location and status of all known state database replicas. During a state database update, each replica state database is updated. The updates take place one at a time to protect against corrupting all updates if the system crashes.
consensus algorithm:
====================
To avoid single points-of-failure, you should distribute state database replicas across slices, drives, and controllers. A majority of replicas must survive a single component failure. The Solaris Volume Manager software requires that half the replicas be available to run, and that a majority (half + 1) be available to boot.
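As a worked example of that rule (a sketch; N is the total replica count reported by metadb, which is 6 once both disks carry 3 replicas each):

```shell
# SVM's consensus rule: half of the N replicas must be available to
# keep running, and a strict majority (half + 1) to boot.
N=6
run_min=$(( N / 2 ))        # 3 of 6 must survive to keep running
boot_min=$(( N / 2 + 1 ))   # 4 of 6 needed to boot
echo "replicas=$N run_min=$run_min boot_min=$boot_min"
```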
yogesh-test#metadb -a -c 3 -f /dev/rdsk/c1t3d0s7
yogesh-test#metadb -i
flags first blk block count
a u 16 8192 /dev/dsk/c1t3d0s7
a u 8208 8192 /dev/dsk/c1t3d0s7
a u 16400 8192 /dev/dsk/c1t3d0s7
r - replica does not have device relocation information
o - replica active prior to last mddb configuration change
u - replica is up to date
l - locator for this replica was read successfully
c - replica's location was in /etc/lvm/mddb.cf
p - replica's location was patched in kernel
m - replica is master, this is replica selected as input
W - replica has device write errors
a - replica is active, commits are occurring to this replica
M - replica had problem with master blocks
D - replica had problem with data blocks
F - replica had format problems
S - replica is too small to hold current data base
R - replica had device read errors
yogesh-test#metadb -a -c 3 -f /dev/rdsk/c1t1d0s7
yogesh-test#metadb -i
flags first blk block count
a u 16 8192 /dev/dsk/c1t3d0s7
a u 8208 8192 /dev/dsk/c1t3d0s7
a u 16400 8192 /dev/dsk/c1t3d0s7
a u 16 8192 /dev/dsk/c1t1d0s7
a u 8208 8192 /dev/dsk/c1t1d0s7
a u 16400 8192 /dev/dsk/c1t1d0s7
r - replica does not have device relocation information
o - replica active prior to last mddb configuration change
u - replica is up to date
l - locator for this replica was read successfully
c - replica's location was in /etc/lvm/mddb.cf
p - replica's location was patched in kernel
m - replica is master, this is replica selected as input
W - replica has device write errors
a - replica is active, commits are occurring to this replica
M - replica had problem with master blocks
D - replica had problem with data blocks
F - replica had format problems
S - replica is too small to hold current data base
R - replica had device read errors
The first step when building a mirror is to create RAID-0 volumes, which you later combine to form the mirror. Each RAID-0 volume becomes a submirror of the mirror. Use the metainit command to create the RAID-0 volumes (the -f option forces the creation on slices that are in use).
Creating Submirrors for Root disk:
yogesh-test#metainit -f d10 1 1 c1t3d0s0
d10: Concat/Stripe is setup
yogesh-test#metainit -f d20 1 1 c1t3d0s1
d20: Concat/Stripe is setup
yogesh-test#metainit -f d50 1 1 c1t3d0s5
d50: Concat/Stripe is setup
yogesh-test#
yogesh-test#metastat -p
d50 1 1 c1t3d0s5
d20 1 1 c1t3d0s1
d10 1 1 c1t3d0s0
Creating Submirrors for Root-mirror disk:
yogesh-test#metainit -f d11 1 1 c1t1d0s0
d11: Concat/Stripe is setup
yogesh-test#metainit -f d21 1 1 c1t1d0s1
d21: Concat/Stripe is setup
yogesh-test#metainit -f d51 1 1 c1t1d0s5
d51: Concat/Stripe is setup
yogesh-test#metastat -p
d51 1 1 c1t1d0s5
d21 1 1 c1t1d0s1
d11 1 1 c1t1d0s0
d50 1 1 c1t3d0s5
d20 1 1 c1t3d0s1
d10 1 1 c1t3d0s0
Now we create the RAID 1 mirror between the two disks. First we create a mirror on top of each root-disk submirror, so that the OS references the mirror instead of the submirror and the server stays up in case of a disk failure.
Attaching the root-disk submirrors to d0, d1 and d5 (the mirrors of those submirrors):
yogesh-test#metainit d0 -m d10
d0: Mirror is setup
yogesh-test#metainit d1 -m d20
d1: Mirror is setup
yogesh-test#metainit d5 -m d50
d5: Mirror is setup
yogesh-test#metastat -p
d5 -m d50 1
d50 1 1 c1t3d0s5
d1 -m d20 1
d20 1 1 c1t3d0s1
d0 -m d10 1
d10 1 1 c1t3d0s0
d51 1 1 c1t1d0s5
d21 1 1 c1t1d0s1
d11 1 1 c1t1d0s0
Let's check the vfstab; so far no modifications have been made and the filesystems are still on native devices.
yogesh-test#more /etc/vfstab
#device device mount FS fsck mount mount
#to mount to fsck point type pass at boot options
#
fd - /dev/fd fd - no -
/proc - /proc proc - no -
/dev/dsk/c1t3d0s1 - - swap - no -
/dev/dsk/c1t3d0s0 /dev/rdsk/c1t3d0s0 / ufs 1 no -
/dev/dsk/c1t3d0s5 /dev/rdsk/c1t3d0s5 /var/crash ufs 2 yes -
/devices - /devices devfs - no -
ctfs - /system/contract ctfs - no -
objfs - /system/object objfs - no -
swap - /tmp tmpfs - yes -
No entries in /etc/system file yet related to SVM.
yogesh-test#grep -i md /etc/system
yogesh-test#
Use the metaroot command to update the system's configuration, as this is a root (/) mirror. (It updates the root filesystem entry with the md device; for the rest of the filesystems we have to make the changes manually.)
yogesh-test#metaroot d0
yogesh-test#more /etc/vfstab
#device device mount FS fsck mount mount
#to mount to fsck point type pass at boot options
#
fd - /dev/fd fd - no -
/proc - /proc proc - no -
/dev/dsk/c1t3d0s1 - - swap - no -
/dev/md/dsk/d0 /dev/md/rdsk/d0 / ufs 1 no -
/dev/dsk/c1t3d0s5 /dev/rdsk/c1t3d0s5 /var/crash ufs 2 yes -
/devices - /devices devfs - no -
ctfs - /system/contract ctfs - no -
objfs - /system/object objfs - no -
swap - /tmp tmpfs - yes -
After running the metaroot command, the SVM (md) entry is present in the /etc/system file.
yogesh-test#grep -i md /etc/system
* Begin MDD root info (do not edit)
rootdev:/pseudo/md@0:0,0,blk
* End MDD root info (do not edit)
yogesh-test#
Edit vfstab for the rest of the filesystems so that they too come under SVM (md devices).
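The edits can also be scripted with sed. A sketch, shown here against inline copies of the two affected lines; on the server you would back up /etc/vfstab first (e.g. cp /etc/vfstab /etc/vfstab.pre-svm, a name chosen for illustration) and edit the file itself:

```shell
# Remap swap and /var/crash from the native c1t3d0 slices to the d1/d5
# metadevices.  The here-document stands in for /etc/vfstab.
sed -e 's|/dev/dsk/c1t3d0s1|/dev/md/dsk/d1|' \
    -e 's|/dev/dsk/c1t3d0s5|/dev/md/dsk/d5|' \
    -e 's|/dev/rdsk/c1t3d0s5|/dev/md/rdsk/d5|' <<'EOF'
/dev/dsk/c1t3d0s1 - - swap - no -
/dev/dsk/c1t3d0s5 /dev/rdsk/c1t3d0s5 /var/crash ufs 2 yes -
EOF
```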
yogesh-test#more /etc/vfstab
#device device mount FS fsck mount mount
#to mount to fsck point type pass at boot options
#
fd - /dev/fd fd - no -
/proc - /proc proc - no -
/dev/md/dsk/d1 - - swap - no -
/dev/md/dsk/d0 /dev/md/rdsk/d0 / ufs 1 no -
/dev/md/dsk/d5 /dev/md/rdsk/d5 /var/crash ufs 2 yes -
/devices - /devices devfs - no -
ctfs - /system/contract ctfs - no -
objfs - /system/object objfs - no -
swap - /tmp tmpfs - yes -
We have not yet attached the second disk's submirrors, because we will first reboot the system to make sure it comes up with the root disk under SVM (RAID 1) without errors.
yogesh-test#metastat -p
d5 -m d50 1
d50 1 1 c1t3d0s5
d1 -m d20 1
d20 1 1 c1t3d0s1
d0 -m d10 1
d10 1 1 c1t3d0s0
d51 1 1 c1t1d0s5
d21 1 1 c1t1d0s1
d11 1 1 c1t1d0s0
yogesh-test#metastat -ac
d5 m 2.0GB d50
d50 s 2.0GB c1t3d0s5
d1 m 16GB d20
d20 s 16GB c1t3d0s1
d0 m 8.0GB d10
d10 s 8.0GB c1t3d0s0
d51 s 2.0GB c1t1d0s5
d21 s 16GB c1t1d0s1
d11 s 8.0GB c1t1d0s0
yogesh-test# init 0
OK> boot <root-disk>
The server is booted from the root disk; the outputs below show that it has successfully come up under SVM.
yogesh-test#df -k
Filesystem kbytes used avail capacity Mounted on
/dev/md/dsk/d0 8262869 7440188 740053 91% /
/devices 0 0 0 0% /devices
ctfs 0 0 0 0% /system/contract
proc 0 0 0 0% /proc
mnttab 0 0 0 0% /etc/mnttab
swap 23826320 1640 23824680 1% /etc/svc/volatile
objfs 0 0 0 0% /system/object
sharefs 0 0 0 0% /etc/dfs/sharetab
/platform/sun4u-us3/lib/libc_psr/libc_psr_hwcap1.so.1
8262869 7440188 740053 91% /platform/sun4u-us3/lib/libc_psr.so.1
/platform/sun4u-us3/lib/sparcv9/libc_psr/libc_psr_hwcap1.so.1
8262869 7440188 740053 91% /platform/sun4u-us3/lib/sparcv9/libc_psr.so.1
fd 0 0 0 0% /dev/fd
swap 23825408 728 23824680 1% /tmp
swap 23824744 64 23824680 1% /var/run
/dev/md/dsk/d5 2059015 91502 1905743 5% /var/crash
yogesh-test#metastat -p
d5 -m d50 1
d50 1 1 c1t3d0s5
d1 -m d20 1
d20 1 1 c1t3d0s1
d0 -m d10 1
d10 1 1 c1t3d0s0
d51 1 1 c1t1d0s5
d21 1 1 c1t1d0s1
d11 1 1 c1t1d0s0
Now we will attach our second disk submirrors to the main mirror.
yogesh-test#metattach d0 d11
d0: submirror d11 is attached
yogesh-test#metattach d1 d21
d1: submirror d21 is attached
yogesh-test#metattach d5 d51
d5: submirror d51 is attached
yogesh-test#
Syncing can be checked with metastat -ac (on Solaris 10; on other versions use metastat -t).
yogesh-test#metastat -ac
d5 m 2.0GB d50 d51 (resync-5%)
d50 s 2.0GB c1t3d0s5
d51 s 2.0GB c1t1d0s5
d1 m 16GB d20 d21 (resync-0%)
d20 s 16GB c1t3d0s1
d21 s 16GB c1t1d0s1
d0 m 8.0GB d10 d11 (resync-1%)
d10 s 8.0GB c1t3d0s0
d11 s 8.0GB c1t1d0s0
yogesh-test#metastat -ac
d5 m 2.0GB d50 d51 (resync-74%)
d50 s 2.0GB c1t3d0s5
d51 s 2.0GB c1t1d0s5
d1 m 16GB d20 d21 (resync-8%)
d20 s 16GB c1t3d0s1
d21 s 16GB c1t1d0s1
d0 m 8.0GB d10 d11 (resync-18%)
d10 s 8.0GB c1t3d0s0
d11 s 8.0GB c1t1d0s0
yogesh-test#metastat -ac
d5 m 2.0GB d50 d51
d50 s 2.0GB c1t3d0s5
d51 s 2.0GB c1t1d0s5
d1 m 16GB d20 d21 (resync-53%)
d20 s 16GB c1t3d0s1
d21 s 16GB c1t1d0s1
d0 m 8.0GB d10 d11
d10 s 8.0GB c1t3d0s0
d11 s 8.0GB c1t1d0s0
Syncing completed.
yogesh-test#metastat -ac
d5 m 2.0GB d50 d51
d50 s 2.0GB c1t3d0s5
d51 s 2.0GB c1t1d0s5
d1 m 16GB d20 d21
d20 s 16GB c1t3d0s1
d21 s 16GB c1t1d0s1
d0 m 8.0GB d10 d11
d10 s 8.0GB c1t3d0s0
d11 s 8.0GB c1t1d0s0
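Resync progress like the above can also be watched from a script by counting mirrors still syncing. A sketch against sample metastat -ac style lines (on the server you would pipe metastat -ac itself, e.g. inside a sleep loop):

```shell
# Count mirrors still resyncing in `metastat -ac` style output;
# a count of 0 means the sync has completed.  The sample lines mimic
# the transcript above.
sample='d5 m 2.0GB d50 d51 (resync-74%)
d1 m 16GB d20 d21 (resync-8%)
d0 m 8.0GB d10 d11'
printf '%s\n' "$sample" | grep -c 'resync'
```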
It is better to run installboot on the mirror disk to install the boot block, though it may boot without it as well.
yogesh-test#installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c1t1d0s0
Now we will boot the server from the mirror disk and check that it boots correctly.
yogesh-test# init 0
OK> boot <root-mirror>
The server booted from the mirror disk successfully.
yogesh-test#prtconf -pv | grep -i bootpath
bootpath: '/pci@1e,600000/pci@0/pci@a/pci@0/pci@8/scsi@1/disk@1,0:a'
yogesh-test#metastat -p
d5 -m d50 d51 1
d50 1 1 c1t3d0s5
d51 1 1 c1t1d0s5
d1 -m d20 d21 1
d20 1 1 c1t3d0s1
d21 1 1 c1t1d0s1
d0 -m d10 d11 1
d10 1 1 c1t3d0s0
d11 1 1 c1t1d0s0
yogesh-test#metastat -ac
d5 m 2.0GB d50 d51
d50 s 2.0GB c1t3d0s5
d51 s 2.0GB c1t1d0s5
d1 m 16GB d20 d21
d20 s 16GB c1t3d0s1
d21 s 16GB c1t1d0s1
d0 m 8.0GB d10 d11
d10 s 8.0GB c1t3d0s0
d11 s 8.0GB c1t1d0s0
yogesh-test#metadb -i
flags first blk block count
a m p luo 16 8192 /dev/dsk/c1t3d0s7
a p luo 8208 8192 /dev/dsk/c1t3d0s7
a p luo 16400 8192 /dev/dsk/c1t3d0s7
a p luo 16 8192 /dev/dsk/c1t1d0s7
a p luo 8208 8192 /dev/dsk/c1t1d0s7
a p luo 16400 8192 /dev/dsk/c1t1d0s7
r - replica does not have device relocation information
o - replica active prior to last mddb configuration change
u - replica is up to date
l - locator for this replica was read successfully
c - replica's location was in /etc/lvm/mddb.cf
p - replica's location was patched in kernel
m - replica is master, this is replica selected as input
W - replica has device write errors
a - replica is active, commits are occurring to this replica
M - replica had problem with master blocks
D - replica had problem with data blocks
F - replica had format problems
S - replica is too small to hold current data base
R - replica had device read errors
yogesh-test#df -k
Filesystem kbytes used avail capacity Mounted on
/dev/md/dsk/d0 8262869 7440777 739464 91% /
/devices 0 0 0 0% /devices
ctfs 0 0 0 0% /system/contract
proc 0 0 0 0% /proc
mnttab 0 0 0 0% /etc/mnttab
swap 23853680 1648 23852032 1% /etc/svc/volatile
objfs 0 0 0 0% /system/object
sharefs 0 0 0 0% /etc/dfs/sharetab
/platform/sun4u-us3/lib/libc_psr/libc_psr_hwcap1.so.1
8262869 7440777 739464 91% /platform/sun4u-us3/lib/libc_psr.so.1
/platform/sun4u-us3/lib/sparcv9/libc_psr/libc_psr_hwcap1.so.1
8262869 7440777 739464 91% /platform/sun4u-us3/lib/sparcv9/libc_psr.so.1
fd 0 0 0 0% /dev/fd
swap 23852040 8 23852032 1% /tmp
swap 23852096 64 23852032 1% /var/run
/dev/md/dsk/d5 2059015 90753 1906492 5% /var/crash
yogesh-test#
yogesh-test#more /etc/vfstab
#device device mount FS fsck mount mount
#to mount to fsck point type pass at boot options
#
fd - /dev/fd fd - no -
/proc - /proc proc - no -
/dev/md/dsk/d1 - - swap - no -
/dev/md/dsk/d0 /dev/md/rdsk/d0 / ufs 1 no -
/dev/md/dsk/d5 /dev/md/rdsk/d5 /var/crash ufs 2 yes -
/devices - /devices devfs - no -
ctfs - /system/contract ctfs - no -
objfs - /system/object objfs - no -
swap - /tmp tmpfs - yes -
yogesh-test#
yogesh-test#grep -i md /etc/system
* Begin MDD root info (do not edit)
rootdev:/pseudo/md@0:0,0,blk
* End MDD root info (do not edit)
yogesh-test#
This completes the mirroring process under SVM for the Solaris OS. We have brought the root disk and root-mirror disk under SVM and then successfully booted the server from both disks. Now a single disk failure will not bring the server down; it will keep running on the redundant disk while we replace the faulty one. Hence downtime is avoided. :)
Note: This is the same process we use when any activity (patching, upgrade, etc.) fails and we have to bring the server up from the mirror disk and re-mirror it back with the root disk. As I stated in my previous post (Kernel Patching), this is the rollback basis for any failed activity.
 Note: 1.) I have created concatenation submirrors.
             2.) For any option used with the meta commands, check the man page for the complete details (like -a, -c, -f with metadb).
             3.) Set up the OBP to boot automatically from the mirror disk (using eeprom and boot-device).
             4.) Booting with only 50% of the available replicas is possible by adding the line "set md:mirrored_root_flag=1" to /etc/system, i.e. relaxing the quorum requirement.
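For note 3, a sketch of the OBP setup (the device paths are this server's bootpaths; the alias names rootdisk/rootmirror are chosen here for illustration):

```shell
# At the OK prompt, create aliases for both sides of the mirror:
#   ok nvalias rootdisk   /pci@1e,600000/pci@0/pci@a/pci@0/pci@8/scsi@1/disk@3,0:a
#   ok nvalias rootmirror /pci@1e,600000/pci@0/pci@a/pci@0/pci@8/scsi@1/disk@1,0:a
# Then, from the running OS, make OBP try the root disk first and fall
# back to the mirror if it fails:
eeprom boot-device="rootdisk rootmirror"
```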
