Monday 30 September 2013

Solaris host level SAN migration from Clariion to VMAX – Hands on Lab

Over the past two to three years, many organisations have been migrating their server storage from legacy SAN arrays (e.g. EMC CLARiiON) to newer, more powerful arrays (e.g. EMC Symmetrix VMAX), mainly because of the poor performance and high maintenance costs of the legacy storage. Depending on the budget allocated for the migration project, some organisations prefer array-level data replication using (expensive) dedicated migration tools, while others with a limited budget prefer host-level data replication. In the latter method, the success of the migration depends directly on the skill and expertise of the Unix administrator implementing it. I believe this hands-on post will be useful for Solaris admins who are responsible for storage migrations. Before going through this post you may want to refer to the earlier post on the pre-planning tasks.




The lab setup used for this post is shown in the diagram above. In simple words, we have a Solaris server (running a Sybase DB) with four file systems and some Sybase raw devices configured on CLARiiON storage disks. As part of the storage migration we want to copy the data from the old storage (CLARiiON disks) to the new storage (VMAX disks) so that the Sybase DBA team can start their database from the new SAN storage.
Note: in this setup we have only SVM (Solaris Volume Manager) volumes; no VxVM/VCS is installed.
metadevice d25 -- mounted on --> /localfs/app/SYBDB
metadevice d27 -- mounted on --> /exportdata/tempdb1
metadevice d28 -- mounted on --> /localfs/SYBDB_dumps_1
metadevice d29 -- mounted on --> /exportdata/tempdb
gurkulSolaris:root# df -kh | egrep -i "local|export"
Filesystem size used avail capacity Mounted on
/dev/md/dsk/d27 24G 22G 1.8G 93% /exportdata/tempdb1
/dev/md/dsk/d29 19G 16G 3.2G 84% /exportdata/tempdb
/dev/md/dsk/d28 33G 1.4G 32G 5% /localfs/SYBDB_dumps_1
/dev/md/dsk/d25 2.9G 1.7G 1.1G 62% /localfs/app/SYBDB

 

Initial Disk Configuration Before the Migration

Please note that we have two device paths for each disk configured from the SAN storage, as shown in the diagram. The EMC PowerPath pseudo devices (emcpower0a - emcpower6a) give an idea of the total number of physical LUNs presented to the machine.
gurkulSolaris:root# echo|format 
Searching for disks…done
AVAILABLE DISK SELECTIONS:
0. c1t0d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
/pci@1f,700000/scsi@2/sd@0,0
1. c1t1d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
/pci@1f,700000/scsi@2/sd@1,0
2. c1t2d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
/pci@1f,700000/scsi@2/sd@2,0
3. c1t3d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
/pci@1f,700000/scsi@2/sd@3,0
4. c3t5006016130601837d0 <DGC-RAID5-0207 cyl 49150 alt 2 hd 128 sec 10> power0a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016130601837,0
5. c3t5006016930601837d0 <DGC-RAID5-0207 cyl 49150 alt 2 hd 128 sec 10> power0a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016930601837,0
6. c3t5006016130601837d1 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power1a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016130601837,1
7. c3t5006016930601837d1 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power1a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016930601837,1
8. c3t5006016130601837d2 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power2a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016130601837,2
9. c3t5006016930601837d2 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power2a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016930601837,2
10. c3t5006016930601837d3 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power3a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016930601837,3
11. c3t5006016130601837d3 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power3a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016130601837,3
12. c3t5006016130601837d4 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power4a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016130601837,4
13. c3t5006016930601837d4 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power4a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016930601837,4
14. c3t5006016130601837d5 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power5a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016130601837,5
15. c3t5006016930601837d5 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power5a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016930601837,5
16. c3t5006016930601837d6 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power6a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016930601837,6
17. c3t5006016130601837d6 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power6a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016130601837,6
18. emcpower0h <DGC-RAID5-0207 cyl 49150 alt 2 hd 128 sec 10> power0a
/pseudo/emcp@0
19. emcpower1h <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power1a
/pseudo/emcp@1
20. emcpower2a <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power2a
/pseudo/emcp@2
21. emcpower3a <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power3a
/pseudo/emcp@3
22. emcpower4h <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power4a
/pseudo/emcp@4
23. emcpower5a <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power5a
/pseudo/emcp@5
24. emcpower6h <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power6a
/pseudo/emcp@6
Specify disk (enter its number): Specify disk (enter its number):
Powerpath Configuration Before Migration
gurkulSolaris:root# powermt display dev=all
Pseudo name=emcpower0a
CLARiiON ID=CLA0123456 [SolarisServer]
Logical device ID=60060160F3F31200D072ABAEF789DA11 [EMC-CX-rg16-lun91-30gb]
state=alive; policy=CLAROpt; priority=0; queued-IOs=0
Owner: default=SP A, current=SP A Array failover mode: 1
==============================================================================
—————- Host ————— – Stor – — I/O Path – — Stats —
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t5006016130601837d0s0 SP A1 active alive 0 0
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t5006016930601837d0s0 SP B1 active alive 0 0
Pseudo name=emcpower1a
CLARiiON ID=CLA0123456 [SolarisServer]
Logical device ID=60060160F3F312006CA64996F789DA11 [EMC-CX-rg17-lun90-75gb]
state=alive; policy=CLAROpt; priority=0; queued-IOs=0
Owner: default=SP B, current=SP B Array failover mode: 1
==============================================================================
—————- Host ————— – Stor – — I/O Path – — Stats —
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t5006016130601837d1s0 SP A1 active alive 0 0
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t5006016930601837d1s0 SP B1 active alive 0 0
Pseudo name=emcpower2a
CLARiiON ID=CLA0123456 [SolarisServer]
Logical device ID=60060160F3F312009081674EF589DA11 [EMC-CX-rg18-lun89-75gb]
state=alive; policy=CLAROpt; priority=0; queued-IOs=0
Owner: default=SP A, current=SP A Array failover mode: 1
==============================================================================
—————- Host ————— – Stor – — I/O Path – — Stats —
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t5006016130601837d2s0 SP A1 active alive 0 0
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t5006016930601837d2s0 SP B1 active alive 0 0
Pseudo name=emcpower3a
CLARiiON ID=CLA0123456 [SolarisServer]
Logical device ID=60060160F3F31200A2CD9F3FF589DA11 [EMC-CX-rg19-lun88-75gb]
state=alive; policy=CLAROpt; priority=0; queued-IOs=0
Owner: default=SP B, current=SP B Array failover mode: 1
==============================================================================
—————- Host ————— – Stor – — I/O Path – — Stats —
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t5006016130601837d3s0 SP A1 active alive 0 0
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t5006016930601837d3s0 SP B1 active alive 0 0
Pseudo name=emcpower4a
CLARiiON ID=CLA0123456 [SolarisServer]
Logical device ID=60060160F3F312008252B11EF589DA11 [EMC-CX-rg20-lun87-75gb]
state=alive; policy=CLAROpt; priority=0; queued-IOs=0
Owner: default=SP A, current=SP A Array failover mode: 1
==============================================================================
—————- Host ————— – Stor – — I/O Path – — Stats —
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t5006016130601837d4s0 SP A1 active alive 0 0
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t5006016930601837d4s0 SP B1 active alive 0 0
Pseudo name=emcpower5a
CLARiiON ID=CLA0123456 [SolarisServer]
Logical device ID=60060160F3F312009A702312F589DA11 [EMC-CX-rg21-lun86-75gb]
state=alive; policy=CLAROpt; priority=0; queued-IOs=0
Owner: default=SP B, current=SP B Array failover mode: 1
==============================================================================
—————- Host ————— – Stor – — I/O Path – — Stats —
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t5006016130601837d5s0 SP A1 active alive 0 0
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t5006016930601837d5s0 SP B1 active alive 0 0
Pseudo name=emcpower6a
CLARiiON ID=CLA0123456 [SolarisServer]
Logical device ID=60060160F3F3120096A061DFF489DA11 [EMC-CX-rg22-lun85-75gb]
state=alive; policy=CLAROpt; priority=0; queued-IOs=0
Owner: default=SP A, current=SP A Array failover mode: 1
==============================================================================
—————- Host ————— – Stor – — I/O Path – — Stats —
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t5006016130601837d6s0 SP A1 active alive 0 0
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t5006016930601837d6s0 SP B1 active alive 0 0
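Before making any changes it is also worth noting how many CLARiiON pseudo devices PowerPath currently manages, so the same count can be cross-checked after the new storage is added and again during cleanup. A small sketch based on the output above (in this lab the count is 7, emcpower0 through emcpower6):
gurkulSolaris:root# powermt display dev=all | grep -c "Pseudo name"
7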
Current Mount Information
gurkulSolaris:root# cat /etc/vfstab
#device device mount FS fsck mount mount
#to mount to fsck point type pass at boot options
#
fd - /dev/fd fd - no -
/proc - /proc proc - no -
/dev/md/dsk/d1 - - swap - no -
/dev/md/dsk/d0 /dev/md/rdsk/d0 / ufs 1 no -
/dev/md/dsk/d3 /dev/md/rdsk/d3 /var ufs 1 no -
/dev/md/dsk/d6 /dev/md/rdsk/d6 /localfs ufs 2 yes -
/dev/md/dsk/d27 /dev/md/rdsk/d27 /exportdata/tempdb1 ufs 2 yes -
/dev/md/dsk/d29 /dev/md/rdsk/d29 /exportdata/tempdb ufs 2 yes logging

Disk Controller configuration Before Migration
gurkulSolaris:root# cfgadm -al
Ap_Id Type Receptacle Occupant Condition
c0 scsi-bus connected configured unknown
c0::dsk/c0t0d0 CD-ROM connected configured unknown
c1 scsi-bus connected configured unknown
c1::dsk/c1t0d0 disk connected configured unknown
c1::dsk/c1t1d0 disk connected configured unknown
c1::dsk/c1t2d0 disk connected configured unknown
c1::dsk/c1t3d0 disk connected configured unknown
c2 scsi-bus connected unconfigured unknown
c3 fc-fabric connected configured unknown
c3::500009720829b920 disk connected configured unknown
c3::5006016130601837 disk connected configured unknown
c3::5006016930601837 disk connected configured unknown

Before starting the actual migration, take the following configuration backups:
gurkulSolaris:root# cd /var/tmp/CX700-VMAX-Migration/ 
gurkulSolaris:root# echo|format > format.out 
gurkulSolaris:root# powermt display dev=all > powermt.out 
gurkulSolaris:root# df -kl > df-kl.out 
gurkulSolaris:root# cp /etc/vfstab vfstab.old
gurkulSolaris:root# powermt display dev=all|egrep "Pseudo|CLARiiON|Logical" > Old-cx-lin-info
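In addition to the captures above, it can be useful to also save the SVM layout and the VTOCs of the existing CLARiiON devices, since they come in handy for rollback and for the later cleanup. A minimal sketch, assuming the same backup directory and that the whole-disk "c" slices exist for the emcpower pseudo devices; the output file names are just examples:
gurkulSolaris:root# metastat -p > metastat-p.out
gurkulSolaris:root# metadb -i > metadb.out
gurkulSolaris:root# for d in 0 1 2 3 4 5 6; do prtvtoc /dev/rdsk/emcpower${d}c > vtoc.emcpower${d}.out; done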

In the current server we have four file systems, as given below:
metadevice d25 -- mounted on --> /localfs/app/SYBDB
metadevice d27 -- mounted on --> /exportdata/tempdb1
metadevice d28 -- mounted on --> /localfs/SYBDB_dumps_1
metadevice d29 -- mounted on --> /exportdata/tempdb
gurkulSolaris:root# df -kh | egrep -i "local|export"
Filesystem size used avail capacity Mounted on 
/dev/md/dsk/d27 24G 22G 1.8G 93% /exportdata/tempdb1 
/dev/md/dsk/d29 19G 16G 3.2G 84% /exportdata/tempdb 
/dev/md/dsk/d28 33G 1.4G 32G 5% /localfs/SYBDB_dumps_1 
/dev/md/dsk/d25 2.9G 1.7G 1.1G 62% /localfs/app/SYBDB
gurkulSolaris:root# metastat -p 
d25 -m d15 1 
d15 1 1 /dev/dsk/emcpower3f 
d27 -m d17 1 
d17 1 1 /dev/dsk/emcpower3e 
d28 -m d18 d38 1 
d18 1 1 c1t2d0s0 
d38 1 1 c1t3d0s0 
d29 -m d19 1 
d19 1 1 /dev/dsk/emcpower6a
As part of this storage migration we want to move the above four file systems from CLARiiON storage to VMAX storage. For that purpose we will create new file systems on the VMAX storage and mount them on temporary mount points to facilitate the data copy from the old storage to the new storage. So we create the new mount points as below:

gurkulSolaris:root# mkdir /exportdata/tempdb1-new
gurkulSolaris:root# mkdir /exportdata/tempdb-new
gurkulSolaris:root# mkdir /localfs/SYBDB_dumps_1-new
gurkulSolaris:root# mkdir /localfs/app/SYBDB-new

Storage Migration Procedure


Step 1 : The SAN team performs the necessary tasks such as zoning and assigning the new LUNs to the host, before we actually detect the new SAN devices from the Solaris side (a quick host-side check that the new array port is visible is sketched below).
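Before running the device discovery in Step 2, it can help to confirm that the fabric controller actually sees the new VMAX front-end port; if the zoning or masking is not in place, devfsadm will find nothing new. A minimal sketch using cfgadm (controller c3 is taken from the cfgadm output shown earlier; the show_FCP_dev option lists the LUNs behind each target port, and the configure step is only needed if the new target shows up as unconfigured):
gurkulSolaris:root# cfgadm -al c3
gurkulSolaris:root# cfgadm -al -o show_FCP_dev c3
gurkulSolaris:root# cfgadm -c configure c3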
Step 2 : Detect the new storage from Solaris.
Current system information, just for reference:
# uname -a
SunOS gurkulSolaris.UNX.unixadminschool.com 5.10 Generic_137111-07 sun4u sparc SUNW,Sun-Fire-V440 
Detect the new storage without a reboot. (The new disk references are the EMC-SYMMETRIX entries in the output below.)
gurkulSolaris:root# devfsadm
gurkulSolaris:root# echo|format
Searching for disks…done
c3t500009720829B920d0: configured with capacity of 20.00GB
c3t500009720829B920d1: configured with capacity of 20.00GB
c3t500009720829B920d2: configured with capacity of 36.00GB
c3t500009720829B920d3: configured with capacity of 36.00GB
c3t500009720829B920d4: configured with capacity of 117.00GB
c3t500009720829B920d5: configured with capacity of 65.00GB
c3t500009720829B920d6: configured with capacity of 25.00GB
c3t500009720829B920d7: configured with capacity of 5.00GB
AVAILABLE DISK SELECTIONS:
0. c1t0d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
/pci@1f,700000/scsi@2/sd@0,0
1. c1t1d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
/pci@1f,700000/scsi@2/sd@1,0
2. c1t2d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
/pci@1f,700000/scsi@2/sd@2,0
3. c1t3d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
/pci@1f,700000/scsi@2/sd@3,0
4. c3t500009720829B920d0 <EMC-SYMMETRIX-5874 cyl 21844 alt 2 hd 15 sec 128>
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w500009720829b920,0
5. c3t500009720829B920d1 <EMC-SYMMETRIX-5874 cyl 21844 alt 2 hd 15 sec 128>
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w500009720829b920,1
6. c3t500009720829B920d2 <EMC-SYMMETRIX-5874 cyl 39320 alt 2 hd 15 sec 128>
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w500009720829b920,2
7. c3t500009720829B920d3 <EMC-SYMMETRIX-5874 cyl 39320 alt 2 hd 15 sec 128>
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w500009720829b920,3
8. c3t500009720829B920d4 <EMC-SYMMETRIX-5874 cyl 63896 alt 2 hd 30 sec 128>
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w500009720829b920,4
9. c3t500009720829B920d5 <EMC-SYMMETRIX-5874 cyl 35497 alt 2 hd 30 sec 128>
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w500009720829b920,5
10. c3t500009720829B920d6 <EMC-SYMMETRIX-5874 cyl 27305 alt 2 hd 15 sec 128>
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w500009720829b920,6
11. c3t500009720829B920d7 <EMC-SYMMETRIX-5874 cyl 5460 alt 2 hd 15 sec 128>
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w500009720829b920,7
12. c3t5006016930601837d0 <DGC-RAID5-0207 cyl 49150 alt 2 hd 128 sec 10> power0a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016930601837,0
13. c3t5006016130601837d0 <DGC-RAID5-0207 cyl 49150 alt 2 hd 128 sec 10> power0a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016130601837,0
14. c3t5006016130601837d1 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power1a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016130601837,1
15. c3t5006016930601837d1 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power1a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016930601837,1
16. c3t5006016130601837d2 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power2a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016130601837,2
17. c3t5006016930601837d2 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power2a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016930601837,2
18. c3t5006016130601837d3 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power3a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016130601837,3
19. c3t5006016930601837d3 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power3a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016930601837,3
20. c3t5006016930601837d4 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power4a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016930601837,4
21. c3t5006016130601837d4 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power4a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016130601837,4
22. c3t5006016130601837d5 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power5a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016130601837,5
23. c3t5006016930601837d5 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power5a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016930601837,5
24. c3t5006016930601837d6 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power6a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016930601837,6
25. c3t5006016130601837d6 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power6a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016130601837,6
26. emcpower0h <DGC-RAID5-0207 cyl 49150 alt 2 hd 128 sec 10> power0a
/pseudo/emcp@0
27. emcpower1h <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power1a
/pseudo/emcp@1
28. emcpower2a <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power2a
/pseudo/emcp@2
29. emcpower3a <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power3a
/pseudo/emcp@3
30. emcpower4h <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power4a
/pseudo/emcp@4
31. emcpower5a <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power5a
/pseudo/emcp@5
32. emcpower6h <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power6a
/pseudo/emcp@6
Specify disk (enter its number): Specify disk (enter its number):
Please note that the new VMAX devices are showing up only as native Solaris disks; the emcpower pseudo devices are not yet configured for them. We will configure them using EMC PowerPath commands in the next section.
Check the PowerPath information for the new disks. Note that the new disks can be identified by the Symmetrix ID and the logical device ID (i.e. the LUN number), and each device has two device paths.
gurkulSolaris:root# powermt display dev=all
Symmetrix ID=SYM001010101
Logical device ID=0FAF
state=alive; policy=SymmOpt; priority=0; queued-IOs=0
==============================================================================
—————- Host ————— – Stor – — I/O Path – — Stats —
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t500009720829B920d0s0 FA 9eA active alive 0 0
3074 pci@1d,700000/lpfc@1/fp@0,0 c4t500009720829B91Dd0s0 FA 8eB active alive 0 0
Symmetrix ID=SYM001010101
Logical device ID=0FB0
state=alive; policy=SymmOpt; priority=0; queued-IOs=0
==============================================================================
—————- Host ————— – Stor – — I/O Path – — Stats —
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t500009720829B920d1s0 FA 9eA active alive 0 0
3074 pci@1d,700000/lpfc@1/fp@0,0 c4t500009720829B91Dd1s0 FA 8eB active alive 0 0
Symmetrix ID=SYM001010101
Logical device ID=0FB1
state=alive; policy=SymmOpt; priority=0; queued-IOs=0
==============================================================================
—————- Host ————— – Stor – — I/O Path – — Stats —
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t500009720829B920d2s0 FA 9eA active alive 0 0
3074 pci@1d,700000/lpfc@1/fp@0,0 c4t500009720829B91Dd2s0 FA 8eB active alive 0 0
Symmetrix ID=SYM001010101
Logical device ID=0FB2
state=alive; policy=SymmOpt; priority=0; queued-IOs=0
==============================================================================
—————- Host ————— – Stor – — I/O Path – — Stats —
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t500009720829B920d3s0 FA 9eA active alive 0 0
3074 pci@1d,700000/lpfc@1/fp@0,0 c4t500009720829B91Dd3s0 FA 8eB active alive 0 0
Symmetrix ID=SYM001010101
Logical device ID=0FB3
state=alive; policy=SymmOpt; priority=0; queued-IOs=0
==============================================================================
—————- Host ————— – Stor – — I/O Path – — Stats —
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t500009720829B920d4s0 FA 9eA active alive 0 0
3074 pci@1d,700000/lpfc@1/fp@0,0 c4t500009720829B91Dd4s0 FA 8eB active alive 0 0
Symmetrix ID=SYM001010101
Logical device ID=0FB4
state=alive; policy=SymmOpt; priority=0; queued-IOs=0
==============================================================================
—————- Host ————— – Stor – — I/O Path – — Stats —
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t500009720829B920d5s0 FA 9eA active alive 0 0
3074 pci@1d,700000/lpfc@1/fp@0,0 c4t500009720829B91Dd5s0 FA 8eB active alive 0 0
Symmetrix ID=SYM001010101
Logical device ID=0FB5
state=alive; policy=SymmOpt; priority=0; queued-IOs=0
==============================================================================
—————- Host ————— – Stor – — I/O Path – — Stats —
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t500009720829B920d6s0 FA 9eA active alive 0 0
3074 pci@1d,700000/lpfc@1/fp@0,0 c4t500009720829B91Dd6s0 FA 8eB active alive 0 0
Symmetrix ID=SYM001010101
Logical device ID=0FB6
state=alive; policy=SymmOpt; priority=0; queued-IOs=0
==============================================================================
—————- Host ————— – Stor – — I/O Path – — Stats —
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t500009720829B920d7s0 FA 9eA active alive 0 0
3074 pci@1d,700000/lpfc@1/fp@0,0 c4t500009720829B91Dd7s0 FA 8eB active alive 0 0
Pseudo name=emcpower1a
CLARiiON ID=CLA0123456 [SolarisServer]
Logical device ID=60060160F3F312006CA64996F789DA11 [EMC-CX-rg17-lun90-75gb]
state=alive; policy=CLAROpt; priority=0; queued-IOs=0
Owner: default=SP B, current=SP B Array failover mode: 1
==============================================================================
—————- Host ————— – Stor – — I/O Path – — Stats —
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t5006016130601837d1s0 SP A1 active alive 0 0
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t5006016930601837d1s0 SP B1 active alive 0 0
Pseudo name=emcpower4a
CLARiiON ID=CLA0123456 [SolarisServer]
Logical device ID=60060160F3F312008252B11EF589DA11 [EMC-CX-rg20-lun87-75gb]
state=alive; policy=CLAROpt; priority=0; queued-IOs=0
Owner: default=SP A, current=SP A Array failover mode: 1
==============================================================================
—————- Host ————— – Stor – — I/O Path – — Stats —
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t5006016130601837d4s0 SP A1 active alive 0 0
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t5006016930601837d4s0 SP B1 active alive 0 0
Pseudo name=emcpower2a
CLARiiON ID=CLA0123456 [SolarisServer]
Logical device ID=60060160F3F312009081674EF589DA11 [EMC-CX-rg18-lun89-75gb]
state=alive; policy=CLAROpt; priority=0; queued-IOs=0
Owner: default=SP A, current=SP A Array failover mode: 1
==============================================================================
—————- Host ————— – Stor – — I/O Path – — Stats —
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t5006016130601837d2s0 SP A1 active alive 0 0
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t5006016930601837d2s0 SP B1 active alive 0 0
Pseudo name=emcpower6a
CLARiiON ID=CLA0123456 [SolarisServer]
Logical device ID=60060160F3F3120096A061DFF489DA11 [EMC-CX-lun85-75gb]
state=alive; policy=CLAROpt; priority=0; queued-IOs=0
Owner: default=SP A, current=SP A Array failover mode: 1
==============================================================================
—————- Host ————— – Stor – — I/O Path – — Stats —
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t5006016130601837d6s0 SP A1 active alive 0 0
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t5006016930601837d6s0 SP B1 active alive 0 0
Pseudo name=emcpower5a
CLARiiON ID=CLA0123456 [SolarisServer]
Logical device ID=60060160F3F312009A702312F589DA11 [EMC-CX-rg21-lun86-75gb]
state=alive; policy=CLAROpt; priority=0; queued-IOs=0
Owner: default=SP B, current=SP B Array failover mode: 1
==============================================================================
—————- Host ————— – Stor – — I/O Path – — Stats —
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t5006016130601837d5s0 SP A1 active alive 0 0
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t5006016930601837d5s0 SP B1 active alive 0 0
Pseudo name=emcpower3a
CLARiiON ID=CLA0123456 [SolarisServer]
Logical device ID=60060160F3F31200A2CD9F3FF589DA11 [EMC-CX-rg19-lun88-75gb]
state=alive; policy=CLAROpt; priority=0; queued-IOs=0
Owner: default=SP B, current=SP B Array failover mode: 1
==============================================================================
—————- Host ————— – Stor – — I/O Path – — Stats —
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t5006016130601837d3s0 SP A1 active alive 0 0
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t5006016930601837d3s0 SP B1 active alive 0 0
Pseudo name=emcpower0a
CLARiiON ID=CLA0123456 [SolarisServer]
Logical device ID=60060160F3F31200D072ABAEF789DA11 [EMC-CX-rg16-lun91-30gb]
state=alive; policy=CLAROpt; priority=0; queued-IOs=0
Owner: default=SP A, current=SP A Array failover mode: 1
==============================================================================
—————- Host ————— – Stor – — I/O Path – — Stats —
### HW Path I/O Paths Interf. Mode State Q-IOs Errors
==============================================================================
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t5006016130601837d0s0 SP A1 active alive 0 0
3072 pci@1c,600000/lpfc@1/fp@0,0 c3t5006016930601837d0s0 SP B1 active alive 0 0
Just for quick reference, output summarising the new LUN devices:
gurkulSolaris:root# powermt display dev=all|egrep 'Pseudo|CLARiiON|Logical'
Logical device ID=0FAF
Logical device ID=0FB0
Logical device ID=0FB1
Logical device ID=0FB2
Logical device ID=0FB3
Logical device ID=0FB4
Logical device ID=0FB5
Logical device ID=0FB6
Pseudo name=emcpower1a
CLARiiON ID=CLA0123456 [SolarisServer]
Logical device ID=60060160F3F312006CA64996F789DA11 [EMC-CX-rg17-lun90-75gb]
Pseudo name=emcpower4a
CLARiiON ID=CLA0123456 [SolarisServer]
Logical device ID=60060160F3F312008252B11EF589DA11 [EMC-CX-rg20-lun87-75gb]
Pseudo name=emcpower2a
CLARiiON ID=CLA0123456 [SolarisServer]
Logical device ID=60060160F3F312009081674EF589DA11 [EMC-CX-rg18-lun89-75gb]
Pseudo name=emcpower6a
CLARiiON ID=CLA0123456 [SolarisServer]
Logical device ID=60060160F3F3120096A061DFF489DA11 [EMC-CX-lun85-75gb]
Pseudo name=emcpower5a
CLARiiON ID=CLA0123456 [SolarisServer]
Logical device ID=60060160F3F312009A702312F589DA11 [EMC-CX-rg21-lun86-75gb]
Pseudo name=emcpower3a
CLARiiON ID=CLA0123456 [SolarisServer]
Logical device ID=60060160F3F31200A2CD9F3FF589DA11 [EMC-CX-rg19-lun88-75gb]
Pseudo name=emcpower0a
CLARiiON ID=CLA0123456 [SolarisServer]
Logical device ID=60060160F3F31200D072ABAEF789DA11 [EMC-CX-rg16-lun91-30gb]
Recreate the device paths for the new devices and save the PowerPath configuration using the commands below (for more information on PowerPath please refer to the post "EMC PowerPath commands for system administrators").
gurkulSolaris:root# powercf -q
Could not find config file entry for:
—————————————
volume ID = 0SYM001010101faf
—————————————
adding emcpower14
Could not find config file entry for:
—————————————
volume ID = 0SYM001010101fb0
—————————————
adding emcpower13
Could not find config file entry for:
—————————————
volume ID = 0SYM001010101fb1
—————————————
adding emcpower12
Could not find config file entry for:
—————————————
volume ID = 0SYM001010101fb2
—————————————
adding emcpower11
Could not find config file entry for:
—————————————
volume ID = 0SYM001010101fb3
—————————————
adding emcpower10
Could not find config file entry for:
—————————————
volume ID = 0SYM001010101fb4
—————————————
adding emcpower9
Could not find config file entry for:
—————————————
volume ID = 0SYM001010101fb5
—————————————
adding emcpower8
Could not find config file entry for:
—————————————
volume ID = 0SYM001010101fb6
—————————————
adding emcpower7
gurkulSolaris:root# powermt config
gurkulSolaris:root# powermt save
gurkulSolaris:root# powermt display dev=all|egrep 'Pseudo|CLARiiON|Logical'
Pseudo name=emcpower14a
Logical device ID=0FAF
Pseudo name=emcpower13a
Logical device ID=0FB0
Pseudo name=emcpower12a
Logical device ID=0FB1
Pseudo name=emcpower11a
Logical device ID=0FB2
Pseudo name=emcpower10a
Logical device ID=0FB3
Pseudo name=emcpower9a
Logical device ID=0FB4
Pseudo name=emcpower8a
Logical device ID=0FB5
Pseudo name=emcpower7a
Logical device ID=0FB6
Pseudo name=emcpower1a
CLARiiON ID=CLA0123456 [SolarisServer]
Logical device ID=60060160F3F312006CA64996F789DA11 [EMC-CX-rg17-lun90-75gb]
Pseudo name=emcpower4a
CLARiiON ID=CLA0123456 [SolarisServer]
Logical device ID=60060160F3F312008252B11EF589DA11 [EMC-CX-rg20-lun87-75gb]
Pseudo name=emcpower2a
CLARiiON ID=CLA0123456 [SolarisServer]
Logical device ID=60060160F3F312009081674EF589DA11 [EMC-CX-rg18-lun89-75gb]
Pseudo name=emcpower6a
CLARiiON ID=CLA0123456 [SolarisServer]
Logical device ID=60060160F3F3120096A061DFF489DA11 [EMC-CX-lun85-75gb]
Pseudo name=emcpower5a
CLARiiON ID=CLA0123456 [SolarisServer]
Logical device ID=60060160F3F312009A702312F589DA11 [EMC-CX-rg21-lun86-75gb]
Pseudo name=emcpower3a
CLARiiON ID=CLA0123456 [SolarisServer]
Logical device ID=60060160F3F31200A2CD9F3FF589DA11 [EMC-CX-rg19-lun88-75gb]
Pseudo name=emcpower0a
CLARiiON ID=CLA0123456 [SolarisServer]
Logical device ID=60060160F3F31200D072ABAEF789DA11 [EMC-CX-rg16-lun91-30gb]
Now verify that the format command shows the emcpower pseudo devices for the VMAX storage:
gurkulSolaris:root# echo|format
Searching for disks…done
c3t500009720829B920d0: configured with capacity of 20.00GB
c3t500009720829B920d1: configured with capacity of 20.00GB
c3t500009720829B920d2: configured with capacity of 36.00GB
c3t500009720829B920d3: configured with capacity of 36.00GB
c3t500009720829B920d4: configured with capacity of 117.00GB
c3t500009720829B920d5: configured with capacity of 65.00GB
c3t500009720829B920d6: configured with capacity of 25.00GB
c3t500009720829B920d7: configured with capacity of 5.00GB
emcpower7a: configured with capacity of 5.00GB
emcpower8a: configured with capacity of 25.00GB
emcpower9a: configured with capacity of 65.00GB
emcpower10a: configured with capacity of 117.00GB
emcpower11a: configured with capacity of 36.00GB
emcpower12a: configured with capacity of 36.00GB
emcpower13a: configured with capacity of 20.00GB
emcpower14a: configured with capacity of 20.00GB
AVAILABLE DISK SELECTIONS:
0. c1t0d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
/pci@1f,700000/scsi@2/sd@0,0
1. c1t1d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
/pci@1f,700000/scsi@2/sd@1,0
2. c1t2d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
/pci@1f,700000/scsi@2/sd@2,0
3. c1t3d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
/pci@1f,700000/scsi@2/sd@3,0
4. c3t500009720829B920d0 <EMC-SYMMETRIX-5874 cyl 21844 alt 2 hd 15 sec 128>
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w500009720829b920,0
5. c3t500009720829B920d1 <EMC-SYMMETRIX-5874 cyl 21844 alt 2 hd 15 sec 128>
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w500009720829b920,1
6. c3t500009720829B920d2 <EMC-SYMMETRIX-5874 cyl 39320 alt 2 hd 15 sec 128>
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w500009720829b920,2
7. c3t500009720829B920d3 <EMC-SYMMETRIX-5874 cyl 39320 alt 2 hd 15 sec 128>
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w500009720829b920,3
8. c3t500009720829B920d4 <EMC-SYMMETRIX-5874 cyl 63896 alt 2 hd 30 sec 128>
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w500009720829b920,4
9. c3t500009720829B920d5 <EMC-SYMMETRIX-5874 cyl 35497 alt 2 hd 30 sec 128>
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w500009720829b920,5
10. c3t500009720829B920d6 <EMC-SYMMETRIX-5874 cyl 27305 alt 2 hd 15 sec 128>
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w500009720829b920,6
11. c3t500009720829B920d7 <EMC-SYMMETRIX-5874 cyl 5460 alt 2 hd 15 sec 128>
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w500009720829b920,7
12. c3t5006016930601837d0 <DGC-RAID5-0207 cyl 49150 alt 2 hd 128 sec 10> power0a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016930601837,0
13. c3t5006016130601837d0 <DGC-RAID5-0207 cyl 49150 alt 2 hd 128 sec 10> power0a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016130601837,0
14. c3t5006016930601837d1 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power1a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016930601837,1
15. c3t5006016130601837d1 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power1a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016130601837,1
16. c3t5006016930601837d2 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power2a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016930601837,2
17. c3t5006016130601837d2 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power2a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016130601837,2
18. c3t5006016130601837d3 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power3a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016130601837,3
19. c3t5006016930601837d3 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power3a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016930601837,3
20. c3t5006016130601837d4 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power4a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016130601837,4
21. c3t5006016930601837d4 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power4a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016930601837,4
22. c3t5006016130601837d5 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power5a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016130601837,5
23. c3t5006016930601837d5 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power5a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016930601837,5
24. c3t5006016930601837d6 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power6a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016930601837,6
25. c3t5006016130601837d6 <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power6a
/pci@1c,600000/lpfc@1/fp@0,0/ssd@w5006016130601837,6
26. emcpower0h <DGC-RAID5-0207 cyl 49150 alt 2 hd 128 sec 10> power0a
/pseudo/emcp@0
27. emcpower1h <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power1a
/pseudo/emcp@1
28. emcpower2a <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power2a
/pseudo/emcp@2
29. emcpower3a <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power3a
/pseudo/emcp@3
30. emcpower4h <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power4a
/pseudo/emcp@4
31. emcpower5a <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power5a
/pseudo/emcp@5
32. emcpower6h <DGC-RAID5-0207 cyl 61438 alt 2 hd 256 sec 10> power6a
/pseudo/emcp@6
33. emcpower7a <EMC-SYMMETRIX-5874 cyl 5460 alt 2 hd 15 sec 128>
/pseudo/emcp@7
34. emcpower8a <EMC-SYMMETRIX-5874 cyl 27305 alt 2 hd 15 sec 128>
/pseudo/emcp@8
35. emcpower9a <EMC-SYMMETRIX-5874 cyl 35497 alt 2 hd 30 sec 128>
/pseudo/emcp@9
36. emcpower10a <EMC-SYMMETRIX-5874 cyl 63896 alt 2 hd 30 sec 128>
/pseudo/emcp@10
37. emcpower11a <EMC-SYMMETRIX-5874 cyl 39320 alt 2 hd 15 sec 128>
/pseudo/emcp@11
38. emcpower12a <EMC-SYMMETRIX-5874 cyl 39320 alt 2 hd 15 sec 128>
/pseudo/emcp@12
39. emcpower13a <EMC-SYMMETRIX-5874 cyl 21844 alt 2 hd 15 sec 128>
/pseudo/emcp@13
40. emcpower14a <EMC-SYMMETRIX-5874 cyl 21844 alt 2 hd 15 sec 128>
/pseudo/emcp@14
Specify disk (enter its number): Specify disk (enter its number):
Label all the new devices using the format command. Also re-partition each disk to create a single slice if the new disks come with their space spread across multiple slices. A shortcut for disks that share the same geometry is shown after the format session below.
gurkulSolaris:root# format emcpower7a
emcpower7a: configured with capacity of 5.00GB
selecting emcpower7a
[disk formatted]
FORMAT MENU:
disk - select a disk
type - select (define) a disk type
partition - select (define) a partition table
current - describe the current disk
format - format and analyze the disk
repair - repair a defective sector
label - write label to the disk
analyze - surface analysis
defect - defect list management
backup - search for backup labels
verify - read and display labels
save - save new disk/partition definitions
inquiry - show vendor, product and revision
volname - set 8-character volume name
!<cmd> - execute <cmd>, then return
quit
format> label
Ready to label disk, continue? y
format> q
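Repartitioning every LUN by hand in format can be tedious. Where two of the new LUNs share the same geometry (for example emcpower13 and emcpower14, both 20 GB in this lab), one possible shortcut, once the first disk carries the slice layout you want, is to copy its VTOC to the second disk. This is only a sketch; always verify the result with prtvtoc before creating file systems:
gurkulSolaris:root# prtvtoc /dev/rdsk/emcpower13c | fmthard -s - /dev/rdsk/emcpower14c
gurkulSolaris:root# prtvtoc /dev/rdsk/emcpower14c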
Create file systems on all the new disks, as shown below:
gurkulSolaris:root# newfs /dev/dsk/emcpower8a
newfs: /dev/rdsk/emcpower8a last mounted as /exportdata/tempdb-new
newfs: construct a new file system /dev/rdsk/emcpower8a: (y/n)? y
Warning: 1152 sector(s) in last cylinder unallocated
/dev/rdsk/emcpower8a: 52425600 sectors in 8533 cylinders of 48 tracks, 128 sectors
25598.4MB in 534 cyl groups (16 c/g, 48.00MB/g, 5824 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
32, 98464, 196896, 295328, 393760, 492192, 590624, 689056, 787488, 885920,
Initializing cylinder groups:
……….
super-block backups for last 10 cylinder groups at:
51512864, 51611296, 51709728, 51808160, 51906592, 52005024, 52103456,
52201888, 52300320, 52398752
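The remaining new file systems are created the same way. A minimal loop sketch for the other three devices used in the next step (newfs is fed a "y" in case it asks for confirmation; the device names are the ones mounted below):
gurkulSolaris:root# for d in emcpower14a emcpower11a emcpower7a; do yes | newfs /dev/rdsk/$d; done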
Mount the new disks on the mount points that we created earlier, and check that the sizes match:
gurkulSolaris:root# mount /dev/dsk/emcpower8a /exportdata/tempdb-new
gurkulSolaris:root# mount /dev/dsk/emcpower14a /exportdata/tempdb1-new
gurkulSolaris:root# mount /dev/dsk/emcpower11a /localfs/SYBDB_dumps_1-new
gurkulSolaris:root# mount /dev/dsk/emcpower7a /localfs/app/SYBDB-new
gurkulSolaris:root# df -h | egrep -i "localfs|export"
Filesystem size used avail capacity Mounted on
/dev/md/dsk/d27 24G 22G 1.8G 93% /exportdata/tempdb1
/dev/md/dsk/d29 19G 16G 3.2G 84% /exportdata/tempdb
/dev/md/dsk/d28 33G 1.1G 32G 4% /localfs/SYBDB_dumps_1
/dev/md/dsk/d25 2.9G 1.7G 1.1G 62% /localfs/app/SYBDB
/dev/dsk/emcpower8a 25G 25M 24G 1% /exportdata/tempdb-new
/dev/dsk/emcpower14a 20G 20M 19G 1% /exportdata/tempdb1-new
/dev/dsk/emcpower7a 4.9G 5.0M 4.9G 1% /localfs/app/SYBDB-new
/dev/dsk/emcpower11a 35G 36M 35G 1% /localfs/SYBDB_dumps_1-new
Create new metadevices from the new VMAX disks:
gurkulSolaris:root# metainit d51 1 1 /dev/dsk/emcpower8a
d51: Concat/Stripe is setup
gurkulSolaris:root# metainit d61 1 1 /dev/dsk/emcpower14a
d61: Concat/Stripe is setup
gurkulSolaris:root# metainit d71 1 1 /dev/dsk/emcpower7a
d71: Concat/Stripe is setup
gurkulSolaris:root# metainit d81 1 1 /dev/dsk/emcpower11a
d81: Concat/Stripe is setup
Create one-way mirrors from the above metadevices:
gurkulSolaris:root# metainit d50 -m d51
d50: Mirror is setup
gurkulSolaris:root# metainit d60 -m d61
d60: Mirror is setup
gurkulSolaris:root# metainit d70 -m d71
d70: Mirror is setup
gurkulSolaris:root# metainit d80 -m d81
d80: Mirror is setup
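Before copying any data it is worth confirming that the new metadevices were built on the intended VMAX devices. The output of this check should resemble the d25/d27 entries shown earlier (the lines below are only the expected form, not captured output):
gurkulSolaris:root# metastat -p d50 d60 d70 d80
d50 -m d51 1
d51 1 1 /dev/dsk/emcpower8a
d60 -m d61 1
d61 1 1 /dev/dsk/emcpower14a
d70 -m d71 1
d71 1 1 /dev/dsk/emcpower7a
d80 -m d81 1
d81 1 1 /dev/dsk/emcpower11a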
Remount the new file systems from the metadevices instead of the emcpower devices:
gurkulSolaris:root# umount /exportdata/tempdb-new
gurkulSolaris:root# umount /exportdata/tempdb1-new
gurkulSolaris:root# umount /localfs/app/SYBDB-new
gurkulSolaris:root# umount /localfs/SYBDB_dumps_1-new
gurkulSolaris:root# mount /dev/md/dsk/d50 /exportdata/tempdb-new
gurkulSolaris:root# mount /dev/md/dsk/d60 /exportdata/tempdb1-new
gurkulSolaris:root# mount /dev/md/dsk/d70 /localfs/app/SYBDB-new
gurkulSolaris:root# mount /dev/md/dsk/d80 /localfs/SYBDB_dumps_1-new
Verify the mounts with the df command:
gurkulSolaris:root# df -hl|grep md
/dev/md/dsk/d27 24G 22G 1.8G 93% /exportdata/tempdb1
/dev/md/dsk/d29 19G 16G 3.2G 84% /exportdata/tempdb
/dev/md/dsk/d28 33G 1.1G 32G 4% /localfs/SYBDB_dumps_1
/dev/md/dsk/d25 2.9G 1.7G 1.1G 62% /localfs/app/SYBDB
/dev/md/dsk/d50 25G 25M 24G 1% /exportdata/tempdb-new
/dev/md/dsk/d60 20G 20M 19G 1% /exportdata/tempdb1-new
/dev/md/dsk/d70 4.9G 5.0M 4.9G 1% /localfs/app/SYBDB-new
/dev/md/dsk/d80 35G 36M 35G 1% /localfs/SYBDB_dumps_1-new
Copy the data from the old file systems to the new file systems:
gurkulSolaris:root# cd /exportdata/tempdb1
gurkulSolaris:root# tar cf - . | (cd /exportdata/tempdb1-new; tar xfp -)
gurkulSolaris:root# cd /exportdata/tempdb
gurkulSolaris:root# tar cf - . | (cd /exportdata/tempdb-new; tar xfp -)
gurkulSolaris:root# cd /localfs/app/SYBDB
gurkulSolaris:root# tar cf - . | (cd /localfs/app/SYBDB-new; tar xfp -)
gurkulSolaris:root# cd /localfs/SYBDB_dumps_1
gurkulSolaris:root# tar cf - . | (cd /localfs/SYBDB_dumps_1-new; tar xfp -)
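tar works well for most data, but for copying a whole UFS file system some admins prefer ufsdump piped into ufsrestore, which preserves all file attributes and handles sparse files cleanly. A hedged sketch for one of the file systems, run from the target mount point (the others follow the same pattern):
gurkulSolaris:root# cd /exportdata/tempdb-new
gurkulSolaris:root# ufsdump 0f - /exportdata/tempdb | ufsrestore rf -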
Once the data copy is completed, we can unmount both sets of file systems and mount the new file systems on the original mount points, as given below:
gurkulSolaris:root# umount /exportdata/tempdb-new 
gurkulSolaris:root# umount /exportdata/tempdb1-new 
gurkulSolaris:root# umount /localfs/app/SYBDB-new 
gurkulSolaris:root# umount /localfs/SYBDB_dumps_1-new 
gurkulSolaris:root# umount /exportdata/tempdb
gurkulSolaris:root# umount /exportdata/tempdb1
gurkulSolaris:root# umount /localfs/app/SYBDB
gurkulSolaris:root# umount /localfs/SYBDB_dumps_1
gurkulSolaris:root# mount /dev/md/dsk/d50 /exportdata/tempdb
gurkulSolaris:root# mount /dev/md/dsk/d60 /exportdata/tempdb1
gurkulSolaris:root# mount /dev/md/dsk/d70 /localfs/app/SYBDB
gurkulSolaris:root# mount /dev/md/dsk/d80 /localfs/SYBDB_dumps_1

And modify /etc/vfstab so that the new file systems are mounted at boot, as below:
gurkulindia-Sybase:root# cat /etc/vfstab
#device device mount FS fsck mount mount
#to mount to fsck point type pass at boot options
#

:::: file truncated for other entries :::
/dev/md/dsk/d70 /dev/md/rdsk/d70 /localfs/app/SYBDB ufs 2 yes logging
/dev/md/dsk/d60 /dev/md/rdsk/d60 /exportdata/tempdb1 ufs 2 yes -
/dev/md/dsk/d80 /dev/md/rdsk/d80 /localfs/SYBDB_dumps_1 ufs 2 yes -
/dev/md/dsk/d50 /dev/md/rdsk/d50 /exportdata/tempdb ufs 2 yes logging


Usually the database team will request some raw devices from the new storage for Sybase use. We should create them as shown below.
Actual DBA requirement for raw devices:
Device_Name Size Disk Name
SYB_raw1 (for DBA use) 30GB emcpower9a
SYB_raw2 (for DBA use) 20GB emcpower10a
Format the disks emcpower9a and emcpower10a and create a single slice of the required size on each. Then change the ownership of the Sybase raw devices to sybase:sybase, as shown below.
# chown sybase:sybase /devices/pseudo/emcp@9:a,raw
# chown sybase:sybase /devices/pseudo/emcp@10:a,raw
# ls -l /devices/pseudo/emc* | egrep "@9|@10" | grep sybase
crw------- 1 sybase sybase 314, 81 Sep 10 14:40 /devices/pseudo/emcp@10:b,raw
crw------- 1 sybase sybase 314, 83 Sep 10 14:40 /devices/pseudo/emcp@10:d,raw
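Since the /dev/rdsk entries are just symbolic links into /devices, it is worth a quick check that the DBA-facing device names also show the new ownership (ls -lL follows the links; adjust the slice letters if the DBAs are given slices other than "a"):
gurkulSolaris:root# ls -lL /dev/rdsk/emcpower9a /dev/rdsk/emcpower10a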

Once we have the raw devices ready, we can inform the DBA team that both the file systems and the raw devices are available from the new storage. The data replication from the old raw devices to the new raw devices has to be done by the Sybase team. Once they are done, we leave the server for a day to watch for any unexpected issues; if any are found we can roll back to the old devices by mounting them back on the original mount points (a rollback sketch is given below). If no errors are found we can clean up the old storage so that the storage team can reclaim the CLARiiON disks.
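If a rollback is needed during that observation day, the change is reversed the same way it was made: unmount the new file systems, mount the old metadevices back on the original mount points, and restore the copy of /etc/vfstab saved in /var/tmp/CX700-VMAX-Migration before the migration. A minimal sketch for one file system (repeat for the other three):
gurkulSolaris:root# umount /exportdata/tempdb
gurkulSolaris:root# mount /dev/md/dsk/d29 /exportdata/tempdb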
I will be posting the cleanup procedure in a separate post, so stay connected. Meanwhile, you are most welcome to drop your comments and feedback on this post.
