
Saturday 30 November 2013

VMworld Buzz: vVolumes, VMware’s game changer for storage

I attended session VSP3205 — Technology Preview: VMware vStorage APIs for VM and Application Granular Data – Satyam Vaghani, Vijay Ramachandran which was rumoured to be all about what’s coming up for storage.
This certainly didn't disappoint and turned out to be one of the coolest new technologies VMware is working on with its storage partners. It's very strange that it wasn't even mentioned in yesterday's keynote, or at least more highly billed.
This is a game changer in storage!
The session started by looking at the issues with the current storage model. With traditional storage there's no app-level visibility: multiple apps (VMs) are lumped together in LUNs and get the same service level. There's no per-VM failover, so you need to use existing replication solutions for applications. And the storage arrays have no visibility into the files within a LUN.
So, the need is for more granular file management. There’s a mismatch in the granularity of data management between storage systems and vSphere. vSphere and storage currently see and manage different things.
A product management wish list was shown:
  1. Ability for VMware to offload VMDK-level operations to storage systems: snapshot, clone, replication, encryption, dedupe, thin provisioning, etc.
  2. Build a framework where any current or future array operation can be leveraged with minimal disruption to vSphere infrastructure.
  3. No disruption to existing VM creation workflows, and highly scalable in both the number of VMDKs and operations.
  4. Scale to 1,000,000s of VM deployments per storage system.
What VMware wants is to be able to natively store VMDKs on a storage array in the spirit of RDMs but without the hassle or scale issues. The VM layer would be able to talk directly to the storage layer, and the storage layer would be able to see down to the individual VMDK level.
So, looking into the future, VMware is creating Capacity Pools, which are a new way to manage capacity assignment. This actually requires a whole new storage system, and VMware has been working with EMC (obviously), NetApp, HP, Dell and IBM.
Capacity Pools are an allocation of physical storage space, and a set of allowed data services on any part of that storage space. They can span storage system chassis, even across datacenters.
A storage admin would create a Capacity Pool with a service policy attached, say 14TB with snapshots and replicas allowed. A VM admin would then be able to carve out VM volumes from the Capacity Pool until it runs out of space; volumes could be primary VMDKs, backup copies, replicas or clones, thin or thick, etc.
To connect to the storage device you would use an IO Demultiplexer, a hugely scalable system whereby you would have a single (load balanced, obviously) connection from compute clusters to storage systems: a single mount point. No more LUNs and their complexity, in fact bypassing the whole SAN/NAS debate, which as of today is old news! Whether you connect to the IO Demultiplexer with FC or 10GbE doesn't matter; the storage will still be presented in the same way.
The current VAAI and VASA systems would be extended to talk to this, providing policy-driven storage and offloading as many storage-related tasks as possible to the storage arrays.
The way this is a game changer is that it drags the storage arrays into the virtualisation (OK, also cloud) universe by truly virtualising storage, rather than the current system, which is a bit of a fudge that fits the old aspects of traditional block storage into a virtual world. I'm a big proponent of NFS, and this combines the usability of NFS at the VMDK level with a whole lot more.
The only questions are: when can we have it, and how difficult will it be for the storage vendors to implement?
A very interesting session with loads more information to come I hope.

NetApp PowerShell Toolkit, DataONTAP 1.7 released with VHD/VMDK file conversion and VHD alignment

NetApp has today updated its PowerShell Toolkit, DataONTAP, to version 1.7.
I've said it before, but a rebranding to PowerONTAP would be much cooler!
The major features include:
VHD/VMDK file conversion. You can use ConvertTo-NaVhd and ConvertTo-NaVmdk to use NetApp FlexClone to convert between VHD files used on Hyper-V & XenServer and VMDK files used on VMware. That could make any hypervisor migrations so much easier (see the sketch after this feature list).
VHD partition detection and alignment using Get-NaVirtualDiskAlignment and Repair-NaVirtualDiskAlignment on MBR fixed VHD files.
Data ONTAP 8.1 Cluster-Mode support with 227 new cmdlets (yes, that is 227 NEW cmdlets), bringing the total number of Cluster-Mode cmdlets to 375.
CIFS rapid file cloning so you can duplicate files within CIFS shares (great for, say, refreshing test datasets from prod) using file-level FlexClone.
Cluster Shared Volumes (CSV) Space reclamation so you can now reclaim space not just in NTFS LUNs but also in CSVs.
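To give a feel for how the conversion and alignment cmdlets might fit together, here's a rough PowerShell sketch. Connect-NaController is the toolkit's standard connection cmdlet, but the filer name, paths and the parameter names on the new cmdlets are my assumptions, so check Get-Help before relying on them:

# Load the toolkit and connect to the 7-mode controller (filer01 is a made-up name)
Import-Module DataONTAP
Connect-NaController filer01 -Credential (Get-Credential)

# Convert a Hyper-V VHD to a VMware VMDK via FlexClone
# (parameter names are assumptions - check Get-Help ConvertTo-NaVmdk)
ConvertTo-NaVmdk -SourcePath \\filer01\vmstore\web01.vhd -DestinationPath \\filer01\vmstore\web01.vmdk

# Detect and repair alignment on an MBR fixed VHD (parameter names assumed)
Get-NaVirtualDiskAlignment -Path \\filer01\vmstore\sql01.vhd
Repair-NaVirtualDiskAlignment -Path \\filer01\vmstore\sql01.vhd -DestinationPath \\filer01\vmstore\sql01-aligned.vhd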
Here is the list of the new Cluster-Mode cmdlets:
  • Cifs (32 cmdlets)
  • Clone (1 cmdlet)
  • Cluster peer (6 cmdlets)
  • Disk (10 cmdlets)
  • Exports (9 cmdlets)
  • Fc (4 cmdlets)
  • Fcp (20 cmdlets)
  • File (9 cmdlets)
  • Igroup (10 cmdlets)
  • Iscsi (30 cmdlets)
  • Net (27 cmdlets)
  • Nfs (13 cmdlets)
  • Portset (5 cmdlets)
  • Quota (9 cmdlets)
  • Security (13 cmdlets)
  • Sis (10 cmdlets)
  • Snapmirror (16 cmdlets)
  • Storage adapter (3 cmdlets)
and here are the other new cmdlets:
  • ConvertTo-NaVhd
  • ConvertTo-NaVmdk
  • Get-NaVirtualDiskAlignment
  • Repair-NaVirtualDiskAlignment
  • Enable-NaStorageAdapter
  • Get-NaStorageAdapter
  • Get-NaStorageAdapterInfo
  • Get-NaControllerError
  • ConvertTo-SerializedString
  • ConvertFrom-SerializedString
NetApp has updated its "Making the Most of Data ONTAP PowerShell Toolkit" guide with the new 1.7 features.
Have a look at the PPT, Getting Started With Data ONTAP PowerShell Toolkit for installation instructions.
Well done NetApp, great to see you going further and faster with PowerShell!

NetApp PowerShell Toolkit, DataONTAP 3.0 released with new Performance Monitoring and full ONTAP 8.2 API Support

Two major features have been added:
A new cmdlet, Invoke-NcSysstat, which is like Invoke-NaSysstat and allows you to monitor cluster system performance stats for: System, FCP, NFSv3, NFSv4, CIFS, iSCSI, Volume, Ifnet, LUN, and Disk.
Invoke-NcSysstat works in both the cluster and Vserver context for Data ONTAP 8.2 and up. For Data ONTAP versions previous to 8.2, Invoke-NcSysstat must be run in the cluster context. Ifnet and Disk performance stats aren’t available when running against the Vserver context.
Invoke-NcSysstat can also aggregate performance stats for selected objects.
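As a rough illustration of how this might look (the cluster name is made up and the parameter names are my assumptions, so verify with Get-Help Invoke-NcSysstat):

# Connect to the cluster admin interface (cluster1 is a made-up name)
Import-Module DataONTAP
Connect-NcController cluster1 -Credential (Get-Credential)

# Sample LUN performance counters 10 times at 5-second intervals
# (parameter names are assumptions - check Get-Help Invoke-NcSysstat)
Invoke-NcSysstat -ObjectType Lun -Interval 5 -Count 10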
ONTAP 8.2 API support is now complete, with 67 new cmdlets in the clustered ONTAP set and 27 cmdlets with new parameters for Data ONTAP 8.2, for a total of 1738 cmdlets.

The following parameters have been added to the 7-mode cmdlet set:
  • ConvertTo-NaVhd: FollowParent
  • ConvertTo-NaVhdx: FollowParent
  • Read-NaDirectory: Encode
  • Set-NaDiskOwner: All, DiskCount, DiskType
  • Start-NaClone: IgnoreStreams, IgnoreLocks
The following parameters have been added to the clustered ONTAP cmdlet set:
  • Add-NcSnmpCommunity: VserverContext
  • Copy-NcLdapClientSchema: VserverContext
  • Get-NcNfsService: Query
  • Get-NcPerfInstance: VserverContext
  • Invoke-NcClusterHaTakeover: BypassOptimization, SkipLifMigration
  • New-NcClone: IgnoreStreams, IgnoreLocks, QosPolicyGroup
  • New-NcFlexcachePolicy: MetafilesTimeToLive, SymbolicTimeToLive, OtherTimeToLive, DelegationLruTimeout, PreferLocalCache, Vserver
  • New-NcLdapClient: VserverContext
  • New-NcUser: Comment
  • Read-NcFile: Stream
  • Remove-NcFlexcachePolicy: Vserver
  • Remove-NcLdapClient: VserverContext
  • Remove-NcLdapClientSchema: VserverContext
  • Remove-NcSnmpCommunity: VserverContext
  • Set-NcAutoSupportConfig: IsOndemandEnabled, IsOndemandRemoteDiagEnabled, OndemandServerUrl, OndemandPollingInterval, MinimumPrivateDataLength
  • Set-NcDiskOwner: All, DiskCount, DiskType
  • Set-NcFlexcachePolicy: MetafilesTimeToLive, SymbolicTimeToLive, OtherTimeToLive, DelegationLruTimeout, PreferLocalCache, Vserver
  • Set-NcLdapClient: VserverContext
  • Set-NcLdapClientSchema: VserverContext
  • Set-NcRoleConfig: MinPasswordSpecialCharacter, PasswordExpirationDuration, MaxFailedLoginAttempts, LockoutDurationDays, RequireInitialPasswordUpdate
  • Set-NcUser: Comment
  • Write-NcFile: Stream
Enhancements
  • Get-NaVirtualDiskAlignment and Repair-NaVirtualDiskAlignment can now detect and repair alignment issues for both VHD and VHDX format disks.
  • Set-NaVirtualDiskSize now supports growing and shrinking VHDX format virtual disks.
  • The NDMP copy cmdlets (Start-NaNdmpCopy and Invoke-NaNdmpCopy) have been improved. The credentials cache will look up credentials based on IP addresses and the hostnames associated with them. The DstController parameter is no longer required; if DstController is omitted, the SrcController is used as the destination (see the sketch after this list).
  • Copy-NaHostFile will now offload the copy operation to Data ONTAP whenever cloning is not possible.
  • Show-NaHelp now displays help for aliased cmdlets in the clustered ONTAP online help documentation.
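To illustrate the NDMP improvement, here's a minimal sketch of an intra-controller copy. SrcController and DstController are the parameter names from the notes above; the controller name and the path parameter names are my assumptions:

# Connect to the source 7-mode controller (filer01 is a made-up name)
Import-Module DataONTAP
$src = Connect-NaController filer01 -Credential (Get-Credential)

# DstController is omitted, so SrcController doubles as the destination
# (path parameter names are assumptions - check Get-Help Invoke-NaNdmpCopy)
Invoke-NaNdmpCopy -SrcController $src -SrcPath /vol/vol1 -DstPath /vol/vol1_copy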
Fixes
  • Timestamp output sometimes not returned correctly for Get-NcPerfData.
  • Get-NaSnapmirrorConnection address pairs were missing for Data ONTAP 8.2.
  • Controller connection serialized then deserialized with ConvertTo-SerializedString and ConvertFrom-SerializedString would not initialize ONTAPI version properly.
  • Several of the “Related Links” in the online help documentation were broken.
  • SCSI unmap performance increase.
  • NDMP copy cmdlets (Invoke-NaNdmpCopy and Start-NaNdmpCopy) could hang if the source path does not exist.
  • Divide-by-zero error could occur in Get-NcEfficiency.
NetApp has updated its "Making the Most of Data ONTAP PowerShell Toolkit" guide with the new 3.0 features.
Have a look at the PPT, Getting Started With Data ONTAP PowerShell Toolkit for installation instructions.

NetApp releases its Virtual Storage Appliance, Data ONTAP Edge

NetApp has released its long awaited Virtual Storage Appliance which it has called Data ONTAP Edge and also has made available an evaluation version for the masses.
NetApp previously had a VSA which was available only for partners but it was fairly limited in what it could do, had very limited disk space and wasn’t the simplest VSA to set up.
I have worked with the previous VSA and blogged on how to make it a little more usable by increasing the disk space in my post Installing & Maximising the NetApp ONTAP 8.1 Simulator, which has been one of the most popular posts on this blog, which shows you the level of interest.
Unfortunately my hackery wasn’t entirely successful and the expanded VSA wasn’t always stable despite my best efforts.
NetApp has now answered the call and released a VSA which is no longer just for evaluation use and is available for everyone. It is being pitched as "a low-cost remote office storage solution". The maximum usable storage is 5 TB, which is a vast improvement on the 20 GB available with the previous unmodified VSA. This release of ONTAP Edge runs ONTAP 8.1.1. It's a VM that runs on ESXi and requires 2 vCPUs, 4 GB RAM and obviously available disk space.
I don’t know details on pricing or how this will be structured and whether you will be charged based on capacity.


NetApp calls the OS it runs Data ONTAP-v, which is the same standard data management OS as every other ONTAP system, one of NetApp's major strengths. You can learn how to administer a NetApp filer with System Manager using the VSA and then use the same management functions on their biggest systems.
All the usual ONTAP features are available such as SAN, NAS, Snapshots, replication, and deduplication. SnapVault, SnapRestore and FlexClone are also built in. ONTAP Edge can use VMware VAAI, VACI, VADP and can also integrate with VMware SRM.
As the VSA is a VM it does lack some features compared to physical arrays such as Fibre Channel and FCoE LUNs, Data Compression and for now Cluster-Mode.
One of the things to also bear in mind is that you will most likely be installing this VM on the local disk of your ESXi hosts, which means the VM isn't highly available if your ESXi host dies. There's possibly no reason why you couldn't store this VSA on another shared storage system which is highly available and head down the path of storage Inception, but this sort of defeats the object. That said, you could have the VSA running in your branch office on, for example, the VMware VSA (making it highly available) and still be able to use ONTAP functionality like SnapMirror to back up VMs or CIFS file data from your branch office to your head office, giving you a consistent backup system.
The released VSA is a production ready system but there is a 90 day evaluation VSA you can download to give it a try. You do need a NetApp NOW account to access the Evaluation which strangely says "Please enter your e-mail account at your company’s domain. Do not use a free e-mail account, such as yahoo.com, hotmail.com, gmail.com, etc. Accounts created with free e-mail addresses will be inactivated." although I was able to register a Guest account using a gmail.com address. I’m hoping this is an oversight as making your VSA available to the masses yet not if you have a free email address is a bit daft in my opinion.
The Evaluation can be downloaded from here. There is no technical support available for the evaluation, but you can try the NetApp Community pages https://communities.netapp.com/community/products_and_solutions/data-ontap-edge. Once you register you will be emailed the license keys.
There are some virtual hardware requirements listed to use Data ONTAP Edge:
  • Two dedicated physical CPU cores
  • 4GB dedicated memory
  • Minimum 57.5GB disk space for the Data ONTAP Edge system
and also some physical server requirements for the ESXi host:
  • Quad core, or two dual-core, 64-bit Intel® x86 processors
  • 2.27GHz or faster processor speed
  • Greater than 4GB memory (recommend 8GB or more)
  • Four or more physical disks on the server
  • Single Gigabit Ethernet network
  • Hardware RAID – must have a battery-backed write cache
and the power management BIOS settings must also be disabled on the ESXi host or the VSA will install but not start.
  • CPU power management (pstates) must be disabled. Some CPUs may use different terminology for this setting, such as "performance states."
  • Idle states (cstates) must be disabled.
These settings can be seen in the vSphere client under the Configuration tab and Power Management. Active Policy must be listed as Not supported.
Releasing the VSA and making an evaluation version available to everyone (those without a free email address!) is a great step forward by NetApp and another step towards the true Software Defined Datacenter.

Installing & Maximising the NetApp ONTAP 8.1 Simulator



UPDATE 7 September 2012:  NetApp has released its Virtual Storage Appliance, Data ONTAP Edge which is a licensed functional VSA.
One of the strengths of NetApp's storage offering is that its controller operating system, Data ONTAP, is the same across every NetApp Filer storage product they sell. This is hugely significant: if you know how to administer and configure the smallest FAS2000 Series Filers, you are pretty much up to speed to administer and configure Filers up to their biggest and beefiest FAS6200 Series, so you don't have to learn another management interface when you upgrade.
NetApp has also produced a Data ONTAP Simulator which in their words is a “A tool that gives you the experience of administering and using a NetApp storage system with all the features of Data ONTAP at your disposal.”
There have been various versions of the simulator over the years which initially could be installed on a simple RedHat or SuSE Linux box using emulated disks and have the same look and feel as a real NetApp Filer (without the rack space requirement or electricity bill!). Things progressed over the years and you could use more Linux distros. Nowadays there is a pre-built VMware virtual machine so you don’t have to install RedHat or SuSE beforehand.
Unfortunately the simulator is only available to existing NetApp Customers and Advantage Partners and requires a login to the NetApp Support Site, http://now.netapp.com
I was about to start a serious rant about this limited availability when surprisingly whilst writing this post Vaughn Stewart sent out a tweet that NetApp are in fact looking at the possibility of opening up access to the simulator for version 8.1 which is fantastic news.
I fiercely believe opening up access to learning and training tools allows so many more people to learn about your technology in their own time and if they know about your technology they are more likely to buy!
So, although the current status is the simulator is not available to all, hopefully this will change soon and this has saved me and you from a far longer rant!
There are however some limitations to the simulator:
  • There isn’t any official support by NetApp and it works more on a community support model using NetApp’s community support forums.
  • There is still quite a bit of configuration to do to get the simulator up and running, it’s not a plug and use VSA.
  • The simulator is limited in the number of disks it can support and the size of the disks. Previous versions of the simulator were provided with 2 x simulated shelves of 14 x 1 GB disks with only 20 GB usable space, which with some hacking could be extended by 2 more shelves. The current version 8.1 is provided with the same disk configuration, but I'll show you how this can be changed to use up to 4 x shelves of 4 GB disks with 180 GB available space, which is a vast improvement and far more usable. You will need VMware Workstation to do this, as ESXi doesn't support changing the size of IDE disks.
  • The simulator isn't as robust as other true VSAs: it replicates a real Filer, where you would have redundant PSUs, so it doesn't play nicely when you just power it off! This is unfortunate, as previous versions did seem to be a little more robust, but the 8.1 version needs to be handled with care.
  • You can’t by default connect to multiple simulators with NetApp OnCommand System Manager as the built in serial numbers are the same and System Manager doesn’t play nicely but I’ll show you how this can be changed.
  • You can’t clone a fully configured simulator as you can’t change the serial number after the initial configuration.
  • The simulator isn’t suited to running anything that is performance heavy as it is aimed at testing the software features rather than any IOPS benchmarking.
I'm going to go through the steps to amend the simulator disk size to the maximum available disks and configure the basic simulator settings so you can use it as a VSA, using the following steps:
  1. Download the Simulator
  2. Add to VMware workstation or ESX(i)
  3. Increase the simulator VM disk size
  4. Amend the default serial number
  5. Run the initial configuration
  6. Add 2 new shelves of 14 x 4 GB disks
  7. Move the current config to the new shelves
  8. Delete the 2 x shipped 1 GB disk shelves
  9. Add 2 x 4 GB Disk shelves to replace the shipped 1 GB disk shelves
Hopefully the steps I’m going to go through in this post will one day be put into the simulator as delivered by NetApp to save you the hassle!
Download the Simulator
The simulator can be downloaded from http://now.netapp.com/NOW/download/tools/simulator/ontap/8.0/ (you will need a NetApp login which is only available if you are a Customer or Advantage Partner)
Check that you can meet the hardware and software requirements listed on the download page.
I am going to set up and configure the 7-mode version rather than the cluster-mode one, on VMware Workstation. You can upload the VM to ESX(i), but only after the disk size has been amended, as you can't amend an IDE disk in ESX(i).
Download the 7-mode version which is a .ZIP file either for VMware Workstation, VMware Player and VMware Fusion or the version for VMware ESX(i).
Add to VMware workstation or ESX(i)
Extract the .ZIP file. Copy the extracted files to a folder on your workstation, or upload them to a datastore if you are going to be using it on ESX(i), but remember the issue with increasing an IDE disk on ESX(i).
Double click on the .vmx file to add to VMware Workstation or browse the ESX(i) datastore and add to inventory.
You should now have a new VM called vsim-7m.
Rename the VM to what you will ultimately call your simulator to avoid confusion.
Edit the VM Settings.
As this is a testing simulator for a virtual hosting environment, I am going to have only 1 network card as I don't need to test any network failover configuration, but if you do need more network functionality, leave the others in and configure them as you would on a physical filer.
Set the Network Adapter to a Network connection or port group on your network.
Increase the simulator VM disk size
The shipped VM comes with a 48 GB VM disk which needs to be expanded to accommodate the new data in the emulated simulator disks when you write data to them.
You need to increase the VM disk size and then extend the partitions and also slices within the simulator to take advantage of the increased VM disk space.
The current 48 GB VM disk contains 28 x emulated 1 GB disks plus another 20 GB, so if we are going to have 56 x emulated 4 GB disks (224 GB) + 20 GB, then we will need a 244 GB VM disk. I'm not sure if this is actually the correct figure but it works for me. You can make this VM disk thin-provisioned so you don't have to have all 244 GB available until you write to it.
Edit the VM Settings and increase the HardDisk from 48 GB to 244 GB.
If you want to run the simulator on ESX(i) you can now upload it from VMware workstation with the amended IDE disk size.
The simulator VM runs on FreeBSD and uses the UFS file system. Unfortunately the usual Linux partition manager GParted can't see UFS partitions, so it isn't much help, and you need to use a FreeBSD boot disk.
I downloaded the FreeBSD 8.2 i386 LiveFS .ISO from freebsd.org.
ftp://ftp.freebsd.org/pub/FreeBSD/releases/i386/ISO-IMAGES/8.2/FreeBSD-8.2-RELEASE-i386-livefs.iso
Edit your simulator VM settings and add a new CD/DVD Drive as the simulator doesn’t come with one.
Attach the FreeBSD LiveFS .ISO
View the console so you can see the simulator boot process.
Boot the VM and change the BIOS boot order to boot first from CD-ROM.
Press Enter to Boot FreeBSD [default].
Select your country and press Enter.
Select your keyboard layout if prompted and press Enter.
Press F and then Enter to launch the Fixit repair mode option.
Select CDROM/DVD and press Enter.
You will now enter “fixit” mode.
Find the VM disk files:
dmesg | grep VMware
You will see there are two VMware Virtual IDE Hard Drive disks, ad0 which is 249856 MB and ad1 which is 1024 MB.
So, ad0 is the disk which is 244 GB.
To view the partitions on the disk:
fdisk ad0
You will see that partition 4 which is 44007 MB must be the partition which holds the emulated simulator disks and this is the partition that needs to be extended.
To extend the partition using the interactive slice editing process:
fdisk -u ad0
Press Enter to not change the BIOS info.
Press Enter to bypass editing the partitions until you get to partition 4 then type Y.
Press Enter to keep the default “sysid”.
Press Enter to keep the default “start”.
Now you need to enter the new sector size. I have worked out what the number needs to be in a completely non-scientific way using a whole host of different boot disks and a million different options! If someone else knows how to work this out in a far better way I would love to know how!
Enter 507513006 as the “size” and press Enter.
Press Enter to skip beg/end address.
You will now see a summary of the partition changes that fdisk will apply.
Press Y to be happy with the entry.
Press Enter to not change the active partition.
You will see a summary of the new partition table showing the new size of 247809 MB.
Press Y to write the new partition table.
You may get a message fdisk: Class not found but this can be ignored.
You then need to extend the slice in the partition.
To view the slices use:
gpart list
You can then find the index of the partition in the slice. You can see under the Geom name: ad0s4 that ad0s4b is the slice which is 39 GB and that corresponds to index 2.
gpart by default will make the partition use all the available space; -i is the index number.
Extend the slice with the following command:
gpart resize -i 2 ad0s4
The disk, partition and slice have now been extended, and you can disconnect the mounted .ISO from the VM and shut down the simulator.
Amend the default serial number
Boot the simulator again.
When you see Hit [Enter] to boot immediately, or any other key for command prompt, hit Ctrl-C.
You will then enter the SIMLOADER prompt.
I’ve used the same steps and number scheme from this NetApp Communities post.
Enter the following commands to set your unique serial number. (I’ve used 1111111101 – That’s 8 x 1s + 01)
set bootarg.nvram.sysid=1111111101
set SYS_SERIAL_NUM=1111111101
boot
Run the initial configuration
The simulator will continue to boot.
When you see Press Ctrl-C for Boot Menu, hit Ctrl-C.
Enter option (4) Clean configuration and initialize all disks.
When the simulator says Zero disks, reset config and install a new file system, type Y.
Type Y to confirm erasing all the data on the disks.
The simulator will set up the disks and reboot.
Once it has rebooted the configuration will continue.
When prompted enter the new hostname. Mine is lonfiler01.
Press Enter to accept the default [n] for IPv6.
Press Enter to not configure interface groups.
Enter the IP address for your network.
Enter the Subnet Mask.
Press Enter to accept the default media type.
Press Enter to accept the default flow control.
Press Enter to accept the default of no jumbo frames.
Press Enter to not continue setup through the web interface.
Enter the default gateway IP Address.
Press Enter to bypass configuring an administrative host.
Enter a timezone.
Enter a location name.
Press Enter to accept the default HTTP root file directory.
Press Enter to not run the DNS resolver.
Press Enter to not run the NIS client.
Press Enter to continue, auto support will be turned off later.
Press Enter to not run the Shelf Alternate Control Path Management interface for SAS shelves.
Enter and confirm the root password.
The initial configuration will continue. Once complete, press Enter to bring up a login prompt and enter the root password you had entered.
It's now best to continue configuration through an SSH connection with PuTTY so you can copy and paste the configuration commands.
Some of the ONTAP functions are already licensed in the simulator but some aren't, so you may as well add all the options that aren't there from this list.
license add BSLRLTG #iscsi
license add BQOEAZL #nfs
license add ANLEAZL #flex_clone
license add DFVXFJJ #snapmirror
license add DNDCBQH #snaprestore
license add JQAACHD #snapvalidator
license add BCJEAZL #snapmanagerexchange
license add RKBAFSN #smdomino
license add ZOJPPVM #snaplock
license add PTZZESN #snaplock_enterprise
license add PDXMQMI #sv_ontap_sec
license add RIQTKCL #syncmirror_local
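As an aside, if you manage the simulator with the Data ONTAP PowerShell Toolkit covered elsewhere on this blog, you could push the same licence codes in a loop. A sketch, assuming Add-NaLicense accepts a code positionally (verify with Get-Help Add-NaLicense):

# Connect to the simulator (lonfiler01 is the hostname used later in this post)
Import-Module DataONTAP
Connect-NaController lonfiler01 -Credential (Get-Credential)

# Add every licence code from the list above
# (assumes Add-NaLicense takes the code positionally - check Get-Help)
$codes = "BSLRLTG","BQOEAZL","ANLEAZL","DFVXFJJ","DNDCBQH","JQAACHD","BCJEAZL","RKBAFSN","ZOJPPVM","PTZZESN","PDXMQMI","RIQTKCL"
$codes | ForEach-Object { Add-NaLicense $_ }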
Add 2 new shelves of 14 x 4 GB disks
Now you are ready to add the new shelves.
Enter advanced mode and unlock the diagnostic user. Enter a password and confirm.
priv set advanced
useradmin diaguser unlock
useradmin diaguser password
Launch the systemshell and login as diag and enter the password you have just set.
systemshell
The next step will use the makedisks script to add 14 x 4 GB disks as shelves 2 and 3, using disk type 31, which is:
31  NETAPP__ VD-4000MB-FZ-520  4,194,304,000 B  4,289,192,960 B  Yes  520
setenv PATH "${PATH}:/sim/bin"
cd /sim/dev
sudo makedisks.main -n 14 -t 31 -a 2
sudo makedisks.main -n 14 -t 31 -a 3
Then exit the systemshell and lock the diaguser.
exit
useradmin diaguser lock
Set admin mode.
priv set admin
Reboot the filer.
reboot
Once the filer has come back up, connect and login again.
Assign all the disks that are unowned to the filer.
disk assign all
Move the current config to the new shelves
We will move the current root volume vol0 to a new aggregate and new root volume on the new shelves.
First, create a new aggregate with the new shelves. Turn off checking for spare disks (which will still error in the logs but at least allows you to create a bigger aggregate) and set the raid group size to 28.
options disk.maint_center.spares_check off
aggr create aggr1 -r 28 28@4G
Wait until the aggregate has been created and then create a new flexible volume vol1 on aggregate aggr1 of size 850 MB.
vol create vol1 aggr1 850m
Then we need to NDMP copy the contents of the old root volume into the new root volume.
ndmpd on
ndmpcopy /vol/vol0 /vol/vol1
Wait until the ndmp copy has finished.
Set the new volume as the root volume.
vol options vol1 root
Reboot the filer.
reboot
Once the filer has come back up, connect with PuTTY and login again.
Something seems to go funny with the SSL certificates, so you also cannot connect with OnCommand System Manager. The easiest way is to just recreate them.
options ssl.enable off
secureadmin setup ssl
Enter Y to proceed and then you can just accept all the defaults or if you are more security minded you can enter more accurate values for your certificate requirements.
Then turn back on SSL.
options ssl.enable on
Delete the 2 x shipped 1 GB disk shelves
Now we can remove the old vol0 and aggr0 as they are no longer holding the root volume.
vol offline vol0
vol destroy vol0
Y to confirm deletion.
aggr offline aggr0
aggr destroy aggr0
Y to confirm deletion.
Now we can remove the old disks and shelves.
options disk.auto_assign off
priv set advanced
disk remove_ownership v5.*
Y to Confirm.
disk remove_ownership v4.*
Y to Confirm.
We may as well rename the root volume and aggregate back to what they were originally just to keep things clearer.
Rename the existing root volume.
vol rename vol1 vol0
Rename the existing aggregate.
aggr rename aggr1 aggr0
Add 2 x 4 GB Disk shelves
Now we can go ahead and add 2 more shelves of 14 x 4 GB disks to replace the 2 x 1 GB disk shelves that have just been removed.
Enter advanced mode and unlock the diagnostic user. Enter a password and confirm.
Launch the systemshell and login as diag and enter the password you previously set.
useradmin diaguser unlock
systemshell
Now we need to remove the old simulated disks from shelves 0 and 1.
setenv PATH "${PATH}:/sim/bin"
cd /sim/dev/,disks
sudo rm v0*
sudo rm v1*
sudo rm ,reservations
Again use the makedisks script to add 14 x 4 GB disks as shelves 0 and 1, using disk type 31, which is:
31  NETAPP__ VD-4000MB-FZ-520  4,194,304,000 B  4,289,192,960 B  Yes  520
cd /sim/dev
sudo makedisks.main -n 14 -t 31 -a 0
sudo makedisks.main -n 14 -t 31 -a 1
Then exit the systemshell.
exit
Lock the diaguser and set admin mode.
useradmin diaguser lock
priv set admin
Reboot the filer.
reboot
Once the filer has come back up, connect and login again.
You may as well turn off the pesky autosupport warnings and then assign all the new disks that are unowned to the filer.
options autosupport.support.enable off
options autosupport.enable off
options disk.auto_assign on
disk assign all
You can now add these disks to the existing aggregate.
aggr add aggr0 28@4G
Y to Confirm adding the disks and again for the low spare warning.
You now have an aggregate spanning 4 x simulated shelves of 14 x 4 GB disks, giving you 180.97 GB of remaining available disk space after the root volume, which is much better than the 20 GB you had initially!
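If you want to sanity-check the end result from the Data ONTAP PowerShell Toolkit rather than the console, a quick sketch (using the hostname I chose earlier):

# Connect to the rebuilt simulator and eyeball the results
Import-Module DataONTAP
Connect-NaController lonfiler01 -Credential (Get-Credential)
Get-NaAggr    # should show aggr0 built from the 4 GB disks
Get-NaVol     # should show the renamed root volume vol0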
Now is a good time to halt the filer and take a snapshot so you can easily revert if you have an issue.
You can also now go ahead and create your volumes and exports, LUNs, CIFS shares or whatever you want to use your simulator to test.
Remember: never just power off the simulator, as you will end up with complicated WAFL consistency issues which are difficult and sometimes impossible to fix.
Hopefully all these steps will be redundant when NetApp publicly release their simulator with the maximum possible disk size already configured and available as a simple to import virtual appliance.

NetApp Simulator 8.1 7-Mode Walkthrough Setup Guide

This walkthrough guide is specifically using the current latest version of the NetApp Simulator ONTAP – version 8.1, and running the lab inside VMware Workstation 8 (*see Appendix B for Simulate ONTAP 8.1 Hardware and Software Requirements.) The NetApp Simulator is an excellent tool to learn about NetApp filers in a virtual lab environment.

Step 1: Download or Obtain the Binaries

Go to http://now.netapp.com/NOW/cgi-bin/simulator and login with your NetApp NOW account.

Select the Simulator 8.x link, and choose the links to download the Simulate ONTAP 8.1 7-mode and C-mode simulators for VMware Workstation or ESX as required (*see Appendix C for Differences Between ONTAP 7-Mode and C-Mode), plus the 8.1 licenses.
Extract the downloaded zip files – vsim-7m.zip and vsim-cm1.zip – to their respective folders.

Note: Whilst we are downloading software, go to https://now.netapp.com/NOW/cgi-bin/software/ and download the OnCommand System Manager (latest version as of 7th December 2012 is 2.0R1, which supports ONTAP 7.3.2 and above). The OnCommand System Manager provides a GUI to manage NetApp Filers. The file name for the Windows version is sysmgr-setup-2-0R1-win.exe.

Step 2: Import the Simulator Into VMware Workstation

In this post we are just going to focus on the ONTAP 8.1 7-mode simulator (the procedure for the C-mode simulator is pretty similar).
Create a copy of the vsim-7m folder and within VMware Workstation -> File Menu -> Open
- and point to the DataONTAP.vmx file and select Open
- rename the imported machine as desired and power on

Step 3: Booting the Simulator and Initial Configuration
On first boot, wait for the simulator to reach "Press Ctrl-C for Boot Menu" and press Ctrl-C
- this brings up the boot menu, and choose option 4 for 'Clean configuration and initialize all disks'
- Selection (1-8)? 4
- Zero disks, reset config and install a new file system?: Yes
- This will erase all the data on the disks, are you sure?: Yes

The virtual filer will reboot, re-initialize and run through the wipe procedure
- Please enter the new hostname []: VFILER01
- Do you want to enable IPv6? [n]:
- Do you want to configure interface groups? [n]:
- Please enter the IP address for Network Interface e0a []: 192.168.0.111
- Please enter the netmask for Network Interface e0a [255.255.255.0]:
- Please enter media type for e0a {100tx-fd, tp-fd, 100tx, tp, auto (10/100/1000)} [auto]:
- Please enter flow control for e0a {none, receive, send, full} [full]:
- Do you want e0a to support jumbo frames? [n]:
- Please enter the IP address for Network Interface e0b []:
- Please enter the IP address for Network Interface e0c []:
- Please enter the IP address for Network Interface e0d []:
Note: Here I leave configuring the 3 interfaces e0b, e0c, e0d for later
- Would you like to continue setup through the web interface? [n]:
Note: Default answer is no to the above question, and my preference is to continue via the console (also the Filer's interfaces do not come up until after the setup is completed!) For reference, the link given is https://IPADDRESSofFILER/api
- Please enter the name or IP address of the Ipv4 default gateway: 192.168.0.2
Screen Output: The administration host is given root access to the filer's /etc files for system administration. To allow /etc root access to all NFS clients enter RETURN below.
- Please enter the name or IP address of the administration host: 192.168.0.11
- Please enter timezone [GMT]:
- Where is the filer located? []:
- Enter the root directory for HTTP files [/home/http]:
- Do you want to run DNS resolver? [n]:
- Do you want to run NIS client? [n]:
Screen Output: This system will send event messages and weekly reports to NetApp Technical Support. To disable this feature, enter "options autosupport.support.enable off" within 24 hours. Enabling AutoSupport can significantly speed problem determination and resolution should a problem occur on your system. For further information on AutoSupport, please see: http://now.netapp.com/autosupport/
- Do you want to configure the Shelf Alternate Control Path Management interface for SAS shelves [n]:
- Setting the administration (root) password for VFILER01 ... New password: XXXXXXXX
- Retype new password: XXXXXXXX

Then wait for the network interface to become pingable and the system to complete its initialization.

*Continuing from here, see these later tutorials (time permitting - this list will grow):
NetApp Basic NFS Configuration Walkthrough with VMware
NetApp Basic iSCSI Configuration
Installing the NetApp Virtual Storage Console (VSC) for VMware vSphere and Using it to Optimize NFS Settings
NetApp Data ONTAP 8.1 Enabling SFTP Access to /etc
Step 4: Managing Your NetApp Virtual Filer

After the NetApp virtual Filer has initialized, it can be managed using the root credentials via:
1: Direct Console (*see Appendix D: CLI commands)
2: SSH connection (*see Appendix D: CLI commands)
3: NetApp OnCommand System Manager (installable on Linux and Windows platforms)
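A fourth option, though not one listed here, is the Data ONTAP PowerShell Toolkit; a minimal connection sketch using the root credentials and IP address from the setup above:

# Connect to the virtual filer with the root credentials set during setup
Import-Module DataONTAP
Connect-NaController 192.168.0.111 -Credential (Get-Credential root)
Get-NaVol    # quick check that the connection works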
Feel free to play around with the simulator to your heart's content!
Appendix A: Useful links and Credits

http://now.netapp.com/NOW/knowledge/docs/docs.cgi – NetApp product documentation

Appendix B: Simulate ONTAP 8.1 Hardware and Software Requirements

Hardware requirements
Dual core 64-bit Intel or AMD system
2 GB RAM for one instance of simulator
3 GB RAM for two instances of simulator (4 GB recommended)
20 GB free disk space for each instance of the simulator
VT support for Intel system
Software requirements
Microsoft Windows XP, Windows 7, or Windows Vista
VMware Workstation, VMware Player, or VMware vSphere Client (if running on a VMware ESX/ESXi host server)

Appendix C: Difference Between ONTAP 7-Mode and C-Mode


FAS arrays run Data ONTAP, which is available in two modes:

Data ONTAP 7-mode (or classic mode) allows FAS arrays to be deployed as a local two-node cluster, a geographically spanned MetroCluster, and as a remote distributed FlexCache, which enables capabilities like LDVM for VMware.

Data ONTAP C-Mode (or cluster mode) expands a NetApp storage cluster from 2 nodes to 24 nodes, increases the features found in 7-mode to include endless scaling, global namespaces, and the complete separation of data and data access from the hardware layer in the form of next-generation vFilers (known as vServers).

Appendix D: CLI Commands

VFILER01> ?
acpadmin / aggr / arp / autosupport / backup / bmc / cdpd / cf / charmap / cifs / clone / config / date / dcb / df / disk / disk_fw_update / dns / download / du / dump / echo / ems / environment / exportfs / fcadmin / fcnic / fcp / fcstat / file / flexcache / fpolicy / fsecurity / ftp / halt / help / hostname / httpstat / ic / ifconfig / ifgrp / ifstat / igroup / ipsec / ipspace / iscsi / key_manager / keymgr / license / lock / logger / logout / lun / man / maxfiles / mt / nbtstat / ndmpcopy / ndmpd / ndp / netdiag / netstat / nfs / nfsstat / nis / options / orouted / partner / passwd / ping / ping6 / pktt / portset / priority / priv / qtree / quota / radius / rdate / rdfile / reallocate / reboot / restore / revert_to / rlm / route / routed / rshstat / sasadmin / sasstat / savecore / sectrace / secureadmin / setup / sftp / shelfchk / sis / smtape / snap / snaplock / snapmirror / snapvault / snmp / software / source / sp / stats / storage / sysconfig / sysstat / system / timezone / traceroute / traceroute6 / ups / uptime / useradmin / version / vfiler / vlan / vmservices / vol / vscan / wcc / wrfile / ypcat / ypgroup / ypmatch / ypwhich

TechTips - How to Create a CIFS Share on a NetApp SAN

This tutorial will describe how to create a CIFS share on a NetApp storage system using the wizard in FilerView. Storage is always important in any network environment, and NetApp has made it relatively simple to create CIFS shares on their SANs using the Filer and their wizard tool. I will walk you through all of the steps to create CIFS shares on your NetApp SAN and share them to your Windows PC. It's a quick and easy process as long as you know the right steps.
So to create your CIFS share on your NetApp SAN, do the following:
Open a Web Browser (IE usually works best)
Type in http://IP_for_filer/na_admin
You might be prompted for username and password (default user name is root and there is no password)
Click on FilerView Icon
It will prompt you for username and password (default user name is root and there is no password)
Now that you are inside FilerView you will want to create the volume that will hold the CIFS share; follow these instructions:
Click on Volumes (this will expand its options)
Click on Manage (this will show you all existing volumes and allow you to add more)
Click Add (the wizard will open)
Click Next
Choose Flexible
Click Next
Name your volume (TEST or whatever you choose)
Choose your language
Click Next
Select the Aggregate you want to put the volume on (aggr0 was my choice)
Choose volume size (depends on your preference and space available)
Leave Space guarantee
Click Next
Click Commit
Click Close
Now that you have set up the volume, you will want to manage the settings of the newly created share:
Click Qtrees
Click Manage
Click the name of the share you just created (TEST in my case)
Set Security Style to NTFS (if already set leave alone)
Click Apply
Now we have created the volume and set it up the way we want it to be. Next we need to make sure it is shared and that we can get to it from our Windows-based machine. To do this, follow the instructions below:
In the FilerView
Click CIFS
Click Shares
Click Manage
Click Add Share
Create the share (TEST)
Volume path (/vol/TEST)
Fill out Share description, max users and force group with your setup info you choose
Click Add
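Incidentally, if you'd rather script the whole thing than click through FilerView, the Data ONTAP PowerShell Toolkit can do the equivalent. A rough sketch: New-NaVol, Invoke-NaSsh and Add-NaCifsShare are real toolkit cmdlets, but the positional usage and parameter names here are my assumptions (and Invoke-NaSsh needs SSH enabled on the filer), so check Get-Help first:

# Connect to the filer (the IP is hypothetical)
Import-Module DataONTAP
Connect-NaController 10.0.0.121 -Credential (Get-Credential root)

# Create a 20 GB flexible volume on aggr0 (positional Name/Aggregate/Size assumed)
New-NaVol TEST aggr0 20g

# Set NTFS security style, matching the wizard step above, via the CLI
Invoke-NaSsh "qtree security /vol/TEST ntfs"

# Share the volume over CIFS (parameter names assumed - check Get-Help Add-NaCifsShare)
Add-NaCifsShare -Share TEST -Path /vol/TEST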
Now the share should be accessible and we can do a couple of tests to make sure it is. To test via the UNC path do the following:
Click Start
In Start Search type \\IP_Filer\TEST (the name at the end TEST in my case is your share name you created)
Hit Enter
If the share opens then you have created it successfully. You can now map a network drive to this share as a secondary test. To map a network drive to your file share, do the following:
Right Click on My Computer
Click Map Network Drive
Choose drive letter (I chose X:)
In Folder type \\IP_of_Filer\TEST (the name at the end TEST in my case is your share name you created)
Uncheck Reconnect at logon (unless you want the drive to map every time)
Click Finish
The share should open, and if you open My Computer you should also see a drive with the letter X: and the Filer IP address listed; you can double-click this to open the CIFS share as well. So now you have created a CIFS share via the FilerView wizard and opened the share in Windows.

NetApp vFiler tutorial

What is a NetApp vFiler?
MultiStore enables you to partition the storage and network resources of a single storage system so that it appears as multiple storage systems on the network.
MultiStore is an optional software licence; it is available from NetApp and you need to buy this feature.
Benefits of using MultiStore:
1. Virtualization
2. Consolidation
3. Security
4. Disaster recovery and data migration

MultiStore for service providers and enterprises:
Service providers, such as ISPs and SSPs, can partition the resources of a storage system to create many vFiler units for client companies. Similarly, the information technology (IT) department of an enterprise can create vFiler units for various departments, or customers, within the enterprise.
The administrator for each customer can manage and view files only on the assigned vFiler unit, not on other vFiler units that reside on the same storage system.



What is the default vFiler?
With MultiStore, Data ONTAP automatically creates a default vFiler unit on the hosting storage system. The name of the default vFiler unit is vfiler0.
Initially, vfiler0 owns all the resources on the storage system. After you create additional vFiler units, vfiler0 owns the resources that are not owned by the vFiler units you have created.
You can't destroy the default vfiler0.


Number of vFiler units allowed

You can have a maximum of 65 vFiler units on a single storage system: you can create 64 vFiler units, and the 65th is the default vfiler0.

Supported interfaces and protocols

Ethernet interfaces and the NFS, CIFS, iSCSI, HTTP, RSH, SSH, and FTP protocols are supported on vFiler units.
To check the vFilers:
FASSENTHIL> vfiler status
vfiler0 is the default one.
To create a vFiler, first take down the network interface it will use:
FASSENTHIL> ifconfig -a
FASSENTHIL> ifconfig ns1 down

What an IPspace is:

An IPspace defines a distinct IP address space in which vFiler units can participate. IP addresses defined for an IPspace are applicable only within that IPspace. A distinct routing table is maintained for each IPspace. No cross-IPspace traffic is routed.
Each IPspace has a unique loopback interface assigned to it. The loopback traffic of each IPspace is completely isolated from the other IPspaces.
To create an IPspace:
FASSENTHIL> ipspace create <ipspace name> <interface name list>
ipspace create vfiler1-ipspace ns1
To create a vFiler:
vfiler create <vfiler name> -s <ipspace> <volume name>
vfiler create vfiler-senthil -s vfiler1-ipspace /vol/vfvol
CIFS setup:
To check the vFiler status:
FASSENTHIL> vfiler status
Now two vFilers are running.
To list IPspaces:
FASSENTHIL> ipspace list
Create a qtree on the new vFiler:
qtree create /vol/vfvol/test1
Create a CIFS share:
FASSENTHIL> cifs shares -add myshare /vol/vfvol/test1
To list the CIFS shares:
FASSENTHIL> cifs shares
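As an aside, if you script your filers with the Data ONTAP PowerShell Toolkit, the same vfiler CLI can be driven remotely. A sketch: Invoke-NaSsh is a real cmdlet, but the positional command usage is my assumption and SSH must be enabled on the storage system:

# Connect to the hosting storage system (the vfiler0 context)
Import-Module DataONTAP
Connect-NaController FASSENTHIL -Credential (Get-Credential root)

# Run the same vfiler CLI commands remotely over SSH
Invoke-NaSsh "vfiler status"
Invoke-NaSsh "vfiler run vfiler-senthil cifs shares"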


To switch to a vFiler's console:

vfiler context <vfiler name>

FASSENTHIL> vfiler context vfiler-senthil
vfiler-senthil@fassenthil>

To access the CIFS shares from a Windows client, click Run and type \\10.0.0.122. You can now access the two shares, which are in vfiler-senthil.

To check the CIFS sessions:

FASSENTHIL> cifs sessions

The CIFS shares in the default vfiler0 belong to the base storage system: access \\10.0.0.121 and you are accessing vfiler0's CIFS shares.

To stop the vFiler:

FASSENTHIL> vfiler stop vfiler-senthil

Check the vFiler status; vfiler-senthil is now stopped.

To start the vFiler:

FASSENTHIL> vfiler start vfiler-senthil

Check the status of the vFiler; now it is running.

To remove the vFiler:

FASSENTHIL> vfiler stop vfiler-senthil
FASSENTHIL> ifconfig ns1 down
FASSENTHIL> vfiler destroy vfiler-senthil

Check the status of the vFiler: vfiler-senthil has been removed and only the default vfiler0 is left running.