Extreme Networks, VMware and NetApp Configuration Notes (w/ VLANs)
We set up a small SAN using 1Gb networking, deploying a NetApp filer, Extreme Summit x450a switches, and VMware ESXi 4. Along the way we had some problems, specifically with Jumbo Frames.
Our requirements were fairly specific:
- VMware- and NetApp-compatible port groups (LACP, trunking, Etherchannel, etc.) must work across the stack for failover.
- Support for at least 10 groups/Etherchannels.
- Enough packet buffers to support iSCSI + Jumbo Frames
- Strong intrinsic VLAN support.
- 10Gb-E or better stack interconnect
- Published Best-Practices for both NetApp and VMware configuration.
Our first goal in making the network switch-fabric our I/O backplane was redundancy. We wanted to be sure that everything kept working, even in the midst of severe problems. That led us to stackable switch technology. We reviewed the following:
- HP A5120-24G
We didn't get extensively into this switch. It took a while to get a quote together, and it didn't feel quite right as a deal. Also, we just didn't see real-world usage reflected in support forums and help guides. Maybe this was because HP is just That Awesome™, but maybe not. Regardless, it didn't end up making the cut, despite having the best overall price.
Overall: Good Product
- Cisco 2960S
This switch represents the industry standard for our switching needs. It's their smallest stackable switch where a port group (Etherchannel) can be created using ports across both switches. Its weakness is that it allows only six Etherchannels. The loser web-management portal requires IE7 on Windows XP.
- Extreme x450a
This switch runs on Linux - which we care about. I have not seen a limit on the number of port-groups available. We found several configuration templates, but no comprehensive guide.
Extreme - we purchased 2x Summit x450a's. These are a small-core enterprise switch, which matches us perfectly.
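For reference, a cross-stack port group plus jumbo frames on EXOS looks roughly like the sketch below. This is an assumption-laden example rather than our exact config: the slot:port numbers (1:1, 1:2, 2:1, 2:2), VLAN name, and tag are hypothetical placeholders.

```shell
# Jumbo frames (EXOS allows up to 9216 bytes; the size applies stack-wide).
configure jumbo-frame-size 9216
enable jumbo-frame ports all

# Static link aggregation spanning both stack members (slot:port notation),
# hashed on L3 addresses to line up with ip-hash on the ESXi/NetApp side.
enable sharing 1:1 grouping 1:1,1:2,2:1,2:2 algorithm address-based L3

# Tag a storage VLAN onto the aggregated port (master port only).
create vlan iscsi42
configure vlan iscsi42 tag 42
configure vlan iscsi42 add ports 1:1 tagged
```

Note the sharing group is static, not LACP: ESXi's ip-hash teaming on a standard vSwitch expects a static Etherchannel.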
The background above was just to explain this section, which is what we ended up forming our initial foundation.
We started with the VMware machines. I had extra NICs in them, which I used to bootstrap the configuration. Later these will become separate port groups just for vMotion; for the moment, they are plugged into one of our two primary LANs for OOB management, and I used them as a setup tool for the VMware server. At another site, I set up a four-port trunk group but left one NIC unplugged and assigned to the default vSwitch0 in ESXi. Once I had the setup complete, I moved that vmnic to the new vSwitch and left the old configuration behind.
Once you have the VMware server ready, configure your Extreme switches, get all the settings in, and then use vmkping from the VMware command line to verify Jumbo Frames end-to-end.
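From the ESXi console (or SSH), the jumbo-frame check looks like this. The target is one of the NetApp iSCSI addresses from our config below, and 8972 is the largest ICMP payload that fits a 9000-byte MTU once headers are counted:

```shell
# -s 8972: 9000-byte MTU minus 20 (IP header) and 8 (ICMP header) bytes.
# -d: set the don't-fragment bit, so the ping fails outright if any hop
#     in the path is not passing jumbo frames.
vmkping -d -s 8972 10.140.2.31
```

If this succeeds but a plain `vmkping 10.140.2.31` at default size also works, you know the problem is specifically the MTU somewhere in the path, not basic connectivity.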
NetApp configuration is largely dependent on your consultant. We have a two-head FAS2040 in an Active/Active configuration, which meant we were careful with the setup. The consultant who stood it up was also a mainline install consultant, which made him pragmatic about network configuration. We configured the out-of-band management, then configured all ports into a single 'vif'. This uses ip-hash load balancing, which is well covered elsewhere. Because of that, we have the following config on the NetApp VLAN interfaces.
IPs used for NFS (vlan:41)
IPs for iSCSI (vlans: 42, 43, 44, 45)
- vlan42: 10.140.2.31
- vlan43: 10.140.3.32
- vlan44: 10.140.4.33
- vlan45: 10.140.5.34
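In Data ONTAP 7-Mode syntax, the single ip-hash vif plus the VLAN interfaces above might be stood up roughly as follows. A sketch only: the physical port names (e0a-e0d), netmask, and partner settings are assumptions, and an Active/Active pair also needs matching entries in /etc/rc and on the partner head.

```shell
# Static multimode vif over all four ports, load-balanced by IP hash.
vif create multi vif0 -b ip e0a e0b e0c e0d

# Create the tagged VLAN interfaces (vif0-41 ... vif0-45) on top of it.
vlan create vif0 41 42 43 44 45

# Address each VLAN interface; repeat likewise for vif0-44 and vif0-45.
ifconfig vif0-42 10.140.2.31 netmask 255.255.255.0 mtusize 9000 partner vif0-42
ifconfig vif0-43 10.140.3.32 netmask 255.255.255.0 mtusize 9000 partner vif0-43
```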
Note: we have one last-octet value for each port, which makes sense if you understand ip-hash load balancing.
The last octet changes so that traffic spreads evenly across all ports if you're using MPIO or MCS with your iSCSI clients.
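To see why varying the last octet spreads the load, here is a toy model of ip-hash uplink selection. This is an illustration, not VMware's or NetApp's exact hash: it assumes the simple "XOR the two 32-bit addresses, mod the number of uplinks" scheme, and the ESXi source address 10.140.2.10 is made up.

```python
import ipaddress

def pick_uplink(src: str, dst: str, n_uplinks: int) -> int:
    """Toy ip-hash: XOR the 32-bit addresses, mod the uplink count."""
    s = int(ipaddress.ip_address(src))
    d = int(ipaddress.ip_address(dst))
    return (s ^ d) % n_uplinks

# One (hypothetical) ESXi source talking to the four iSCSI targets above:
src = "10.140.2.10"
for dst in ["10.140.2.31", "10.140.3.32", "10.140.4.33", "10.140.5.34"]:
    print(dst, "-> uplink", pick_uplink(src, dst, 4))
# Each target lands on a different uplink (1, 2, 3, 0), so a host
# talking to all four targets exercises all four physical ports.
```

The flip side: any single source/destination pair always hashes to the same uplink, which is exactly why MPIO/MCS needs multiple target addresses to use more than one link.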