Saturday, August 31, 2013

What else you need to study for the VCP-DCV after taking the vSphere Install Configure Manage Class

Toward the last day of an ICM (Install, Configure, Manage) class, I am often asked what is going to be on the exam, and why everything on the exam is not in the class.  There simply isn't enough time to teach everything the exam covers unless passing the exam were all we focused on.  If you want classwork to cover all of the exam topics, you have two choices: take the Fast Track class (five 10-hour days), or take the ICM plus the What's New class (a 2-day course).  The What's New class covers upgrading from a previous version, advanced storage and networking, and Auto Deploy.

Once you have taken the ICM (or perhaps even before), I recommend that you download the Exam Blueprint and take the Practice Exam on the certification page.  If you are interested in more advanced practice exams, check out VMware's new official practice exam partner, MeasureUp.

Here are the subjects that are not covered at all in the ICM class:
  • Upgrading from a previous version
  • Advanced storage
  • Advanced networking
  • Auto Deploy
Studying those four additional topics should cover the majority of the exam questions.  However, I still recommend that you use the Mock Exam, MeasureUp, the certification book, or study websites to make sure that your knowledge is complete before taking the exam.  You can create a lab in a box using AutoLab from Lab Guides.

The best web resource that I know of for study is Damian Karlson's VCP5 Resources page.

If you come across a subject that you want to study in more detail (beyond what you might need for the exam), I would start with Sean Crookston's VCAP-DCA page.

Monday, August 26, 2013

Updates from VCI Day 2013 at VMworld

Yesterday was VCI day at VMworld.  I was lucky enough to be able to fit it into my schedule to attend, and get a day to explore San Francisco to boot.  We did get an overview of some of the new releases coming out, most of which you can read about directly from VMware.  Here are a few things that stood out to me from the technical discussion.
  • Faster performance in the web client, and a new warning in the Windows client that it is going away (we all knew that was coming).
  • Drag and drop in the web client.
  • vSphere Flash Read Cache and vSAN.
  • SSO has been completely rewritten under the hood.
  • The vCenter Virtual Appliance embedded database now supports up to 500 hosts and 5000 VMs.
  • Packet capture from the command line now supports vnics and vmnics in addition to vmks (see the sketch after this list).
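
On that last point, the new tool is presumably pktcap-uw.  Here is a rough sketch of what it looks like from the ESXi shell; the flags are from memory and the adapter names are placeholders, so double check against the built-in help:

    # capture on a vmkernel interface (this was already possible)
    pktcap-uw --vmk vmk0
    # capture on a physical uplink (new)
    pktcap-uw --uplink vmnic1
    # capture on a VM's virtual NIC, by its vSwitch port ID (new)
    pktcap-uw --switchport 33554481
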
In addition to the tech updates, there are some updates to the certification program.  VMware has announced their new entry-level cert, the VCA.  There is a 3-hour online class and an online test to get the cert.  The VCA is intended to show "I'm familiar with cloud/virtual desktop/virtualization technology," not "I am qualified to work on it."  If you are at #vmworld, you can get a discount and check it out while you are there.

Something else is new in the certification world: VMware is partnering with measureup.com as their official VMware Practice Exam Partner.  I will take the opportunity to test out the program over the next few weeks as I study for my Cloud cert, and report back.

Finally, VMware has introduced a new tool to help navigate online and instructor-led training: VMware Learning Paths.  This site, along with VMware Learning Videos and VMware Certification Videos, can help you get the most out of your training dollars and your time with an instructor.

Sadly, I am not attending the rest of VMworld this year, but there is lots to follow on Twitter and Google Plus using #vmworld.

On a personal note: for years, when someone would give me directions that included "up," like "go up to Wadsworth," I would inform them that up is not a direction; it does not tell me where to go unless I'm in a helicopter.  Well, on Saturday I learned that in San Francisco, "up" is in fact a direction.


Monday, August 12, 2013

Changes to ping command in ESXi 5.1

Along with the changes mentioned in my last post related to how ESXi responds to a ping, there are also some changes to the ping command.

There are two new options:
-I <interface> outgoing interface - for IPv6 scope or IPv4
      (IPv4 advanced option; bypasses routing lookup)
-N <next_hop>  set IP_NEXTHOP - requires -I option
      (IPv4 advanced option; bypasses routing lookup)

The -I option has been around for a while, but it never did what people thought it did.  It had no effect for IPv4.  Now it does affect IPv4, so you can select the outgoing interface for a ping rather than relying on the routing table.  This gives you the ability to test whether the second interface of a multipathing group has connectivity.
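
For example, to force the echo request out a specific vmkernel interface instead of letting the routing table pick one (the interface name and target address here are just placeholders for your own environment):

    # test the second iSCSI interface directly, bypassing the routing lookup
    ~ # ping -I vmk2 10.10.30.50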

The second option, -N, I have not played around with yet, but it appears to let you specify the next hop, effectively adding a temporary route for your destination.  I will update this article when I've had a chance to play around with it; if anyone else has had a chance to experiment with that option, let me know what the results were in the comments.
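
Based purely on the option description above, an invocation would presumably look something like this (I have not tested it; the addresses are placeholders, and per the help text -N requires -I):

    # send the echo request out vmk0, but hand it to 10.10.20.254 as the next hop
    # instead of whatever the routing table would choose
    ~ # ping -I vmk0 -N 10.10.20.254 172.16.5.10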

Sunday, August 11, 2013

Changes to ICMP Ping response in 5.1

In ESXi 5.1, the default behavior of ping has changed from previous releases.  According to KB 2042189, "ICMP Echo replies are now only sent back out the same interface that the Echo Request was received on."  What exactly does that mean, and why does it matter?

Let's say we have three interfaces:
mgmt   vmk0  10.10.20.12
iSCSI1 vmk1  10.10.30.12
iSCSI2 vmk2  10.10.30.13

The iSCSI array is at 10.10.30.50, and both vmks are bound to the iSCSI initiator.  Since pings are not iSCSI traffic, they are not handled by the initiator; instead they are handled by vmkernel routing.  In previous versions, this meant all ping replies went out vmk1, since it was the first interface in the routing table for that subnet.  Even if you pinged vmk2, the reply would go out vmk1.  This isn't usually a problem, but what happens if the vmnic that vmk1 is bound to goes down?  vmk2 is still up, but cannot reply to pings.  It isn't very often that this will cause an issue, but there are times when it does.
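
To make that concrete, here is a mocked-up routing table for the layout above (the gateway address is a placeholder, the arrow note is mine, and the exact output format varies a bit between builds):

    ~ # esxcfg-route -l
    VMkernel Routes:
    Network          Netmask          Gateway          Interface
    10.10.20.0       255.255.255.0    Local Subnet     vmk0
    10.10.30.0       255.255.255.0    Local Subnet     vmk1    <- vmk2 shares this subnet but gets no entry of its own
    default          0.0.0.0          10.10.20.1       vmk0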

This change might also cause problems if you have your routing tables set up in such a way that the packets don't follow the same route in both directions.  For instance:
  
In previous versions of ESXi, the response would have used the routing table (green) even though that was not the same path the request arrived on.  With 5.1 (red), the reply has to go out vmk1, and because there is no way for it to reach the source address from there, the ping will fail.  Generally you won't see this type of configuration, and you really shouldn't be testing interfaces other than management from outside of their subnets, but it does happen sometimes.

This is a relatively minor change, but it could cause some unexpected results, so it is important to be aware of.

See also Changes to ping command in ESXi 5.1.

Sunday, August 4, 2013

Understanding vmkernel routing

The vmkernel routing mechanism works largely the same way as any standard Unix routing table, with a few twists.

Let's start with the basics of how routing works.  Routing is done based on the destination address.  That address is compared with each of the networks specified in the routing table; if the network address and subnet mask match, the vmkernel sends the packet out that entry's interface.  The last entry is the default gateway, which matches all remaining addresses.  You can view the vmkernel routing table using esxcfg-route -l (or esxcli network ip route ipv4 list in 5.1).  To get a clear picture, I'm going to show the configuration of the vmkernel interfaces as well.
In this configuration I have four vmkernel interfaces; they are labeled for management, iSCSI, vMotion, and heartbeat.  Keep in mind that how they are labeled has nothing to do with how traffic is actually sent.

Basic vmkernel routing is very simple.  Let's say the host wants to communicate with the IP 192.168.2.20; based on the routing table, this traffic will go out interface vmk2.  If however we want to communicate with 172.20.2.10, because we don't have a local entry that matches, we send it to the default gateway, and from there it will get to its destination (hopefully).  Notice that even though I have four interfaces, there are only three local entries in the routing table.  This is because I have two vmkernel interfaces on the same subnet.  When there are two interfaces on the same subnet, the first one created will always be the one used for outgoing traffic.  More on this later.
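
Since the screenshot doesn't reproduce here, this is a mocked-up version of what the 5.1 command would show for a layout like this (the addresses, and which two interfaces share a subnet, are assumptions on my part; the arrow note is mine and the output is trimmed):

    ~ # esxcli network ip route ipv4 list
    Network      Netmask        Gateway      Interface
    -----------  -------------  -----------  ---------
    192.168.1.0  255.255.255.0  0.0.0.0      vmk0    <- a second vmk shares this subnet but gets no entry of its own
    192.168.2.0  255.255.255.0  0.0.0.0      vmk2
    192.168.3.0  255.255.255.0  0.0.0.0      vmk1
    default      0.0.0.0        192.168.1.1  vmk0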

There can be only one (default gateway).  Despite the fact that there is a screen in the vSphere client that says "Default Gateways" and that it shows up on every vmkernel interface you create or edit, there is only one default gateway.  The vmkernel does not do any type of dynamic routing.  So any time you see this screen, you are always editing the same default gateway.
So don't worry about the fact that the default gateway listed isn't on the same subnet as the interface you are editing.  Unless you are editing the management interface, it shouldn't be.
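
If you want to confirm this from the command line, here is a quick sketch (the gateway addresses are placeholders):

    ~ # esxcfg-route                  # prints the one and only VMkernel default gateway
    VMkernel default gateway is 192.168.1.1
    ~ # esxcfg-route 192.168.1.254    # sets a new one - it replaces the old, there is still only one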

Okay, that is the easy stuff; now let's look at what makes the vmkernel routing table a little more exciting.  We'll start with this screen.

See those checkboxes that say "Use this port group for"?  You might think that if you check the one for vMotion, the vmkernel would send all vMotion traffic through this vmk, and if you select the one for management traffic, that vmk would be used for management traffic.  You would be mostly right, but it's not quite that simple.  Let's start at the top.
Use this port group for vMotion.  More accurately, what you are saying is "have vCenter tell the other host that this IP is the one to use to initiate a vMotion on."  That is a bit wordy, so I can see why they don't use it, and the difference is subtle.  Where it matters is if you have two interfaces on the same subnet, one used for vMotion and one used for, say, management traffic.  Part of your vMotion traffic might go out the wrong interface.  There is more on the two-interfaces-same-subnet issue at the bottom.
Use this port group for Fault Tolerance logging.  This works mostly the same as the vMotion checkbox; however, be aware that what VMware means by "logging" is the vLockstep data that is sent from the primary VM to the secondary VM.  This is a lot of data, so plan accordingly.
Use this port group for management traffic.  This one is a bit misleading.  What exactly is "management traffic"?  It has nothing to do with the vSphere client or vCenter.  I can (and have) connected both the client and the vCenter server to the vMotion or iSCSI interface, as long as I can get to that subnet.  This can be very helpful when trying to resolve an issue where the connection to vmk0 was accidentally dropped.  What VMware means by "management traffic" is actually "HA traffic": any interface with this box checked will be used for HA heartbeats.  Aside from vmk0, I usually create a secondary heartbeat interface on the same subnet as my VMs, because really, that is the subnet I want to make sure is up.  Aside from this, I know of no other purpose for the "management traffic" checkbox.

None of these checkboxes affects the routing table.  They are only there to tell other hosts which IP to communicate with; they have nothing to do with how the vmkernel routes outgoing traffic.  The one possible exception is multi-NIC vMotion.  I have not done any experimenting with multi-NIC vMotion yet, so I will save that for a future entry.

iSCSI Port Binding
One thing that can affect routing is iSCSI port binding.  If you are using software iSCSI and you bind a vmkernel interface to the iSCSI initiator, then how your iSCSI traffic is sent is handled by the iSCSI initiator, not the vmkernel routing table.  Just enabling iSCSI is not enough; you have to add the vmk to the iSCSI initiator using the Network Configuration tab under the storage configuration.
If you have not followed these steps to bind your interfaces to your iSCSI initiator, and you have more than one vmk configured for iSCSI, you are currently only using one of them.
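
You can also do the binding from the command line; here is a rough sketch (vmhba33, vmk1, and vmk2 are placeholders for your software iSCSI adapter and your iSCSI vmks):

    # bind both iSCSI vmkernel interfaces to the software iSCSI adapter
    ~ # esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1
    ~ # esxcli iscsi networkportal add --adapter vmhba33 --nic vmk2
    # verify the binding
    ~ # esxcli iscsi networkportal list --adapter vmhba33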

Now, the biggest cause of problems with vmkernel routing... (drumroll please...)

Two interfaces on the same subnet
If you have two vmkernel interfaces on the same subnet, the first one created will always be the one used to send traffic to that subnet, regardless of what the traffic type actually is.  This can lead to strange network issues, and it means you should follow one simple rule: every traffic type should have its own subnet.  For instance, one subnet each for management, vMotion, Fault Tolerance, vSphere Replication, and IP storage.  Only two of these traffic types support multiple vmk interfaces: vMotion and iSCSI.  I don't know much about multi-NIC vMotion yet, but iSCSI can have multiple vmks on the same subnet if they are bound to the iSCSI initiator.

If you break these rules, bad things can happen.  For instance, if you have management and iSCSI on the same subnet, your iSCSI traffic might end up going out your management vmk.  This can be very bad news if your iSCSI vmk is bound to a 10Gb NIC but your management vmk is bound to a 1Gb NIC.  This is a somewhat common problem on hosts that have been upgraded from ESX to ESXi.  Or let's say you have your iSCSI and vMotion vmks on the same subnet.  Only half of the vMotion connection will get created correctly (the other half accidentally going out the iSCSI interface) and you will get vMotion network errors.  There are more wacky ways that having multiple traffic types on the same subnet can cause problems with networking.  I've seen plenty, but I am sure there are others that I haven't even imagined yet, so don't do it.
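
A quick way to spot this kind of overlap is to list the vmkernel interfaces and look for two addresses in the same subnet (the names and addresses below are just an example, the arrow note is mine, and the output is trimmed):

    ~ # esxcli network ip interface ipv4 get
    Name  IPv4 Address   IPv4 Netmask   Address Type
    ----  -------------  -------------  ------------
    vmk0  192.168.1.10   255.255.255.0  STATIC
    vmk1  192.168.1.20   255.255.255.0  STATIC    <- same subnet as vmk0; whichever was created first carries all outgoing traffic for it
    vmk2  192.168.2.10   255.255.255.0  STATIC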