Add iSCSI Target to Software iSCSI Initiator on All Hosts in a Datacenter

If you’re like me, you have noticed that adding an iSCSI target to many hosts takes a really long time in the vSphere client. You have to navigate to the configuration tab of each host, enter the storage adapters section, select the iSCSI initiator you want to update, go to properties, to dynamic discovery, add the IP address, and finally rescan the HBA to see the new block device(s).

Many of these steps have a lag time of up to 30 seconds. These steps are obviously not difficult, but this time really starts to add up when you’re updating many hosts.

I’ve created the script below to automate this process and save some time. When running the script, you’ll be prompted for the vCenter server you’d like to connect to, the IP address of the new iSCSI target, and whether or not you’d like to initiate a rescan on each host after adding the new target IP address.

If there is more than one datacenter in the selected vCenter, you’ll have to choose which datacenter you would like to work with. This script assumes that you want to add the target to all hosts in the datacenter, but you could easily take the section of code that selects the datacenter and reuse it to make a cluster selection.
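The flow described above can be sketched in PowerCLI roughly as follows. This is a minimal sketch rather than the full script: it assumes VMware PowerCLI is loaded, and the prompt wording and datacenter-selection logic are simplified placeholders.

```powershell
# Sketch: add an iSCSI target to the software iSCSI initiator on all hosts
# in a chosen datacenter. Assumes VMware PowerCLI is installed and loaded.
$vcenter  = Read-Host "vCenter server to connect to"
Connect-VIServer -Server $vcenter | Out-Null

$targetIP = Read-Host "IP address of the new iSCSI target"
$rescan   = Read-Host "Rescan each host after adding the target? (y/n)"

# If more than one datacenter exists, prompt for a selection
$datacenters = @(Get-Datacenter)
if ($datacenters.Count -gt 1) {
    $i = 0
    foreach ($dc in $datacenters) { "{0}: {1}" -f $i, $dc.Name; $i++ }
    $datacenter = $datacenters[[int](Read-Host "Select a datacenter by number")]
} else {
    $datacenter = $datacenters[0]
}

# Add the dynamic discovery (Send Targets) entry to the software iSCSI HBA
# on every host in the datacenter, then optionally rescan
foreach ($vmhost in Get-VMHost -Location $datacenter) {
    $hba = Get-VMHostHba -VMHost $vmhost -Type IScsi |
           Where-Object { $_.Model -match "Software" }
    New-IScsiHbaTarget -IScsiHba $hba -Address $targetIP -Type Send | Out-Null
    if ($rescan -eq "y") {
        Get-VMHostStorage -VMHost $vmhost -RescanAllHba | Out-Null
    }
}

Disconnect-VIServer -Confirm:$false
```

As noted above, the `$datacenter` selection block is the piece you could reuse against `Get-Cluster` to scope the change to a single cluster instead.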




Add NFS Datastore to all Hosts in a Chosen Cluster

Have you ever wished you could add a datastore to all hosts for a given cluster at once, rather than having to add it to each host one-by-one? This PowerCLI script was created to accomplish just that.

It’s pretty straightforward – run it and you’ll be prompted for some input, namely:

  • Name of a vCenter
  • If more than one cluster exists in that vCenter, a cluster selection
  • IP of NAS hosting the NFS export you wish to mount
  • NFS export path
  • Datastore name

Note: As the script runs, you’ll see some red error text. This is the script returning an error when getting a datastore with the name provided. There are better (far more elegant) ways to do this; I’ll update the script with a better method down the road…
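A minimal sketch of such a script might look like the following. It assumes VMware PowerCLI is loaded, and the prompt wording is a placeholder; the `-ErrorAction SilentlyContinue` check is one way to suppress the red error text mentioned above.

```powershell
# Sketch: mount an NFS export as a datastore on all hosts in a chosen cluster.
# Assumes VMware PowerCLI is installed and loaded.
$vcenter = Read-Host "vCenter server to connect to"
Connect-VIServer -Server $vcenter | Out-Null

# If more than one cluster exists, prompt for a selection
$clusters = @(Get-Cluster)
if ($clusters.Count -gt 1) {
    $i = 0
    foreach ($cl in $clusters) { "{0}: {1}" -f $i, $cl.Name; $i++ }
    $cluster = $clusters[[int](Read-Host "Select a cluster by number")]
} else {
    $cluster = $clusters[0]
}

$nasHost = Read-Host "IP of the NAS hosting the NFS export"
$nfsPath = Read-Host "NFS export path (e.g. /vol/datastore1)"
$dsName  = Read-Host "Datastore name"

foreach ($vmhost in Get-VMHost -Location $cluster) {
    # Silently skip hosts that already have the datastore mounted,
    # instead of letting Get-Datastore throw a red error
    if (-not (Get-Datastore -VMHost $vmhost -Name $dsName `
              -ErrorAction SilentlyContinue)) {
        New-Datastore -Nfs -VMHost $vmhost -Name $dsName `
            -NfsHost $nasHost -Path $nfsPath | Out-Null
    }
}

Disconnect-VIServer -Confirm:$false
```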




Script to Backup MySQL Database

Being that this blog is “Proudly Powered by WordPress,” which uses a MySQL database, I wanted to find a way to schedule regular backups of MySQL databases. I am not that well versed with MySQL – all of the DBs I work with professionally are either SQL Server or Oracle – so I looked to MySQL Workbench to schedule regular backups, and to my dismay, you can’t schedule backups there.

I am not sure if this is something that Oracle has stripped out, or what, but it’s pretty crazy. So, I created this quick and easy PowerShell script to use mysqldump.exe to export the DB to a specified location, and clean up any backups older than one month.

Use the Task Scheduler to create a task to execute this script daily or weekly, and you’re set with your local backups of your MySQL DB.
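A minimal version of such a backup script might look like this; the paths, credentials, and database name are placeholders you’d swap for your own.

```powershell
# Sketch: dump a MySQL database with mysqldump.exe and prune old backups.
# All paths, credentials, and the database name below are placeholders.
$mysqldump = "C:\Program Files\MySQL\MySQL Server 5.5\bin\mysqldump.exe"
$backupDir = "D:\Backups\MySQL"
$database  = "wordpress"
$user      = "backupuser"
$password  = "secret"

# Export the database to a timestamped .sql file
$stamp    = Get-Date -Format "yyyyMMdd-HHmmss"
$dumpFile = Join-Path $backupDir "$database-$stamp.sql"
& $mysqldump --user=$user --password=$password $database |
    Out-File -FilePath $dumpFile -Encoding utf8

# Clean up backups of this database older than one month
Get-ChildItem -Path $backupDir -Filter "$database-*.sql" |
    Where-Object { $_.LastWriteTime -lt (Get-Date).AddMonths(-1) } |
    Remove-Item
```

Point a daily or weekly Task Scheduler job at this file (e.g. `powershell.exe -File backup-mysql.ps1`) and the pruning keeps the backup folder from growing without bound.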

Hope someone new to MySQL stumbles across this and saves themselves the time it took me to realize that this functionality isn’t in MySQL Workbench and to write the script. Enjoy!



Binding the ESX Software iSCSI Initiator to Two (or more) VMkernel Ports

If you are running vSphere in an iSCSI environment, I am sure that you’ve read the iSCSI SAN Configuration Guide (4.1 here and 4.0 here). One of the major functionality improvements made from VI3 to vSphere 4 was the multipathing capabilities of the software iSCSI initiator on the ESX host.

In order to get the most throughput out of your ESX software iSCSI initiator, you need to bind it to two separate physical NICs. To do this you need to create two VMkernel port groups, make only one NIC active for each port group, then get on the ESX CLI and bind the software initiator to both VMkernel port groups. 

The details are as follows, and assume you already have an iSCSI initiator configured on the host. You will need:

  1. Two pNICs to be used for the iSCSI traffic
  2. An additional IP address in the iSCSI network

To bind the iSCSI software initiator to both pNICs on a CLASSIC vSWITCH, follow these steps:

  1. Login to the vCenter or ESX Server with the vSphere client and go to the networking tab of the host you need to configure
  2. If you don’t already have both pNICs added to your iSCSI vSwitch, add them now
  3. Create a new VMkernel portgroup on your vSwitch (VMkernel 2)
  4. On the next screen give the VMkernel an appropriate IP address and finish the wizard
  5. After the VMkernel portgroup has been created, go into the properties of your original VMkernel port group, and go to the NIC Teaming tab
  6. Check the box to override the vSwitch NIC failover order, moving one of the pNICs into the “Unused Adapters” grouping
  7. Now open the properties for the VMkernel 2 port group and change the NIC teaming policy so that the pNIC set to “Unused Adapters” in step 6 is set to “Active Adapters”, and the pNIC that was “Active Adapters” in step 6 is set to “Unused Adapters” here on VMkernel 2.
  8. Open a putty session to your host and type the following commands, where vmhba32 is your iSCSI initiator (you can find this on the storage adapters section on the configuration tab of your host):
    1. esxcli swiscsi nic add -n vmk0 -d vmhba32
    2. esxcli swiscsi nic add -n vmk1 -d vmhba32
  9. Verify that both portgroups are configured for the software initiator
    1. esxcli swiscsi nic list -d vmhba32
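Steps 2–7 above can also be scripted with PowerCLI instead of clicked through in the vSphere client. This is a sketch with placeholder host, port group, IP, and vmnic names; the esxcli binding in steps 8–9 still happens over SSH as shown above.

```powershell
# Sketch: create the second VMkernel port and pin one pNIC to each
# VMkernel port group. Host, port group, IP, and vmnic names are placeholders.
$vmhost  = Get-VMHost "esx01.example.com"
$vswitch = Get-VirtualSwitch -VMHost $vmhost -Name "vSwitch1"

# Steps 3-4: create the second VMkernel port group with its own IP
New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vswitch `
    -PortGroup "iSCSI-VMkernel2" -IP 10.0.0.12 -SubnetMask 255.255.255.0 | Out-Null

# Steps 5-7: override the failover order so each port group uses one pNIC
Get-VirtualPortGroup -VMHost $vmhost -Name "iSCSI-VMkernel1" |
    Get-NicTeamingPolicy |
    Set-NicTeamingPolicy -MakeNicActive vmnic2 -MakeNicUnused vmnic3

Get-VirtualPortGroup -VMHost $vmhost -Name "iSCSI-VMkernel2" |
    Get-NicTeamingPolicy |
    Set-NicTeamingPolicy -MakeNicActive vmnic3 -MakeNicUnused vmnic2
```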

If you’re using a vNetwork Distributed Switch, the process is nearly the same, only the steps for changing the pNIC failover policy are in a slightly different place.

  1. Go to the networking section in your vCenter
  2. Go to your iSCSI vDS, and create a new port group for your second VMkernel
  3. You can set the teaming and failover now: right-click each VMkernel port group, go to “Edit Settings”, select the “Teaming and Failover” section, and set one dvUplink in each port group as “Unused”
  4. Now you can go to each host and configure a virtual adapter in the VMkernel 2 portgroup
  5. At this point you should have two VMkernel ports, each using only one pNIC, so you can go into the CLI and execute the following commands to bind the software iSCSI initiator to the two VMkernel ports.
    1. esxcli swiscsi nic add -n vmk0 -d vmhba32
    2. esxcli swiscsi nic add -n vmk1 -d vmhba32
  6. You can verify that the settings are in place with
    1. esxcli swiscsi nic list -d vmhba32

That’s it – you will now be load balancing the software iSCSI initiator traffic over two pNICs. You can verify the load by opening a putty session to the host, entering into ESXTOP, hitting “n” and monitoring your vmk0 and vmk1 traffic.

HP Blades on vSphere 4.1…What’s the Deal?

I have 3 datacenters which run exclusively HP Blades for our vSphere infrastructure. I just picked up the second note hot off the Twitter wire about potentially serious issues, which has me wondering, “How many issues have NOT been reported yet?”

The first issue that I came across was the lack of the HP NMI driver for the BL460c blades. Although this is a very simple resolution, it still requires that you reboot the host, which means that no VMs can be running on it. I know that for most people this is not a big deal: simply put your host into maintenance mode, install the patch, and reboot. However, for some of us, there are still lingering applications in our network which cannot be vMotioned, so the maintenance for installing this driver is still rather painful when you have to coordinate the move of hundreds of VMs with their application owners.

The second issue is one with the native VMware bnx2x driver for Broadcom NICs. Apparently, there is an issue when IP Checksum Offload Support is enabled, which causes the machine to halt with the Purple Screen of Death (PSOD). The workaround is to disable IP Checksum Offload Support. You can read the details here, although the word on the street is that this solution is NOT persistent after rebooting your host.

For details on how to install the HP NMI driver on ESX 4.1, please read on…
