Heads up: ESXi 5.5 crashes Linux iscsi_trgt

If you’re running standard Linux on your homelab storage box with iSCSI (ietd, iscsi_trgt), you need to take action before deploying ESXi 5.5 in your environment. The new ESXi release apparently issues SMART commands against iSCSI targets, which triggers a bug in iscsi_trgt. I’m using Debian 7.2 on my lab NAS, which ships with iscsitarget. Once an ESXi 5.5 host tried to access an iSCSI LUN, the iSCSI subsystem crashed completely. You’ll see messages like these: Continue reading

ESXi 5.5 and Realtek 8168/8169

VMware has dropped support for Realtek R8168 and R8169 NICs in ESXi 5.5, as reported on several blogs even before vSphere 5.5 became GA. Really bad news for all of us whitebox homelab owners who use these cheap cards and onboard chips to increase the number of NICs available for vSphere networking. Fortunately the drivers included in ESXi 5.1 still work with 5.5, and there are good instructions by Vladan Seget and Erik Bussink on how to create a custom installation ISO. But if you’re using a Kickstart server (which I still recommend, even with Auto Deploy and even for a small home lab), it is easier still. Continue reading

ZyXEL 1910-24 password pitfall

Just swapped my old lab switch, a LevelOne GSW-1676, for a ZyXEL GS1910-24. Really nice piece of hardware, fanless, snappy GUI, good VLAN management, lots of features, including IPv6 support. And cheap, so basically I’m very satisfied.
Until I changed the default admin password. Of course I chose a long phrase with some special characters, including a colon, which the GUI accepted without any complaint. But this locked me out of the switch.
Continue reading

How to clear phantom alerts on VMs

Under certain circumstances some or all VMs in a vSphere environment may show the red alert icon and an alert status in a VM list, but no alarm in the VM properties that could be acknowledged or cleared. These “phantom alerts” have been an issue for quite some time now and are still not fixed.
They seem to be caused by an interruption of the connection to the datastores the VMs reside on, for example when an ESXi host boots while the storage is not yet up and running. The VMs are first displayed as “disconnected”, but after the datastore connection is restored, the alert status remains.

There is no straightforward way to resolve this, but a vMotion clears the status, most likely because the VM registration is updated. So basically just move the VMs around to clear the alerts; the quickest way is to put the hosts into maintenance mode and evacuate them. A host reboot is not required.
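If you want to script the evacuation, here is a minimal sketch using the govc CLI (from the govmomi project). The host names are hypothetical, and the script assumes govc is installed and GOVC_URL/GOVC_USERNAME/GOVC_PASSWORD are exported; by default it only prints the commands (dry run), so set DRYRUN to empty to actually run them against your vCenter:

```shell
# Evacuate each host in turn to force a vMotion of its VMs (DRS handles the
# migrations on maintenance.enter), then bring the host back.
DRYRUN="${DRYRUN:-echo}"    # default: dry run, just print the commands

for host in esx01.lab.local esx02.lab.local; do   # hypothetical host names
  $DRYRUN govc host.maintenance.enter -host "$host"  # DRS evacuates the VMs
  $DRYRUN govc host.maintenance.exit  -host "$host"  # host back into service
done
```

With DRS in fully automated mode this moves every VM at least once, which is exactly what clears the phantom alert.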

Heads up: vShield Manager killed by update

There seems to be a serious problem in the update process of the vShield Manager appliance. I had to update a vSphere 5.0 / vCloud Director 1.5 environment running vSM 5.0.1-638924 to the latest releases (vSphere 5.1 update 1a / vCD 5.1.2 / vSM 5.1.2a). Disk space on the vSM was sufficient, the update itself using the VMware-vShield-Manager-upgrade-bundle-5.1.2-943471.tar.gz went fine, the appliance rebooted and was busy with the upgrade process for some time, as expected. Afterwards the UI showed the new release, the VM was idling, update apparently successful.
But after a manual reboot (due to scheduled maintenance of the whole environment), performed before migrating the configuration to a new vSM instance, the appliance would not come up anymore. The disk driver encountered fatal I/O errors, possibly due to file system or partitioning damage (although the symptoms do not clearly indicate that), finally resulting in a kernel panic: Continue reading

IPv6 Hands-On Lab

Just to keep you updated (no, this blog is not a flash in the pan!):

A colleague and I have meanwhile set up an IPv6 test lab on the same hardware I use in my home lab. And this means a complete setup: DHCPv6, RA, static IPv6, tunnels, firewalling, a broad range of client OSes – the whole nine yards.

It’s going to take some time to write a series of blog posts describing the setup, and I’m still tempted to use IPv6 for the vSphere infrastructure as well. Maybe even for iSCSI, although it’s no longer officially supported…

So stay tuned, there’s a lot of stuff coming soon! I just have to finish the setup, the writing – and my holidays. 🙂

Workaround for the vCenter Server appliance 5.1U1 update delay

The update process from 5.1.x to 5.1 Update 1 contains a serious flaw. The update may take more than 45 minutes; some users report more than an hour. VMware even mentions this in their release notes:

Update of vCenter Server Appliance 5.1.x to vCenter Server Appliance 5.1 Update 1 halts at web UI while showing update status as installing updates*
When you attempt to upgrade vCenter Server Appliance 5.1.x to vCenter Server Appliance 5.1 Update 1, the update process halts for nearly an hour and the update status at Web UI shows as installing updates. However, eventually, the update completes successfully after an hour.

Workaround: None.

The generic update documentation, KB article 2031331 “Updating vCenter Server Appliance 5.x”, mentions even longer durations:

The update process can take approximately 90 to 120 minutes. Do not reboot until the update is complete.

Well, there is a workaround, even a very simple one:

Minimizing the vCenter memory footprint – Appliance

In my previous post I described how to reduce the vCenter memory requirements on Windows. Basically the same is true for the vCenter appliance, but the files are a bit harder to find. Besides that, the same disclaimer applies: this is in no way supported by VMware.

Single Sign On:

  • /usr/lib/vmware-sso/bin/setenv.sh
  • Change “JVM_OPTS” (default: “-XX:MaxPermSize=256M -Xms2048m -Xmx2048m”) to “-XX:MaxPermSize=128M -Xms128m -Xmx256m”

Inventory Service:

  • /usr/lib/vmware-vpx/inventoryservice/wrapper/conf/wrapper.conf
  • Set wrapper.java.maxmemory (default: “3072”) to “384” (MB)
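The two edits above can be applied with sed. The following sketch runs against throwaway copies of the files (with the default lines as listed above), so you can verify the expressions before touching the real appliance paths:

```shell
# Apply the two memory-footprint edits to sample copies of the files.
tmp=$(mktemp -d)

# 1) Single Sign On: shrink the JVM heap in setenv.sh
cat > "$tmp/setenv.sh" <<'EOF'
JVM_OPTS="-XX:MaxPermSize=256M -Xms2048m -Xmx2048m"
EOF
sed -i 's/-XX:MaxPermSize=256M -Xms2048m -Xmx2048m/-XX:MaxPermSize=128M -Xms128m -Xmx256m/' "$tmp/setenv.sh"

# 2) Inventory Service: cap the wrapper's maximum heap in wrapper.conf
cat > "$tmp/wrapper.conf" <<'EOF'
wrapper.java.maxmemory=3072
EOF
sed -i 's/^wrapper\.java\.maxmemory=.*/wrapper.java.maxmemory=384/' "$tmp/wrapper.conf"

cat "$tmp/setenv.sh" "$tmp/wrapper.conf"
```

On the appliance itself you would run the sed commands against /usr/lib/vmware-sso/bin/setenv.sh and /usr/lib/vmware-vpx/inventoryservice/wrapper/conf/wrapper.conf instead – after taking a backup, and keeping in mind that this is unsupported.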

Continue reading

Minimizing the vCenter memory footprint – Windows

With vSphere 5.1 the memory requirements of the vCenter server have increased dramatically. If all components reside on a single Windows server [VM], even the smallest inventory size requires 10 GB of memory, according to the VMware Installation and Setup guide. And although this document states a minimum of 4 GB of memory for the vCenter Appliance, it is in fact configured with 8 GB RAM after deployment. This will most likely exceed or significantly reduce the resources of small home labs or all-in-one setups with VMware Workstation.

Is this necessary? Nope. But due to the default JVM memory settings, simply reducing the VM’s RAM could lead to swapping and, obviously, hurt overall performance. The following adjustments to the application settings allow you to reduce the VM memory to 4-5 GB. This post covers a Windows-based vCenter server; the following post covers the Appliance.