Server SAN – future of storage or a niche solution?

There’s been quite a lot of hype about the so-called “Server SAN”, which basically denotes a software-defined storage solution built from commodity server hardware carrying a significant number of disks and SSDs. There’s an excellent in-depth description by Stuart Miniman on Wikibon.

Analysts see high market potential for this type of storage and even expect it to largely replace traditional enterprise storage within about 15 years. But will this really happen? Do we all need to change our storage paradigms and switch to Server SAN?

No, I do not think so. Continue reading

vSphere & IPv6: how VMRC breaks it all

It’s been a bit quiet around my vSphere pure IPv6 lab setup lately. At first it seemed to be surprisingly easy, but the devil’s in the details…

As already posted, the initial setup went without any problems. The setup required IPv4 only for the kickstart process, or, to be more precise, for the first boot and installation phase. During the final configuration script the DHCP IPv4 address was dropped, leaving the ESXi hosts running IPv6 only: Continue reading
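
To give an idea of what that final step looks like, here is a rough sketch of the relevant commands in the kickstart %firstboot section. The vmk0 interface and the addresses are just examples from my lab prefix, and the exact esxcli option names are from memory, so verify them against your ESXi 5.5 build:

# sketch of the %firstboot section (busybox interpreter), not the complete script
# add a static IPv6 address to the management vmkernel interface (example address)
esxcli network ip interface ipv6 address add -i vmk0 -I 2001:db8:1ab::11/64
# add an IPv6 default route (example gateway)
esxcli network ip route ipv6 add -n default -g 2001:db8:1ab::1
# finally drop the DHCP-assigned IPv4 address, leaving the host IPv6 only
esxcli network ip interface ipv4 set -i vmk0 -t none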

Heads up: Debian 7.4 igb driver broken

Last Friday I updated the Debian 7 installation on my homelab NAS box. Prior to that I had updated the firmware of its (IBM OEM) i340-T4 NIC with version 19.0 of the Intel “Ethernet Connections Boot Utility”, as explained earlier. After a reboot all of the i340 interfaces were gone. The card was listed in lspci and the dmesg output, but the driver wasn’t loaded. Manual loading didn’t work either:

Mar 21 15:40:58 labnas kernel: [ 2172.966771] Intel(R) Gigabit Ethernet Network Driver - version 3.2.10-k
Mar 21 15:40:58 labnas kernel: [ 2172.966774] Copyright (c) 2007-2011 Intel Corporation.
Mar 21 15:40:58 labnas kernel: [ 2172.966823] igb 0000:01:00.0: setting latency timer to 64
Mar 21 15:40:59 labnas kernel: [ 2173.790497] igb: probe of 0000:01:00.0 failed with error -13
Mar 21 15:40:59 labnas kernel: [ 2173.790519] igb 0000:01:00.1: setting latency timer to 64
Mar 21 15:41:00 labnas kernel: [ 2174.614178] igb: probe of 0000:01:00.1 failed with error -13
Mar 21 15:41:00 labnas kernel: [ 2174.614198] igb 0000:01:00.2: setting latency timer to 64
Mar 21 15:41:00 labnas kernel: [ 2175.437858] igb: probe of 0000:01:00.2 failed with error -13
Mar 21 15:41:00 labnas kernel: [ 2175.437880] igb 0000:01:00.3: setting latency timer to 64
Mar 21 15:41:01 labnas kernel: [ 2176.261527] igb: probe of 0000:01:00.3 failed with error -13

So it seems the updated igb driver shipped with Debian 7.4 is broken. Continue reading
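
For the record, error -13 is -EACCES, i.e. the in-kernel 3.2.10-k driver refuses to probe the card. One possible workaround (an assumption on my part, not a verified fix for this exact problem) is to build Intel’s current out-of-tree igb driver instead of using the one shipped with the kernel:

# build Intel's out-of-tree igb driver (the version number is just an example)
apt-get install build-essential linux-headers-$(uname -r)
tar xzf igb-5.1.2.tar.gz
cd igb-5.1.2/src
make install          # build and install the module (see the README in the driver package)
rmmod igb; modprobe igb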

Reducing the vCenter memory – vCSA 5.5

An update to this post on how to minimize the memory requirements of the vCenter Server Appliance was long overdue for release 5.5. Sorry for the delay; I was busy struggling with a pure IPv6 setup (and found out that the VMRC plugin as well as the Web Client break the whole thing – more on that soon).

Anyway, let’s see what changed with 5.5 with regard to the memory requirements and JVM parameters. Amazingly, quite a lot, but in a good way. Some of my recommendations are obsolete now since VMware changed the settings to more or less the same values I proposed. Must be a coincidence, of course. At least it is a nice confirmation that my settings were not that bad. 🙂

I had to update this post for vCSA 5.5 update 2 since the settings for the initial 5.5 release caused services to fail in 5.5U2. Overall, the memory requirements were significantly increased.

Continue reading
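
If you want to check the current values on your own appliance, the per-service Java heap limits are defined in the wrapper.conf files of the individual services. This is only a sketch and assumes the services are managed by the Tanuki wrapper with its wrapper.java.maxmemory property; adjust the paths to your build:

# list the configured Java heap limits (in MB) of the appliance services
find /usr/lib -name wrapper.conf 2>/dev/null | xargs -r grep -H "wrapper.java.maxmemory"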

IPv6 for the simple Debian based DNS server

The last post was about a simple DNS server installation using Debian / Raspbian and bind. I already mentioned that this approach has the advantage of greater flexibility and more features than the DNS functionality that may come with your NAS or router. That is particularly true for IPv6, which is what we are going to add to the DNS server now. Continue reading
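
In a nutshell (names and addresses below are illustrative, the full post walks through the details), IPv6 support means letting bind listen on IPv6, adding AAAA records next to the existing A records, and creating a reverse zone under ip6.arpa for the lab prefix 2001:db8:1ab::/48:

# /etc/bind/named.conf.options (excerpt)
options {
    listen-on-v6 { any; };               // accept queries over IPv6
};

; forward zone file (excerpt): AAAA record next to the existing A record
labnas          IN  AAAA  2001:db8:1ab::10

# /etc/bind/named.conf.local (excerpt): reverse zone for 2001:db8:1ab::/48
zone "b.a.1.0.8.b.d.0.1.0.0.2.ip6.arpa" {
    type master;
    file "/etc/bind/db.2001.db8.1ab";
};

; /etc/bind/db.2001.db8.1ab (excerpt): PTR record for 2001:db8:1ab::10
0.1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0  IN  PTR  labnas.lab.example.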

A simple Debian based DNS server

The setup of my homelab, especially the IPv6-only configuration I’m running right now, requires a DNS server. To me as a Unix guy it was obvious that this basic infrastructure service needs to be deployed in any case. But some discussions on Twitter, especially with William Lam and on his blog, indicated that this may not be a no-brainer for those who follow VMware closely but are not that familiar with DNS. William pointed out that DNS is not a hard requirement, and I appreciate that he takes the time to describe how to run VMware products without a DNS server. I fully trust him that this is possible (if there’s one person I would trust on that, then it’s him!). But for many reasons, including the official VMware vSphere documentation, I still suggest deploying a DNS server even for small homelab or test environments, particularly if you’re trying to get familiar with IPv6. Continue reading
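
As a teaser for the full post: on Debian the basic installation really boils down to a handful of steps. The zone name and file below are just placeholders:

# install bind and the usual client tools
apt-get install bind9 dnsutils

# /etc/bind/named.conf.local (excerpt): define a forward zone for the lab
zone "lab.example" {
    type master;
    file "/etc/bind/db.lab.example";
};

# after creating the zone file: check the configuration and reload bind
named-checkconf
named-checkzone lab.example /etc/bind/db.lab.example
service bind9 reload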

Services don’t bind to an IPv6 address

If you’re using a Linux system to provide IPv6 services, you may notice that some services don’t bind to a specific IPv6 address during system boot. Usually the symptom is a syslog message in which the daemon states “bind to port [#] on [IPv6 address] failed: Cannot assign requested address“, while a simple restart of the service after booting has finished solves the problem.
In my case the issue occurred with sshd and sometimes iscsi_trgt on the Debian Wheezy NAS system, which is also the DNS, DHCP, NTP, Kickstart etc. server in my lab environment.

The reason is that IPv6 addresses are not instantly up and usable once the interface is configured. One of the numerous new features of IPv6 is the extended set of states and scopes for interfaces and addresses. Even static global IPv6 addresses go through a “tentative” phase, in which the host verifies the uniqueness of the address (Duplicate Address Detection, see RFC 4862). During this phase, which can take a few seconds, the address is not yet finally assigned to the interface and therefore not usable. Daemons trying to bind to that address will fail with the mentioned error message. Continue reading
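
One pragmatic workaround (a sketch of my own, not necessarily the approach described in the full post) is to make ifup wait until Duplicate Address Detection has finished, so the address has left the tentative state before the init scripts start the daemons:

# /etc/network/interfaces (excerpt): eth0 and the address are example values
iface eth0 inet6 static
    address 2001:db8:1ab::10
    netmask 64
    # wait up to ~10 seconds until no address on eth0 is tentative anymore
    post-up for i in $(seq 1 10); do ip -6 addr show dev eth0 | grep -q tentative || break; sleep 1; done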

2001:db8:1ab:: up & running

The lab is (slowly, I have to admit) turning into an IPv6-only configuration. The first question might be “why would somebody want to do that?” Well, because it’s interesting, and that’s the main reason to have a lab anyway. But beyond that I think every company should do something like this right now, to test their services and hardware (!) for IPv6 readiness. Now.

The basic setup in my lab is working fine, which means the firewall configuration is done and the local DNS is set up. For a start I’ll be using static IPv6 addresses and keep Router Advertisements and DHCPv6 for later. The NAS box (Debian 7.2) is configured with four interfaces for storage, two in VLAN 24 for iSCSI and two in a LACP setup in VLAN 25 for NFS, everything dual-stacked. The Kickstart configuration and scripts are modified as well, although I suppose the PXE boot itself will have to stay on IPv4, since there’s no IPv6 support in the firmware. Continue reading
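
For reference, a dual-stack static configuration on Debian 7 simply stacks an inet6 stanza on top of the inet one. The interface name and addresses below are examples (the IPv4 address is purely illustrative):

# /etc/network/interfaces (excerpt): one of the iSCSI interfaces in VLAN 24
auto eth1
iface eth1 inet static
    address 192.168.24.10
    netmask 255.255.255.0

iface eth1 inet6 static
    address 2001:db8:1ab:24::10
    netmask 64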

Choose the right quad port NIC

With vSphere 5.5 VMware has dropped support for some of the hardware that enthusiasts like me use in their home labs (which basically means I consider people who spend quite a few bucks on a home lab to be enthusiasts – are you really sending the right message to us, VMware?). I cannot approve of that – to me it’s just a bad move to remove drivers for hardware that would otherwise work fine, even if it was never supported. Unsupported merely means you’re on your own if something doesn’t work, and VMware Support will simply tell you that. Nothing to scare a whitebox user away.

Anyway. I had to face the fact that the onboard and the additional PCI Realtek NICs of my lab ESXi host would become unusable sooner or later, even if an older driver could be injected into the ESXi 5.5 image, as I explained in a previous post. So I decided to find a setup that would provide at least five network interfaces with presumably long-term support. Continue reading
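
When evaluating candidate NICs, what ultimately matters is the PCI vendor/device ID, because that is what the VMware Compatibility Guide lists. A quick way to read those IDs from a running Linux box (a generic check, not specific to any particular card):

# show all Ethernet controllers with their PCI [vendor:device] IDs
lspci -nn | grep -i ethernet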