Originally written by me for a “Spotlight on IT” series on January 24th 2013.
https://community.spiceworks.com/topic/295145-a-virtual-balancing-act-big-business-features-smb-budget
Six years ago a client was forced to consider a new computer for a legacy app his business was running. The app only ran on Windows 95, its maker had gone the way of the dodo, and the new computer coming in didn’t support Win95. This is where I got to see first-hand how simple, powerful and elegant virtualization was: I set up a new Windows XP machine and, using Virtual PC, created a Windows 95 virtual machine on the workstation. I installed the legacy app, copied over all the data and, voilà, my client was back in business.
Fast-forward six years: virtualization has matured enormously, but, working in the SMB space, I still didn’t quite see how I could make it all fit. My biggest headache was, and is, that all the enterprise features that are so cool (e.g., high availability and fault tolerance) are priced for the enterprise. As a systems administrator for a small company, I get a small IT budget, so most of what I do with virtualization has to fit into that scope. Yet by the same token, I am in dire need of those same enterprise features.
Proving the need for virtualization
My first challenge was not the technical side of virtualization, but convincing management that there was a real need for it. To build a demonstration, I had to stand up a virtual server without any sort of budget for it (quite a trick if you ask me). Well, it so happened that I had a “spare” 2U server hanging around in case of hardware failure of an existing server. I installed the free ESXi (v3) and bought an inexpensive 3Ware Sidecar with an LSI RAID controller (which I needed in any case for my backup project), attached it to an old PC and, using Openfiler, made my first “SAN” (really it was just a direct-attached storage device). With this I was able to throw a few virtual servers onto the ESXi host and launch into my presentation.
Persuading management was not easy. They were old school: ledger paper, slide rules and big-iron servers. Trying to explain the concept of virtualization (the server is just a file) just didn’t fly. The managers were used to seeing and dealing with MASS. The more solid it was, the more real it was. I tried to explain virtualization in terms of Moore’s law: hardware has become so much more powerful that you’re paying for 100% of a server but only using 5% of it. Their eyes just glazed over at most of this, but they could understand that you’re wasting 95% of your investment by buying one big server for each role you need performed.
Instead, I explained virtualization in terms of an apartment building. The physical server was the big building and the virtual servers were the apartments, each one separated from the others but all sharing the same common hardware (water pipes, electrical, and so on). I then demonstrated how easy it was to set up, copy and move virtual servers around using the vSphere client. I powered on a virtual server, and it looked just like a computer powering on. I then logged into a virtual server using RDP, and for all intents and purposes it looked just like the real thing. They couldn’t tell the difference, which was exactly the point I was trying to make. I think more than anything the visual demo is what convinced them to let me go ahead with my plans.
Small budgets, big challenges
As I mentioned, my biggest challenge with virtualization is providing all the cool features on an SMB budget (emphasis on the small). For this, planning was VITAL. I specifically needed to figure out:
- Which processes and servers had to be virtualized, and why
- How to back up and recover VMs in the event of a disaster
- Which hardware choices would give the best performance
I found that in order to fit my budget I had to work with open source solutions. I also found that the Spiceworks Community has been, and remains, a great resource pool to draw from. Nowadays, when I Google for a solution, my answer invariably comes from a past Spiceworks post!
I made careful comparisons between VMware, Citrix/Xen and Microsoft’s Hyper-V. VMware at that time was far more mature, offered more features and had much better support and documentation. By working with sysadmin friends who were already using VMware, I got closer insight into that particular product, so that’s what I ran with.
There was definitely a myriad of data to sort through. My whole approach to virtualization was aligned with business continuity. Deploying, backing up and recovering virtual machines is so much easier and more convenient than moving the same amount of stuff around on big, clunky hardware and doing traditional restores from tape or hard drives (after the server OS has been reinstalled and all the applications set back up again).
Over a period of two years I was able to move more and more items into my virtual infrastructure. I started out with all my support systems: syslog servers, MySQL servers and Nagios (for system monitoring). This was followed by a redundant DNS server and a VPN server. Then I moved our line-of-business application systems, and finally I migrated our physical SBS 2003 server to a virtual SBS 2011 server.
There are a few instances where open source solutions don’t quite get the job done yet. One was backup and recovery software. On ESXi version 3, I was able to use the ghettoVCB backup script to back up my VMs. However, in version 4, the VMware storage APIs (among others) changed. So, to provide backup and recovery, I got budgetary approval to purchase Veeam Backup & Replication (though I also recently had the privilege of trying out Unitrends’ backup and recovery solution, which turned out to be very nice; I’m now using it on a different project). Since then, Veeam has also come out with a free product, which is very useful to small operations such as mine.
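For anyone curious what a ghettoVCB-style run looks like, here’s a minimal sketch of how I might kick one off from a management box over SSH. The host name, install path and VM list file below are placeholders rather than my actual setup, and you should check the script’s own documentation for the exact flags it supports.

```python
# Minimal sketch: trigger a ghettoVCB-style backup on an ESXi host over SSH.
# The host, paths and VM list are hypothetical placeholders, not a real setup.
import subprocess

ESXI_HOST = "root@esxi01.example.local"                       # placeholder ESXi host
SCRIPT = "/vmfs/volumes/datastore1/ghettoVCB/ghettoVCB.sh"    # assumed install path
VM_LIST = "/vmfs/volumes/datastore1/ghettoVCB/vms_to_backup"  # one VM name per line


def run_backup() -> int:
    """Run the backup script on the ESXi host and return its exit code."""
    cmd = ["ssh", ESXI_HOST, SCRIPT, "-f", VM_LIST]
    return subprocess.run(cmd).returncode


if __name__ == "__main__":
    raise SystemExit(run_backup())
```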
Remaining challenges
I now have a few physical servers left that present some unique challenges, such as telephony devices (making outbound analog calls from a VM). But, no doubt, the Spiceworks Community is filled with good and helpful resources along those lines. The last project I am working on is building a decent but inexpensive SAN using a couple of Supermicro servers with a SAS backplane, a couple of LSI controllers, SSDs for caching and, of course, a few 15K SAS drives.
At first Openfiler looked attractive, even if you pay the 1,000 euros for the Advanced iSCSI Target plugin. Though I have to admit, looking at Windows Server 2012, I’m starting to think it might make a better iSCSI target than Openfiler, with built-in data deduplication among other attractive features. I’m now, with some help from the Spiceworks Community, looking into FreeBSD and ZFS (which is way cool) and am currently building a set of SANs. (The specs include a Supermicro MBD-X9DR3 motherboard, dual Intel Xeon E5-2620 CPUs, 128 GB of DDR3 RAM, 16 1TB Seagate 7,200 RPM drives and a Supermicro CSE-836E16 3U rack-mount chassis. ZFS will use the 128 GB of RAM as its level-1 cache (the ARC), and ZFS datasets are fully replicable using a block-level send/receive mechanism.)
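To give a feel for that send/receive mechanism, here’s a rough sketch of the kind of script I have in mind for pushing incremental snapshots from one SAN to its standby. The dataset, host and snapshot names are just illustrative placeholders, and it assumes an initial full send has already seeded the standby box.

```python
# Rough sketch: incremental ZFS replication to a standby SAN via send/receive.
# Dataset, host and snapshot names are hypothetical; an initial full
# "zfs send" is assumed to have already seeded the remote dataset.
import subprocess
from datetime import datetime

DATASET = "tank/vmstore"                  # placeholder dataset holding VM storage
REMOTE = "root@san2.example.local"        # placeholder standby SAN
PREV_SNAP = "tank/vmstore@replica-prev"   # last snapshot present on both sides


def replicate() -> None:
    # Take a new point-in-time snapshot of the dataset.
    new_snap = f"{DATASET}@replica-{datetime.now():%Y%m%d-%H%M%S}"
    subprocess.run(["zfs", "snapshot", new_snap], check=True)

    # Stream only the blocks changed since the previous snapshot (-i)
    # and receive them into the same dataset on the standby box (-F).
    send = subprocess.Popen(["zfs", "send", "-i", PREV_SNAP, new_snap],
                            stdout=subprocess.PIPE)
    subprocess.run(["ssh", REMOTE, "zfs", "receive", "-F", DATASET],
                   stdin=send.stdout, check=True)
    send.stdout.close()
    send.wait()


if __name__ == "__main__":
    replicate()
```

In a real setup there would also be a little bookkeeping so the next run knows which snapshot to send incrementally from, but that’s the core of it.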
My current infrastructure now spans several sites and multiple hosts. In the next couple of years I am looking to expand into the VDI space. For this I have already looked at OpenThinClient, set up several working instances of it and demonstrated it to the management team. This is key, as in the next year or two we’ll be looking at refreshing all of our desktops at considerable cost. But if we go the VDI route we’re looking at significant cost savings and lower maintenance and support costs. At least, that’s the current theory.
I have learned much in moving my servers to a virtual infrastructure. No doubt there is a ton more to know, but these are indeed exciting times to be in IT.
– Thomas