Web Hosting the Traditional Way (1993-2006)
I don’t know about you, but my first web hosting was done at my house in 1993, over a private T1 to that location. I quickly learned this wasn’t going to work the first time I was away: I had to rebuild a disk array on one of the servers from 600 miles away while keeping the remaining servers hobbling along. My next solution, in 1997, was to move into a data center run by Global Center, who at the time was also handling a young startup called Netscape. I purchased a rack with a 2Mb/s commit at 95th-percentile billing and dual incoming 10Mb/s handoffs into my switch, for about $1200/month. I also had a T1 private line back to my house and another T1 back to a local business. I was forced to move my rack three times within that Global Center facility as they grew and expanded. Each time they gave me the new rack in advance, and I provisioned the services over without any disruption by using my backup switch and carefully verifying that the T1s were ready to go, which kept downtime to the approximately 2-3 minutes it took to reconnect the T1s.
Global Center was purchased by Exodus, and we moved to a new facility sometime in 2000-2001. This time I was being renumbered, and I had to move the two T1 tails to the new facility. I proceeded once again in a stepwise fashion: I purchased a few new servers and migrated onto them as part of the move. The hot cut of the T1s took about 5 minutes, so once again the move happened without service disruption, but at the cost of extra provisioning and planning time on my part. It took me almost two weeks of planning, testing, and shuffling services between servers in the two data centers. Exodus was then purchased by Savvis, but I was able to keep my rack.
Sometime around 2005, I decided to move to MCI after a long and tedious negotiation for cage space and bandwidth at 90% off list. The deal allowed for three cages and 15Mb/s at 95th-percentile billing over 100Mb/s handoffs, for $5000/month. That lasted about two years, until I decided that my testing of inexpensive, redundant solutions was working as well.
Web Hosting the Next Generation (2004-Present)
I began by overlapping services for a year while I tested a few options… I had been testing a root server (AKA dedicated server) with 1and1.com in their New York Level 3 data center, for which I paid $89/month and got OOB console access and external disk backup. It also included a configurable external firewall, which I used in conjunction with my own iptables firewall to keep the Windows probes off my server. As an aside, that server was up over 2000 days when I decommissioned it, and it was running Fedora Core 2. The server was a simple incoming mail relay and DNS server for me. My previous record was a FreeBSD 4.10 box that was up for 800 consecutive days, so this was also my first test of Linux.
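The post doesn’t show the actual firewall, but a minimal sketch of what an iptables policy for “keeping the Windows probes off” might look like is below. The port numbers (NetBIOS 135-139, SMB 445) are the standard Windows service ports; the rule set itself is my assumption, not the original configuration.

```shell
# Sketch of an iptables ruleset to drop the classic Windows-service probes.
# These rules are illustrative, not the author's original configuration.
iptables -A INPUT -p tcp -m multiport --dports 135:139,445 -j DROP
iptables -A INPUT -p udp -m multiport --dports 135:139,445 -j DROP

# Default-deny stance: allow replies to established connections,
# open only the services actually offered, drop everything else.
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT    # ssh
iptables -A INPUT -p tcp --dport 25 -j ACCEPT    # incoming mail relay
iptables -A INPUT -p tcp --dport 53 -j ACCEPT    # dns
iptables -A INPUT -p udp --dport 53 -j ACCEPT
iptables -P INPUT DROP
```

With a default-deny policy like this, the explicit DROP rules for the Windows ports mostly serve to cut logging noise; the real protection is the final `-P INPUT DROP`.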
My next purchase was a dedicated server from softlayer.com in late 2005. The server was provisioned in 2 hours and included a quad processor and 4GB of RAM. In addition, I had hardware RAID with battery backup on the disk controller for async writes, plus two VLANs: one for my OOB and backend private network from another provider, and the other for future servers and main access to the services. The OOB was IPMI console/SSH, which gave me the complete solution. Previously, I had been using Cisco 2511s for my OOB on some Sun Netras, and IPMI console/serial on some PCs. I had found a replacement for the dedicated rack of backup switches, routers, and servers: a single server that cost $289/month and was provisioned in 2 hours. I also tested Amazon’s AWS as a beta subscriber for some backup and redundancy options but never moved forward with it, as I had found cheaper solutions elsewhere.

My worry about reliability was quashed in the second week, when SoftLayer called to explain that they had put a new drive into my server after an operator doing rounds heard and saw an alarm originating from it. There was no outage, but when I followed up I found that they had not installed the management software that would have discovered the RAID failure. At that time I didn’t really know what was or wasn’t handled. They said they handled management of the server, but their definition is quite different from mine: when I think management, I am thinking you manage the services, whereas they are talking about ping checks and hardware replacement. With the boundaries defined, the next task was to see how reliable the server would be. Other than a small 5-minute outage that I scheduled to replace the battery on the disk controller card, the server hasn’t been down. I began to provision real services onto the SoftLayer infrastructure about 6 months later, after observing and understanding the infrastructure.
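The IPMI out-of-band access mentioned above is typically driven with a tool like `ipmitool`. The commands below are a sketch of the operations that replace a dedicated console server such as a Cisco 2511; the BMC address and credentials are placeholders, not details from the post.

```shell
# Sketch: common ipmitool operations for out-of-band server management.
# BMC address and username are placeholders (you will be prompted for a password).
BMC=10.0.0.10
USER=admin

ipmitool -I lanplus -H "$BMC" -U "$USER" chassis status       # power state
ipmitool -I lanplus -H "$BMC" -U "$USER" sel list             # hardware event log
ipmitool -I lanplus -H "$BMC" -U "$USER" sol activate         # serial-over-LAN console
ipmitool -I lanplus -H "$BMC" -U "$USER" chassis power cycle  # hard reboot
```

The `sel list` event log is exactly the sort of place a failed RAID member shows up, which is what the missing management software in the story above would have been watching.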
The first services were two VMware guests running Zimbra, and they are still in use.
I measured and watched the VMware Zimbra instances for about 2 years and began to wonder whether Xen would work as well, so in 2008 I found a Xen VPS with 2GB of RAM and 50GB of disk for $29/month. I initially used it as a backup Zimbra instance and as a DNS server, and I never noticed any downtime that year. I had also added a few other VPS providers sometime around 2007, at about $29 each, to see how they would do. They tend to stay up 200-300 days before the physical machine is rebooted, and outages tend to be under 5 minutes. In 2008, I purchased some ultra-cheap hosting with SSH and shell access from a2hosting.com; this blog is on it as a test.
Cost is not an indicator of reliability. Choose services that provide you with VLANs, OOB, backup, and RAID: if a provider knows how to offer these, there is a high probability that they have a systematic method of operating their servers. Initially, I would log in to a few route servers and verify some of the peering relationships of the providers I was thinking about using… that is tedious, and there is no control over when the peerings change anyway. I still use it as an initial gauge of how much a provider might be paying for bandwidth, but I found that purchasing redundant servers from redundant providers gave me the real control. In some instances, I cancelled service. For example, when 1and1 moved out of their Level 3 facility in 2009 to their Kansas facility, I cancelled one of the servers; I already had a server there, and it makes no sense to have two servers in the same facility.
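The redundant-providers argument can be made concrete with a little arithmetic: if two servers at independent providers fail independently, their downtimes multiply. The 99% per-server availability below is an assumed figure for illustration, not a measurement from the post.

```shell
# Sketch: why redundant servers from independent providers pay off.
# Assumes independent failures; the 99% figure is illustrative only.
a=0.99                       # availability of a single server
both_down=$(awk -v a="$a" 'BEGIN { print (1 - a) * (1 - a) }')
pair=$(awk -v d="$both_down" 'BEGIN { printf "%.4f", (1 - d) * 100 }')
echo "one server up:         99.00% of the time"
echo "at least one of two:   ${pair}% of the time"
```

Two servers that are each down about 3.5 days a year are, together, down only about an hour a year, which is why cheap-but-redundant beats expensive-but-single.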
I have gone all the routes. I have paid for dedicated T1s (DS1s) from data centers to my home for OOB; I have had servers in data centers I ran myself; I have had gear in carrier-neutral data centers and at the biggest telco colo (MCI – Ashburn). I have noticed no difference in performance, reliability, or service. I have reduced my cost 90% and kept the same reliability, without the expense of hardware, bandwidth charges, or dedicated NOC staffing. While I am not ready to commit to the a2hosting.com $3-$5/month sites, I am certain that the cloud offerings will continue to become more compelling and drive costs down further.
Next time I will write about some techniques for building and configuring services securely on top of these offerings: for example, encrypted drives and external SSH backup services ($20 per 100GB/month) with GigE access.