It’s now June 1st, which means May 2009 is behind us and we’re pushing forward into the next month here at Hawk Host. May was an exciting month for us for several reasons, some great and some bad. I thought I’d go through everything that happened in May.
We broke our previous monthly record for number of orders, all thanks to our customers recommending us all over the Internet. Along with that we set a revenue record for the month. It was a great month for us in terms of growth, which means we can keep improving our services.
Seattle Going Strong
We added Seattle as a location option at the end of April, but we did not really start promoting it until the start of May. So far it’s looking great: the first machine is filling fast, and within two months at most we’ll need a second machine there. It is showing excellent performance for users in Asia, and we’ve had compliments from customers in Japan commenting on just how much faster it is compared to other locations. I’m sure it’s just as great for Australia and China, where many of our customers using Seattle as their hosting account location are from.
Yoda Node Shut Off by Mistake
Our provider SoftLayer made an unfortunate mistake, turning off our Yoda machine while performing maintenance in the area. Each rack of servers has two power units, with half the servers connected to one unit and half to the other. One of the units was running very hot and needed to be replaced, but our server was not on that unit. While consoling into machines, the technician shut ours off by mistake: we were an even-numbered server, and only odd-numbered servers needed to be turned off to replace the failing power unit. I had a discussion with the operations manager in Washington DC and was assured this is the first time this has happened and it should not happen again.
Dallas Server Room 2 Outage
Two of our servers, Pluto and Saturn, are located in Dallas server room two, which lost power briefly in May. The cause of the outage was a power surge on the room’s circuit due to a failed capacitor in a UPS unit. The UPS unit was removed immediately after this was discovered and power was restored to the area. A terrible piece of luck, unfortunately, and nothing we could have done on our end to prevent it.
Seattle FCR Outage
In Seattle, while performing routine maintenance, a technician did not follow protocol, resulting in an outage that took down our entire Seattle location for a brief period of time. The end result was about 20 minutes of downtime, and the employee’s employment was terminated.
Just to comment on the outages: it was a pretty bad month for us with regards to outages that were basically out of our control. We monitor the health of our servers in terms of CPU, memory, I/O, etc., and of course keep tabs on any messages coming from the operating system. What we have no control over is failures of power units and mistakes by datacenter employees. Overall it was not particularly bad, and we’re not the only provider to have issues like this; others have had similar outages recently with their data centers, or are even located in the same data centers as us. With every outage we kept everyone informed of the problems and revealed the actual causes rather than saying nothing. We’ll always continue to do that, as having the information, even if it’s bad for the host, is better than not having any information at all.
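For the curious, the kind of threshold check behind that health monitoring can be sketched roughly like this. This is a hypothetical illustration, not our actual monitoring code; it assumes a Linux-style `/proc/loadavg` line, and the function names and the `limit` value are made up for the example:

```python
# Hypothetical sketch of a load-average threshold check, similar in
# spirit to the CPU/memory/I/O monitoring described above.
# Assumes a Linux-style /proc/loadavg line; names are illustrative.

def parse_loadavg(line):
    """Return the 1-, 5- and 15-minute load averages from a /proc/loadavg line."""
    fields = line.split()
    return tuple(float(f) for f in fields[:3])

def load_too_high(line, limit=8.0):
    """Flag the server if the 5-minute load average exceeds `limit`."""
    one, five, fifteen = parse_loadavg(line)
    return five > limit

# Example /proc/loadavg contents: three load averages, then
# runnable/total tasks and the last PID used.
sample = "0.42 0.36 0.30 1/123 4567"
print(load_too_high(sample))  # a lightly loaded box passes the check
```

A real setup would of course read `/proc/loadavg` (and memory and I/O counters) on a schedule and alert when a threshold trips, rather than checking a hard-coded sample.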
We’re currently actively testing Percona’s MySQL builds and psacct on our development/testing machine. I cannot reveal what they are for as it’s an early project for us, but when we have more information I’ll be sure to post it somewhere.
I think May was a great month besides a few hiccups. June already looks great, with us looking to bring back our virtual private servers in Dallas (they’ve been sold out since April). We’ll also probably be adding another cPanel/web server machine to our fleet near the end of June or early July. We’re slowly growing, now with double digits in machines and a new one (or a few) added every single month 🙂
Not to mention we’ve added additional notices regarding our servers’ RAID health so we can proactively find potential issues (more so than before).
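As a rough idea of what a RAID health notice looks for, here is a small sketch that parses Linux software-RAID status from `/proc/mdstat` text. This is purely illustrative and assumes Linux md arrays; hardware RAID controllers each have their own tooling, and the function name here is made up:

```python
# Hypothetical sketch of a RAID health check in the spirit of the
# notices described above. Parses Linux software-RAID (md) status;
# real hardware-RAID monitoring differs per controller.

def degraded_arrays(mdstat_text):
    """Return names of md arrays whose status shows a failed/missing member.

    /proc/mdstat marks member state as e.g. [UU] (healthy) or
    [U_] (one disk missing or failed).
    """
    bad = []
    current = None
    for line in mdstat_text.splitlines():
        if line.startswith("md"):
            current = line.split()[0]       # e.g. "md0"
        elif current and "[" in line and "]" in line:
            # The last bracketed field on the status line is the member map.
            status = line[line.rindex("["):line.rindex("]") + 1]
            if "_" in status:
                bad.append(current)
            current = None
    return bad

healthy = """md0 : active raid1 sda1[0] sdb1[1]
      1048512 blocks [2/2] [UU]
"""
degraded = """md0 : active raid1 sda1[0]
      1048512 blocks [2/1] [U_]
"""
print(degraded_arrays(healthy))   # []
print(degraded_arrays(degraded))  # ['md0']
```

A check like this would run periodically and fire off a notice as soon as any array comes back degraded, which is the "find it before the customer does" idea behind the change.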
Beyond that, we’re increasing staff hours to help with the influx of new customers – ah yes :).