Over the past 12 months we’ve started deploying solid state disk caching to improve disk I/O performance on our systems. It began with the deployment of Facebook Flashcache on our virtual private server and semi-dedicated server offerings. In the past several months we’ve also been utilizing CacheCade, which is provided on LSI RAID cards. These caching methods have given us near solid state drive performance while letting us continue to offer large disk allowances on our systems.
Solid state drives can provide access latency of under 0.1ms, while even the fastest 15,000RPM Serial Attached SCSI (SAS) drives have access latencies of 2ms and a typical 7,200RPM hard drive has a latency of 4.17ms. Even the cheapest solid state drives can provide 350 random read and write operations per second, while a 15,000RPM SAS drive provides just 120. The only downside of solid state drives currently is the cost per GB: a 600GB 15,000RPM SAS drive can be purchased for just $300, while a 600GB solid state drive is well over $1,000. At this point it is simply not cost effective to use solid state drives exclusively, which is why we turned to caching (initially Flashcache, now CacheCade).
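The rotational latency figures above follow directly from spindle speed: on average the platter must spin half a revolution before the requested sector passes under the head. A quick sketch of that arithmetic (the function name is mine):

```python
def avg_rotational_latency_ms(rpm: int) -> float:
    """Average rotational latency: half of one full revolution."""
    ms_per_revolution = 60_000 / rpm  # 60,000 ms in a minute
    return ms_per_revolution / 2

print(round(avg_rotational_latency_ms(7200), 2))   # → 4.17 (typical 7,200RPM drive)
print(round(avg_rotational_latency_ms(15000), 2))  # → 2.0  (15,000RPM SAS drive)
```

This is only the rotational component; seek time adds more on top, while a solid state drive has neither.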
Our Flashcache implementation utilizes 64GB or 120GB solid state drives strictly for read operations. When data is written to disk it is also copied to the solid state drive; if that data is then accessed frequently but not written frequently, it ends up being read entirely from the solid state drive. The result is 64GB to 120GB of data having its reads served from the significantly faster solid state drive. The best example I have is the access latencies of one of our systems using strictly four 15,000RPM SAS drives in a RAID-10:
Here are the access latencies of one of our solid state drives being utilized for Flashcache:
As you can see, for the 64GB of data stored on the solid state drives we’re able to read data in just 2.23ms compared to 13.41ms for our SAS RAID setup, roughly six times faster for the data being cached. Once we had implemented this setup we quickly realized there was one drawback: we were not caching writes, which meant our systems still had write latencies of 20ms or more. We started researching solutions that cached both reads and writes, which led us to LSI CacheCade.
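To illustrate the read-caching idea described above, here is a minimal sketch of a fixed-size LRU read cache in front of a slow backing store. The class and its details are my own simplification for illustration, not Flashcache’s actual implementation:

```python
from collections import OrderedDict

class ReadCache:
    """Illustrative read cache: writes land on disk and are copied to a
    fixed-size SSD tier; reads of cached blocks skip the slow disk."""

    def __init__(self, capacity_blocks: int):
        self.capacity = capacity_blocks
        self.ssd = OrderedDict()  # block -> data, kept in LRU order
        self.disk = {}            # the slow mechanical store

    def write(self, block, data):
        self.disk[block] = data   # data always lands on disk...
        self._cache(block, data)  # ...and is cached on the SSD too

    def read(self, block):
        if block in self.ssd:            # hot data: served from the SSD
            self.ssd.move_to_end(block)
            return self.ssd[block]
        data = self.disk[block]          # cold data: slow disk read
        self._cache(block, data)         # promote it for next time
        return data

    def _cache(self, block, data):
        self.ssd[block] = data
        self.ssd.move_to_end(block)
        if len(self.ssd) > self.capacity:
            self.ssd.popitem(last=False)  # evict least recently used
```

Frequently read blocks stay pinned near the "recent" end of the SSD tier, which is why a 64GB cache can absorb most of a much larger disk’s read traffic.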
LSI CacheCade allows both reads and writes to be sent to the solid state drives on the system. It is a RAID card/hardware based solution rather than a software solution like Flashcache. The RAID card uses algorithms to determine the most frequently accessed data and places it on the solid state drives, similar to our Flashcache implementation. What changes is the write caching: data is first written to the solid state drives, then moved to the regular mechanical drives at the least obtrusive time possible. We place two solid state drives in each system as a RAID-1 CacheCade volume to make sure no data is lost when utilizing both read and write caching. I could continue on about the technical details, but I think it’s easiest to just show some graphs. Here is one of our systems before switching to LSI CacheCade:
Here is the same system after switching to using LSI CacheCade:
As you can see, this is a huge improvement. Even the slowest reads and writes on the system are now faster than the minimum read and write latencies before LSI CacheCade was introduced.
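The write-back behavior described above can be sketched the same way: writes are acknowledged as soon as they reach the SSD tier and are flushed down to the mechanical drives later. Again, this is a conceptual illustration of write-back caching, not LSI’s actual algorithm:

```python
class WriteBackCache:
    """Illustrative write-back cache: acknowledge writes at SSD speed,
    destage dirty blocks to the slow tier during quiet periods."""

    def __init__(self):
        self.ssd = {}       # fast tier: latest copy of each cached block
        self.dirty = set()  # blocks not yet persisted to the slow tier
        self.disk = {}      # slow tier: the mechanical RAID array

    def write(self, block, data):
        self.ssd[block] = data  # acknowledged at SSD latency
        self.dirty.add(block)   # remember it still needs to reach disk

    def read(self, block):
        # The SSD always holds the newest copy of anything it caches.
        return self.ssd.get(block, self.disk.get(block))

    def flush(self):
        """Run when the array is idle: destage dirty blocks to disk."""
        for block in self.dirty:
            self.disk[block] = self.ssd[block]
        self.dirty.clear()
```

The RAID-1 pair of cache drives matters precisely because of the dirty set: until `flush()` runs, the SSD tier holds the only copy of recent writes, so losing a single cache drive must not mean losing data.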
While LSI CacheCade is the most exciting, our use of Flashcache is still a significant improvement for the systems utilizing it: we’re caching upwards of 120GB of data on significantly faster solid state drives. The operating system does cache with free memory, but adding 100GB or more of memory to many systems would be neither possible nor cost effective, and it may not provide the same real world performance gains our solid state caching does. With that said, this is why we’re working towards migrating all systems to LSI CacheCade.
We’ve been migrating servers to CacheCade based systems in our Dallas location since October 2012, and we hope to eventually have systems in all our locations utilizing some form of solid state caching. It is definitely a lot of work for us, but solid state drives are the future, and we want all users to enjoy the advantages that consistent disk access times bring.
This is why I use Hawkhost. Keep up the good work guys!
This article is almost half a year old. I don’t see Hawk Host Shared packages mentioning CacheCade.
So, did you guys implement or not?
All our shared web hosting packages in Dallas utilize CacheCade. It is not yet available in all our locations, however. We’re hoping to bring the use of solid state drives to all our other locations as well, but this takes time as we’re upgrading a large amount of hardware; in most cases that means moving to new servers, since the old ones do not have room for solid state drives.