Our team has been busy deploying our cloud offering, and so far we’ve launched locations in Dallas, New York, and Singapore. Each time we’ve announced a new cloud location we’ve said it’s fast and reliable, but we haven’t provided many details. While we can’t reveal everything, we think it’s time to at least discuss the portions we are able to.
The processors we use will continue to evolve, but as of today we deploy Broadwell, Haswell, and Skylake processors. Our choice of processor currently depends on availability within a location. Rest assured, though: regardless of which processor your cloud server lands on, you’ll experience excellent performance.
All our locations have at least 40Gbps of total network connectivity to our servers, and many of our newer deployments now provide 100Gbps of total connectivity to a single server. We currently cap cloud servers at 1Gbps of total connectivity, but depending on user needs we can provide faster connectivity.
We receive a lot of inquiries about our storage system. The majority revolve around whether we’re using SSDs, and the answer is yes, but our storage is much faster than a conventional SSD. We currently utilize a software-defined storage (SDS) platform called StorPool. A simple explanation is that it keeps one copy of your data on a fast storage medium (SSD or NVMe) and two copies on a slower medium (HDDs behind RAID cards with write-back cache, or with Intel Optane drives used as write-caching drives).
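StorPool’s internals are proprietary, but the placement policy described above (one copy on fast media, two copies on slow media) can be sketched conceptually. Everything below is a hypothetical illustration of the general idea, not StorPool code; the tier and device names are made up:

```python
import random

# Hypothetical illustration of hybrid replica placement: every write
# lands on one fast drive (SSD/NVMe) and two slower drives (cached HDDs).
FAST_TIER = ["nvme-0", "nvme-1"]          # assumed device names
SLOW_TIER = ["hdd-0", "hdd-1", "hdd-2"]   # assumed device names

def place_replicas(block_id: int):
    """Return the three drives holding copies of this block:
    one fast copy first, then two slow copies."""
    fast = FAST_TIER[block_id % len(FAST_TIER)]   # 1 copy on the fast tier
    slow = random.sample(SLOW_TIER, 2)            # 2 copies on the slow tier
    return [fast] + slow

def read_from(replicas):
    """Reads are served from the fast copy, keeping latency low."""
    return next(d for d in replicas if d in FAST_TIER)
```

In a scheme like this, reads come from the fast copy, while the slow tier’s write-back cache (RAID card cache or an Optane write cache) absorbs the latency of the HDD copies.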
There are a lot of benchmarks out there, all with different goals. I’ve opted to include nench, which is popular with users who are price conscious while still expecting high-performance servers. It’s a simple benchmark, which makes it easy for users to compare results. I ran it twice on one of our 1GB servers, which costs just $5/month:
nench.sh v2019.07.20 -- https://git.io/nench.sh
benchmark timestamp:    2019-09-20 19:27:53 UTC

Processor:    Intel Xeon Processor (Skylake, IBRS)
CPU cores:    1
Frequency:    1999.993 MHz
RAM:          979M
Swap:         1.0G
Kernel:       Linux 3.10.0-1062.1.1.el7.centos.plus.x86_64 x86_64

Disks:
sda    20G  HDD
sdb     1G  HDD

CPU: SHA256-hashing 500 MB
    1.720 seconds
CPU: bzip2-compressing 500 MB
    5.371 seconds
CPU: AES-encrypting 500 MB
    1.124 seconds

ioping: seek rate
    min/avg/max/mdev = 87.9 us / 215.4 us / 12.6 ms / 158.1 us
ioping: sequential read speed
    generated 17.3 k requests in 5.00 s, 4.23 GiB, 3.47 k iops, 867.2 MiB/s

dd: sequential write speed
    1st run:    841.14 MiB/s
    2nd run:    841.14 MiB/s
    3rd run:    910.76 MiB/s
    average:    864.35 MiB/s

nench.sh v2019.07.20 -- https://git.io/nench.sh
benchmark timestamp:    2019-09-20 19:28:35 UTC

Processor:    Intel Xeon Processor (Skylake, IBRS)
CPU cores:    1
Frequency:    1999.993 MHz
RAM:          979M
Swap:         1.0G
Kernel:       Linux 3.10.0-1062.1.1.el7.centos.plus.x86_64 x86_64

Disks:
sda    20G  HDD
sdb     1G  HDD

CPU: SHA256-hashing 500 MB
    1.722 seconds
CPU: bzip2-compressing 500 MB
    5.396 seconds
CPU: AES-encrypting 500 MB
    1.104 seconds

ioping: seek rate
    min/avg/max/mdev = 92.2 us / 231.6 us / 4.89 ms / 153.2 us
ioping: sequential read speed
    generated 18.1 k requests in 5.00 s, 4.41 GiB, 3.62 k iops, 904.0 MiB/s

dd: sequential write speed
    1st run:    883.10 MiB/s
    2nd run:    824.93 MiB/s
    3rd run:    859.26 MiB/s
    average:    855.76 MiB/s
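If you save several nench runs to a file, the two figures we care about most (average seek latency from ioping and average dd write speed) can be pulled out with a short script for comparison. This is just a sketch, and it assumes nench’s output format matches the runs above:

```python
import re
from statistics import mean

def parse_nench(text: str):
    """Extract the ioping average seek latency (us) and the dd average
    write speed (MiB/s) from one or more saved nench runs."""
    # avg is the second field of "min/avg/max/mdev = a / b / c / d"
    latencies = [float(x) for x in re.findall(
        r"min/avg/max/mdev\s*=\s*[\d.]+\s*[mu]s\s*/\s*([\d.]+)\s*us", text)]
    # each dd section ends with an "average:  NNN.NN MiB/s" line
    writes = [float(x) for x in re.findall(
        r"average:\s*([\d.]+)\s*MiB/s", text)]
    return latencies, writes
```

Fed the two runs above, it returns seek latencies of 215.4 and 231.6 microseconds and write averages of 864.35 and 855.76 MiB/s, which `mean()` can then summarize across runs.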
You can compare these results with the many benchmarks posted elsewhere. We would, however, point out that the system had an average disk access latency of just over 200 us and disk write speeds of over 850 MiB/s. It does all this while providing not only disk-level redundancy but server-level redundancy, meaning it doesn’t have the advantage of relying strictly on a local disk. It also means that if the underlying hardware your server is on were to fail, your server would be brought back online on the next available server in under 5 minutes.
This is just a quick rundown of our cloud. If you have any questions, feel free to comment on our blog or contact our sales team. We fully intend to provide more benchmarks and results based on other popular tools in the future. If you have any specific requests, leave a comment and we’ll make sure to include them in a follow-up blog post.