All 6 Solo nodes are now up and running, and almost fully provisioned. The next big one has been installed, and software setup and testing is beginning. This new node will include automatic SSD storage tiering, so it will require some extra testing before being fully enrolled in production. This storage node also ramps the hardware up a notch or two in every regard. Let's call that one Jabba3.
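
To illustrate what automatic tiering means in practice, here is a generic sketch of the concept (not our actual implementation; the thresholds and names are made up): frequently accessed data gets promoted to SSD, while cold data stays on or returns to the magnetic disks.

```python
# Generic illustration of automatic SSD tiering: promote hot items to the SSD
# tier, demote cold ones back to magnetic disks. The promotion threshold is an
# arbitrary example; this is not our production code.
from collections import Counter

class TieredStore:
    def __init__(self, promote_after: int = 3):
        self.hits = Counter()          # access counts per item
        self.ssd_tier = set()          # items currently served from SSD
        self.promote_after = promote_after

    def access(self, item: str) -> str:
        self.hits[item] += 1
        if self.hits[item] >= self.promote_after:
            self.ssd_tier.add(item)    # hot enough: serve from SSD from now on
        return "ssd" if item in self.ssd_tier else "hdd"

    def demote_cold(self) -> None:
        """Periodically drop items that stopped being hot, then reset counters."""
        for item in list(self.ssd_tier):
            if self.hits[item] < self.promote_after:
                self.ssd_tier.discard(item)
        self.hits.clear()

store = TieredStore()
print([store.access("movie.mkv") for _ in range(4)])  # ['hdd', 'hdd', 'ssd', 'ssd']
```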

We are already planning Jabba4, which is a bit different: this one will experiment with a custom storage chassis in the Backblaze Pod style. For Jabba5, the chassis is already on order and on sea freight, pending delivery at the end of November. For the sea freight arriving at the end of January / early February, we are planning to get three 20-24 disk storage chassis, plus a couple of Rackable Systems 3U 16-disk arrays for testing.

If the Jabba3 software is a success, we will start utilizing 4TB disks with even heavier SSD utilization than before. That will finally start reaching the operational cost level we have been targeting, while more than meeting the performance targets.
We are also planning to bring online 2 more Solo-type storage nodes, but with a few more disks than usual.

We are also toying with the idea of building an almost SSD-only storage node for OS data only; it would feature several TB of SSD, plus smaller magnetic drives in RAID10 to maximize performance.
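
For a sense of the RAID10 trade-off: half the raw capacity is given up in exchange for striped-mirror performance. A minimal back-of-envelope sketch follows (all drive counts, sizes, and per-drive throughput figures are illustrative assumptions, not a final spec):

```python
# Hypothetical back-of-envelope numbers for the SSD + RAID10 node idea above.
# Drive counts, sizes, and per-drive throughput are illustrative assumptions.

def raid10_usable_tb(drives: int, drive_tb: float) -> float:
    """RAID10 mirrors pairs of drives, so usable capacity is half the raw total."""
    assert drives % 2 == 0, "RAID10 needs an even number of drives"
    return drives * drive_tb / 2

def raid10_read_mbps(drives: int, per_drive_mbps: float) -> float:
    """Sequential reads can be striped across every drive in the array."""
    return drives * per_drive_mbps

def raid10_write_mbps(drives: int, per_drive_mbps: float) -> float:
    """Each write lands on both mirrors, so writes run at roughly half of reads."""
    return drives / 2 * per_drive_mbps

drives, drive_tb, per_drive_mbps = 8, 1.0, 150.0  # assumed values
print(f"usable: {raid10_usable_tb(drives, drive_tb):.1f} TB")
print(f"read:   {raid10_read_mbps(drives, per_drive_mbps):.0f} MB/s")
print(f"write:  {raid10_write_mbps(drives, per_drive_mbps):.0f} MB/s")
```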

We have 27 nodes waiting to be brought online, 25 of which are waiting for new PSUs; for those we will probably source overpriced local units to get them online ASAP. We will roll them into production slowly to see how the new storage model copes. All 27 have already been sold and are pending delivery to customers.

Some 20 nodes are already purchased and awaiting delivery, and at the end of the month we will order a batch of AMD E-350 motherboards plus latest-generation Atoms (~30 total), along with a quantity of older-model Atoms for the 2G series. We are targeting bringing up to 50 nodes online next month, depending on the storage progress.

We also have a high-end Dell server with dual quad-core Xeons and 72GB of RAM inbound on sea freight for testing. If the power-to-performance-to-RAM ratio is good, we might start offering this server model with 1Gbps unmetered at the beginning of the year.
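
There is no single standard formula for such a ratio; purely as an illustration, here is one hypothetical way the trade-off could be scored (the function and all sample figures below are made-up assumptions, not measured data):

```python
# Hypothetical scoring of a server's power/performance/RAM trade-off.
# The formula and the sample numbers are illustrative assumptions only.

def efficiency_score(perf_units: float, ram_gb: float, watts: float) -> float:
    """Higher is better: performance and RAM delivered per watt drawn."""
    return (perf_units * ram_gb) / watts

# Example: comparing the inbound dual quad-core Xeon / 72GB box (assumed
# ~300W under load, performance normalized to 8.0) against a small Atom node.
xeon = efficiency_score(perf_units=8.0, ram_gb=72, watts=300)
atom = efficiency_score(perf_units=1.0, ram_gb=4, watts=30)
print(f"Xeon score: {xeon:.2f}, Atom score: {atom:.2f}")
```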

Right now, by far the biggest expense is storage: to maintain high performance, we need to seriously overshoot the performance characteristics. But once we get the software and hardware designs honed in, we hope to reach a sensible storage-to-nodes cost ratio.

Right now the operational costs are FAR, FAR more than the revenue from the Espoo DC, so we are trying to ramp up the production schedule of new nodes as much as we can. This is not easy work: there are so many things that need development in both software and hardware, and we actually need to come up with new hardware designs to be able to compete with big-budget DCs. Thinking outside the box is a must to reach a sensible cost for the services. Some big-budget DCs even have almost free electricity, and their space costs per m² are a tiny fraction of ours; overcoming these obstacles requires a lot of creative thinking.

Current bandwidth usage is very low as well; we are happy to say that our network is BY FAR uncongested: http://i.imgur.com/8X95di8.png
In the graph the directions are reversed, so inbound is actually outbound. The measurement is taken from the Cogent-side port.



Sunday, October 13, 2013
