In-depth look at platforms designed for lowest cost and how they stack up. The eeePC comes out on top though Ncomputing does well. Definitely worth a read.
A few weeks ago when I published my benchmarking results I made a number of points, one of which was that we really need to be careful with what equipment we are sending to less developed countries, including those in Africa. A few quick Google searches will return a handful of charities who will accept the very dregs of the computing world for shipping out to these countries. Equally, a few more searches will reveal countless articles damning this practice for the environmental damage it does. Well, in the spirit of transparency I want to highlight the work of a charity called Computer Aid today, and in particular to praise them for their approach to recycling machines.
You see, Computer Aid obviously came to the same sort of conclusions I did, which is why they will no longer accept donations of machines slower than Pentium 4 class hardware. A bold move, but one that ensures the equipment they send out is fit for purpose. However, the praise does not stop there. I actually stumbled onto their site while tripping over an email my professor had sent me last October (cheers Colin). What I came across was the brief testing ZDNet had done in conjunction with the company, looking at more specialised low-cost and low-power platforms that could be used when modern desktop hardware simply didn’t seem appropriate. A quick hop over to the Computer Aid site informed me that since that email back in October the charity had published their full report, and it really is worth a read. It’s a concise nine-page affair, but what I find really brilliant are the quantitative results they were able to attain thanks to their inherent connections to the regions being discussed. Not only does the paper correlate closely with the benchmarking we did, it also looks at much the same hardware-level questions I did during the x86 <15W platform research (note you can skip to the last pages of those articles for the full PDF reports).
To summarise their report, the Asus eeePC was chosen as the lowest power and most feature rich platform:
“The Asus Eee PC is the overall ‘winner’ of the tests. It is the preferred solution by all partners. Despite the small size of the screen, it offers the best compromise between power consumption, performance and portability in both Linux and Windows-equipped versions. “
Also, the Ncomputing X300 Windows based thin client system was highlighted as being especially suited towards lab deployments:
“The Ncomputing X300 is the preferred solution when setting up computer labs. Despite higher power consumption per each user and limited Linux compatibility, it was appreciated especially by African Universities in the case of installations not requiring portability. Desktop virtualisation can be a viable option to reduce hardware costs, power consumption and required maintenance compared to the use of traditional desktop PCs.”
Now, these results excite me greatly because I wholeheartedly believe we can do EVEN BETTER by reflecting on the developments in the hardware market and combining this with our own research into the type of software deployment we favour.
Let’s take the hardware first. According to ZDNet, the eeePC used was an early 701 model which incorporated a rushed Celeron ULV processor running at 900MHz. This design was quickly phased out in favour of the 1.6GHz Atom N270 + 945GSE chipset combo designed to compete in this ‘netbook’ space. Despite the simpler in-order design, the increased clock speed, hyperthreading capability and lower TDP made the Atom solution the de facto standard in the sector - and this of course is why we see several hundred designs based around these specifications. Now, these designs have been around a good long while, and the market economics of the situation are very interesting if a bit long-winded (Intel being able to produce the chipset very easily and cheaply using existing R&D, old fab facilities and so on). Needless to say, if you’re placed in the correct sector of the industry and have the buying power you can do even better than this woeful implementation.
You see, not all Atoms are created equal, and more to the point their chipset pairings vary dramatically. I’m not going to go into great detail (please see our research), but pair a 1.6GHz Atom with something other than the stunted 9.3W 945GSE chipset and you’re onto a winner. A case in point: the Dell Mini 12 (now discontinued in Blighty, but seemingly not in America - go figure) and now the Mini 10. Allegedly, thanks to rumours that Intel did not want vendors to use the Atom Diamondville platform in anything other than <12″ lappies in order to avoid cannibalising their Core2 CULV market, Dell just went right on and used the Silverthorne platform intended for MIDs: essentially the pairing of a smaller, slightly more efficient albeit architecturally identical Atom with a chipset built from the ground up to suit it - Poulsbo. It also helps them jump ahead of the competition by squeezing more run-time out of a 3-cell Li-ion battery, and offers product subdivision in the case of the Mini 10 by pitching a cut-down “V” variant sporting the standard Diamondville kit. The result of Poulsbo is a complete platform drawing a mere 4.3W, as opposed to the 11.8W that 99% of netbooks utilise, or the 29.5W nettops are lumped with. Interesting stuff, especially when you consider the PowerVR influence over the GMA500 GPU in theory allows it to decode 1080p H.264 in hardware.
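To put those platform figures in perspective, a quick back-of-envelope calculation. The wattages are the ones quoted above; the eight-hours-a-day duty cycle is my own assumption:

```python
# Annual energy use for each platform at an assumed 8 h/day duty cycle.
# Wattages are the platform draw figures quoted in the text.
PLATFORMS = {
    "Poulsbo (Mini 10 class)": 4.3,
    "945GSE netbook": 11.8,
    "Nettop": 29.5,
}

HOURS_PER_DAY = 8
DAYS_PER_YEAR = 365

def annual_kwh(watts: float) -> float:
    """kWh consumed per year at the given draw and duty cycle."""
    return watts * HOURS_PER_DAY * DAYS_PER_YEAR / 1000

for name, watts in PLATFORMS.items():
    print(f"{name}: {annual_kwh(watts):.1f} kWh/year")
```

Over a year Poulsbo works out at roughly a third of the energy of the standard netbook platform, which matters rather a lot in regions where electricity is scarce or expensive.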
The disadvantages of this platform? Silverthorne is damn expensive (although we can’t get official figures from the Intel ARK), and the graphics drivers under Linux are a complete shambles apart from the custom 8.04 Ubuntu builds Dell liaised with Canonical to develop. When I say expensive, by the way, I mean that it’s basically impossible for you and me to walk into our local friendly consumer-embedded reseller and find anything using it. These boards are reserved for the target form factor (MIDs) and those organisations big enough to purchase huge volumes to make them financially viable, like Dell. I did have one quote from an industrial embedded manufacturer who estimated a $500 cost per board, which is frankly insanity when you consider the 1.6GHz Mini 10 is just £349. The maths don’t stack up in the cold light of day, and this is hugely frustrating for consumers or people like me who like to keep an eye on the market. It’s also why we came to the conclusion that the trusty old 945GSE chipset, despite its failings, was the best of the bunch, especially as mITX boards using it have recently reached you and me for ~£100.
This is where another company comes into play: FitPC. FitPC have a single product at the moment, a low-power self-titled PC running on the frankly antiquated Geode CPU (think cobwebs instead of thermal paste). Whilst this is fine for cost-effective thin clients or other extremely undemanding applications, it becomes a sticking point when you look into the kind of rich FOSS software implementations we have been discussing. Last week in the Far East a sequel appeared to have tipped up, powered by the 2W 1.6GHz Atom Z530 CPU and the 2.3W US15W chipset (Poulsbo). Sure enough, the company themselves are now advertising the device, and I must say it looks like an absolute winner.
And so, to software. The proposition here is simple. The Ncomputing X300s are thin-client machines which rely on proprietary Microsoft software, and are limited as all thin clients are when faced with any serious computational task such as the playback of rich media sources. In addition, the machines have severe range limitations (10m from the host PC according to their docs) and, although fairly cheap (IRO £149), don’t scale particularly well in larger deployments. That said, the study still conveyed that technical staff were more than willing to work with such devices, which really does bode well in terms of expected user experience.
Instead, my proposition is to build off the research done by enterprising coders like David Van Assche, and use the new generation of low-powered hardware outlined above to perform all processing locally in the model we term CCL - Client Centric Processing. Just like thin clients, the central kernel used is delivered using PXE and can be maintained just once, but in this case it’s a much larger image designed to perform almost all tasks locally. Using this inverted approach a machine can utilise all its onboard resources whilst putting minimal pressure on a central server, whose only job is basically to send data using the NBD protocol. The huge bonus is that the machines all benefit from having no local storage to support, and share a central point of authentication and management.
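To make the plumbing concrete, here is a minimal sketch of what the server side of such a setup might look like. All paths, export names and addresses are hypothetical, and the exact kernel parameters for mounting an NBD root vary with the distribution’s initramfs:

```
# /etc/nbd-server/config - export a single shared, read-only root image
[generic]

[classroom-root]
    exportname = /srv/images/classroom-root.img
    readonly = true

# /srv/tftp/pxelinux.cfg/default - PXE boot entry served over DHCP/TFTP
DEFAULT classroom
LABEL classroom
    KERNEL vmlinuz
    APPEND initrd=initrd.img root=/dev/nbd0 nbdroot=192.168.0.1,classroom-root ro
```

Because every client boots the same read-only image, updating the whole classroom means rebuilding one file on the server rather than touching each machine in turn.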
The dream, therefore, is a classroom (mobile or otherwise) supporting ten students and a teacher. Each student is equipped with a Poulsbo-cored Atom machine which draws well under 15W including the VDU (which ironically becomes the biggest drain), and the teacher utilises the server, whose only other job is to deliver content quickly over Gigabit Ethernet. The server itself could use the same equipment as the students, but ideally at this point we would utilise a richer set of core logic to push data from a RAID1 array, as read speeds are really the defining burden. The result of all this is a fully functional, low-maintenance, rugged, low-power classroom that achieves all its aims for the minimum outlay possible. In addition, Intel’s seemingly positive intentions in supporting Linux and third-party IP blocks are the cherry on top of ideas like this for research teams like ours.
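A rough power budget for that classroom bears the claim out. The 15W per-seat figure is the upper bound from above; the server estimate and the figure for a conventional desktop plus monitor are my own assumptions:

```python
# Power budget for the proposed ten-seat classroom versus a traditional lab.
STUDENT_SEATS = 10
SEAT_DRAW_W = 15       # Poulsbo Atom client + VDU, upper bound from the text
SERVER_DRAW_W = 40     # assumed: richer core logic plus a RAID1 pair
DESKTOP_SEAT_W = 150   # assumed: typical desktop PC + monitor, per seat

proposed_w = STUDENT_SEATS * SEAT_DRAW_W + SERVER_DRAW_W
traditional_w = (STUDENT_SEATS + 1) * DESKTOP_SEAT_W  # ten students + teacher

print(f"Proposed classroom: {proposed_w} W")     # 190 W
print(f"Traditional lab:    {traditional_w} W")  # 1650 W
```

Even with generous estimates the whole room draws less than two conventional desktops, which is what makes solar or generator-backed deployments plausible.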
All this and more is possible with the equipment today, and with more developments in the fertile netbook space such as the upcoming Pineview platform which places the GPU onto the CPU die, the equipment we use to accomplish our goals can only become faster, cheaper, and lower in terms of electrical footprint.
I’ll cover Pineview (and Pinetrail - don’t ask) in the coming weeks for those interested. At the moment we’re simply working with conjecture as far as those platforms are concerned so it’s for the best if we wait for more solid details to leak. Till then there’s always l’inq, and good old Anand.