
Four cores good, six cores better?

CIOL Bureau

UK: Intel officially announced its “Dunnington” Xeon 7400 processor on Monday, the first in its series of six-core chips targeted at high-end servers. It sets the stage for a new round of competition between Intel and AMD. However, the real challenge will be to see whether today’s software can unlock that power and deliver real economic benefits to the CIO.


Language designs need to change to unlock the power

Not all software will be able to unlock the huge processing power that the new six-core chips provide. Unisys’s announcement of one of the first such machines offers a good example. On Monday, Unisys announced the ES7000 Model 7600R Enterprise Server, a 16-socket server that, in theory, offers a massive 96 cores of processing power. However, the practical problem for CIOs looking to deploy Windows Server on this machine is that Windows only supports 64 cores in a single instance, leaving 32 cores unused.

Now, it is unlikely that the 64-core limitation at the operating system level will remain an issue for long. The Linux kernel is already capable of supporting more than 64 cores, as are Solaris and OpenSolaris. The real obstacle to unlocking this power will be at the business application level. A huge percentage of the software used by business today is single-threaded, using inherently sequential algorithms and developed in procedural programming languages.
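As an illustration (not from the article), the sketch below shows the kind of change involved: the same workload written once as a single-threaded loop and once split across worker processes. The function names and the per-record computation are hypothetical stand-ins for a business calculation.

```python
# Sketch: moving a sequential business computation onto multiple cores.
# "cost" is a hypothetical stand-in for some per-record calculation.
from concurrent.futures import ProcessPoolExecutor

def cost(order):
    """Placeholder per-record business computation."""
    return order * order + 1

def total_sequential(orders):
    # Single-threaded: one core does all the work, however many sit idle.
    return sum(cost(o) for o in orders)

def total_parallel(orders, workers=4):
    # Split the data into chunks, hand each chunk to a worker process,
    # then combine the partial sums.
    chunks = [orders[i::workers] for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(total_sequential, chunks))

if __name__ == "__main__":
    data = list(range(10_000))
    assert total_sequential(data) == total_parallel(data)
```

Even this trivially parallel case needs restructuring of the data flow; code with genuine sequential dependencies between steps is far harder to convert, which is the point the following paragraph makes from experience.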

My own first attempts at parallelising sequential code came in the late 1980s and taught me that this is a very difficult transition to make. There have been major developments in language design since those days, such as the relatively recent Fortress language, developed by Sun Microsystems to address the “general problem of constructing software for very large HPC systems (tens of thousands of processors, petabytes of memory)”. This and other similar language developments will be important when using the newest chips in HPC environments.


Virtualisation is the real key to unlocking the power

Not all software will be amenable to being moved from sequential to parallel computation. Instead, the improved efficiency is likely to come from elsewhere. Virtualisation is the key technology: it can bring together sets of inherently sequential software, each with defined limits to its scalability, onto a single platform that operates at high levels of utilisation.

A denser computing fabric, i.e. one with more cores per die and a smaller footprint, enables more applications to be consolidated onto a single device. The savings from consolidation are greater when the density is higher, driving further gains through reduced power consumption, lower IT management costs, improved infrastructure flexibility and increased asset utilisation.

To maximise the use of the power of the new six-core chips and the platforms they are being built into, there will need to be increased adoption of virtualisation, and the IT management framework will need to be robust enough to manage the increased complexity that can result.

The author is SVP IT research at Ovum.
