
New approaches to efficient computing

CIOL Bureau

By K.P. Unnikrishnan, Director of Marketing, Alliances and Teleweb Sales


As the need for compute power grows, so too do the heat and cooling expenses that powerful computer systems generate. In recent months, several key industry players such as Google have publicly recognized the impact of rising energy costs on their bottom line. With the number of users on the Web expected to rise by 300 million per year into the foreseeable future, even small improvements in Web server energy efficiency hold the promise of massive savings. The challenge facing today's systems developers, consequently, is no longer simply one of maximizing price/performance. Energy consumption is now a critical success factor in the systems development equation, driven by unsustainable increases in datacenter power requirements and the need to cool constantly running electronic equipment and servers.

Hardware heat load density has more than quadrupled over the past five years. Cooling systems for existing datacenters are reaching capacity, and building new datacenters is often prohibitively expensive. Enterprises are finding that datacenter power is becoming one of their biggest expenses. Global environmental and economic issues add to the challenge: our earth is in the balance, quite literally. Unless we act now to develop more energy-efficient chips, our capacity to reduce the impact and risks associated with climate change and to sustain economic development will diminish over time.

Some examples:

Today's server processors easily consume 150 W, while the most efficient processors deliver even better performance using only 70 W. Cutting datacenter energy consumption can therefore have a substantial impact on the nation's energy challenges, and, doing well by doing good, enterprises can achieve significant cost savings through increased datacenter energy efficiency. If half the entry-level servers sold in the last three years were replaced by the most energy-efficient processors, for example, over 11 million tons of CO2 emissions per year, equivalent to the emissions of a million SUVs, would be eliminated. Additionally, the higher performance of processors such as Sun's UltraSPARC T1 could halve the number of Web servers required in the world, again slashing power requirements.

A recent conference at Sun Microsystems focused on energy savings in datacenters, such as those used for large Web enterprises and corporate networks. The servers in those centers are high-performance and power-hungry: an estimated 20 to 25 million servers are installed worldwide, about half of them in the U.S., all operating 24/7.


The conference highlighted some new processors that might help reduce this load. Sun's UltraSPARC T1 has up to eight cores with four threads per core, delivering very high performance while using just 72 W typical (79 W peak) versus 180 W for competing processors. AMD's newer processors cut power by 40 percent through higher-performance internal communications, a lower-frequency dual core, and a 64-bit architecture.

The energy savings of these processors are multiplied by other equipment in the datacenter: air conditioning, power conversion, and UPS backup all add to the total energy usage. A large datacenter may need 6 MW for its servers but as much as 14 MW overall. So if we cut 100 W from every server in use today, we might save on the order of 4 billion watts, 24 hours a day, every day: a new era in energy-efficient and eco-friendly computing.
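The arithmetic behind that estimate can be sketched as follows. The server count, per-server saving, and overhead multiplier are the illustrative figures quoted above, not measured data:

```python
# Back-of-the-envelope estimate of worldwide savings from cutting
# 100 W per server, using the figures quoted in the article.

SERVERS_WORLDWIDE = 20_000_000   # low end of the 20-25 million estimate
WATTS_SAVED_PER_SERVER = 100     # direct saving per server

# The article notes a datacenter needing 6 MW for its servers may draw
# 14 MW in total, so cooling and power conversion roughly double the
# cost of every watt the servers themselves consume.
OVERHEAD_MULTIPLIER = 14 / 6     # ~2.33x total draw per server watt

direct_savings_w = SERVERS_WORLDWIDE * WATTS_SAVED_PER_SERVER
total_savings_w = direct_savings_w * OVERHEAD_MULTIPLIER

print(f"Direct saving: {direct_savings_w / 1e9:.1f} GW")     # 2.0 GW
print(f"With overhead: {total_savings_w / 1e9:.1f} GW, 24x7")  # 4.7 GW
```

The datacenter overhead multiplier is what turns a 2 GW direct saving into the roughly 4 billion watts cited above.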

Chip multithreading enables a new era of cool computing, delivering an unprecedented combination of high throughput and low power consumption. Meanwhile, chip designers who pursue higher performance with the traditional ILP (instruction-level parallelism) techniques will fall further behind the performance/power consumption curve.


Despite the clear demand for cooler computing, many previous- and current-generation processors rely heavily on ILP to speed single-threaded applications. ILP attempts to increase performance by determining, in real time, instructions that can execute in parallel.

In their newest system-on-a-chip designs, developers are increasingly turning to chip multiprocessors, or multicore processors. Chip multiprocessors reduce power demands and improve efficiency by sharing on-chip structures such as memory controllers between the cores. However, many of these designs continue to rely on ILP to deliver performance gains. In doing so, they ignore the defining characteristic of commercial applications such as large databases and customer relationship management systems: they are rich in threads but poor in instruction-level parallelism. As a result, their development continues down the path of increasing power demands. Energy-efficient servers, such as the Sun Fire T2000, can have a substantial impact on nationwide and planet-wide energy consumption challenges.

At first glance, improving processor performance and lowering processor power demands appear to be at odds, but this doesn't have to be the case. Multithreading, running multiple threads per processor core, hides frequent high-latency events and exploits the thread-level parallelism common in commercial applications. By combining simple-core chip multiprocessors with multithreading, it is possible to consume less power while delivering higher throughput. These single-chip processors are designed to exploit thread-level parallelism by employing fine-grain multithreading, an approach known as chip multithreading that is ideal for commercial applications. Under this design, the chip multithreading processor has multiple independent 64-bit cores (execution pipelines), each capable of selecting from multiple active threads. The result is a processor that allows dozens of threads or processes to execute simultaneously on a single chip.
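The latency-hiding effect described above can be illustrated in software, with OS threads standing in for hardware threads and a sleep standing in for a high-latency event such as a cache miss. This is an analogy, not a simulation of the processor:

```python
# Illustrative sketch: work units that stall on high-latency events can
# overlap those stalls, so aggregate throughput grows with thread count
# even though each individual thread runs no faster.
import threading
import time

STALL = 0.05  # simulated high-latency event (seconds)

def worker(results, i):
    total = sum(range(1000))   # a little computation
    time.sleep(STALL)          # the stall another thread can hide
    results[i] = total

def run(n_threads):
    """Run n_threads work units concurrently; return elapsed wall time."""
    results = [None] * n_threads
    threads = [threading.Thread(target=worker, args=(results, i))
               for i in range(n_threads)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - start

# Eight units of work run serially would take about 8 * STALL seconds;
# interleaved, the stalls overlap and the total stays near one stall.
print(f"serial estimate: {8 * STALL:.2f}s, overlapped: {run(8):.2f}s")
```

The same principle lets a chip multithreading core switch to another ready thread whenever one thread stalls on memory, keeping the pipeline busy instead of idle.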


The aggregate throughput of this design is approximately 5 to 15 times that of contemporary processors such as the Intel Xeon, AMD Opteron, and Sun UltraSPARC III. Despite the much higher throughput, its power density is much lower, making it better suited for dense rack-mount installations in the datacenter.

Other power-saving techniques

Designers can enhance the power-saving nature of the basic chip multithreading architecture through additional hardware features that target two sources of high power consumption. When a chip approaches a set power limit, throttling mechanisms can reduce its power draw; designers can gain additional savings by reducing the issue rates of the cores and limiting activity in main memory.
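The throttling idea amounts to a feedback loop: step the issue rate down when power exceeds the limit, and step it back up when there is headroom. A toy sketch follows; the limit, rate steps, and sensor readings are hypothetical values, not figures from any real processor:

```python
# Toy feedback loop for power capping: lower the core issue rate when
# measured power exceeds the limit, restore it when headroom returns.
POWER_LIMIT_W = 72.0
ISSUE_RATES = [1.0, 0.75, 0.5, 0.25]  # fractions of peak issue bandwidth

def next_issue_rate(current_rate, measured_power_w):
    i = ISSUE_RATES.index(current_rate)
    if measured_power_w > POWER_LIMIT_W and i < len(ISSUE_RATES) - 1:
        return ISSUE_RATES[i + 1]   # over the limit: throttle down a step
    if measured_power_w < 0.9 * POWER_LIMIT_W and i > 0:
        return ISSUE_RATES[i - 1]   # comfortable headroom: step back up
    return current_rate             # otherwise hold steady

rate = 1.0
for power in (80.0, 78.0, 60.0):    # hypothetical sensor readings
    rate = next_issue_rate(rate, power)
    print(rate)                     # steps to 0.75, then 0.5, then 0.75
```

A real implementation would live in hardware or firmware and act on per-core sensors, but the control structure is the same.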

Software design can also reduce power consumption, and further reductions can be found at the operating-system layer. Idle loops occur more frequently than you might expect, especially on a processor with 32 or more threads. To reduce power consumption, the operating system can halt a thread when it enters an idle loop, resuming execution only when there is work to schedule. Thread-scheduling algorithms are also important tools: although it is difficult for an operating system to gauge how a workload will behave, it can follow guidelines about how best to spread limited workloads across multiple cores, packing threads onto fewer cores unless there is a performance cost.
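The halt-when-idle idea can be sketched in userspace terms: rather than spinning in an idle loop, a worker blocks (consuming no CPU) until work is queued. This is a minimal illustration of the scheduling principle, not the actual operating-system implementation:

```python
# Minimal sketch of "halt when idle": a worker blocks on an empty queue
# instead of spinning, and wakes only when work arrives.
import queue
import threading

work_queue = queue.Queue()
results = []

def worker():
    while True:
        item = work_queue.get()   # blocks ("halted") while the queue is empty
        if item is None:          # sentinel: shut the worker down
            break
        results.append(item * 2)  # do the scheduled work
        work_queue.task_done()

t = threading.Thread(target=worker)
t.start()
for job in (1, 2, 3):
    work_queue.put(job)           # each put wakes the halted worker
work_queue.put(None)
t.join()
print(results)  # -> [2, 4, 6]
```

A hardware thread halted this way draws far less power than one spinning in an idle loop, which is exactly why the idle-loop case matters on a 32-thread chip.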