
What’s common between Super-Moms and Super-Computers?

If there is a Moore’s Law for the world of computers, there’s a ‘More’s Law’ for the planet of super-computers

Pratima Harigunani


INDIA: Expectations. That’s a very heavy word. It weighs heavily upon the shoulders of anyone who is haunted by this invisible ghost every now and then. This ‘every now and then’ can mean every two years for a car engine, where it is subjected to merciless scrutiny and a contradictory wish list of speed, performance, leg-room, exhaust levels, torque, a weightless feel, drive fluidity and what not. From sitting inside a tank to being fitted under a Ferrari, from the Henry Fords to the Elon Musks – the engine just keeps getting walloped with new expectations every time it fulfills a to-do list.

That ‘every now and then’ punctuates almost all hours of a waking (and sleeping) Mom’s day. She has to do the laundry, put food on the table, stuff sandwiches inside lunch boxes, carpool kids to school and back, keep up with piano lessons, jog in time for PTA meetings, plan extraordinary birthday parties, match ribbons/ties with frocks/shorts, keep pets happy and well-fed and so on and on and on.

But on some days, even those engines or Super-Moms must pity poor super-computers. The breed just gets no breathing space. You hit teraflops, they want zettaflops. You make it to the Linpack benchmark, they want you on the Green500 list too. You crunch weather reports like never before, they want you to still worry about the von Neumann bottleneck. You climb up to exascale computing, they ask if you could do it wearing GPU or APU shoes or quantum-computing socks. There is, to cut to the chase, just no word called ‘respite’ for anyone who works in the mercilessly-hungry skyline of super-computers.


When we got to chat with Rajeeb Hazra, Vice-President, Intel Architecture Group & GM, Technical Computing, we too had worked up quite an appetite around the many issues and wish-wands for the next super-computer.

Turns out that everybody keeps wanting ‘more’, and mysteriously enough, supercomputing wizards keep delivering on that yearning extraordinaire. Here's a peek at how the impossible juggling is progressing, and at some of the latest HPC somersaults happening right here in India.

How easy is it to balance so many tangents whizzing around the super-computing radius – between performance and power efficiency, between bare-bones muscle and application ease, between being a Tianhe-2 and being a Piz Daint? Why are the world’s fastest supercomputers not appearing on the greenest list, and vice versa?


There is so much happening in the underlying architecture, and things are moving fast from one generation to the next. There is multi-core parallelism and many-core parallelism. In fact, applications are being re-written for new architectures. I hope to see rapid proliferation and modernization of many scientific applications soon. Intel has been associated with C-DAC and others, and those partnerships have helped us with insights. I would say there is tremendous growth underway and we are ready to leapfrog from yesteryears’ applications to next-generation ones, and products like Xeon 5 are helping in that direction immensely.

As for the gap between the most powerful and the greenest machines, closing it would certainly be a utopian goal, and we have some distance to cover. But on the path we are on, we will definitely accelerate and help bridge the green benchmarks with the performance benchmarks.
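The gap he refers to comes down to a single metric: the Green500 ranks machines by performance per watt, while the Top500 ranks raw Linpack speed. A minimal sketch of that arithmetic, using illustrative figures (not official benchmark numbers) for a raw-speed-focused system versus an efficiency-focused one:

```python
# Illustrative sketch: why the fastest machine is not automatically the greenest.
# The figures below are rough, illustrative values, not official Top500/Green500 numbers.

systems = {
    "raw-speed-focused system": {"rmax_pflops": 33.9, "power_mw": 17.8},
    "efficiency-focused system": {"rmax_pflops": 6.3, "power_mw": 2.3},
}

for name, s in systems.items():
    # GFLOPS/W = (PFLOPS * 1e6 GFLOPS per PFLOPS) / (MW * 1e6 W per MW)
    gflops_per_watt = (s["rmax_pflops"] * 1e6) / (s["power_mw"] * 1e6)
    print(f"{name}: {s['rmax_pflops']} PFLOPS at {s['power_mw']} MW "
          f"-> {gflops_per_watt:.2f} GFLOPS/W")
```

The machine with several times the raw speed can still land lower on the efficiency ranking, which is exactly the gap Hazra wants to bridge.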

How?


With Moore’s Law as a good fundamental, and with architectural innovation on top of that, we are making energy-efficient cores. It is about changing the system architecture from time to time and making the components energy-aware. There has to be switch-on, switch-off intelligence that works smoothly, and that’s a good path forward.
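That ‘switch-on, switch-off intelligence’ is, in spirit, what energy-aware runtimes do with per-core power states today. Here is a toy sketch of the idea; the thresholds and the Core model are hypothetical illustrations, not Intel’s implementation:

```python
# Toy model of energy-aware core management: park idle cores, wake them under load.
# The Core class and the thresholds are hypothetical, for illustration only.

from dataclasses import dataclass

@dataclass
class Core:
    core_id: int
    active: bool = True

def rebalance(cores, utilization, low=0.3, high=0.8):
    """Switch a core off when utilization is low, back on when it is high."""
    active = [c for c in cores if c.active]
    parked = [c for c in cores if not c.active]
    if utilization < low and len(active) > 1:
        active[-1].active = False      # park one core to save power
    elif utilization > high and parked:
        parked[0].active = True        # wake a parked core for performance
    return cores

cores = [Core(i) for i in range(4)]
for util in (0.9, 0.2, 0.1, 0.95):
    cores = rebalance(cores, util)
    print(f"utilization={util:.2f} -> active cores:",
          [c.core_id for c in cores if c.active])
```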

What is the significance of the new GPU focus and the other strategies around Sierra/Summit that players like Nvidia are purportedly deploying to take supercomputing to new levels?

One component of the DOE’s strategy for the CORAL bid was to support at least two fundamentally different architectures across the multiple CORAL system deliveries. We believe that our approach to HPC will enable revolutionary performance gains while maintaining the decade-plus investment in the application code base. In particular, dramatic improvements in energy efficiency and cost-performance are expected to enable an ever-growing set of applications at all scales. Our approach is to support a familiar, proven programming model while also working with the community to explore new programming models in a transitional way where appropriate and desired.


We see tremendous opportunities ahead as Intel both develops some of the fundamental technologies and then integrates them into our products. HPC continues to be fertile ground to push the envelope in computing, and Intel remains committed to a product line focused on doing this.

Exactly how?

Our focus is on programmability and compatibility, since anything new is good only if it is not too disruptive or uncomfortable for the ecosystem. The entire stakeholder universe around it should be able to transition smoothly with any breakthrough. Our Xeon Phi product line is trying to reach new levels without the hassle of learning something completely new or incompatible. We don't see the need for full-speed accelerators or discrete offload cards or anything that is tough to align with in the long run. We are trying to achieve evolutionary goals through parallelism and new architectures.


How much distance has been covered (or is expected to be covered in the next one or two years) in areas like the von Neumann challenge, the trade-off between performance and economy, and latency issues?

We see a step function in system performance in the next couple of years. This will come through the integration of new memory technologies that will deliver unprecedented performance while also maintaining a standard usage model. The next two to five years will likely be one of the most disruptive periods in more than a decade in terms of realized performance gains.

Advancements in software, storage, compilers and post-silicon progress have helped the direction of supercomputing as much as hardware improvements (the impact of Moore's Law). Would you agree? Why or why not?


Yes, we agree that all of these play a critical role in making a balanced system. I would also add fabric technologies to that list as a critical factor in realizing the tremendous compute potential. Intel is also investing heavily in the technologies you mention to assure that customers can have a balanced, reliable, high-performance system.

Any observations on the implications of NVRAM (Non-Volatile Random Access Memory), TSVs (Through-Silicon Vias) and the Moore's Law asymptote for the industry?

NVRAM will likely follow a trajectory similar to the one we saw for DRAM (Dynamic Random Access Memory). New forms of NVRAM will initially augment standard memory systems, but over time they will improve to play the role of low-cost memory, while DRAM will be pulled in closer to the CPU to deliver very low latency with capacities similar to what is typical today. TSVs will first become commonplace in memory, but there will likely be additional movement toward a PIM-like (processing-in-memory) model. For now there remain significant thermal challenges to this approach, but they are being aggressively addressed.
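A simple way to see why DRAM would sit closer to the CPU while NVRAM plays the low-cost capacity role is an average-access-time calculation for a two-tier memory. The latencies and hit rates below are assumed for illustration, not product figures:

```python
# Illustrative two-tier memory model: a fast near tier (DRAM-like) in front of
# a larger, slower far tier (NVRAM-like). All numbers are assumed, not measured.

def average_access_ns(near_hit_rate, near_latency_ns, far_latency_ns):
    """Average access time when a fraction of accesses hit the near tier."""
    return near_hit_rate * near_latency_ns + (1 - near_hit_rate) * far_latency_ns

near_ns, far_ns = 100, 500          # assumed latencies for the near and far tiers
for hit_rate in (0.5, 0.8, 0.95):
    avg = average_access_ns(hit_rate, near_ns, far_ns)
    print(f"near-tier hit rate {hit_rate:.0%}: average access ~{avg:.0f} ns")
```

The better the near tier captures the working set, the closer the whole system behaves to fast memory, even though most of the capacity sits in the slower, cheaper tier.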

Moore’s Law, in terms of traditional density scaling, will continue for at least a few more generations. We are researching new directions involving both silicon and non-silicon based technologies. Even if density scaling were to stop at the current generation, architectural innovation would continue to drive us forward, albeit at a somewhat slower rate. Moore’s Law has been responsible for only about 50 per cent of the improvements in perf/W and cost-performance over the last decade or two. The architectural innovation moving us forward will be alive and well for at least another decade, perhaps two.
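That 50 per cent figure can be read as a compound-growth split: roughly half of the overall perf/W exponent from process scaling and half from architecture. A back-of-the-envelope illustration, with the overall gain assumed purely for the example:

```python
# Back-of-the-envelope split of perf/W gains between process scaling (Moore's Law)
# and architectural innovation. The 100x overall gain is an assumed figure.

overall_gain = 100.0     # assumed overall perf/W improvement over roughly a decade
process_share = 0.5      # "about 50 per cent" attributed to Moore's Law

# Reading the share logarithmically: each source contributes half of the exponent.
process_gain = overall_gain ** process_share
architecture_gain = overall_gain / process_gain

print(f"process scaling: ~{process_gain:.0f}x, architecture: ~{architecture_gain:.0f}x, "
      f"combined: ~{process_gain * architecture_gain:.0f}x")
```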

So much is trickling down from the other side of the computing world here – in-memory engines, computing on the cloud, SDN and so on. Is that good news for supercomputing folks?

All the elements driving energy efficiency are coming together in good time. Software has started realizing it has its own role to play on the barometer of energy efficiency. Software has to co-operate with hardware in new models. If software is not kept energy-aware, then no matter how sharp the hardware elements are, it will undo hardware’s efforts on energy-awareness.

What is the next-generation of super-computers going to be like?

They will have new levels of memory-processor integration, with new frontiers of co-design, scalable system frameworks and so on. No matter what the ingredients are – in-memory technologies, optics or interconnects – they will be blended well and made interoperable in a stable, sustainable way.

Is that an easy road to chase?

The attempt is to reduce complexity and provide a platform where multiple architectures and components can have consistency, interoperability and fluidity. We are sitting in very exciting times, and multiple technology transitions, component integrations and new innovations will bring HPC (High-Performance Computing) to the masses and create applications that help everyday life. Soon you will find HPC an integral part of the common man’s life, whether it is checking the weather or a better way of managing traffic.

How much is India ready for that kind of adoption?

India finds itself at an exciting turn, especially within the broader context of the ‘Digital India’ dream. To achieve that, and at scale, supercomputing muscle is going to be very helpful; digital without ‘compute’ is empty. We have graduated from PARAM to today’s best supercomputers and have so much IP and super-computing wisdom to leverage. From the scientific research that India has mastered well, the application of HPC can flow on to areas like artificial intelligence, translation of ancient texts and agriculture problems. India has contributed well to advances in super-computing assets.

The government has recently approved the launch of the National Supercomputing Mission to connect national academic and R&D institutions with a grid of over 70 high-performance computing facilities, at an estimated cost of Rs 4,500 crore. With this recent announcement, high-performance computing will gain more prominence in India. Currently, HPC has been deployed in areas like weather forecasting, health care, education, life sciences and R&D. The government is also using HPC in areas like defence, in order to send and interpret coded messages. In essence, industries that deal with a lot of data or business analytics are the probable targets for HPC-based applications.

What part makes you most excited about this new HPC thrust?

Intel provides its technology for several home-grown supercomputers such as Param Yuva (developed by the Centre for Development of Advanced Computing) and EKA (from the Tata stable). It is a constant endeavour at Intel to work jointly with the scientific community for continuous improvement. With a Rs 4,500-crore commitment to HPC and to the democratization of super-computing, with a fabric that runs across the nation, India is clearly walking well on this broad initiative. It will soon become competitive when it has not just one, not two, but many supercomputers working and buzzing everywhere. We are excited to offer our products and knowledge towards this goal.

Any examples worth noting that indicate the scope of the National Supercomputing Mission for India? Or any thoughts on what Indian supercomputers still need to learn or catch up on when it comes to their global counterparts?

India has come a long way very quickly in supercomputing. It would be difficult to criticize, given the tremendous successes that India has had. If there is one area we would mention, it is the development of closer ties between industry and computing. When the economic impact of computing is easily visible and driven from the commercial sector, computing will accelerate further through market forces, not just government foresight.
