
Mixed-Precision Arithmetic in HPC: Can Sacrificing Precision Really Improve Accuracy?


The world of supercomputing is evolving fast. With the first exascale systems expected within just a few years, the HPC (High-Performance Computing) community is hard at work preparing for their arrival.

While these developments are exciting, putting such systems into practice is causing some head-scratching among engineers.

The power demand of exascale systems is one such concern, with current predictions putting the minimum draw at 20 MW. Cost and resource implications will no doubt shape the new era of HPC, and computer scientists face the challenge of making these systems as efficient and accessible as possible.

The Envenio team has already started to look to the future of HPC, building its EXN/Aero CFD Software as a proactive solution to provide accessible and affordable CFD (Computational Fluid Dynamics), and prevent ever-growing costs in power-draining hardware.

The Accuracy/Precision Trade-off

Identifying ways to make systems more energy efficient is one avenue the industry is investigating, and a report by computer scientists from Argonne National Laboratory, Rice University, and the University of Illinois at Urbana-Champaign has shown how more accurate solutions could be achieved by reducing a computation's mathematical precision.

The concept of “inexact computing” or “mixed-precision arithmetic” was first proposed in a 2003 paper by one of the report’s authors, Krishna Palem. He concluded that replacing higher-precision calculations with lower-precision ones could lead to energy savings, an obvious benefit in the looming exascale era.

The latest report takes Palem’s original findings a step further, highlighting how an application’s accuracy could also be improved by simply reinvesting the saved energy into additional computations.

This reinvestment is based on the Newton-Raphson method, a numerical analysis technique developed in the 17th century. In the paper, Palem describes how the software computes successively more accurate results by "calculating answers in a relay of sprints rather than in a marathon".
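The report does not include code, but the "relay of sprints" idea resembles a standard mixed-precision technique: iterative refinement for a linear system, where each cheap low-precision solve is a sprint and a higher-precision residual steers the corrections. The sketch below (in NumPy, with illustrative names) is an assumption about the general approach, not the authors' actual implementation:

```python
import numpy as np

def mixed_precision_solve(A, b, iters=5):
    """Sketch of mixed-precision iterative refinement.

    The expensive solves run in float32 (the "sprints"); the residual
    is computed in float64 to steer each correction step.
    """
    A32 = A.astype(np.float32)
    # Initial cheap solve entirely in single precision.
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x                                    # residual in float64
        d = np.linalg.solve(A32, r.astype(np.float32))   # cheap correction
        x = x + d.astype(np.float64)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 100))
b = rng.standard_normal(100)
x = mixed_precision_solve(A, b)
print(np.linalg.norm(b - A @ x))  # residual approaches float64-level accuracy
```

For a well-conditioned matrix, a few refinement steps recover near double-precision accuracy while the dominant cost stays in single precision.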

In practice, there are a variety of ways this “mixed-precision arithmetic” can be utilized.

In a blog post, the University of Manchester's Nick Higham concludes that high, low, half, and even quadruple precision all have a place in reducing costs and optimizing performance.
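These precision levels, and the trade-offs between them, can be inspected directly in NumPy. The snippet below is not from Higham's post; note that true quadruple precision is not native to NumPy, and `np.longdouble` is only extended precision (80-bit) on most x86 platforms:

```python
import numpy as np

# Machine epsilon for each floating-point format: the gap between 1.0 and
# the next representable number, i.e. the rounding granularity.
for dtype in (np.float16, np.float32, np.float64, np.longdouble):
    print(f"{dtype.__name__:>10}: eps = {np.finfo(dtype).eps}")
```

Half precision rounds at roughly three decimal digits, single at about seven, and double at about sixteen, which is precisely the gap that mixed-precision schemes exploit: compute where three digits suffice, correct where sixteen are needed.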

EXN/Aero's Mixed Precision Capability

Envenio’s commitment to extracting additional performance from existing hardware is echoed by the report’s authors, and an increase in the development and use of mixed precision algorithms is widely anticipated over the coming years. 

This forecast is supported by the diminishing returns of Moore’s Law, the increased use of accelerators such as GPUs, and a number of economic and practical challenges affecting accessible supercomputing.

Envenio’s EXN/Aero solver can perform mixed-precision computations, another example of how the software brings real-world trends to engineers at all levels, in both an accessible and affordable way.


2018-10-15 | Categories: CFD, HPC
