
HPC Trends: Why CFD Must Be Proactive Not Reactive


From the decade-long dominance of vector processing to the introduction of x86 clusters in the late 1990s, trends in high performance computing (HPC) have reflected technological advances and consumer demand. As always, cost, performance, and accessibility drive these trends, in a proactive attempt to ensure future computational power needs are met.

In this article, we look at how the world of HPC is approaching the latest disruptions and trends, and why the world of CFD must match this proactive, rather than reactive, stance. This theme was underscored at this year's NAFEMS World Congress, where a wealth of papers covered relevant topics.

Being 'Reactive' Is Not Sustainable

Over recent years, some CFD simulation teams have continued to grow and expand their computing clusters, running into a number of limitations including space, power, additional users, and licensing costs. While advances in hardware have helped, CFD vendors have not been fully proactive in creating and developing accelerator-supported HPC software packages. Their available platforms have achieved only marginal gains from HPC technology, which means the benefit does not justify the additional hardware cost.

Accelerated Computing: The Need For Speed 

Over recent years, accelerators have become a significant part of everyday life in HPC. According to the latest research, approximately a third of HPC systems in operation today use accelerators, and for the first time, over 100 accelerated systems feature on the TOP500 list. In addition, a third of total FLOPS are delivered by accelerators.

A key factor in the rise of accelerator use is the continued slow-down of Moore's Law. With transistor sizes approaching the atomic scale, improving microchip performance without a disproportionate increase in cost and power is difficult.


Figure 1 - Cost-Performance Comparison

Such a slow-down means the industry can no longer expect a doubling of performance every 18 months, hence the move to accelerators. This is not surprising given that many companies have recognized the efficiency gains even a small accelerator investment can deliver.

The most widely used accelerator type is easily the GPU, holding around 80% of the market. As the accelerator that brought supercomputing into everyday life, its reach is well illustrated by the modern mobile phone.

Envenio predicts that accelerator use will continue to increase, and that within a few years the majority of Computational Fluid Dynamics (CFD) systems will be accelerator-equipped.

Writing code for accelerators is not easy, but it is what is required to arrive at a truly optimized solver for HPC hardware. Envenio built its EXN/Aero software package to deliver economical supercomputing for CFD teams of all sizes (from freelancers to large corporate teams), allowing businesses not just to reduce hardware investment, but to be confident in their ability to increase performance and ultimately stay level with, or ahead of, their competitors.

Embracing The Cloud 

You don't have to look too far back to find articles questioning the validity of running HPC workloads in the cloud, and we've even written an article about how misconceptions are limiting its current use. In fairness to the authors of such articles, the technology in the public cloud at the time was simply not ready to accommodate the workloads commonly seen in the HPC space.

It's fair to say that in 2017, we not only have the technology in the cloud required to run such HPC workloads, but also have real-world companies and organizations successfully doing just that.

Broner presents a good example of why companies are looking to cloud options in his recent article on Inside HPC:

"You buy a computer for a few million dollars, and you are able to run simulations to reduce your innovation time and time to market for your products. The auto manufacturer depicted in Figure 1 represents the new dilemma faced in buying such an in-house HPC system. With the workload this company has, what size system should they buy? If they buy a system that accommodates the peak workload, they may have to spend around $20M, but the system will be only 20% utilized. If they buy a $4M system, the system will be highly utilized, but large jobs cannot be run, and jobs will wait in a queue—potentially for days—before they run, delaying innovation and time to market".


Figure 2 - The challenge of an auto manufacturer selecting the next HPC system

Many companies in this situation are now deciding that neither option is acceptable for their CFD workloads, and are turning to companies such as Envenio to run their high performance simulations in the cloud. They pay a monthly fee for the software they need, when they need it, and have instant access to the system best suited to their requirements. As a result, design throughput and time to market are also optimized. Software such as Envenio's cloud-hosted EXN/Aero gives companies the best of both worlds: access to large computing power for a fraction of the cost. Cloud-hosted CFD can now be treated as an operating expense rather than a huge capital expenditure.
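The trade-off in Broner's example can be made concrete with a quick back-of-the-envelope calculation. The five-year amortization period and the 90% utilization assumed for the smaller system below are illustrative assumptions, not figures from his article:

```python
# Back-of-the-envelope comparison using the figures from Broner's example:
# a $20M system sized for peak load at ~20% utilization versus a $4M
# system that is busy but queues large jobs. Amortization period and the
# smaller system's utilization are assumptions for illustration only.
HOURS_PER_YEAR = 8760
YEARS = 5  # assumed amortization period

def cost_per_utilized_hour(capex, utilization):
    """Dollars paid per hour of machine time actually doing useful work."""
    return capex / (HOURS_PER_YEAR * YEARS * utilization)

peak_sized = cost_per_utilized_hour(20_000_000, 0.20)   # pays for idle capacity
right_sized = cost_per_utilized_hour(4_000_000, 0.90)   # cheaper, but jobs queue

print(round(peak_sized), round(right_sized))
```

The peak-sized machine costs over twenty times more per utilized hour, which is precisely the gap that pay-as-you-go cloud capacity closes: you buy only the hours you use.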

Figure 3 - Why cloud-based CFD offers a more appealing solution to organizations

Unlike other trends in the HPC world, the cloud is fairly unique: companies can dip a toe in the water, with plenty of opportunity to test the key benefits without committing to multi-million-dollar purchases.

Essentially, the cloud gives the CFD community an exciting opportunity to access world-class computing power on a budget, whilst working against the clock. 

The Manycore Mindset

As innovations in HPC progress, workstation-scale technology tracks the architectural evolution of large-scale supercomputing systems. At present, workstations are shifting to a manycore layout, in which massively parallel GPU co-processors share the computing workload with multiple CPU cores. By adopting a manycore mindset that reduces O&M costs on computing equipment and makes high-performance simulation more accessible, we are locked into a steep scalability trajectory that non-optimized solvers will find harder and harder to match over time.

Our very own EXN/Aero is an example of a manycore-optimized solution, ideal for those wishing to scale up their CFD simulation capability without a matching increase in compute or licensing costs.

CFD vendors must continue to invest in forward-thinking HPC trends to deliver low-cost, powerful manycore supercomputing at every enterprise scale. A working example is Envenio's investment in space-time (parareal) parallelization, which helps overcome some of the new latency barriers that appear in a manycore environment.
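To give a flavour of how time itself can be parallelized, here is a minimal sketch of the classic parareal iteration applied to a scalar model ODE, dy/dt = -λy. The propagators, step counts, and test problem are illustrative assumptions; EXN/Aero's actual space-time parallelization is considerably more sophisticated:

```python
# Minimal parareal sketch: a cheap coarse propagator sweeps time serially,
# while expensive fine propagations over each time slice are independent
# and can run in parallel. Problem and propagators are illustrative only.
import math

def parareal(y0, lam, T, n_slices, n_fine, n_iters):
    dt = T / n_slices

    def coarse(y, h):
        # One backward-Euler step: cheap and stable.
        return y / (1.0 + lam * h)

    def fine(y, h, steps):
        # Many small forward-Euler steps: the "accurate" solver.
        sub = h / steps
        for _ in range(steps):
            y -= lam * sub * y
        return y

    # Initial guess: serial coarse sweep over all time slices.
    U = [y0]
    for _ in range(n_slices):
        U.append(coarse(U[-1], dt))

    for _ in range(n_iters):
        # Fine propagations are independent -> parallel across slices.
        F = [fine(U[n], dt, n_fine) for n in range(n_slices)]
        G_old = [coarse(U[n], dt) for n in range(n_slices)]
        # Serial correction sweep: U_{n+1} = G(U_n) + F_old - G_old.
        new_U = [y0]
        for n in range(n_slices):
            new_U.append(coarse(new_U[-1], dt) + F[n] - G_old[n])
        U = new_U
    return U

U = parareal(y0=1.0, lam=1.0, T=1.0, n_slices=10, n_fine=100, n_iters=5)
print(abs(U[-1] - math.exp(-1.0)))  # error vs the exact solution e^{-1}
```

After a handful of iterations the parareal solution converges to what the fine solver would have produced serially, but the dominant fine work was distributed across time slices rather than executed one slice after another.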

Not every computing technology is equally good at every computing task. Key to manycore computing is a flexible load balancing mechanism that lets the various architectures play to their strengths. The Cell-Based Mapping Module (CBMM) is our answer to this problem: it detects the available compute resources, then manages the assignment of cell and interface compute tasks online, according to their attributes and data type. This approach maximizes parallelism for all solver operations and removes bottlenecks.

EXN/Aero's cell and interface design is another example of a proactive approach, allowing greater creativity and flexibility in how a CFD run is parallelized. Bulky, expensive cell tasks are computed on GPU devices much of the time, while interface tasks are distributed based on their proximity to cell objects. This minimizes the communication overhead from interface to cell, and from host memory to co-processor memory.
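The kind of mapping logic described above can be sketched in a few lines: detect the available devices, send bulky cell tasks to the GPU, and place interface tasks next to the cell data they exchange. All names and heuristics here are hypothetical illustrations, not Envenio's actual implementation:

```python
# Hypothetical sketch of cell/interface task mapping on a hybrid node.
# Device names, task attributes, and placement rules are assumptions
# made for illustration; the real CBMM is far more involved.
from dataclasses import dataclass

@dataclass
class Task:
    kind: str                       # "cell" (bulky, data-parallel) or
                                    # "interface" (communication-bound)
    neighbour_device: str = "cpu"   # where the adjacent cell data lives

def detect_devices():
    # A real solver would query the CUDA/OpenCL runtime here; we simply
    # assume a node with one GPU and one multicore CPU.
    return ["gpu0", "cpu"]

def assign(tasks, devices):
    schedule = {d: [] for d in devices}
    for t in tasks:
        if t.kind == "cell" and "gpu0" in devices:
            schedule["gpu0"].append(t)   # bulky cell work -> GPU
        elif t.kind == "interface":
            # Place interface work beside the cell data it exchanges,
            # minimizing host <-> co-processor traffic.
            schedule[t.neighbour_device].append(t)
        else:
            schedule["cpu"].append(t)    # fallback for CPU-only nodes
    return schedule

tasks = [Task("cell"), Task("cell"),
         Task("interface", neighbour_device="gpu0"),
         Task("interface", neighbour_device="cpu")]
sched = assign(tasks, detect_devices())
print({d: [t.kind for t in ts] for d, ts in sched.items()})
```

The design point worth noting is that placement decisions are made per task at run time, from the task's own attributes, rather than being fixed at compile time: that is what lets each architecture play to its strengths.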

EXN/Aero’s CBMM also allows multiple simulation instances to run concurrently on the same shared resource, ultimately speeding up the engineering workflow.


The above topics have only scratched the surface of the issues and trends around HPC and CFD. However, they show the very real stance being taken by us here at Envenio in response to changing needs and technologies. Built from the ground up to maximize performance in hybrid parallel computing environments, EXN/Aero is a working example of CFD software embracing and adapting to HPC trends. CFD users must remain open-minded, proactive, and ready to adapt to new technologies to remain relevant and innovative.

2017-09-12 | Categories: CFD, simulations, HPC