
Members’ Activities – Imperial College London

OPL - Impact of partial differential equations on unstructured finite element/volume-based solution methods.

Multi-core processors are now a ubiquitous building block in everything from smartphones to desktops and supercomputers. To achieve performance and efficiency, applications must now embrace modern developments in processor design. They need ways of exploiting every level of parallelism: SIMD within each core; threading to exploit resource sharing at the CPU level; and message passing for inter-process communication.
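As a minimal sketch of how these three levels combine in practice (illustrative only, not code from Fluidity or PETSc), consider:

```c
/* Minimal hybrid-parallelism sketch: MPI across processes, OpenMP
 * across cores, SIMD within each core. Compile with e.g.
 *   mpicc -fopenmp hybrid.c -o hybrid
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank;
    /* Message passing: typically one MPI process per node or socket. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double local = 0.0;
    /* Threading: OpenMP shares the loop among the cores of the CPU. */
    #pragma omp parallel for reduction(+:local)
    for (int i = 0; i < 1000000; i++) {
        /* SIMD: compilers vectorise simple inner loops like this;
         * OpenMP 4.0 `#pragma omp simd` can request it explicitly. */
        local += 1.0 / ((double)i + 1.0);
    }

    double global = 0.0;
    /* Inter-process communication: combine the per-process results. */
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0) printf("sum = %f\n", global);
    MPI_Finalize();
    return 0;
}
```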

Changes in disruptive technology push application developers into a Red Queen's race: huge resources are required simply to keep rewriting what is already very complex simulation software, and highly specialist knowledge is required to achieve high performance on modern multi-core systems. In addition, many well-engineered simulators make use of third party libraries, such as PETSc, which lie on the critical computational path. This introduces further constraints for software architects: application codes can either use the parallel paradigms supported by those third party packages, switch to an alternative package, or implement their own library. It is with these specific challenges in mind that OPL aims to develop, extend and support the key building blocks required for building high performance simulation software.

ICL - OPL partnership

Fluidity is an open source control volume finite element simulation package, whose development is centred within the Applied Modelling and Computation Group (AMCG) at Imperial College London. Fluidity is used in a wide range of geophysical and industrial fluid flow applications, including: tsunami and geohazard modelling; marine renewable energy; petroleum reservoirs; and mining. These are all demanding multi-scale problems of great social and economic interest, which require both advanced numerical methods and high performance computing. The two dominant computational costs are the assembly of the local systems of equations, and the solution of the resulting sparse systems of equations. While the details of matrix assembly are application specific, the resulting sparse systems are solved using the open source library PETSc, developed at Argonne National Laboratory. A further novel feature of Fluidity is its use of anisotropic adaptive mesh methods. Adaptive methods give local control of simulation accuracy, so that no region ever carries too much or too little resolution. In comparison with static mesh approaches, this can either deliver orders of magnitude higher resolution with a given computational resource, or achieve the same accuracy with a much smaller one.
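For a flavour of the solve stage, the sketch below uses the standard PETSc KSP interface in C. It follows the modern PETSc signatures (the 3.3-era API mentioned later differs slightly; for example, KSPSetOperators then took an extra argument), and the tridiagonal operator is a simple stand-in, not Fluidity's actual system:

```c
/* Minimal PETSc sparse-solve sketch (standard KSP interface).
 * The 1D Laplacian below is a stand-in for a real assembled matrix. */
#include <petscksp.h>

int main(int argc, char **argv)
{
    Mat A; Vec x, b; KSP ksp;
    PetscInt i, n = 100;

    PetscInitialize(&argc, &argv, NULL, NULL);

    /* Assemble a tridiagonal sparse operator. */
    MatCreate(PETSC_COMM_WORLD, &A);
    MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);
    MatSetFromOptions(A);
    MatSetUp(A);
    for (i = 0; i < n; i++) {
        if (i > 0)     MatSetValue(A, i, i - 1, -1.0, INSERT_VALUES);
        if (i < n - 1) MatSetValue(A, i, i + 1, -1.0, INSERT_VALUES);
        MatSetValue(A, i, i, 2.0, INSERT_VALUES);
    }
    MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
    MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

    MatCreateVecs(A, &x, &b);
    VecSet(b, 1.0);

    /* Krylov solve; method and preconditioner are chosen at run time,
     * e.g. -ksp_type cg -pc_type jacobi. */
    KSPCreate(PETSC_COMM_WORLD, &ksp);
    KSPSetOperators(ksp, A, A);
    KSPSetFromOptions(ksp);
    KSPSolve(ksp, b, x);

    KSPDestroy(&ksp); MatDestroy(&A); VecDestroy(&x); VecDestroy(&b);
    PetscFinalize();
    return 0;
}
```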

The following figure shows an adaptive-mesh simulation of a sediment-laden density current, coloured by density. In this turbidity current, the stratification of the two sediment classes is clearly visible: yellow highlights the larger-diameter sediment, which settles more quickly, whereas red is the smaller-diameter sediment. This adaptive simulation was run with approximately 100x fewer mesh nodes than a fixed mesh would require for the same accuracy.

Fluidity had already been parallelised using MPI and could scale to thousands of processors. However, it was clear that scaling further would require other parallel strategies. OpenMP is an attractive choice as it is relatively non-invasive in the code and, applied carefully, can scale well on shared memory processors. However, the entire software stack had to be considered, in particular major simulator components such as PETSc and mesh adaptivity.
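The kind of loop-level threading at stake is illustrated below with a CSR sparse matrix-vector product, the kernel at the heart of a Krylov solve. This is an illustrative sketch, not code from PETSc or Fluidity:

```c
/* Illustrative OpenMP kernel: a CSR sparse matrix-vector product,
 * the kind of loop that dominates a Krylov solve and threads
 * naturally over rows on a shared memory processor. */
void csr_spmv(int nrows, const int *rowptr, const int *colidx,
              const double *vals, const double *x, double *y)
{
    #pragma omp parallel for schedule(static)
    for (int row = 0; row < nrows; row++) {
        double sum = 0.0;
        for (int j = rowptr[row]; j < rowptr[row + 1]; j++)
            sum += vals[j] * x[colidx[j]];
        y[row] = sum;   /* rows are independent: no synchronisation */
    }
}
```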

OPL brought together developers at Imperial College London, the Edinburgh Parallel Computing Centre (EPCC) and Argonne National Laboratory to develop an OpenMP threaded version of PETSc, based on the PETSc 3.3 stable release. This is currently being used in Fluidity, and the hybrid OpenMP/MPI code already outperforms the previous pure MPI version.

OPL is also supporting AMCG in developing the next generation of its mesh adaptivity library, PRAgMaTIc (Parallel anisotRopic Adaptive Mesh ToolkIt), under an open source BSD licence. PRAgMaTIc supports 2D and 3D adaptivity for triangular and tetrahedral meshes, and supports hybrid OpenMP/MPI parallelism. It also exploits hybrid arithmetic to optimise floating point performance. These innovations have greatly boosted performance beyond that of the previous generation of the library and made it more generally available to other application codes.
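The text does not spell out what "hybrid arithmetic" involves; a common reading is mixed precision, sketched below under that assumption (single-precision storage to halve memory traffic, double-precision accumulation to limit rounding error). This is an illustration of the general technique, not PRAgMaTIc code:

```c
/* Hedged illustration of mixed-precision ("hybrid") arithmetic:
 * store data in single precision, accumulate in double precision.
 * An assumption about the technique, not code from PRAgMaTIc. */
#include <stddef.h>

double dot_mixed(size_t n, const float *a, const float *b)
{
    double sum = 0.0;                         /* accumulate in double */
    for (size_t i = 0; i < n; i++)
        sum += (double)a[i] * (double)b[i];   /* promote per element  */
    return sum;
}
```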

February 14 2013

International Science Grid This Week's Andrew Purcell interviews Wolfgang Gentzsch

February 13 2013

A well-attended Open Petascale Libraries meeting was held in Salt Lake City on 11th November to coincide with the SC12 conference.

February 01 2013

"Cosmic Web Stripping" is identified as a new way of explaining the famous missing dwarf problem.
