

Parallel software is currently based mainly on the MPI standard, which was established in 1994 and has dominated application development ever since. Adapting parallel software to current hardware, which is characterized by growing core counts per CPU and by heterogeneous systems, has exposed significant weaknesses of MPI that limit the scalability of applications on heterogeneous multi-core systems. Driven both by this hardware development and by the goal of scaling to ever higher core counts, programming models now face new demands: a flexible thread model, asynchronous communication, and the management of memory subsystems with varying bandwidth and latency. This challenge to the software industry, also known as the "Multicore Challenge", is stimulating the development of new programming models and languages and poses new challenges for mathematical modeling, algorithms, and their implementation in software.

PGAS (Partitioned Global Address Space) programming models have been discussed as an alternative to MPI for some time. The PGAS approach offers the developer an abstract shared address space that simplifies the programming task while facilitating data locality, thread-based programming, and asynchronous communication. The goal of the GASPI project is to provide a suitable programming tool for the wider HPC community by defining a standard, based on the PGAS API of Fraunhofer ITWM, as a reliable basis for future developments. In addition, an implementation of the standard will be made available as a highly portable open-source library. The standard will also define interfaces for performance analysis, for which tools will be developed within the project. The libraries are evaluated through the parallel re-implementation of industrial applications, up to and including production status.
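The one-sided, notification-based communication style described above can be illustrated with a short GASPI-style sketch in C. This is not project code: it assumes a GASPI implementation (such as GPI-2) and its `GASPI.h` header are installed, it compresses error handling for brevity, and the segment size and notification values are arbitrary choices for illustration.

```c
#include <GASPI.h>
#include <stdlib.h>

/* Sketch: rank 0 writes data directly into rank 1's partition of the
 * global address space and attaches a notification; rank 1 waits only
 * on the notification, with no matching receive call. */
int main(void)
{
    gaspi_proc_init(GASPI_BLOCK);

    gaspi_rank_t rank, nranks;
    gaspi_proc_rank(&rank);
    gaspi_proc_num(&nranks);

    /* One segment per rank forms the partitioned global address space. */
    const gaspi_segment_id_t seg = 0;
    gaspi_segment_create(seg, 1 << 20, GASPI_GROUP_ALL,
                         GASPI_BLOCK, GASPI_MEM_INITIALIZED);

    if (rank == 0 && nranks > 1) {
        gaspi_pointer_t ptr;
        gaspi_segment_ptr(seg, &ptr);
        ((double *)ptr)[0] = 3.14;            /* data to transfer        */

        /* One-sided: remote write and notification in a single call.   */
        gaspi_write_notify(seg, 0,            /* local segment, offset   */
                           1, seg, 0,         /* target rank, seg, offset*/
                           sizeof(double),
                           0, 1,              /* notification id, value  */
                           0, GASPI_BLOCK);   /* queue, timeout          */
        gaspi_wait(0, GASPI_BLOCK);           /* flush queue 0           */
    } else if (rank == 1) {
        gaspi_notification_id_t id;
        gaspi_notification_t val;
        /* The target blocks on the notification only; the data is
         * already in its local partition when the wait returns. */
        gaspi_notify_waitsome(seg, 0, 1, &id, GASPI_BLOCK);
        gaspi_notify_reset(seg, id, &val);
    }

    gaspi_proc_term(GASPI_BLOCK);
    return EXIT_SUCCESS;
}
```

A program like this would be started with the implementation's launcher (for GPI-2, `gaspi_run`) on two or more ranks. The point of the pattern is that communication and synchronization are decoupled: the writer never blocks on the reader's progress, which is what enables the asynchronous, overlap-friendly communication the project targets.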

Project Activities

  • Definition of the GASPI standard, a PGAS API, ensuring interoperability with MPI.
  • Development of a high-performance library for one-sided and asynchronous communication based on the Fraunhofer PGAS API.
  • Provision of a highly portable, open-source GASPI implementation.
  • Adaptation and further development of the Vampir performance analysis suite.
  • Provision of efficient numerical libraries (core functions and higher-level solvers) for both sparse and dense linear systems.
  • Porting of complex, industry-oriented applications.
  • Evaluation, benchmarking, and performance analysis.
  • Information dissemination, formation of user groups, training, and workshops.


The GASPI project is part of the BMBF funding program "IKT 2020 – Forschung für Innovation" in the framework of the call "HPC-Software for scalable parallel systems".


Members from EMCL


Dr. Jens Breitbart

Project Link 

HPCwire: An HPC Programming Model for the Exascale Age