Compilers for High Performance (Compilation and Parallelization Techniques)

Topic 4

Description

This topic deals with all subjects concerning the automatic parallelization and compilation of programs for high performance systems, from general-purpose platforms to specific hardware accelerators. This includes language aspects, program analysis, and program transformations and optimizations targeting all aspects of resource utilization (processors, functional units, memory requirements, power consumption, code size, etc.).

 

A (non-exclusive) selection of standard issues covered is listed below. The interplay between compiler technology and development and execution environments is also included. Target programming styles comprise the usual sequential imperative languages, but also very high level, data-parallel, object-oriented, and single-assignment languages. We also welcome submissions on practical experience, in particular industrial case studies, that assess the benefits and limitations of current automatic parallelization techniques and programming styles, and the essential reasons for their success or failure.

Focus

  • static analysis
  • program transformations
  • cache optimizations
  • automatic parallelization
  • scheduling, allocation, mapping
  • communication optimizations
  • code generation
  • languages (compilation aspects)
  • dynamic compilation
  • compiling for Grids and hybrid systems
  • compiling for chip multiprocessors and embedded systems

Global Chair

Prof. Dr. Michael Gerndt
Technische Universität München
München, Germany
Email:
gerndt@in.tum.de

Vice Chairs

Prof. Chau-Wen Tseng
University of Maryland
College Park, USA
Email: tseng@cs.umd.edu

Dr. Michael O'Boyle
University of Edinburgh
Edinburgh, UK
Email: mob@dcs.ed.ac.uk

Local Chair

Dr. Markus Schordan
Lawrence Livermore National Laboratory
Livermore, USA
Email: schordan1@llnl.gov