All workshops and tutorials will take place on March 11, 2007 according to the table below. Lunch is included for all tutorial/workshop attendees. Note that in the US this is the day Daylight Saving Time begins, so do not forget to adjust your clocks in the morning!
Theme | Morning (8:00-12:00) | Afternoon (1:30-6:00)
---|---|---
DSP/Embedded systems | ODES: 5th workshop on optimizations for DSP and embedded systems |
Multi/many-core | Data-parallel programming models for many-core architectures workshop | Software tools for multi-core systems workshop
Instruction-level parallelism | Open64 Compiler tutorial | EPIC-6 workshop (explicitly parallel instruction computing)
Compiler tutorials | GCC compiler tutorial | Practical Phoenix: A Hands-On Tutorial
The GNU Compiler Collection (GCC) is one of the most popular compilers available today, yet its internal infrastructure remains relatively unknown outside the immediate developer community.
Over the last few years we have made significant improvements to its internal architecture, resulting in a more modular compiler that incorporates much of the most recent optimization technology. In this tutorial, I will provide a roadmap to the internal workings of GCC that should help compiler implementors modify and enhance GCC to cater to their needs.
The tutorial will describe in detail all the major components in GCC (intermediate representations, the SSA forms used, alias analysis, OpenMP support, the pass manager, the call graph manager, the structure of passes, etc.). The aim is to give implementors enough information to modify GCC efficiently.
Parallel architectures have proliferated to the desktop through multi-core CPUs and GPUs. As power constraints drive semiconductor manufacturers to increase the levels of software-exposed parallelism in their products, developing software for these platforms becomes increasingly challenging. Automatic parallelization of programs written in mainstream languages such as C remains in the realm of research, while manual parallel programming in such languages is fraught with difficulties. For example, debugging non-deterministic race conditions in multi-threaded code is exceedingly difficult. Additionally, tuning the performance of parallel programs requires unprecedented knowledge and understanding of architectural and micro-architectural details. These issues have created an enormous programmer-productivity problem.
Data-parallel programming models are emerging as an extremely attractive approach to parallel programming, driven by several factors. Through deterministic semantics and constrained synchronization mechanisms, they provide race-free parallel-programming semantics. Furthermore, data-parallel programming models free programmers from reasoning about the details of the underlying hardware and software mechanisms for achieving parallel execution, and they facilitate effective compilation. Finally, efforts in the GPGPU movement and elsewhere have matured implementation technologies for streaming and data-parallel programming models to the point where high performance can be reliably achieved.
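As a minimal illustration of the style described above (an illustrative sketch for this page, not material from the workshop), the C/OpenMP example below expresses an element-wise computation in which every iteration writes a distinct output element; because the iterations are independent, the result is deterministic regardless of how the runtime schedules them.

```c
#include <stdio.h>

/* Hypothetical example: scale an input array by a constant.
 * Each iteration reads only in[i] and writes only out[i], so there are no
 * cross-iteration dependences and no data races; the runtime is free to
 * execute the iterations in parallel in any order. */
static void scale(const float *in, float *out, float alpha, int n)
{
    #pragma omp parallel for
    for (int i = 0; i < n; i++)
        out[i] = alpha * in[i];
}

int main(void)
{
    float in[8] = {0, 1, 2, 3, 4, 5, 6, 7};
    float out[8];

    scale(in, out, 2.0f, 8);
    for (int i = 0; i < 8; i++)
        printf("%g ", out[i]);
    printf("\n");
    return 0;
}
```

Compiled with OpenMP enabled (e.g. `gcc -fopenmp`), the loop runs across the available cores; without it, the pragma is ignored and the code runs serially with identical output, which is exactly the kind of deterministic, race-free behavior the abstract points to.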
This workshop aims to gather commercial and academic researchers, vendors, and users of data-parallel programming platforms to discuss implementation experience for a broad range of many-core architectures and to speculate on future programming-model directions.
This inaugural workshop's talks will be largely invited, though attendance will be open.
More information, including slides and a discussion board, is available at this Google Group.
Open64 was originally developed by SGI and released as the MIPSpro
compiler. It has been well-recognized as an industrial-strength
production compiler for high-performance computing. It includes
advanced interprocedural optimizations, loop nest optimizations,
global scalar optimizations, and code generation with advanced global
register allocation and software pipelining. It was open-sourced in
2000 after it was retargeted to the Itanium processor. There have been a number of subsequent branches and improvements to
the compiler since then. Intel adopted the Open64 compiler for
compiler-related research and subsequently released it as the Open
Research Compiler (ORC) starting Jan 2002. During this time, Intel
drove ORC to outstanding performance and functionality and released
ORC 2.1 in the summer of 2003. Later, PathScale (acquired by QLogic in
early 2006) released a branch of Open64 for the AMD Opteron processor
in 2004, bringing the high-performance open-source compiler to the
x86-64 developer community. HP has sponsored the Open64 compiler
project for Itanium since November 2005, following the path of ORC for
compiler research, with additional focus on quality and on upgrading
C++ language support to stay close to the evolution of the GCC front
end. Open64 has been ported to a number of different architectures,
and this tutorial will present the work and results on those ports.

Open64: the Open-Source High-Performance Compiler for Servers,
Embedded Systems and Compiler/Architecture Research
Organizers:
Shin-Ming Liu (HP), Pen-Chung Yew (University of Minnesota), Sun Chan
(Simplight Nanoelectronics), Shengyuan Wang (Tsinghua University),
Yuan Dong (Tsinghua University)

Proposed Tutorial Agenda