Programme

Sun, Mar 13, 2016
Building Dynamic Tools with DynamoRIO on x86 and ARM (DynamoRIO) @ Tibidabo
Mar 13 @ 9:00 am – 12:30 pm

This tutorial will present the DynamoRIO tool platform and describe how to use its API to build custom tools that utilize dynamic code manipulation for instrumentation, profiling, analysis, optimization, introspection, security, and more. The DynamoRIO tool platform was first released to the public in June 2002 and has since been used by many researchers to develop systems ranging from taint tracking to prefetch optimization.  DynamoRIO is publicly available in open source form and operates on Linux and Windows on IA-32, AMD64, and ARM platforms.
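
For readers unfamiliar with the client API the tutorial covers, the following is a minimal sketch of a DynamoRIO client written against the public dr_api.h interface; it simply counts the basic blocks the system builds. It is an illustrative assumption of the kind of tool attendees would write, not material from the tutorial itself, and entry-point details (e.g. dr_client_main versus the older dr_init) vary slightly across releases.

    /*
     * Illustrative sketch (not from the tutorial): a DynamoRIO client that
     * counts the basic blocks built for the code cache and prints the total
     * at process exit, using the public dr_api.h interface.
     */
    #include "dr_api.h"

    static uint64 bb_count;   /* unsynchronized; a real tool would use atomics */

    /* Invoked whenever DynamoRIO builds (or re-translates) a basic block. */
    static dr_emit_flags_t
    event_basic_block(void *drcontext, void *tag, instrlist_t *bb,
                      bool for_trace, bool translating)
    {
        if (!translating)
            bb_count++;
        return DR_EMIT_DEFAULT;   /* leave the block unmodified */
    }

    static void
    event_exit(void)
    {
        dr_fprintf(STDERR, "basic blocks built: %llu\n",
                   (unsigned long long)bb_count);
    }

    DR_EXPORT void
    dr_client_main(client_id_t id, int argc, const char *argv[])
    {
        dr_register_bb_event(event_basic_block);
        dr_register_exit_event(event_exit);
    }

A client like this is typically built as a shared library and launched under DynamoRIO's drrun front end, e.g. drrun -c libbbcount.so -- <application> (the library name here is just an example).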

International Workshop on Dynamic Compilation Everywhere (DCE) @ BNC B
Mar 13 @ 9:00 am – 12:30 pm

General-purpose as well as integrated processors nowadays have to run programs written in a wide variety of languages, often with isolation concerns. Dynamic compilation, i.e. generating binary code at run time, is becoming a viable solution for many usage scenarios. The goal of this workshop is to present current research and to look ahead at where this field of growing interest is headed in the coming years.

The scientific challenges are numerous and closely inter-related: program representation (source code, intermediate representation, data sets), fast binary code generation, code patching, hardware abstraction, garbage collection, performance observation, performance trade-offs, polymorphism, and operating systems.

An Open-Source GPGPU Compiler (GPUCC) @ BNC A
Mar 13 @ 2:00 pm – 5:30 pm

This tutorial will present gpucc, an open-source compiler built by Google that targets CUDA on NVIDIA GPUs. gpucc performs various general and CUDA-specific optimizations to generate high-performance code. It outperforms NVIDIA’s toolchain (nvcc) on internal large-scale end-to-end benchmarks by up to 51%, and is on par with it on several open-source benchmarks (Rodinia, SHOC, and Tensor). It supports modern language features such as those of C++11 and C++14, and it compiles code 8% faster than nvcc, rising to 2.4x faster for pathological compilations.

This tutorial will cover the following topics:

  • Using gpucc
    • gpucc system overview: a brief description of how gpucc works under the hood
    • Detailed performance results of gpucc vs nvcc
    • Compiling CUDA programs with gpucc: a demo of how to install gpucc and compile some sample CUDA programs (see the sketch after this list)
  • Contributing to gpucc
    • Performance debugging: how to debug the performance of the generated binary using nvprof and by inspecting the generated device code
    • Writing new optimizations for gpucc
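
As a concrete companion to the demo item above, the sketch below shows the kind of small CUDA C program such a session might compile: a single axpy kernel plus host code. Both the program and the build line in the comment are illustrative assumptions (the flags follow the open-source clang-based CUDA front end that gpucc builds on), not the tutorial's own materials.

    /*
     * Hypothetical sample: a CUDA C axpy program of the kind the demo compiles.
     * Illustrative build line, modeled on the clang-based front end behind gpucc
     * (the exact driver name and flags may differ in the tutorial's setup):
     *   clang++ axpy.cu --cuda-gpu-arch=sm_35 -L/usr/local/cuda/lib64 -lcudart -o axpy
     */
    #include <stdio.h>
    #include <cuda_runtime.h>

    __global__ void axpy(float a, const float *x, float *y, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;   /* one element per thread */
        if (i < n)
            y[i] = a * x[i] + y[i];
    }

    int main(void) {
        const int n = 1 << 20;
        size_t bytes = n * sizeof(float);
        float *x, *y;
        cudaMallocManaged(&x, bytes);   /* unified memory keeps the sample short */
        cudaMallocManaged(&y, bytes);
        for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

        axpy<<<(n + 255) / 256, 256>>>(3.0f, x, y, n);   /* y = 3*x + y */
        cudaDeviceSynchronize();

        printf("y[0] = %f (expected 5.0)\n", y[0]);
        cudaFree(x);
        cudaFree(y);
        return 0;
    }
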
The International Workshop on Architectural and Micro-Architectural Support for Dynamic Optimization (AMAS-DO) @ BNC B
Mar 13 @ 2:00 pm – 5:30 pm

Long employed by industry, binary translation and on-the-fly code generation and optimization are becoming pervasive at large scale, both as enablers for virtualization and processor migration and as a processor implementation technology. The emergence and expected growth of just-in-time compilation, virtualization, and Web 2.0 scripting languages bring to the forefront the need for efficient execution of this class of applications. The availability of multiple execution threads brings new challenges and opportunities, as existing binaries need to be transformed to benefit from multiple processors, and extra processing resources enable continuous optimization and translation.
The main goal of this half-day workshop is to bring together researchers and practitioners to stimulate the exchange of ideas and experiences on the potential and limits of Architectural and Micro-Architectural Support for Binary Translation and Dynamic Optimization (hence the acronym AMAS-DO, reflecting a change from previous editions). The key focus is on the challenges and opportunities of such support and on opening new avenues of research. A secondary goal is to enable the dissemination of hitherto unpublished techniques from commercial projects.


Tue, Mar 15, 2016
Keynote – Keshav Pingali
Mar 15 @ 8:30 am – 9:30 am

50 Years of Parallel Programming: Ieri, Oggi, Domani*

Parallel programming started in the mid-1960s with the pioneering work of Karp and Miller, David Kuck, Jack Dennis, and others; as a discipline, it is now 50 years old. What have we learned in the past 50 years about parallel programming? What problems have we solved, and what problems remain to be solved? What can young researchers learn from the successes and failures of our discipline? This talk is a personal point of view about these and other questions regarding the state of parallel programming.

* The subtitle of the talk is borrowed from the title of a screenplay by Alberto Moravia, and it is Italian for “Yesterday, Today, Tomorrow.”

Biography

Keshav Pingali is a Professor in the Department of Computer Science at the University of Texas at Austin, where he holds the W. A. "Tex" Moncrief Chair of Computing in the Institute for Computational Engineering and Sciences (ICES). Pingali is a Fellow of the IEEE, ACM, and AAAS. He was co-Editor-in-Chief of the ACM Transactions on Programming Languages and Systems, and he currently serves on the editorial boards of the ACM Transactions on Parallel Computing, the International Journal of Parallel Programming, and Distributed Computing. He has also served on the NSF CISE Advisory Committee (2009-2012).

Break
Mar 15 @ 9:30 am – 10:00 am
Session 5: Affine Programs (Louis-Noël Pouchet)
Mar 15 @ 10:00 am – 11:15 am

Chair: Louis-Noël Pouchet (Ohio State University)

#91: Daniele G. Spampinato and Markus Püschel. A Basic Linear Algebra Compiler for Structured Matrices

#38: Lénaïc Bagnères, Oleksandr Zinenko, Stéphane Huot and Cédric Bastoul. Opening Polyhedral Compiler’s Black Box

#64: Gabriel Rodríguez, José M. Andión, Mahmut Kandemir and Juan Touriño. Trace-based Affine Reconstruction of Codes

Break
Mar 15 @ 11:15 am – 11:35 am
Session 6: Static Analysis (Michael O’Boyle)
Mar 15 @ 11:35 am – 12:50 pm

Chair: Michael O’Boyle (University of Edinburgh)

#42: Mateus Tymburiba, Rubens Emílio and Fernando Pereira. Inference of Peak Density of Indirect Branches to Detect ROP Attacks

#25: Yulei Sui, Peng Di and Jingling Xue. Sparse Flow-Sensitive Pointer Analysis for Multithreaded C Programs

#43: Vitor Paisante, Maroua Maalej, Leonardo Barbosa, Laure Gonnord and Fernando Pereira. Symbolic Range Analysis of Pointers

Lunch
Mar 15 @ 12:50 pm – 2:20 pm
Session 7: Programming Models (Mauricio Breternitz)
Mar 15 @ 2:20 pm – 3:35 pm

Chair: Mauricio Breternitz (AMD)

#74: Vassilis Vassiliadis, Jan Riehme, Jens Deussen, Konstantinos Parasyris, Christos D. Antonopoulos, Nikolaos Bellas, Spyros Lalis and Uwe Naumann. Towards Automatic Significance Analysis for Approximate Computing

#17: Kevin Brown, Hyoukjoong Lee, Tiark Rompf, Arvind Sujeeth, Christopher De Sa, Christopher Aberger and Kunle Olukotun. Have Abstraction and Eat Performance Too: Optimized Heterogeneous Computing with Parallel Patterns

#28: Melanie Kambadur and Martha Kim. NRG-Loops: Adjusting Power from Within Applications

Excursion, followed by Banquet
Mar 15 @ 4:15 pm – 10:00 pm