General-purpose as well as integrated processors nowadays have to run programs written in a wide variety of languages, often with isolation concerns. Dynamic compilation, i.e., generating binary code at run time, is becoming a viable solution for many usage scenarios, and the goal of this workshop is to present current research and to look ahead at what the coming years hold for this field of growing interest.
The scientific challenges are numerous and closely interrelated: program representation (source code, intermediate representation, data sets), fast binary code generation, patching, hardware abstraction, garbage collection, performance observation, performance trade-offs, polymorphism, and operating systems.
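As a concrete illustration of what "generating binary code at run time" means at its simplest, here is a minimal sketch in Python (not drawn from any system presented at the workshop): it maps a page of executable memory, writes a few hand-assembled x86-64 instruction bytes into it, and calls the result as a native function. It assumes Linux on x86-64 and an operating system that still permits writable-and-executable mappings; production dynamic compilers add code caches, patching, and much more on top of this mechanism.

import ctypes
import mmap

# Hand-assembled x86-64 machine code for "mov eax, 42; ret":
# a function taking no arguments and returning 42.
CODE = b"\xb8\x2a\x00\x00\x00\xc3"

# Map one page that is readable, writable, and executable
# (some hardened systems forbid this combination).
buf = mmap.mmap(-1, mmap.PAGESIZE,
                prot=mmap.PROT_READ | mmap.PROT_WRITE | mmap.PROT_EXEC)
buf.write(CODE)

# Reinterpret the page as a C function pointer of type int (*)(void).
fn_type = ctypes.CFUNCTYPE(ctypes.c_int)
fn = fn_type(ctypes.addressof(ctypes.c_char.from_buffer(buf)))

print(fn())  # prints 42

Everything a just-in-time compiler does beyond this point, such as selecting instructions from an intermediate representation or managing a code cache, is a refinement of this basic mechanism.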
Beyond the embarrassingly parallel – New languages, compilers, and runtimes for big-data processing
Large-scale data processing requires large-scale parallelism. Data-processing systems from traditional databases to Hadoop and Spark rely on embarrassingly parallel relational primitives (e.g., map, reduce, filter, and join) to extract parallelism from input programs. But many important applications, such as machine learning and log processing, iterate over large data sets with true loop-carried dependences, so successive iterations cannot simply run independently. As such, these applications are not readily parallelizable in current data-processing systems.
In this talk, I will challenge the premise that parallelism requires independent computations. In particular, I will describe a general methodology for extracting parallelism from dependent computations. The basic idea is to replace dependences with symbolic unknowns and to execute the dependent computations symbolically, in parallel. The challenge of parallelization then becomes the (hopefully mechanizable) task of performing the resulting symbolic execution efficiently. This methodology opens up the possibility of designing new languages for data-processing computations, compilers that automatically parallelize such computations, and runtimes that exploit the additional parallelism. I will describe our initial successes with this approach and the research challenges that lie ahead.
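To make the idea concrete, here is a minimal sketch (our illustration, not material from the talk) for the linear recurrence acc = a*acc + b, a loop whose carried dependence normally forces sequential execution. Running a chunk of the loop on a symbolic initial value s keeps the state in the form A*s + B, so each chunk's symbolic execution collapses to a pair (A, B); the chunks run in parallel, and their summaries compose cheaply in order. All names below are illustrative.

from concurrent.futures import ProcessPoolExecutor

def summarize_chunk(chunk):
    # Execute the loop body acc = a*acc + b on a symbolic initial value s.
    # The state always has the shape A*s + B, so the entire symbolic
    # state is captured by the pair (A, B).
    A, B = 1, 0
    for a, b in chunk:
        A, B = a * A, a * B + b
    return A, B

def parallel_recurrence(pairs, init=0, nchunks=4):
    size = max(1, (len(pairs) + nchunks - 1) // nchunks)
    chunks = [pairs[i:i + size] for i in range(0, len(pairs), size)]
    with ProcessPoolExecutor() as pool:
        summaries = list(pool.map(summarize_chunk, chunks))  # parallel phase
    acc = init
    for A, B in summaries:  # cheap sequential composition of summaries
        acc = A * acc + B
    return acc

if __name__ == "__main__":
    import random
    pairs = [(random.randint(1, 3), random.randint(0, 9)) for _ in range(10000)]
    expected = 0
    for a, b in pairs:  # the original sequential loop, for reference
        expected = a * expected + b
    assert parallel_recurrence(pairs) == expected

The recipe generalizes whenever the loop body's effect on the unknown stays within a small, composable family of functions; discovering and exploiting such families automatically is the mechanization challenge the abstract alludes to.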
Biography
Madan Musuvathi is a Principal Researcher at Microsoft Research working at the intersection of programming languages and systems, with a specific focus on concurrency and parallelism. His interests span program analysis, systems, model checking, verification, and theorem proving. His research has led to several tools that improve the lives of software developers both at Microsoft and at other companies. He received his Ph.D. from Stanford University in 2004.
Chair: Mary Lou Soffa (University of Virginia)
#4: Tongping Liu and Xu Liu. Cheetah: Detecting False Sharing Efficiently and Effectively
#27: Dehao Chen, Xinliang David Li and Tipp Moseley. AutoFDO: Automatic Feedback-directed Optimization for Warehouse-scale Applications
#32: Ivan Jibaja, Ting Cao, Steve Blackburn and Kathryn McKinley. Portable Performance on Asymmetric Multicore Processors
Chair: Dorit Nuzman (Intel)
#53: Probir Roy and Xu Liu. MemTool: A Lightweight Profiler to Guide Structure Splitting
#29: Linchuan Chen, Peng Jiang and Gagan Agrawal. Exploiting Recent SIMD Architectural Advances for Irregular Applications
#59: Hao Zhou and Jingling Xue. Exploiting Mixed SIMD Parallelism by Reducing Data Reorganization Overhead
Chair: Vijay Janapa Reddi (University of Texas)
#52: Raj Barik, Naila Farooqui, Brian Lewis, Chunling Hu and Tatiana Shpeisman. A Black-box Approach to Energy-Aware Scheduling on Integrated CPU-GPU Systems
#5: Christos Margiolas and Michael F.P. O’Boyle. Portable and Transparent Software Managed Scheduling on Accelerators for Fair Resource Sharing
#62: Dong Nguyen and Jongeun Lee. Communication-Aware Mapping of Stream Graphs for Multi-GPU Platforms
#8: Jingyue Wu, Eli Bendersky, Mark Heffernan, Chris Leary, Jacques Pienaar, Bjarke Roune, Rob Springer, Xuetian Weng and Robert Hundt. gpucc: An Open-Source GPGPU Compiler
50 Years of Parallel Programming: Ieri, Oggi, Domani*
Parallel programming started in the mid-1960s with the pioneering work of Karp and Miller, David Kuck, Jack Dennis, and others; as a discipline, it is now 50 years old. What have we learned in the past 50 years about parallel programming? What problems have we solved, and what problems remain to be solved? What can young researchers learn from the successes and failures of our discipline? This talk is a personal point of view on these and other questions regarding the state of parallel programming.
* The subtitle of the talk is borrowed from the title of a screenplay by Alberto Moravia, and it is Italian for “Yesterday, Today, Tomorrow.”
Biography
Keshav Pingali is a Professor in the Department of Computer Science at the University of Texas at Austin, where he holds the W.A. "Tex" Moncrief Chair of Computing in the Institute for Computational Engineering and Sciences (ICES). Pingali is a Fellow of the IEEE, ACM, and AAAS. He was co-Editor-in-Chief of the ACM Transactions on Programming Languages and Systems, and he currently serves on the editorial boards of the ACM Transactions on Parallel Computing, the International Journal of Parallel Programming, and Distributed Computing. He has also served on the NSF CISE Advisory Committee (2009-2012).
Chair: Louis-Noël Pouchet (Ohio State University)
#91: Daniele G. Spampinato and Markus Püschel. A Basic Linear Algebra Compiler for Structured Matrices
#38: Lénaïc Bagnères, Oleksandr Zinenko, Stéphane Huot and Cédric Bastoul. Opening Polyhedral Compiler’s Black Box
#64: Gabriel Rodríguez, José M. Andión, Mahmut Kandemir and Juan Touriño. Trace-based Affine Reconstruction of Codes
Chair: Michael O’Boyle (University of Edinburgh)
#42: Mateus Tymburiba, Rubens Emílio and Fernando Pereira. Inference of Peak Density of Indirect Branches to Detect ROP Attacks
#25: Yulei Sui, Peng Di and Jingling Xue. Sparse Flow-Sensitive Pointer Analysis for Multithreaded C Programs
#43: Vitor Paisante, Maroua Maalej, Leonardo Barbosa, Laure Gonnord and Fernando Pereira. Symbolic Range Analysis of Pointers
Chair: Mauricio Breternitz (AMD)
#74: Vassilis Vassiliadis, Jan Riehme, Jens Deussen, Konstantinos Parasyris, Christos D. Antonopoulos, Nikolaos Bellas, Spyros Lalis and Uwe Naumann. Towards Automatic Significance Analysis for Approximate Computing
#17: Kevin Brown, Hyoukjoong Lee, Tiark Rompf, Arvind Sujeeth, Christopher De Sa, Christopher Aberger and Kunle Olukotun. Have Abstraction and Eat Performance Too: Optimized Heterogeneous Computing with Parallel Patterns
#28: Melanie Kambadur and Martha Kim. NRG-Loops: Adjusting Power from Within Applications