This tutorial will present the DynamoRIO tool platform and describe how to use its API to build custom tools that utilize dynamic code manipulation for instrumentation, profiling, analysis, optimization, introspection, security, and more. The DynamoRIO tool platform was first released to the public in June 2002 and has since been used by many researchers to develop systems ranging from taint tracking to prefetch optimization. DynamoRIO is publicly available in open source form and operates on Linux and Windows on IA-32, AMD64, and ARM platforms.
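To give a feel for the API, here is a minimal sketch of a client that counts the basic blocks DynamoRIO builds. It follows the public dr_api.h interface, but treat it as an illustration rather than the tutorial's own material:

    #include "dr_api.h"

    static uint64 bb_count; /* basic blocks built so far */

    /* Called each time DynamoRIO builds a new basic block. */
    static dr_emit_flags_t
    event_bb(void *drcontext, void *tag, instrlist_t *bb,
             bool for_trace, bool translating)
    {
        if (!translating)
            bb_count++;
        return DR_EMIT_DEFAULT;
    }

    static void
    event_exit(void)
    {
        dr_fprintf(STDERR, "built %llu basic blocks\n",
                   (unsigned long long)bb_count);
    }

    DR_EXPORT void
    dr_client_main(client_id_t id, int argc, const char *argv[])
    {
        dr_register_bb_event(event_bb);
        dr_register_exit_event(event_exit);
    }

A client like this is typically built as a shared library and loaded with the drrun launcher (e.g. drrun -c libbbcount.so -- myapp); the exact build and launch steps are part of the tutorial.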
This tutorial will present gpucc, an open-source compiler built by Google targeting CUDA and NVIDIA GPUs. gpucc performs various general and CUDA-specific optimizations to generate high-performance code. It outperforms NVIDIA's toolchain (nvcc) on internal large-scale end-to-end benchmarks by up to 51%, and is on par with it on several open-source benchmarks (Rodinia, SHOC, and Tensor). It supports modern language features such as those in C++11 and C++14, and compiles code 8% faster than nvcc on average and up to 2.4x faster for pathological compilations.
This tutorial will cover the following topics:
- Using gpucc
- gpucc system overview: a brief description of how gpucc works under the hood
- Detailed performance results of gpucc vs nvcc
- Compiling CUDA programs with gpucc: a demo of how to install gpucc and compile some sample CUDA programs (see the sketch after this list)
- Contributing to gpucc
- Performance debugging: how to debug the performance of the generated binary using nvprof and by inspecting device code
- Writing new optimizations for gpucc
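As a taste of the compilation demo, the sketch below shows a toy CUDA program and a plausible gpucc (clang-based) invocation. The GPU architecture, CUDA install path, and linker flags are assumptions that vary by machine:

    // axpy.cu: y = a*x + y, a toy kernel to exercise the compiler.
    //
    // A plausible invocation (paths and flags vary by installation):
    //   clang++ axpy.cu -o axpy --cuda-gpu-arch=sm_35 \
    //       -L/usr/local/cuda/lib64 -lcudart -ldl -lrt -pthread
    #include <cstdio>
    #include <vector>
    #include <cuda_runtime.h>

    __global__ void axpy(float a, const float *x, float *y, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;
        std::vector<float> hx(n, 1.0f), hy(n, 2.0f);
        float *dx, *dy;
        cudaMalloc(&dx, n * sizeof(float));
        cudaMalloc(&dy, n * sizeof(float));
        cudaMemcpy(dx, hx.data(), n * sizeof(float), cudaMemcpyHostToDevice);
        cudaMemcpy(dy, hy.data(), n * sizeof(float), cudaMemcpyHostToDevice);
        axpy<<<(n + 255) / 256, 256>>>(3.0f, dx, dy, n);
        cudaMemcpy(hy.data(), dy, n * sizeof(float), cudaMemcpyDeviceToHost);
        std::printf("y[0] = %f\n", hy[0]); /* expect 5.0 */
        cudaFree(dx);
        cudaFree(dy);
        return 0;
    }

The same binary can then be profiled with nvprof for the performance-debugging portion of the tutorial.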
Beyond the embarrassingly parallel – New languages, compilers, and runtimes for big-data processing
Large-scale data processing requires large-scale parallelism. Data-processing systems from traditional databases to Hadoop and Spark rely on embarrassingly-parallel relational primitives (e.g. map, reduce, filter, and join) to extract parallelism from input programs. But many important applications, such as machine learning and log processing, iterate over large data sets with true loop-carried dependences across iterations. As such, these applications are not readily parallelizable in current data-processing systems.
In this talk, I will challenge the premise that parallelism requires independent computations. In particular, I will describe a general methodology for extracting parallelism from dependent computations. The basic idea is to replace dependences with symbolic unknowns and to execute the dependent computations symbolically in parallel. The challenge of parallelization then becomes the, hopefully mechanizable, task of performing the resulting symbolic execution efficiently. This methodology opens up the possibility of designing new languages for data-processing computations, compilers that automatically parallelize such computations, and runtimes that exploit the additional parallelism. I will describe our initial successes with this approach and the research challenges that lie ahead.
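To make the idea concrete, here is a toy C++ sketch of the simplest possible case, a running sum whose loop-carried state enters each chunk as an unknown s. Each chunk's effect is the linear function s -> s + delta, so the chunks can run symbolically in parallel and the unknowns are resolved in a cheap sequential pass. This shows only the flavor of the approach; the methodology in the talk targets far richer dependences:

    #include <cstdio>
    #include <future>
    #include <numeric>
    #include <vector>

    int main() {
        std::vector<long> data(1000000, 1);
        const int chunks = 4;
        const size_t step = data.size() / chunks;

        // Phase 1 (parallel): each chunk summarizes its effect on the
        // unknown incoming sum s as the function s -> s + delta.
        std::vector<std::future<long>> deltas;
        for (int c = 0; c < chunks; ++c) {
            auto begin = data.begin() + c * step;
            auto end = (c == chunks - 1) ? data.end() : begin + step;
            deltas.push_back(std::async(std::launch::async, [begin, end] {
                return std::accumulate(begin, end, 0L);
            }));
        }

        // Phase 2 (sequential, cheap): resolve the unknowns by composing
        // the per-chunk functions in order.
        long s = 0;
        for (auto &d : deltas)
            s += d.get();
        std::printf("total = %ld\n", s); // 1000000
        return 0;
    }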
Biography
Madan Musuvathi is a Principal Researcher at Microsoft Research working in the intersection of programming languages and systems, with specific focus on concurrency and parallelism. His interests span program analysis, systems, model checking, verification, and theorem proving. His research has led to several tools that improve the lives of software developers both at Microsoft and at other companies. He received his Ph.D. from Stanford University in 2004.
Chair: Mary Lou Soffa (University of Virginia)
#4: Tongping Liu and Xu Liu. Cheetah: Detecting False Sharing Efficiently and Effectively
#27: Dehao Chen, Xinliang David Li and Tipp Moseley. AutoFDO: Automatic Feedback-directed Optimization for Warehouse-scale Applications
#32: Ivan Jibaja, Ting Cao, Steve Blackburn and Kathryn McKinley. Portable Performance on Asymmetric Multicore Processors
Chair: Dorit Nuzman (Intel)
#53: Probir Roy and Xu Liu. MemTool: A Lightweight Profiler to Guide Structure Splitting
#29: Linchuan Chen, Peng Jiang and Gagan Agrawal. Exploiting Recent SIMD Architectural Advances for Irregular Applications
#59: Hao Zhou and Jingling Xue. Exploiting Mixed SIMD Parallelism by Reducing Data Reorganization Overhead
Chair: Vijay Janapa Reddi (University of Texas)
#52: Raj Barik, Naila Farooqui, Brian Lewis, Chunling Hu and Tatiana Shpeisman. A Black-box Approach to Energy-Aware Scheduling on Integrated CPU-GPU Systems
#5: Christos Margiolas and Michael F.P. O’Boyle. Portable and Transparent Software Managed Scheduling on Accelerators for Fair Resource Sharing
#62: Dong Nguyen and Jongeun Lee. Communication-Aware Mapping of Stream Graphs for Multi-GPU Platforms
#8: Jingyue Wu, Eli Bendersky, Mark Heffernan, Chris Leary, Jacques Pienaar, Bjarke Roune, Rob Springer, Xuetian Weng and Robert Hundt. gpucc: An Open-Source GPGPU Compiler
Knights Landing Intel Xeon Phi CPU: Path to Parallelism with General Purpose Programming
The demand for high performance will continue to skyrocket, fueled by the drive to solve challenging problems in the scientific world and to provide the horsepower needed by the compute-hungry use cases that continue to emerge in the commercial and consumer space, such as machine learning and deep data analytics. Exploiting parallelism will be crucial to achieving the huge performance gains these problems require. This talk will present the new Xeon Phi processor, called Knights Landing, which is architected to provide massive amounts of parallelism in a manner that is accessible with general purpose programming. The talk will provide insights into 1) the important architecture features of the processor and 2) the software technologies to exploit them. It will give the inside story on the various architecture decisions made on Knights Landing, explaining why we architected the processor the way we did, and will recount programming experiences showing how the general purpose programming model makes it easy to exploit parallelism on Xeon Phi. It will show measured performance numbers from the Knights Landing silicon on a range of workloads, and will conclude with the historical trends in architecture and what they mean for software as we extend those trends into the future.
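As a hypothetical illustration of the general purpose programming point, the sketch below is ordinary OpenMP C++ with no offload model, since Knights Landing is a self-booting CPU; the compiler and flags (e.g. -fopenmp) are assumptions:

    #include <cstdio>
    #include <vector>

    int main() {
        const int n = 1 << 24;
        std::vector<float> a(n, 1.0f), b(n, 2.0f);
        float dot = 0.0f;
        // Thread-level parallelism across the many cores, with the
        // compiler free to vectorize the loop body for the wide SIMD units.
        #pragma omp parallel for reduction(+ : dot)
        for (int i = 0; i < n; ++i)
            dot += a[i] * b[i];
        std::printf("dot = %f\n", dot); // 2.0f * (1 << 24)
        return 0;
    }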
Biography
Avinash Sodani is a Senior Principal Engineer at Intel Corporation and the chief architect of the Xeon Phi processor called Knights Landing. He specializes in the field of High Performance Computing (HPC). Previously, he was one of the architects of the first-generation Core processor, called Nehalem, which has served as a foundation for today's line of Intel Core processors. Avinash is a recognized expert in computer architecture and has been invited to deliver several keynotes and public talks on topics related to HPC and the future of computing. He holds over 20 US patents and is known for his seminal work on the concept of "Dynamic Instruction Reuse". He has a PhD and an MS in Computer Science from the University of Wisconsin-Madison and a B.Tech (Hons) in Computer Science from the Indian Institute of Technology, Kharagpur, India.
Chair: Aaron Smith (Microsoft)
#45: Soham Chakraborty and Viktor Vafeiadis. Validating Optimizations of Concurrent C/C++ Programs
#85: Ignacio Laguna, Martin Schulz, David F. Richards, Jon Calhoun and Luke Olson. IPAS: Intelligent Protection Against Silent Output Corruption in Scientific Applications
#99: Adarsh Yoga and Santosh Nagarakatte. Atomicity Violation Checker for Task Parallel Programs
Chair: Soo-mook Moon (Seoul National University)
#95: Daniele Cono D’Elia and Camil Demetrescu. Flexible On-Stack Replacement in LLVM
#96: Byron Hawkins, Brian Demsky and Michael Taylor. BlackBox: Lightweight Security Monitoring for COTS Binaries
#69: Toshihiko Koju, Reid Copeland, Motohiro Kawahito and Moriyoshi Ohara. Re-constructing High-Level Information for Language-Specific Binary Re-optimization