Calendar

Feb 7 (Sat)
HPDSLs: Scala, LMS and Delite for High-Performance DSLs and Program Generators @ D
Feb 7 @ 8:30 am – 12:00 pm

This tutorial is targeted at researchers and practitioners interested in building efficient domain-specific languages (DSLs) and program generators. Lightweight Modular Staging (LMS) is a pragmatic approach to runtime code generation in Scala, and Delite is a compiler framework for embedded DSLs that simplifies the process of implementing DSLs for parallel computation and heterogeneous targets. This tutorial provides an overview of the technology stack, demonstrates use cases where it has been successfully applied, and guides attendees step by step through the creation of simple generators and DSLs.

LLVM: An Intro to LLVM: IR, optimizations, backends and more @ San Ramon
Feb 7 @ 8:30 am – 5:30 pm

Topic Overview

  • High-level overview of LLVM & Clang
    • Will include how to get started coding on LLVM & Clang
    • Overview of core design elements, data structures, APIs, and patterns used in the codebase
    • High-level testing strategy for LLVM & Clang using tools like Clang’s ‘-verify’, opt, llc, FileCheck, and GoogleTest
    • Process of submitting a patch, code review, and community interactions
  • How to add an optimization pass to LLVM
    • Tutorial on the LLVM IR both in the abstract and at the level of internal APIs
    • Basic APIs and data structures needed to implement, test, and wire a new pass into the compiler
    • Overview of the relationship between transform and analysis passes
    • Overview of the different kinds of transformation passes, how they interact, and what they can and can’t do
    • Add a transformation pass and an analysis pass to the compiler that depend on each other and exercise this machinery (a minimal pass skeleton follows this list)
      • Includes authoring relevant tests for each component
  • High-level overview of the architecture of an LLVM backend, with an emphasis on modifying or enhancing existing backends rather than adding a new one
    • Detailed review of where things are: from SelectionDAG to FastISel to the register allocator
    • Detailed review of exactly how a backend’s TableGen works, and how to make changes there and debug things
  • Add a target-independent SelectionDAG combine to the code generator
    • Includes a detailed walk-through of the relevant DAG combine interfaces
  • Add a target-specific DAG combine with special consideration of legalization
  • Add support for a new instruction pattern to a backend
  • Why every bit of performance matters, and how the LLVM coding standards help here
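
As a taste of the pass-authoring portion, here is a minimal sketch of a function pass written against the legacy pass manager, roughly as it looked in the LLVM 3.x era; the pass name CountStores and its behavior are illustrative examples, not part of the tutorial materials:

    #include "llvm/Pass.h"
    #include "llvm/IR/Function.h"
    #include "llvm/IR/Instructions.h"
    #include "llvm/Support/raw_ostream.h"

    using namespace llvm;

    namespace {
    // Example analysis-style pass: counts store instructions per function.
    struct CountStores : public FunctionPass {
      static char ID;  // Used by LLVM's pass identification machinery.
      CountStores() : FunctionPass(ID) {}

      bool runOnFunction(Function &F) override {
        unsigned NumStores = 0;
        for (BasicBlock &BB : F)
          for (Instruction &I : BB)
            if (isa<StoreInst>(&I))
              ++NumStores;
        errs() << F.getName() << ": " << NumStores << " stores\n";
        return false;  // We did not modify the IR.
      }
    };
    }

    char CountStores::ID = 0;
    static RegisterPass<CountStores> X("count-stores",
                                       "Count stores in each function (example)");

A pass like this would be built as a shared library and exercised with opt -load ./CountStores.so -count-stores test.ll, with FileCheck verifying the expected output, mirroring the testing strategy covered in the first part of the tutorial.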
Lunch @ New World Cafe
Feb 7 @ 12:00 pm – 2:00 pm

Salad

Assorted Mixed Greens with Poached Pear, Sweet Onion Mustard Dressing on the side.

Entrées

Chicken Breast with Mushrooms topped with Cream Sauce.

Salmon with Capers topped with Lemon-Butter Sauce.

Quinoa Confit

Veggie Moussaka

Dessert

Strawberry or Chocolate Mousse

Halide: Code generation for image processing and stencil computation in Halide @ A
Feb 7 @ 2:00 pm – 5:30 pm

This workshop will cover design and implementation of Halide, a domain-specific language and compiler for image processing and stencil computation, for people interested in using and building on it as a highly configurable code generator. As a language now in widespread production use, Halide is an interesting and high-impact platform for research on program transformation and code generation; as a language with explicit algebraic control over a wide range of loop synthesis and code generation strategies, it is a powerful backend for other languages and systems, especially those including stencil computation.

Topics:

  • The Halide programming model
  • Halide’s model of scheduling for loop synthesis
  • Examples of program transformation and synthesis via Halide schedules
  • Code generation in Halide
  • Mapping to the GPU and heterogeneous parallel execution via Halide schedules
  • Hands-on session with Halide, focused on scheduling and code generation (a small worked example follows this list)
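
To make the separation between algorithm and schedule concrete, here is the classic two-stage blur in Halide's C++ embedding; it is a minimal sketch in which the two schedule lines choose tiling, vectorization, and parallelism without changing what is computed (names and tile sizes are illustrative):

    #include "Halide.h"
    using namespace Halide;

    int main() {
      ImageParam input(UInt(16), 2);
      Func blur_x("blur_x"), blur_y("blur_y");
      Var x("x"), y("y"), xi("xi"), yi("yi");

      // The algorithm: what is computed.
      blur_x(x, y) = (input(x - 1, y) + input(x, y) + input(x + 1, y)) / 3;
      blur_y(x, y) = (blur_x(x, y - 1) + blur_x(x, y) + blur_x(x, y + 1)) / 3;

      // The schedule: how the loop nest is synthesized.
      blur_y.tile(x, y, xi, yi, 256, 32).vectorize(xi, 8).parallel(y);
      blur_x.compute_at(blur_y, x).vectorize(x, 8);

      blur_y.compile_to_file("blur", {input});  // emit object file + header
      return 0;
    }

Rewriting only the two schedule lines, while leaving the algorithm untouched, is exactly the kind of program transformation the hands-on session explores.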
Periscope: Code Auto-Tuning with the Periscope Tuning Framework @ B
Feb 7 @ 2:00 pm – 5:30 pm

In this tutorial, attendees will have the opportunity to delve into the topic of application auto-tuning, presented by developers and performance engineers from the Auto-Tune project. The tutorial provides a practical perspective on auto-tuning, using case studies to show how best to harness and tailor performance analysers to tune real applications.

Feb 8 (Sun)
Altera: Compiling OpenCL to a streaming dataflow architecture on FPGAs @ Irvine
Feb 8 @ 8:30 am – 12:00 pm

In recent years, Field-Programmable Gate Arrays (FPGAs) have become extremely powerful computational platforms that can efficiently solve many complex problems. Modern FPGAs comprise effectively millions of programmable elements, signal-processing blocks and high-speed interfaces, all of which are necessary to deliver a complete solution. The power of FPGAs is unlocked via low-level programming languages such as VHDL and Verilog, which allow designers to explicitly specify the behavior of each programmable element. While these languages provide a means to create highly efficient logic circuits, they are akin to “assembly language” programming for modern processors. This is a serious limiting factor for both productivity and the adoption of FPGAs on a wider scale.

In this tutorial, we use the OpenCL language to explore techniques that allow us to program FPGAs at a level of abstraction closer to traditional software-centric approaches. OpenCL is an industry standard parallel language based on ‘C’ that offers numerous advantages that enable designers to take full advantage of the capabilities offered by FPGAs, while providing a high-level design entry language that is familiar to a wide range of programmers.

The challenge of mapping a ‘C’-based language to FPGAs is that such languages implicitly assume that the underlying architecture executing the program is processor-based: a sequence of instructions controls a datapath that manipulates data values stored in memory. Conversely, FPGA architectures are better suited to spatial computing circuits, where data flows in a pipelined fashion from one functional unit to the next until computations are complete. Data can be transferred efficiently by wires, registers or FIFOs without always resorting to external storage. This tutorial will explore compiler optimizations and code generation techniques that can transform sequential programs into efficient streaming dataflow circuits for FPGAs. We will examine specific case studies of DSP filters, image processing and mathematical computations to demonstrate how these techniques can be applied to real-world examples.
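As a flavor of the input to such a flow, here is a minimal single-work-item kernel sketch in OpenCL C, written in the shift-register style that FPGA OpenCL compilers pipeline well; the 4-tap FIR filter and all names are illustrative, not taken from the tutorial:

    __kernel void fir4(__global const float * restrict in,
                       __global float * restrict out,
                       __constant float * restrict coeff,
                       int n)
    {
        float taps[4] = {0.0f, 0.0f, 0.0f, 0.0f};
        for (int i = 0; i < n; i++) {
            // Shift register: maps onto a chain of FPGA registers, so the
            // outer loop can launch a new iteration every clock cycle.
            #pragma unroll
            for (int t = 3; t > 0; t--)
                taps[t] = taps[t - 1];
            taps[0] = in[i];

            float acc = 0.0f;
            #pragma unroll
            for (int t = 0; t < 4; t++)
                acc += taps[t] * coeff[t];
            out[i] = acc;
        }
    }

On a processor this is just a sequential loop; a dataflow compiler instead unrolls the inner loops into spatial multiply-add hardware and pipelines the outer loop, the kind of transformation the tutorial examines in its DSP case studies.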

OpenTuner: Autotuning programs with OpenTuner @ G
Feb 8 @ 8:30 am – 12:00 pm

This tutorial will cover the usage of OpenTuner, an open-source framework for building domain-specific multi-objective program autotuners. OpenTuner supports fully customizable configuration representations, an extensible technique representation to allow for domain-specific techniques, and an easy-to-use interface for communicating with the tuned program. A key capability inside OpenTuner is the use of ensembles of disparate search techniques simultaneously: techniques that perform well receive larger testing budgets, while techniques that perform poorly are disabled. OpenTuner has been used by a number of different projects to build domain-specific autotuners.

The topics covered in the workshop will be:

  • Overview of autotuning: including a history of past autotuning projects and how autotuning is used today
  • Machine learning primer: empirical search, model based techniques, and which technique is right for you
  • OpenTuner framework: how it is designed and how you should use it
  • Examples of using OpenTuner: presentations by current users of OpenTuner
  • What makes a good search space representation: the secret sauce of autotuning
  • How to go about autotuning your system with OpenTuner
  • Hands-on session with OpenTuner
Using Pin++ To Author Highly Configurable Pintools for the Pin @ A
Feb 8 @ 8:30 am – 12:00 pm

This tutorial will discuss Pin++, an open-source framework for creating Pintools, the analysis tools for the dynamic binary instrumentation tool Pin. Pin++ is an object-oriented framework that uses template metaprogramming to implement Pintools. The goal of Pin++ is to simplify programming a Pintool and promote reuse of its components across different Pintools. Our results show that Pintools implemented using Pin++ can achieve a 54% reduction in complexity, an increase in modularity, and up to a 60% reduction in instrumentation overhead.

This tutorial will focus on the following key concepts in Pin++:

  • It will discuss the challenges of implementing a Pintool using the traditional approach (a sketch of that traditional style follows this list).
  • It will discuss how Pin++ addresses existing challenges when authoring Pintools.
  • Using hands-on examples, it will discuss how to implement basic Pintools using Pin++ so the audience can begin exploring how to apply Pin++ to their existing problems.
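
For reference, this is the canonical instruction-counting Pintool in the traditional Pin C++ API, the style whose boilerplate Pin++ encapsulates into reusable components. It is a minimal sketch; the authoritative version ships in the Pin kit's ManualExamples:

    #include <iostream>
    #include "pin.H"

    static UINT64 icount = 0;

    // Analysis routine: runs before every executed instruction.
    VOID docount() { icount++; }

    // Instrumentation routine: called when Pin first sees an instruction.
    VOID Instruction(INS ins, VOID *v)
    {
        INS_InsertCall(ins, IPOINT_BEFORE, (AFUNPTR)docount, IARG_END);
    }

    VOID Fini(INT32 code, VOID *v)
    {
        std::cerr << "Instructions executed: " << icount << std::endl;
    }

    int main(int argc, char *argv[])
    {
        if (PIN_Init(argc, argv)) return 1;  // parse Pin's command line
        INS_AddInstrumentFunction(Instruction, 0);
        PIN_AddFiniFunction(Fini, 0);
        PIN_StartProgram();                  // never returns
        return 0;
    }

Even this small tool mixes registration boilerplate, callbacks, and global state; Pin++ restructures such tools into reusable callback and instrument classes, aiming at the complexity and overhead reductions quoted above.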
Lunch @ New World Cafe
Feb 8 @ 12:00 pm – 2:00 pm

Salad

Classic Caesar Salad with Dressing on the side.

Entrées

Chicken Piccata

Seafood Kebab with Salsa Fresca

Tomatoes alla Parmigiana

Veggie Lasagna

Steamed Rice

Dessert

Tiramisu

DynamoRIO: Building Dynamic Tools with DynamoRIO on x86 and ARM @ A
Feb 8 @ 2:00 pm – 5:30 pm

This tutorial will present the DynamoRIO tool platform and describe how to use its API to build custom tools that utilize dynamic code manipulation for instrumentation, profiling, analysis, optimization, introspection, security, and more. The DynamoRIO tool platform was first released to the public in June 2002 and has since been used by many researchers to develop systems ranging from taint tracking to prefetch optimization. DynamoRIO is publicly available in open source form and targets Windows, Linux, and Mac on x86 and Linux on ARM.

The tutorial will cover the following topics:

  • DynamoRIO API: an overview of the full range of DynamoRIO’s powerful API, which abstracts away the details of the underlying infrastructure and allows the tool builder to concentrate on analyzing or modifying the application’s runtime code stream. It includes both high-level features for quick prototyping and low-level features for full control over instrumentation.
  • DynamoRIO system overview: a brief description of how DynamoRIO works under the covers.
  • Description of tools provided with the DynamoRIO package, including the Dr. Memory memory debugging tool, the DrCov code coverage tool, and the DrStrace Windows system call tracing tool.
  • Sample tool starting points for building new tools (a minimal client sketch follows this list)
  • Advanced topics when building sophisticated tools
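
As a starting point of the kind the tutorial provides, here is a minimal DynamoRIO client that counts basic blocks, usable from C or C++; the client name and counter are illustrative, and the counter is left unsynchronized for brevity:

    #include "dr_api.h"

    static int bb_count = 0;

    /* Called each time DynamoRIO translates a new basic block. */
    static dr_emit_flags_t
    event_basic_block(void *drcontext, void *tag, instrlist_t *bb,
                      bool for_trace, bool translating)
    {
        bb_count++;  /* counts translations, not executions */
        return DR_EMIT_DEFAULT;
    }

    static void
    event_exit(void)
    {
        dr_printf("translated %d basic blocks\n", bb_count);
    }

    DR_EXPORT void
    dr_client_main(client_id_t id, int argc, const char *argv[])
    {
        dr_register_bb_event(event_basic_block);
        dr_register_exit_event(event_exit);
    }

Built as a shared library, the client runs under drrun -c libbbcount.so -- <app>. Note that dr_client_main is the entry-point name in recent releases; older clients use dr_init.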
Graal: A research platform for dynamic compilation and managed languages @ B
Feb 8 @ 2:00 pm – 5:30 pm

The tutorial will cover the following topics:

  • Graal: a new high-performance dynamic compiler for Java written in Java
  • Introduction to the Graal intermediate representation, and how it simplifies speculative optimizations
  • Graal API: Separation of the compiler from the VM
  • Snippets: expressing high-level semantics in low-level Java code
  • Integration of the compiler with an application/library – and how that can help your research project.
  • Using Graal for static analysis
  • Graal as a compiler for dynamic programming languages
  • Project Sumatra: Compiling for the GPU
Feb 9 (Mon)
Keynote: Paolo Faraboschi, HP Labs, The Machine
Feb 9 @ 8:50 am – 10:00 am

Abstract: By the end of the decade we expect over 30 billion intelligent devices connected to the internet, resulting in unprecedented amounts of data. At the same time, scaling of the memory technologies that are at the foundation of computing today will significantly slow down. We will need transformational changes to the way in which we collect, process, store, and analyze that data. Not everyone realizes that these changes will revolutionize the way in which we architect and program computing systems. This talk will discuss the technology trends, the implications for software and programming, and what we are doing at HP to address some of the challenges. Starting from the emerging non-volatile memory devices, it will cover how they will enable flattening and re-architecting the memory hierarchy. Then, it will dive into the implications for software, discussing how file systems, databases and applications can explicitly take advantage of large, flat and persistent memory spaces.

Biography: Paolo Faraboschi is an HP Fellow at HP Labs. His interests are at the intersection of system architecture and software. He is currently working on The Machine project, researching how we can build better systems around non-volatile memory. In the last five years, he worked on low-energy servers and HP project Moonshot. From 2004 to 2009, at HPL in Barcelona, he led a research activity on scalable system-level simulation and modeling. From 1995 to 2003, at HPL Cambridge, he was the principal architect of the Lx/ST200 family of VLIW cores, widely used in video SoCs and HP’s printers. Paolo is an IEEE Fellow and an active member of the computer architecture community: guest co-editor of IEEE Micro Top Picks 2012, Program co-Chair for HiPEAC10 (2010), MICRO-41 (2008) and MICRO-34 (2001). He holds 25 patents and co-authored the book “Embedded Computing: A VLIW Approach to Architecture, Compilers and Tools”. Before joining HP in 1994, he received a Ph.D. in EECS from the University of Genoa, Italy.

Feb 10 (Tue)
Keynote: Dharmendra S Modha, IBM, Brain-Inspired Computing
Feb 10 @ 1:15 pm – 2:25 pm

Abstract: I will describe a decade-long, multi-disciplinary, multi-institutional effort spanning neuroscience, supercomputing, and nanotechnology to build and demonstrate a brain-inspired computer, and describe its architecture, programming model, and applications. For more information, see: modha.org.

Biography: Dr. Dharmendra S. Modha is an IBM Fellow and IBM Chief Scientist for Brain-inspired Computing. He is a cognitive computing pioneer who envisioned and now leads a highly successful effort to develop brain-inspired computers. The groundbreaking project, SyNAPSE, funded by DARPA to the tune of $53.5M, is multi-disciplinary, multi-national, multi-institutional and has had worldwide scientific impact. Its resulting revolutionary computing architecture and ecosystem break from the prevailing von Neumann paradigm and constitute a foundation for new classes of ultra-low-power, compact, real-time, multi-modal sensorimotor information technology systems. Dr. Modha has also made significant contributions to IBM businesses via innovations in caching mechanisms for storage controllers, clustering algorithms for services, and coding theory for disk drives. His work has been featured in The Economist, Science, The New York Times, BBC, Discover, MIT Technology Review, Associated Press, Popular Mechanics, Communications of the ACM, Forbes, Fortune, and IEEE Spectrum, amongst thousands of media mentions. Author of over 60 papers and inventor of over 100 patents, he has won ACM’s Gordon Bell Prize, the USENIX/FAST Test of Time Award, Best Paper Awards at ASYNC and IDEMI, and First Place in the Science/NSF International Science & Engineering Visualization Challenge, and is a Fellow of the IEEE and the World Technology Network. In 2013 and 2014, he was named Best of IBM. On their 40th anniversary, EE Times named Dr. Modha among 10 Electronics Visionaries to Watch. Dr. Modha received a BTech from IIT Bombay in 1990 and a PhD from UCSD in 1995.

Feb 11 (Wed)
Keynote: David Wecker, Microsoft Research, Simulation and Compilation of Quantum Algorithms
Feb 11 @ 8:15 am – 9:25 am

Abstract: Languages, compilers, and computer-aided design tools will be essential for scalable quantum computing, which promises an exponential leap in our ability to execute complex tasks. LIQUi|> is a modular software architecture designed to simulate and control quantum hardware. It enables easy programming, compilation, and simulation of quantum algorithms and circuits, and is independent of a specific quantum architecture. This talk will focus on simulation of quantum algorithms in Quantum Chemistry and Materials, as well as Factoring, Quantum Error Correction, and compilation for hardware implementations (http://arxiv.org/abs/1402.4467).

Biography: Dave came to Microsoft in 1995 and helped create the “Blender” (digital video post-production facility). He designed and worked on a broadband MSN offering, then became architect for the Handheld PC v1 & v2 as well as AutoPC v1 and Pocket PC v1. He moved to Intelligent Interface Technology and resurrected SHRDLU for natural language research, as well as building a state-of-the-art neural-network-based speech recognition system. For the Mobile Devices Division he implemented secure DRM on e-books and Pocket PCs. He created and was director of ePeriodicals before taking on the role of Architect for Emerging Technologies. This led to starting the Machine Learning Incubation Team, and then to becoming architect for Parallel Computing Technology Strategy, working on Big Data and now Quantum Computing. He has over 20 patents for Microsoft and 9 Ship-It awards. He started coding professionally in 1973, worked in the AI labs at CMU while obtaining a BSEE and MSIA, and was at DEC for 13 years (ask him about DIDDLY sometime ;).