Keynotes

It's Time for a New Old Language

Guy L. Steele Jr., Software Architect (Oracle Labs)

The most popular programming language in computer science has no compiler or interpreter. Its definition is not written down in any one place. It has changed a lot over the decades, and those changes have introduced ambiguities and inconsistencies. Today, dozens of variations are in use, and its complexity has reached the point where it needs to be re-explained, at least in part, every time it is used. Much effort has been spent in hand-translating between this language and other languages that do have compilers. The language is quite amenable to parallel computation, but this fact has gone unexploited.

In this talk we will summarize the history of the language, highlight the variations and some of the problems that have arisen, and propose specific solutions. We suggest that it is high time that this language be given a complete formal specification, and that compilers, IDEs, and proof-checkers be created to support it, so that all the best tools and techniques of our trade may be applied to it also.

Bio:

Guy L. Steele Jr. is a Software Architect at Oracle. He received his A.B. in applied mathematics from Harvard College (1975), and his S.M. and Ph.D. in computer science and artificial intelligence from M.I.T. (1977 and 1980). He has also been an assistant professor of computer science at Carnegie Mellon University; a member of technical staff at Tartan Laboratories in Pittsburgh, Pennsylvania; and a senior scientist at Thinking Machines Corporation in Cambridge, Massachusetts. He joined Sun Microsystems Laboratories in 1994 as a Distinguished Engineer and was named a Sun Fellow in 2003. Sun Microsystems was acquired by Oracle in 2010, and he is now a member of Oracle Labs.

The Association for Computing Machinery awarded him the 1988 Grace Murray Hopper Award and named him an ACM Fellow in 1994. He was elected a Fellow of the American Association for Artificial Intelligence in 1990. He led the team that received a 1990 Gordon Bell Prize honorable mention for achieving the fastest speed to that date for a production application: 14.182 gigaflops. He was also awarded the 1996 ACM SIGPLAN Programming Languages Achievement Award. In 2001 he was elected to the National Academy of Engineering of the United States of America. In 2002 he was elected to the American Academy of Arts and Sciences. In 2007 he received the IEEE Computer Society Harry H. Goode Memorial Award, and in 2011 he was named an IEEE Fellow.

He has served on accredited standards committees X3J11 (C language) and X3J3 (Fortran), and served as chairman of X3J13 (Common Lisp). He was also a member of the IEEE committee that produced the IEEE Standard for the Scheme Programming Language, IEEE Std 1178-1990. He was a representative to the High Performance Fortran Forum, which produced the High Performance Fortran specification in May 1993. He is a co-author of The Java Language Specification.

At Oracle Labs, he is responsible for research in language design and implementation strategies, and architectural and software support for programming languages.


Everyone Needs High Performance Computing

Steve Keckler, Vice President of Architecture Research (NVIDIA)

The technology landscape is incredibly exciting today, with high-performance computation transforming many aspects of society and daily life. Innovations appear almost daily in entertainment, transportation, communication, and health care, to name just a few areas. Emerging practical applications of virtual and augmented reality, autonomous vehicles, and automated reasoning will place new demands on our computing architectures. While the computational appetite of emerging applications in these spaces appears to be growing without bound, the historical technology-scaling trends that have provided the fundamental horsepower for computing over the last 50 years are slowing substantially. This talk will discuss some of the cataclysmic trends in applications of high-performance computing at multiple scales, and focus on opportunities and challenges for computer designers. I will also describe some personal experience with technology transfer from research to product and provide a perspective on technology transfer for academic researchers.

Bio:

Dr. Stephen W. Keckler is the Vice President of Architecture Research at NVIDIA and an Adjunct Professor of Computer Science at the University of Texas at Austin, where he served on the faculty from 1998 to 2012. His research interests include parallel computer architectures, high-performance computing, energy-efficient architectures, and embedded computing. Dr. Keckler is a Fellow of the ACM, a Fellow of the IEEE, an Alfred P. Sloan Research Fellow, and a recipient of the NSF CAREER Award, the ACM Grace Murray Hopper Award, the President's Associates Teaching Excellence Award at UT-Austin, and the Edith and Peter O’Donnell Award for Engineering. He earned a B.S. in Electrical Engineering from Stanford University and M.S. and Ph.D. degrees in Computer Science from the Massachusetts Institute of Technology.


The Computer Science Behind the Microsoft Cognitive Toolkit -- an Open Source Large-Scale Deep Learning Toolkit for Windows and Linux

Frank Seide, Principal Researcher & CNTK Architect (Microsoft)

Deep Learning is redefining computing. Deep Neural Networks, or DNNs, have led to breakthrough accuracy improvements on tasks formerly considered the province of AI, such as speech recognition, image classification, and translation. Recurrent DNNs are differentiable universal computers. DNNs are layered structures of relatively simple functions with millions to billions of learnable model parameters. The challenge is that these parameters must be learned from sometimes billions of data samples, which often requires harnessing a farm of powerful multi-GPU servers.
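
To make "layered structures of relatively simple functions" concrete, here is a toy sketch of a two-layer network in plain NumPy; the layer sizes, the ReLU activation, and the two-layer depth are illustrative assumptions, not details from the talk:

    import numpy as np

    def dense(x, W, b):
        # One "relatively simple function": an affine map followed by a
        # ReLU nonlinearity.
        return np.maximum(0.0, x @ W + b)

    rng = np.random.default_rng(0)

    # The learnable model parameters; production DNNs have millions to
    # billions of these.
    W1, b1 = 0.01 * rng.standard_normal((784, 400)), np.zeros(400)
    W2, b2 = 0.01 * rng.standard_normal((400, 10)), np.zeros(10)

    x = rng.standard_normal((1, 784))  # one input sample
    h = dense(x, W1, b1)               # hidden layer
    scores = h @ W2 + b2               # output layer (class scores)

Training would adjust W1, b1, W2, and b2 by gradient descent over the data samples, which is exactly the step that demands the multi-GPU server farms mentioned above.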

Microsoft’s open-source Cognitive Toolkit (CNTK) is used to create the DNNs that power many Microsoft services and products. It enables researchers and data scientists to code such neural networks easily, at the right level of abstraction, and to train and test them efficiently on production-scale data. This talk will discuss the Cognitive Toolkit and how it takes a functional-style, differentiable user program, compiles it into a computation graph for GPU execution, and distributes the training of that graph across a farm of GPU servers. The talk will also explain how the toolkit's design intersects with several topics of interest to the three co-located conferences.
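
As a concrete illustration of that functional style, here is a minimal sketch using CNTK's publicly documented Python API; the model shape, learning rate, and random minibatch are illustrative assumptions, not details from the talk:

    import cntk as C
    import numpy as np

    # Graph inputs: a 784-dimensional feature vector and a 10-class label.
    x = C.input_variable(784)
    y = C.input_variable(10)

    # A functional-style model description; CNTK compiles this into a
    # computation graph that it can differentiate and execute on GPUs.
    model = C.layers.Sequential([
        C.layers.Dense(400, activation=C.relu),
        C.layers.Dense(10)
    ])
    z = model(x)

    loss = C.cross_entropy_with_softmax(z, y)
    error = C.classification_error(z, y)

    # For a GPU-server farm, this learner would be wrapped in a distributed
    # learner (e.g. C.train.distributed.data_parallel_distributed_learner).
    lr = C.learning_rate_schedule(0.1, C.UnitType.minibatch)
    learner = C.sgd(z.parameters, lr)
    trainer = C.Trainer(z, (loss, error), [learner])

    # One training step on a random minibatch (a stand-in for
    # production-scale data).
    features = np.random.rand(64, 784).astype(np.float32)
    labels = np.eye(10, dtype=np.float32)[np.random.randint(0, 10, 64)]
    trainer.train_minibatch({x: features, y: labels})

Note how the user never constructs the graph explicitly: evaluating the functional model description is what builds it, and the Trainer then drives differentiation and parameter updates over that graph.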

Bio:

Frank Seide, a native of Hamburg, Germany, is a Principal Researcher at Microsoft Research and an architect of Microsoft's Cognitive Toolkit for deep learning. His current research focus is deep neural networks for conversational speech recognition. Together with co-author Dong Yu, he was the first to show the effectiveness of deep neural networks for recognition of free conversations, and he was part of the effort to reach human parity on this task. Since graduating with his Master's degree in electrical engineering in 1993, Frank has worked on a range of speech topics, including spoken-dialogue systems, Mandarin speech recognition, audio search, speech-to-speech translation, and distributed neural-network model training, first at Philips Research in Aachen and Taipei, then at Microsoft Research Asia (Beijing), and now at Microsoft Research (Redmond).