Today, the LLVM compiler infrastructure is essentially inescapable in HPC. But back in the 2000 timeframe, LLVM (low level virtual machine) was just getting its start as a new way of thinking about how to overcome shortcomings in the Java Virtual Machine. At the time, Chris Lattner was a graduate student of Vikram Adve at the University of Illinois.
“Java was taking over the world. It was really exciting. Nobody knew the boundaries of Java. Some of us had some concerns about the sort of workloads that maybe wouldn’t fit well with it. But the compilation story was still quite early. Just-in-time compilers were just coming on,” recalled Lattner.
Participating in a Fireside Chat at SC21 last month, Lattner strolled down memory lane and talked about how LLVM grew from his master’s thesis project at the University of Illinois at Urbana-Champaign in 2000 into a broad community effort used by, and contributed to by, nearly every major company producing compilers and programming-language tools. He also discussed LLVM’s future, his work on Swift and MLIR, and the reward and challenge of working in open source communities. Hal Finkel of the DOE Office of Advanced Scientific Computing Research was the interviewer.
Chris Lattner, SiFive
“Vikram and I had this idea that if we took this just-in-time compiler technology, but did more ahead-of-time compilation, we could get better trade-offs in terms of whole program optimization analysis, [and] be able to build analysis tools, and get better performance. A lot of the name LLVM, low level virtual machine, comes from the idea of taking the Java Virtual Machine and building something that is underneath it, a platform that you could then do whole program optimization for,” said Lattner.
“After building a whole bunch of infrastructure and learning all this compiler stuff, which I was just eating up and loved learning by doing, we actually ended up saying, well, how about we build a code generator? And how about we go integrate with GCC (GNU Compiler Collection)? Very early on, it started as a Java thing that ended up with a C-oriented, statically compiled language and tooling as the initial focus. So, a lot of that early genesis kind of got derailed. But it became a very useful platform for research and for applications in a large number of different domains.”
Lattner is, of course, no stranger to the programming world. Much of his work on LLVM, Clang, and Swift took place while he was at Apple. Lattner also worked briefly at Tesla, leading its Autopilot team. He is currently senior vice president of platform engineering at SiFive, which develops RISC-V processors.
Presented here are a few of Lattner’s comments (lightly edited) on his work with the LLVM community and its future. The SC21 video of the session is available here (for registered attendees).
My Time at Apple – “You don’t understand…nothing’s ever going to replace GCC”
When Lattner graduated from the University of Illinois in 2005, LLVM was still an advanced research project. “The quality of the generated code wasn’t perfect but it was promising,” he recalled. An Apple engineer was working with LLVM and talked it up to an Apple VP. At the time, Lattner was collaborating with the engineer over mailing lists.
“The time was right. Apple had been investing a lot in GCC, and I don’t know if it was the GCC technology or the GCC team at Apple at the time, but management was very frustrated with the lack of progress. I got to talk with this VP who thought compilers were interesting, and he decided to give me a chance. He hired me and said, ‘Yeah, you can work on this LLVM thing. Show that it isn’t a bad idea.’ [Not long after] he motivated me by saying, ‘You can have a year or so to work on this. And worst case, you’re a smart guy, we can make you work on GCC.’”
A couple of weeks into the job, Lattner remembers being asked “why are you here” by an experienced Apple engineer. After explaining his LLVM project, the colleague said, “You don’t understand. GCC has been around for 20 years, it’s had hundreds of people working on it, nothing’s ever going to replace GCC, you’re wasting your time.” Lattner said, “Well, I don’t know, I’m having fun.”
It turned out there was a huge need for just-in-time compilers in the graphics space, and LLVM was a good solution.
Lattner said, “The OpenGL team was struggling because Apple [was] coming out with 64-bit Macs and moving from PowerPC to Intel, and a bunch of these things. They were using hand-rolled, just-in-time compilers, and we were able to use LLVM to solve a bunch of their problems, like enabling new hardware, [which was] not something that GCC was ever designed to do.”
“So [pieces of LLVM] shipped with the 10.4 Tiger release, improving graphics performance. That showed some value and justified a little bit of investment. I got another person to work with, and we went from that to another little thing and to another little thing, one little step at a time,” recounted Lattner. “It gained momentum and eventually started replacing parts of GCC. Another thing along the way was that the GPU team was trying to make a shading language for general-purpose GPU compute, [and that] turned into what we know now as OpenCL, which became the first user of Clang.”
The rest, of course, is a very rich LLVM history of community development and collaboration.
Collaboration’s Risk and Reward – “It’s time for you to go.”
Not surprisingly, it’s challenging to build an open source development community in which commercial competitors collaborate. This isn’t unique to LLVM, but given its endurance and growth, there may be lessons for others.
Lattner said, “Look at the LLVM community and you have Intel and AMD and Apple and Google and Sony and all these folks that are collaborating. One of the ways we made [it work] was by being very driven by technical excellence and by shared values and a shared understanding of what success looks like.”
“As a community, we always worked engineer-to-engineer to solve problems. For example, when I was at Apple, or whatever my affiliation, I’d have my LLVM hat on when working with the community, but I’d have my Apple hat on when solving internal problems for hardware that hadn’t shipped, right. We decided that the corporate hats that many of us wore would not be part of the LLVM community. It was not about bringing up a topic like, ‘I need to get this patch in now to hit a release,’” he said.
The shared understanding helped inform LLVM community growth by attracting similarly minded collaborators, said Lattner. “I’m proud of the fact that we have people who are harsh industrial enemies, fighting with each other on the business landscape, but can still work on and agree on the best way to model some kernel in a GPU or whatever it is,” he said.
Things don’t always work out.
“Over the years, not often, we have had to eject people out of the community. It’s when people have decided that they do not align with the value system [or] they’re not willing to collaborate with people or they’re not aligned with where the community is going. That is super difficult, because some of them are prolific contributors, and there’s real pain, but maintaining that community cohesion [and] value system is so important,” said Lattner.
LLVM Warts & Redo – Would Starting from Scratch Be a Good Idea?
“I am the biggest critic of LLVM because I know all the problems,” said Lattner, half in jest, noting that LLVM is over 20 years old now. “LLVM is definitely a good thing, but it is not a perfect thing by any stretch of the imagination. I’m really happy we’ve been able to continually upgrade and iterate and improve on LLVM over the years. But it’s to the point now where certain changes are architectural and very difficult to make.
“One example of this is that the LLVM compiler itself is not internally multithreaded. I don’t know about you, but I think that multicore is no longer the future. There are also certain design decisions, which I’m not going to go into in detail, that I regret. Many of those, only nerds like me care about, and they’re not the strategic kind of problem that faces the community, but others really are,” said Lattner.
“[Among the] things that LLVM has never been super great at are loop transformations, HPC-style transformations, auto-parallelization, and OpenMP support. LLVM works and it’s very useful, but it could be a lot better. Those [weaknesses] all go back to design decisions in LLVM, where the LLVM view of the world is really kind of a C-with-vectors view of the world. That original design premise is holding back certain kinds of evolution,” he said.
Today, noted Lattner, the LLVM project overall has many sub-projects, including MLIR and others that are breaking down these barriers and solving some of these problems. “But typically, when people ask about LLVM, they’re thinking about Clang and the standard C/C++ pipeline, and it hasn’t quite adopted all the new technology in the space,” said Lattner.
Finkel asked if Lattner would recommend starting over again.
“Yes, I did. This is what MLIR (multi-level intermediate representation) is, right? All kidding aside, LLVM is slow when you’re using it in ways it wasn’t really designed to be used. For example, the Rust community is well known for pushing on the boundaries of LLVM performance because their compilation model instantiates tons and tons and tons of stuff, and then specializes and specializes and specializes it all the way. This puts a huge amount of pressure and weight on the compiler that C, for example, or simpler, lower-level languages don’t have. It leads to amazing things in the Rust community, but it’s asking the compiler to do all this work that is implicit in the programming model,” he said.
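The instantiation pressure Lattner describes is not unique to Rust. As a minimal illustration (a hypothetical sketch, not tied to any particular codebase), C++ templates monomorphize the same way: each distinct type argument forces the compiler to generate, optimize, and code-generate a separate copy of the generic body.

```cpp
#include <string>
#include <vector>

// A generic function: the compiler emits one specialized copy per
// distinct type T it is instantiated with (monomorphization).
template <typename T>
T first_or(const std::vector<T> &v, T fallback) {
  return v.empty() ? fallback : v.front();
}

int main() {
  std::vector<int> a{1, 2, 3};
  std::vector<double> b{1.5};
  std::vector<std::string> c;

  // Three instantiations (first_or<int>, first_or<double>,
  // first_or<std::string>), each handed to the optimizer separately.
  first_or(a, 0);
  first_or(b, 0.0);
  first_or(c, std::string("none"));
  return 0;
}
```

Heavily generic code multiplies this effect across thousands of instantiations, which is exactly the load Lattner says Rust places on LLVM.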
“Starting all over from scratch, you have to decide what problems you want to fix. The problems that I’m interested in fixing with LLVM come down to the fact that it doesn’t model higher-level abstractions, like loops, very well. I think the constant-time performance of any individual pass is generally okay. The other challenge I see with LLVM is that it’s a complicated set of technologies and therefore a difficult tool to wield unless you know all the different pieces. Sometimes, people are writing lots of passes that shouldn’t be run. So, I’m not religiously attached to LLVM being the perfect answer.”
Making LLVM Better – While at Google, Lattner Tackled MLIR
MLIR is a sub-project within LLVM intended to help give it more modern capabilities. Lattner went from Apple to Google, where he worked on MLIR.
“I’ll start from the problem statement, [which] comes back to the earlier questions on what’s wrong with LLVM. So LLVM is interested in tackling the C-with-vectors part of the design space, but there are a lot of other interesting parts of the design space where LLVM may be helpful in small ways but doesn’t really help with the inherent problem. If you talk about distributing computation to a cluster, LLVM doesn’t do any of that. If you talk about machine learning, where you have parallel workloads represented as tensors, LLVM doesn’t help. If you look at other spaces, for example hardware design, LLVM has some features you can use, [but they] are really not great,” said Lattner.
“The other context was within Google and the TensorFlow team. [Although] TensorFlow itself is not widely seen as this, it’s really a set of compiler technologies. It has TensorFlow graphs. It has the XLA compiler framework with HLO graphs. It has code generation for CPUs and GPUs. It has many other technology components, like TensorFlow Lite, which is a completely separate machine learning framework with converters back and forth,” he said.
What had happened, said Lattner, is that TensorFlow had this massive amount of infrastructure, an ecosystem with “seven or eight different IRs” floating around. “Nobody had built them like a compiler IR. People think of TensorFlow graphs as a protocol buffer, not as an IR representation. As a consequence, the quality around that was not very great. Nothing was really integrated. There were all these different technology islands between the different systems. People weren’t able to talk with each other because they didn’t understand that they were all working on the same problems in different parts of the space,” recalled Lattner.
MLIR, said Lattner, arose from the idea of asking, “How do we integrate these completely different worlds, where you’re working on a massive multi-thousand-node machine learning accelerator, like GPUs, versus I’m working on an Arm TensorFlow Lite mobile deployment scenario? There’s no commonality between those.”
Lattner said, “There’s a hard part to building compilers which has nothing to do with the domain. If you look at a compiler like LLVM, a big part of LLVM is all this infrastructure for testing, for debug info, for walking the graph, for building a control flow graph, for defining call graphs, for doing analyses, for pass managers – all of this kind of stuff is common regardless of whether you’re building a CPU JIT compiler or a TensorFlow graph-style representation. That part of the compiler infrastructure is invariant to the domain you’re targeting.”
What MLIR evolved into “was taking the notion of a compiler infrastructure and taking the domain out of it. MLIR is a domain-independent compiler infrastructure that allows you to build domain-specific verticals on top. It provides the ability to define your IR, your representation: what are your adds, subtracts, multiplies, divides, stores? What are the core abstractions you have? For example, in software, you have functions. In hardware, you have Verilog modules. MLIR can do both of those,” he said.
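To make the “domain-independent infrastructure, domain-specific vocabulary” idea concrete, here is a deliberately simplified C++ sketch. It is not the actual MLIR API (whose real building blocks are dialects, operations, and regions); it only illustrates the separation Lattner describes: the infrastructure stores and walks generic operations, while the op vocabulary is plugged in on top.

```cpp
#include <functional>
#include <iostream>
#include <string>
#include <vector>

// A generic operation: the infrastructure knows only that ops have a
// dialect, a name, and operands; it attaches no domain meaning to them.
struct Operation {
  std::string dialect;               // e.g., "arith" (software) or "hw" (hardware)
  std::string name;                  // e.g., "add", "module"
  std::vector<std::string> operands; // symbolic operand names
};

// Domain-independent traversal: works for any dialect, the way MLIR's
// generic walkers and pass infrastructure do.
void walk(const std::vector<Operation> &ir,
          const std::function<void(const Operation &)> &visit) {
  for (const auto &op : ir)
    visit(op);
}

int main() {
  // Software-style and hardware-style ops coexist in one IR, much as
  // MLIR can host both functions and Verilog-like modules.
  std::vector<Operation> ir = {
      {"func", "func", {"@main"}},
      {"arith", "add", {"%x", "%y"}},
      {"hw", "module", {"@alu"}},
  };
  walk(ir, [](const Operation &op) {
    std::cout << op.dialect << "." << op.name << "\n";
  });
  return 0;
}
```

The real MLIR gets its leverage from exactly this separation: walkers, verifiers, and printers are written once against the generic operation structure, while each domain defines its own dialect on top.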
“Building all of this useful functionality out of the box allowed us in the Google lab to say, ‘We have seven different compilers; let’s start unifying them at the bottom and pull them onto the same technology stack. We can start sharing code and breaking down these barriers.’ Also, because you have one thing, and it’s being used by lots of people, you can invest in making it really, really good. Investing in infrastructure like that is something you often don’t get a chance to do.”
Lattner said he’s not only excited to see MLIR being adopted across the industry, notably for machine learning kinds of applications, but also in new arenas such as in quantum computing. “At SiFive, we use it for hardware design and chip design kinds of problems – any place you can benefit from having the compiler be able to represent a design,” he said.
(Presented below is an excerpt from LLVM.org that showcases the wide scope of the project.)
LLVM OVERVIEW EXCERPTED FROM LLVM.ORG
The LLVM Project is a collection of modular and reusable compiler and toolchain technologies. Despite its name, LLVM has little to do with traditional virtual machines. The name “LLVM” itself is not an acronym; it is the full name of the project.
LLVM began as a research project at the University of Illinois, with the goal of providing a modern, SSA-based compilation strategy capable of supporting both static and dynamic compilation of arbitrary programming languages. Since then, LLVM has grown to be an umbrella project consisting of a number of subprojects, many of which are being used in production by a wide variety of commercial and open source projects as well as being widely used in academic research. Code in the LLVM project is licensed under the “Apache 2.0 License with LLVM exceptions.”
The primary sub-projects of LLVM are:
- The LLVM Core libraries provide a modern source- and target-independent optimizer, along with code generation support for many popular CPUs (as well as some less common ones!). These libraries are built around a well-specified code representation known as the LLVM intermediate representation (“LLVM IR”). The LLVM Core libraries are well documented, and it is particularly easy to invent your own language (or port an existing compiler) to use LLVM as an optimizer and code generator; a small IR-building sketch appears after this list.
- Clang is an “LLVM native” C/C++/Objective-C compiler, which aims to deliver amazingly fast compiles, extremely useful error and warning messages and to provide a platform for building great source level tools. The Clang Static Analyzer and clang-tidy are tools that automatically find bugs in your code, and are great examples of the sort of tools that can be built using the Clang frontend as a library to parse C/C++ code.
- The LLDB project builds on libraries provided by LLVM and Clang to provide a great native debugger. It uses the Clang ASTs and expression parser, LLVM JIT, LLVM disassembler, etc., so that it provides an experience that “just works”. It is also blazing fast and much more memory-efficient than GDB at loading symbols.
- The libc++ and libc++ ABI projects provide a standard conformant and high-performance implementation of the C++ Standard Library, including full support for C++11 and C++14.
- The compiler-rt project provides highly tuned implementations of the low-level code generator support routines like “__fixunsdfdi” and other calls generated when a target doesn’t have a short sequence of native instructions to implement a core IR operation (a brief sketch of when such routines fire appears after this list). It also provides implementations of run-time libraries for dynamic testing tools such as AddressSanitizer, ThreadSanitizer, MemorySanitizer, and DataFlowSanitizer.
- The MLIR subproject is a novel approach to building reusable and extensible compiler infrastructure. MLIR aims to address software fragmentation, improve compilation for heterogeneous hardware, significantly reduce the cost of building domain specific compilers, and aid in connecting existing compilers together.
- The OpenMP subproject provides an OpenMP runtime for use with the OpenMP implementation in Clang.
- The polly project implements a suite of cache-locality optimizations as well as auto-parallelism and vectorization using a polyhedral model.
- The libclc project aims to implement the OpenCL standard library.
- The klee project implements a “symbolic virtual machine” which uses a theorem prover to try to evaluate all dynamic paths through a program in an effort to find bugs and to prove properties of functions. A major feature of klee is that it can produce a testcase in the event that it detects a bug.
- The LLD project is a new linker that is a drop-in replacement for system linkers and runs much faster.
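To give a flavor of the LLVM Core libraries described in the first bullet, here is a minimal sketch using LLVM’s C++ API; it builds a two-argument “add” function in LLVM IR and prints the textual IR. It assumes an installed LLVM development package (compiled and linked against flags from `llvm-config`) and is illustrative rather than production code.

```cpp
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/raw_ostream.h"

int main() {
  llvm::LLVMContext ctx;
  llvm::Module mod("demo", ctx);
  llvm::IRBuilder<> builder(ctx);

  // Declare: i32 @add(i32 %x, i32 %y)
  llvm::Type *i32 = builder.getInt32Ty();
  auto *fnType = llvm::FunctionType::get(i32, {i32, i32}, /*isVarArg=*/false);
  auto *fn = llvm::Function::Create(fnType, llvm::Function::ExternalLinkage,
                                    "add", mod);

  // Body: %sum = add i32 %x, %y ; ret i32 %sum
  auto *entry = llvm::BasicBlock::Create(ctx, "entry", fn);
  builder.SetInsertPoint(entry);
  llvm::Value *sum = builder.CreateAdd(fn->getArg(0), fn->getArg(1), "sum");
  builder.CreateRet(sum);

  mod.print(llvm::outs(), nullptr); // emit the module as textual LLVM IR
  return 0;
}
```

And to illustrate the compiler-rt bullet: on a target without a short native instruction sequence for converting a double to an unsigned 64-bit integer, a plain cast is lowered to a call into a support routine such as “__fixunsdfdi”. A one-function sketch:

```cpp
// On targets lacking a native double -> unsigned 64-bit conversion
// (e.g., some 32-bit CPUs), the compiler lowers this cast to a
// compiler-rt call such as __fixunsdfdi; on most 64-bit desktop
// targets it becomes a single instruction instead.
unsigned long long to_u64(double d) {
  return static_cast<unsigned long long>(d);
}
```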
In addition to official subprojects of LLVM, there are a broad variety of other projects that use components of LLVM for various tasks. Through these external projects you can use LLVM to compile Ruby, Python, Haskell, Rust, D, PHP, Pure, Lua, and a number of other languages. A major strength of LLVM is its versatility, flexibility, and reusability, which is why it is being used for such a wide variety of different tasks: everything from doing light-weight JIT compiles of embedded languages like Lua to compiling Fortran code for massive supercomputers.
As much as everything else, LLVM has a broad and friendly community of people who are interested in building great low-level tools. If you are interested in getting involved, a good first step is to skim the LLVM Blog and to sign up for the LLVM Developer mailing list. For information on how to send in a patch, get commit access, and copyright and license topics, please see the LLVM Developer Policy.