Cliff Click, Distinguished Engineer at Azul Systems

Biography: Cliff Click

With more than twenty-five years of experience developing compilers, Cliff serves as Azul Systems' Chief JVM Architect. He joined Azul in 2002 from Sun Microsystems, where he was the architect and lead developer of the HotSpot Server Compiler, a technology that has delivered dramatic improvements in Java performance since its inception.

Before that he was with Motorola, where he helped deliver industry-leading SPECint2000 scores on PowerPC chips, and earlier he researched compiler technology at HP Labs. Cliff has been writing optimizing compilers and JITs for over twenty years. He is regularly invited to speak at industry and academic conferences, including JavaOne, JVM, and VEE; serves on the program committees of many conferences (including PLDI and OOPSLA); and has published many papers on HotSpot technology. Cliff holds a PhD in Computer Science from Rice University.
 
Software passion: Building tools
 
Presentation: "The Top Ten Issues for Java in Production"

Track: ARCHITECTURE & DESIGN CASE STUDIES / Time: Monday 10:15 - 11:15 / Location: Store Sal, Musikhuset


Keywords: Java


Presentation: "Java on 1000 Cores - Tales of Hardware/Software Co-Design"

Track: ENGINE ROOM: THE SYSTEMS WE BUILD UPON / Time: Wednesday 13:30 - 14:30 / Location: Lille Sal, Musikhuset

Azul Systems designs and builds systems for running business logic applications written in Java.  Unlike scientific code, business logic code tends to be very large and complex (>1 MLOC is common), to display very irregular data access patterns, and to make heavy use of threads and locks.  The common unit of parallelism is the transaction or the thread-level task.  Business logic programs tend to have high allocation rates that scale with the amount of work accomplished, and they are sensitive to garbage collection max pause times.  Typical JVM implementations have unacceptable pause times on heaps larger than 4 Gigabytes, which forces many applications to run clustered.

Our systems support heaps up to 600 Gigabytes and allocation rates up to 35 Gigabytes/sec, with max pause times in the millisecond range.  We have large core counts (up to 864) for running parallel tasks; our memory is Uniform Memory Access (as opposed to the more common NUMA), cache-coherent, and has supercomputer-level bandwidth.  The cores are our own design: simple 3-address RISCs with read and write barriers to support GC, hardware transactional memory, zero-cost high-resolution profiling, and other more modest Java-specific tweaks.
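For readers unfamiliar with the barriers mentioned above: in a conventional software JVM, the JIT compiler emits a small "write barrier" after every reference store so the collector can track cross-region pointers; Azul's contribution is doing this (and the read barrier) in hardware. The sketch below simulates card marking, one common software barrier scheme. All names, card sizes, and the simulated addressing are illustrative assumptions, not Azul's actual design.

```java
// Hedged illustration of a card-marking GC write barrier (a common
// software scheme; Azul's hardware barriers serve a similar purpose).
public class CardMarkBarrier {
    static final int CARD_SIZE = 512;               // bytes per card (assumed)
    static final byte[] cardTable = new byte[1024]; // one dirty-byte per card

    // Simulated reference store at a given heap address: perform the store,
    // then dirty the card covering that address so the collector knows to
    // rescan it (e.g. for old-to-young pointers) at the next GC.
    static void storeRef(long heapAddress) {
        // ... the actual pointer store would happen here ...
        cardTable[(int) (heapAddress / CARD_SIZE)] = 1; // mark card dirty
    }

    public static void main(String[] args) {
        storeRef(1300);  // address 1300 falls in card 1300/512 = 2
        System.out.println("card 2 dirty: " + cardTable[2]);
        System.out.println("card 0 dirty: " + cardTable[0]);
    }
}
```

The point of doing this in hardware rather than in JIT-emitted instructions is that the barrier cost disappears from the hot path, which is part of what makes millisecond pauses possible at 35 GB/sec allocation rates.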

This talk is about the business environment that drove the design of the hardware (e.g., why put in HTM support? why our own CPU design rather than, say, MIPS or x86?), some early company history with designing our own chips (the first silicon back from the fab had problems such as bits in the odd-numbered registers bleeding into the even-numbered registers), and finally some wisdom and observations from a tightly integrated hardware/software co-design effort.

Keywords: System Design, Hardware, GC, multi-core, memory bandwidth, JVM, CPUs

Target Audience: Anybody who wants a deeper understanding of why we program the way we do, why your hardware does (or does not!) fit the problems you are trying to solve, and a glimpse of what you can do when you make a radical departure from the past.