linux and glibc: The 4.5TiB malloc API trace


Refereed Presentation
Scheduled: Thursday, November 3, 2016 from 2:45 – 3:30pm in Sweeney F

One Line Summary

Ever wanted to simulate a workload without needing the original application? Look no further.

Abstract

Whole-system Benchmarking
A glibc microbenchmark is a small program used to test the behaviour of a change in the library (run via "make bench"). Microbenchmarks are used routinely to discuss the technical merits of a performance-related patch.

A glibc whole-system benchmark is a dataset that characterizes a user workload and is used to test the behaviour of a change across a wider set of APIs, i.e. a whole system. At least that's what it's supposed to mean.

The vision behind glibc's whole-system benchmarking is to provide tooling to measure and characterize a user workload larger than a microbenchmark (glibc already has a microbenchmark framework). Once the workload is characterized, it should be possible to evaluate the merits of larger performance-related changes in the core library against a given set of workloads. The problem was that nobody in the community knew where to start: it was not clear what data needed to be gathered, nor how that data could be used to evaluate code changes.

A year later, the Red Hat glibc team has progress to share. We present a realization of the whole-system benchmarking idea, initially restricted to a single set of APIs in libc: the malloc family of functions.

Trace:
We present a low-level, built-in, lossless tracing framework that is thread-safe and uses shifting mapped windows to minimize the RSS impact on the application being traced. The binary trace logs are a high-fidelity representation of the malloc API calls made by a single process.

Convert:
We present the techniques used and the problems encountered in converting the trace data into a workload, including a discussion of inter-thread event ordering and of the problems caused by features like realloc's use of mremap.

Simulate:
Lastly, we discuss and present a workload simulator that can replay the application behaviour with respect to the traced API, and its uses in patch evaluation and application analysis.

Feedback:
The work is far from complete and kernel developer feedback is welcome, particularly on minimizing tracer RSS usage, on low-cost in-process RSS measurement (/proc or mincore), and on low-cost conditionally enabled trace features using uprobes. Feedback from kernel tracing experts is especially welcome, since glibc must sketch out how to proceed with tracing the rest of the API: built-in tracing or lttng-ust? What trace format, e.g. CTF? What workload format, e.g. extended CTF?

The work was carried out by the Red Hat glibc team including Florian Weimer, DJ Delorie, and Carlos O’Donell.

Tags

userspace, trace, benchmarking, API

Presentation Materials

slides

Speaker

  • Carlos O'Donell

    Red Hat

    Biography

    Carlos O'Donell is a Principal Software Engineer on the platform tools team at Red Hat, where he leads a crack team of developers in advancing the state of the art for low-level runtimes (glibc). Carlos is an FSF steward and core developer for the GNU C Library project (glibc). He has been working on GNU tools and Linux for almost 15 years and has spoken at universities and various toolchain-related conferences, including the GNU Tools Cauldron and the Linux Foundation Collaboration Summit.
