Analyzing and Optimizing Server Workload Latency

One Line Summary

A look at latency problems in server workloads and the solutions we are exploring to improve both average and tail latencies.

Abstract

Performance in server workloads is usually looked at from a throughput perspective. This proposal takes an alternate look at server workload performance from a latency perspective. It describes various CPU scheduling issues that we see affecting the workload's average latency and tail (70th percentile or higher) latency, and follows up with a few changes we are exploring to address these latency issues.

Specifically: today there is not much that an application thread can do to influence the CPU scheduler's decisions on when it gets scheduled in and out of a CPU. Preemption timing is based mostly on kernel timers and on other threads waking up on the CPU. There are 'nice' and 'yield' APIs, but they are either coarse grained or not well defined. For example:
- There is no way for an application thread to indicate that it is holding a critical resource and could use some extra runtime before being preempted.
- Background application threads can specify a high timer slack to tolerate late timer wakeups, but there is no enforcement preventing such high-slack threads from interfering with a latency-critical request-response thread.

The changes being explored here encompass userspace, the kernel, and the user/kernel API. They let the kernel know more about application thread state, letting the CPU scheduler make more informed decisions without adding significant overhead. We will provide data on the effectiveness of these changes and also look at possible future optimizations in this area.

The changes are expected to hit lkml as RFCs before this conference, and the conference will be a good forum to present them in detail and to take a closer look at any new user/kernel API being introduced. The issues we see with our typical request-response workload are also likely to resonate with latency-sensitive applications elsewhere, so it would be good to generate discussion and share experiences around the topic.

Tags

performance, server, latency, CPU scheduler

Speaker

  • Venki Pallipadi

    Google Inc.

    Biography

    Venki Pallipadi is a member of the Linux kernel team at Google Inc. His current focus is on CPU scheduling and performance optimizations for server workloads. His past work in the Linux kernel includes CPU/platform power management, cpuidle, ondemand/cpufreq, ACPI, and various processor/platform features such as Page Attribute Table, APIC, and the HPET timer.