Real-time Linux (Xenomai)

 Radboud University Nijmegen

Exercise #10: Measuring Jitter and Latency

Note: This exercise is intended for a real hardware platform, although the program for the scheduling measurements can be tried first in VMware.


In real-time programming one usually has to guarantee that certain deadlines are always met. Hence, predictability of timing is crucial. In this exercise we investigate a few aspects of predictability by measuring scheduling jitter and interrupt latency.


The primary objectives of this exercise are:



We start with a definition of a number of terms, using a figure from the presentation "Real Time in Embedded Linux Systems" by Petazzoni & Opdenacker:

[Figure: task latency]

Interrupt latency:
the time between the occurrence of the interrupt and the start of the interrupt handler.

Scheduler latency: the time elapsed between completion of the interrupt handler and the start of the scheduler's execution.

Scheduling latency, also called task latency: the time between the occurrence of the interrupt that makes a sleeping task runnable again and the moment the task is resumed.

Scheduling jitter: the unwanted variation of the release times of a periodic task. It can be characterized in various ways such as an interval around the desired release time, a maximal deviation from the desired time point, or a standard deviation from the mean value.

Pentium hardware and performance

In the lab we use PCs with standard Pentium hardware, which is optimized for throughput, i.e., the number of instructions per time unit, at the cost of predictable latency. This means we get a high average instruction speed rather than a guaranteed low execution time for any specific instruction. The optimizations used by Pentium hardware make most instructions execute very fast. Occasionally, however, a single instruction might take much longer to execute than it would without optimization. This is problematic for real-time systems that need to guarantee hard real-time deadlines.


Execution time

Several factors contribute to the unpredictability of the execution time of instructions: caches, address-translation caches (ATC), paging, interrupts, and DMA.

An example of a worst-case execution scenario of an instruction:

        what happens                                  time cost in nanoseconds
        - execution time of the instruction                             1 ns
        - instruction cache miss                                       50 ns
        - instruction ATC miss                                        500 ns
        - data cache miss                                           1 000 ns
        - paging needed for instruction and data               90 000 000 ns
        - many interrupts during execution                        100 000 ns
        - one big DMA                                          10 000 000 ns
                                                      total:  100 101 551 ns

Hence, the execution of an instruction which basically costs a few nanoseconds might take more than 100 milliseconds.

Interrupt latency

Besides the causes mentioned above, there are a few additional factors that contribute to the unpredictability of interrupt latency:

Scheduling latency

For scheduling latency the same factors apply as for interrupt latency. In this case, however, extra latency can be caused by the scheduler having to wait for the Linux kernel to complete some other task before it can run.

When we schedule a task periodically, each cycle of the task starts late because of the scheduling latency. Some part of the scheduling latency is sporadic, but other parts, e.g. the "context saving" time, recur in every cycle. Thus there is a fixed part of the scheduling latency that recurs each cycle. This means that when we take the difference between two adjacent release times, this fixed part cancels out. Hence the variation in the differences between successive release times of the periodic task is smaller than the variation in the scheduling latency itself.

Load on a Linux system

To investigate how the load of a system affects jitter and latency, we will put some load on the system when performing measurements. We discuss a number of ways to monitor a Linux system and to put various types of load on it.

I/O network load 

I/O disk load 

CPU load 

Memory load - swapping

#include <stdlib.h>
#include <string.h>
int main(void) {
    while (1) {                       /* endlessly allocate memory */
        char *p = malloc(1 << 20);    /* 1 MB per iteration */
        if (p) memset(p, 1, 1 << 20); /* touch the pages to force swapping */
    }
}

Exercise 10a.

Write a program to collect data about the real periodic scheduling of a task and plot this data.


/* requires <stdio.h>; the RTIME type comes from the Xenomai headers */
void write_RTIMES(char *filename, unsigned int number_of_values,
                  RTIME *time_values) {
    unsigned int n = 0;
    FILE *file;
    file = fopen(filename, "w");
    while (n < number_of_values) {
        fprintf(file, "%llu\n", (unsigned long long)time_values[n]);
        n++;
    }
    fclose(file);
}

Try the measurements first in VMware and next on a Linux PC. Describe the results.

Exercise 10b.

Use the spreadsheet to calculate the average value of the measured periods, the maximal and minimal deviation from the desired period (100 000 ns), and the standard deviation of the timing differences.

Exercise 10c.

Use the following script to put Linux under a big load:
    ping -f localhost -s 65000 >/dev/null &                   # network load
    while true; do cat /proc/interrupts >/dev/null ; done &   # cpu load
    while true; do ls -lR / >/tmp/list 2>/dev/null ; done &   # disk load

Exercise 10d.

Use the special parallel port cable to connect two PCs running Xenomai to each other. This special cable connects, in both directions, the data line D0 of one machine to the interrupt line S6 of the other machine. Now write a program to measure the interrupt latency of a PC as follows:
Repeat this measurement 10 000 times, once every 100 us, and compute the interrupt latency in each case.
Similar to the exercise on scheduling jitter, write the results to a file, plot the measured latencies in a graph, and calculate the average latency and the standard deviation.

Last Updated: 26 September 2008 (Jozef Hooman)
Created by: Harco Kuppens