Setting up Kdump and Crash for ARM-32 – an Ongoing Saga

Author: Kaiwan N Billimoria, kaiwanTECH
Date: 13 July 2017

DUT (Device Under Test):
Hardware platform: Qemu-virtualized Versatile Express Cortex-A9.
Software platform: mainline linux kernel ver 4.9.1, kexec-tools, crash utility.

First, my attempt at setting up the Raspberry Pi 3 failed, mostly due to recurring issues with the bloody MMC card – probably a power issue! (see this link).

Anyway. I then switched to doing the same on the always-reliable Qemu virtualizer; I prefer to set up the Vexpress-CA9.

In fact, a supporting project I maintain on github – the SEALS project – is proving extremely useful for building the ARM-32 hardware/software platform quickly and efficiently. (Fun fact: SEALS = Simple Embedded Arm Linux System).

So, I cloned the above-mentioned git repo for SEALS into a new working folder.

The way SEALS works is simple: edit a configuration file (build.config) to your satisfaction, to reflect the PATH to, and versions of, the cross-compiler, kernel, kernel command-line parameters, busybox, rootfs size, etc.

Setup the SEALS build.config file.

Screenshot: the script's initial screen, displaying the current build config:

Relevant Info reproduced below for clarity:

Toolchain prefix : arm-none-linux-gnueabi-
Toolchain version: (Sourcery CodeBench Lite 2014.05-29) 4.8.3 20140320 (prerelease)

Staging folder : <…>/SEALS_staging
ARM Platform : Versatile Express (A9)

Platform RAM : 512 MB
RootFS force rebuild : 0
RootFS size : 768 MB

Linux kernel to use : 4.9.1
Linux kernel codebase location : <…>/SEALS_staging/linux-4.9.1
Kernel command-line : “console=ttyAMA0 root=/dev/mmcblk0 init=/sbin/init crashkernel=32M”

Busybox to use : 1.26.2
Busybox codebase location : <…>/SEALS_staging/busybox-1.26.2


Screenshot: the second GUI screen, allowing the user to select actions to take:

Upon clicking ‘OK’, the build process starts:

I Boot Kernel Setup

  • kernel config: one must carefully configure the Linux kernel; please follow the kernel kdump documentation [1] in detail. In brief, ensure these are set:
    CONFIG_KEXEC=y
    CONFIG_SYSFS=y
    CONFIG_DEBUG_INFO=y
    CONFIG_CRASH_DUMP=y
    CONFIG_PROC_VMCORE=y

    Dump-capture kernel config options (Arch Dependent, arm):
    To use a relocatable kernel, enable “AUTO_ZRELADDR” support under “Boot” options – which succinctly got it working!

  • Copy the ‘kexec’ binary into the root filesystem (staging tree) under its sbin/ folder
  • We build a relocatable kernel so that we can use the same ‘zImage’ 
    for the dump kernel as well as the primary boot kernel:
     “Or use the system kernel binary itself as dump-capture kernel and there is 
    no need to build a separate dump-capture kernel. 
    This is possible  only with the architectures which support a 
    relocatable kernel. As  of today, i386, x86_64, ppc64, ia64 and 
    arm architectures support relocatable kernel. ...”
  • the SEALS build system will proceed to build the kernel using the cross-compiler specified
  • went through just fine.

II Load dump-capture (or kdump) kernel into boot kernel’s RAM

Do read [1]; but, to cut a long story short:

  • Create a small shell script – a wrapper over kexec – in the root filesystem:

    #!/bin/sh
    DUMPK_CMDLINE="console=ttyAMA0 root=/dev/mmcblk0 rootfstype=ext4 rootwait init=/sbin/init maxcpus=1 reset_devices"
    kexec --type zImage \
        -p ./zImage-4.9.1-crk \
        --dtb=./vexpress-v2p-ca9.dtb \
        --append="${DUMPK_CMDLINE}"
    [ $? -ne 0 ] && {
        echo "kexec failed." ; exit 1
    }
    echo "$0: kexec: success, dump kernel loaded."
    exit 0
  • Run it. It will only work (in my experience) when:
    • you’ve passed the kernel parameter ‘crashkernel=32M’
    • verified that indeed the boot kernel has reserved 32MB RAM for the dump-capture kernel/system:
RUN: Running qemu-system-arm now ...

qemu-system-arm -m 512 -M vexpress-a9 -kernel <...>/images/zImage \
-drive file=<...>/images/rfs.img,if=sd,format=raw \
-append "console=ttyAMA0 root=/dev/mmcblk0 init=/sbin/init crashkernel=32M" \
-nographic -no-reboot -dtb <...>/linux-4.9.1/arch/arm/boot/dts/vexpress-v2p-ca9.dtb

Booting Linux on physical CPU 0x0
Linux version 4.9.1-crk (hk@hk) (gcc version 4.8.3 20140320 (prerelease) (Sourcery CodeBench Lite 2014.05-29) ) #2 SMP Wed Jul 12 19:41:08 IST 2017
CPU: ARMv7 Processor [410fc090] revision 0 (ARMv7), cr=10c5387d
CPU: PIPT / VIPT nonaliasing data cache, VIPT nonaliasing instruction cache
OF: fdt:Machine model: V2P-CA9
ARM / $ dmesg |grep -i crash
Reserving 32MB of memory at 1920MB for crashkernel (System RAM: 512MB)
Kernel command line: console=ttyAMA0 root=/dev/mmcblk0 init=/sbin/init crashkernel=32M
ARM / $ id
uid=0 gid=0
ARM / $ ./
./ kexec: success, dump kernel loaded.
ARM / $ 

Ok, the dump-capture kernel has loaded up.
Now to test it!

III Test the soft boot into the dump-capture kernel

On the console of the (emulated) ARM-32:

ARM / $ echo c > /proc/sysrq-trigger 
sysrq: SysRq : Trigger a crash
Unhandled fault: page domain fault (0x81b) at 0x00000000
pgd = 9ee44000
[00000000] *pgd=7ee30831, *pte=00000000, *ppte=00000000
Internal error: : 81b [#1] SMP ARM
Modules linked in:
CPU: 0 PID: 724 Comm: sh Not tainted 4.9.1-crk #2
Hardware name: ARM-Versatile Express
task: 9f589600 task.stack: 9ee40000
PC is at sysrq_handle_crash+0x24/0x2c
LR is at arm_heavy_mb+0x1c/0x38
pc : [<804060d8>] lr : [<80114bd8>] psr: 60000013
sp : 9ee41eb8 ip : 00000000 fp : 00000000


[<804060d8>] (sysrq_handle_crash) from [<804065bc>] (__handle_sysrq+0xa8/0x170)
[<804065bc>] (__handle_sysrq) from [<80406ab8>] (write_sysrq_trigger+0x54/0x64)
[<80406ab8>] (write_sysrq_trigger) from [<80278588>] (proc_reg_write+0x58/0x90)
[<80278588>] (proc_reg_write) from [<802235c4>] (__vfs_write+0x28/0x10c)
[<802235c4>] (__vfs_write) from [<80224098>] (vfs_write+0xb4/0x15c)
[<80224098>] (vfs_write) from [<80224d30>] (SyS_write+0x40/0x80)
[<80224d30>] (SyS_write) from [<801074a0>] (ret_fast_syscall+0x0/0x3c)

Code: f57ff04e ebf43aba e3a03000 e3a02001 (e5c32000) 

Loading crashdump kernel...
Booting Linux on physical CPU 0x0

Linux version 4.9.1-crk (hk@hk) (gcc version 4.8.3 20140320 (prerelease) (Sourcery CodeBench Lite 2014.05-29) ) #2 SMP Wed Jul 12 19:41:08 IST 2017
CPU: ARMv7 Processor [410fc090] revision 0 (ARMv7), cr=10c5387d
CPU: PIPT / VIPT nonaliasing data cache, VIPT nonaliasing instruction cache
OF: fdt:Machine model: V2P-CA9
OF: fdt:Ignoring memory range 0x60000000 - 0x78000000
Memory policy: Data cache writeback
CPU: All CPU(s) started in SVC mode.
percpu: Embedded 14 pages/cpu @81e76000 s27648 r8192 d21504 u57344
Built 1 zonelists in Zone order, mobility grouping on. Total pages: 7874
Kernel command line: console=ttyAMA0 root=/dev/mmcblk0 rootfstype=ext4 rootwait 
init=/sbin/init maxcpus=1 reset_devices elfcorehdr=0x79f00000 mem=31744K

ARM / $ ls -l /proc/vmcore            << the dump image (480 MB here) >>
-r-------- 1 0 0 503324672 Jul 13 12:22 /proc/vmcore
ARM / $ 

Copy the dump file (with cp or scp, whatever), 
get it to the host system.

cp /proc/vmcore <dump-file>
ARM / $ halt
ARM / $ EXT4-fs (mmcblk0): re-mounted. Opts: (null)
The system is going down NOW!
Sent SIGTERM to all processes
Sent SIGKILL to all processes
Requesting system halt
reboot: System halted
QEMU: Terminated
^A-X  << type Ctrl-a followed by x to exit qemu >>
... and done. all done, exiting.
Thank you for using SEALS! We hope you like it.
There is much scope for improvement of course; would love to hear your feedback, ideas, and contribution!
Please visit : . 

IV Analyse the kdump image with the crash utility


The core analysis suite is a self-contained tool that can be used to
investigate either live systems, kernel core dumps created from dump
creation facilities such as kdump, kvmdump, xendump, the netdump and
diskdump packages offered by Red Hat, the LKCD kernel patch, the mcore
kernel patch created by Mission Critical Linux, as well as other formats
created by manufacturer-specific firmware.


A whitepaper with complete documentation concerning the use of this utility
can be found here: [3]

The crash binary can only be used on systems of the same architecture as
the host build system. There are a few optional manners of building the
crash binary:

o On an x86_64 host, a 32-bit x86 binary that can be used to analyze
32-bit x86 dumpfiles may be built by typing "make target=X86".
o On an x86 or x86_64 host, a 32-bit x86 binary that can be used to analyze
 32-bit arm dumpfiles may be built by typing "make target=ARM".

Ah. Therein, as they say, lies the devil – in the details.

[UPDATE : 14 July ’17
I do have it building successfully now. The trick apparently – on x86_64 Ubuntu 17.04 – was to install the 
lib32z1-dev package! Once I did, it built just fine. Many thanks to Dave Anderson (RedHat) who promptly replied to my query on the crash mailing list.]

I cloned the ‘crash’ git repo and did ‘make target=ARM’; it failed with:

 ../readline/libreadline.a ../opcodes/libopcodes.a ../bfd/libbfd.a
../libiberty/libiberty.a ../libdecnumber/libdecnumber.a -ldl
-lncurses -lm ../libiberty/libiberty.a build-gnulib/import/libgnu.a
 -lz -ldl -rdynamic
/usr/bin/ld: cannot find -lz
collect2: error: ld returned 1 exit status
Makefile:1174: recipe for target 'gdb' failed

Still trying to debug this!

Btw, if you’re unsure, please see crash’s GitHub README on how to build it.
So, now, with a ‘crash’ binary that works, let’s get to work:

$ file crash
crash: ELF 32-bit LSB shared object, Intel 80386, version 1 (SYSV), dynamically linked, interpreter /lib/, for GNU/Linux 2.6.32, …

$ ./crash

crash 7.1.9++
Copyright (C) 2002-2017 Red Hat, Inc.
Copyright (C) 2004, 2005, 2006, 2010 IBM Corporation

crash: compiled for the ARM architecture

To examine a kernel dump (kdump) file, invoke crash like so:

crash <path-to-vmlinux-with-debug-symbols> <path-to-kernel-dumpfile>

$ <...>/crash/crash \
  <...>/SEALS_staging/linux-4.9.1/vmlinux ./kdump.img

crash 7.1.9++
Copyright (C) 2002-2017 Red Hat, Inc.
Copyright (C) 2004, 2005, 2006, 2010 IBM Corporation
GNU gdb (GDB) 7.6
Copyright (C) 2013 Free Software Foundation, Inc.
WARNING: cannot find NT_PRSTATUS note for cpu: 1
WARNING: cannot find NT_PRSTATUS note for cpu: 2
WARNING: cannot find NT_PRSTATUS note for cpu: 3

 KERNEL: <...>/SEALS_staging/linux-4.9.1/vmlinux
 DUMPFILE: ./kdump.img
 DATE: Thu Jul 13 00:38:39 2017
 UPTIME: 00:00:42
LOAD AVERAGE: 0.00, 0.00, 0.00
 TASKS: 56
 NODENAME: (none)
 RELEASE: 4.9.1-crk
 VERSION: #2 SMP Wed Jul 12 19:41:08 IST 2017
 MACHINE: armv7l (unknown Mhz)
 PANIC: "sysrq: SysRq : Trigger a crash"
 PID: 735
 COMMAND: "echo"
 TASK: 9f6af900 [THREAD_INFO: 9ee48000]
 CPU: 0

crash> ps
 0 0 0 80a05c00 RU 0.0 0 0 [swapper/0]
> 0 0 1 9f4ab700 RU 0.0 0 0 [swapper/1]
> 0 0 2 9f4abc80 RU 0.0 0 0 [swapper/2]
> 0 0 3 9f4ac200 RU 0.0 0 0 [swapper/3]
 1 0 0 9f4a8000 IN 0.1 3344 1500 init
722 2 0 9f6ac200 IN 0.0 0 0 [ext4-rsv-conver]
728 1 0 9f6ab180 IN 0.1 3348 1672 sh
> 735 728 0 9f6af900 RU 0.1 3344 1080 echo
crash> bt
PID: 735 TASK: 9f6af900 CPU: 0 COMMAND: "echo"
 #0 [<804060d8>] (sysrq_handle_crash) from [<804065bc>]
 #1 [<804065bc>] (__handle_sysrq) from [<80406ab8>]
 #2 [<80406ab8>] (write_sysrq_trigger) from [<80278588>]
 #3 [<80278588>] (proc_reg_write) from [<802235c4>]
 #4 [<802235c4>] (__vfs_write) from [<80224098>]
 #5 [<80224098>] (vfs_write) from [<80224d30>]
 #6 [<80224d30>] (sys_write) from [<801074a0>]
 pc : [<76e8d7ec>] lr : [<0000f9dc>] psr: 60000010
 sp : 7ebdcc7c ip : 00000000 fp : 00000000
 r10: 0010286c r9 : 7ebdce68 r8 : 00000020
 r7 : 00000004 r6 : 00103008 r5 : 00000001 r4 : 00102e2c
 r3 : 00000000 r2 : 00000002 r1 : 00103008 r0 : 00000001
 Flags: nZCv IRQs on FIQs on Mode USER_32 ISA ARM

And so on …

Another thing we can do is use gdb – to a limited extent – to analyse the dump file:

From [1]:

Before analyzing the dump image, you should reboot into a stable kernel.

You can do limited analysis using GDB on the dump file copied out of
/proc/vmcore. Use the debug vmlinux built with -g and run the following
  gdb vmlinux <dump-file>

Stack trace for the task on processor 0, register display, and memory
display work fine.

Also, [3] is an excellent whitepaper on using crash. Do read it.

All right, hope that helps!


Low-Level Software Design

[Please note, this article isn’t about formal design methods (LLD), UML, Design Patterns, nor about object-oriented design, etc. It’s written with a view towards the kind of software project I typically get to work on – embedded / Linux OS related, with the primary programming language being ‘C’ and/or scripting (typically with bash).]

When one looks back, all said and done, it isn’t that hard to get a decent software design and architecture. Obviously, the larger your project, the more the thought and analysis that goes into building a robust system. (Certainly, the more the years of experience, the easier it seems).

However, I am of the view that certain fundamentals never change: get them right and many of the pieces slot into place on their own. Work on a project enough and one always comes away with a “feel” for the architecture and codebase – either it’s robust and will work, or it just isn’t.

So what are these “fundamentals”? Well, here’s the interesting thing: you already know them! But in the heat and dust of release pressures (“I don’t care that you need another half-day, check it in now!!!”), deadlines, and production, we tend to forget the basics. Sound familiar? 🙂

The points below are definitely nothing new, but always worth reiterating:

Low-level Design and Software Architecture

  • Jot down the requirements: why are we doing this? what do we hope to achieve?
  • Draw an overall diagram of the project, the data structures, and the code flow, as you visualise it. You don’t really need fancy software tools – pencil and paper will do, especially at first.
    Arrive, gently, at the software architecture.
  • Layering helps (but one can overdo it)
    • To paraphrase: “adding a layer can be used to solve any problem in computer science” 🙂 Of course, one can quite easily add new problems too; careful!
  • It evolves – don’t be afraid to iterate, to use trial and error
    • “Be ready to throw the first one away – you’re going to anyway” – paraphrased from that classic book “The Mythical Man Month”
    • “There is no silver bullet” – again from the same book of wisdom. There is no one solution to all your problems – you’ll have to weigh options, make trade-offs. It’s like life y’know 😉
  • Design the code to be modular, structured
  • A function encapsulates an intention
    • Requirement-driven code: why is the function there?
  • Each function does exactly one thing
    • This is really important. If you can do this well, you will greatly reduce bugs, and thus, the need to debug.
  • Use configuration files (Edit: preferably in plain ASCII text format).


  • Insert function stubs – code it in detail later, get the overall low-level design and function interfacing correct first. What parameters, return value(s)? 
  • Avoid globals
    • use parameters, return values
    • in multithreaded / multiprocess environments, using any kind of global implies using a synchronization primitive of some sort (mutex, semaphore, spinlock, etc.) to take care of concurrency concerns and races. Be aware – beware! – this is often a huge source of performance bottlenecks!
      Edit: When writing MT software, use powerful techniques like TLS (Thread Local Storage) and TSD (Thread Specific Data) to further avoid globals.
  • Keep it minimal, and clean. Careful! Don’t end up using too many layers (of nested functions) – it leads to “lasagna/spaghetti” code that’s hard to follow and thus understand.
  • If a function’s code exceeds a ‘page’, re-look, redesign.

Of course, a project is not a dead static thing – at least it shouldn’t be. It evolves over time. Expect requirements, and thus your low-level design and code, to change. The better thought out the overall architecture though, the more resilient it will be to constant flux.

For example: you’re writing a device driver, and its “read” method is attempting to read data from the ‘device’ (whatever the heck it is), but there is no data available right now. What should we do? Abort, returning an error code? Wait for data? Retry the operation thrice and see?

The “correct” answer: follow the standard. Assuming we’re working on a POSIX-compliant OS (Unix/Linux), the standard says that blocking calls must do precisely that: block, wait for data until it becomes available. So just wait for data. “But I don’t want to wait forever!” cries the application! Okay, implement a non-blocking open in that case (there’s a reason for that O_NONBLOCK flag folks!). Or a timeout feature, if it makes sense.

Shouldn’t the driver method “retry” the operation if it does not succeed at first? Short answer: no. Follow the Unix design philosophy: “provide mechanism, not policy”. Let the application define the policy (should we retry, and if yes, how often; should we timeout, and if yes, what’s the timeout value; etc.). The mechanism part of it – your driver’s read method implementation – must work independent of, in fact ignore, such concerns. (But hey, it must be written to be concurrent- and reentrant-safe. A post on that another day, perhaps?)


Using the same example: let’s say we do want the “read” method of our device driver to timeout after, say, 1 second. Where do we specify this value? Recollect: “provide mechanism, not policy”. So we’ll do so in the application, not the driver. But where in the app? Ah. I’d suggest we don’t hardcode the value; instead, keep it in a simple ASCII-text configuration file.


Of course, with ‘C’ one would usually put stuff like this into a header file. Fair enough, but please keep a separate header – say, app_config.h.

Try and do some crystal-ball-gazing: at some remote (or not-so-remote) point in the future, what if the project requires a GUI front-end? Probably, as an example, we will want to let the end-user view and set configuration – change the timeout, etc – easily via the GUI. Then, you will see the sense of using a simple ASCII-text configuration file to hold config values – reading and updating the values now becomes simple and clean.
Finally, nothing said above is sacred – we learn to take things on a case-by-case basis, use judgement.

A few Resources

Linux Kernel Version Timeline

I wanted to quickly look up Linux kernel release dates by version number.

All the info is on . I’ve just copied it below…

Click on the version # links (below) to see details of that version (redirects to the kernelnewbies website).

Last Updated: 01 Apr 2016


Linux 4.5 Released 13 March, 2016 (63 days)

Linux 4.4 Released 10 January, 2016 (70 days)

Linux 4.3 Released 1 November, 2015 (63 days)

Linux 4.2 Released 30 August, 2015 (70 days)

Linux 4.1 Released 21 June, 2015 (70 days)

Linux 4.0 Released 12 April, 2015 (63 days)


Linux 3.19 Released 8 February, 2015 (63 days)

Linux 3.18 Released 7 December, 2014 (63 days)

Linux 3.17 Released 5 October, 2014 (63 days)

Linux 3.16 Released 3 August, 2014 (56 days)

Linux 3.15 Released 8 June, 2014 (70 days)

Linux 3.14 Released 30 March, 2014 (70 days)

Linux 3.13 Released 19 January, 2014 (78 days)

Linux 3.12 Released 2 November, 2013 (61 days)

Linux 3.11 Released 2 September, 2013 (64 days)

Linux 3.10 Released 30 June, 2013 (63 days)

Linux 3.9 Released 28 April, 2013 (69 days)

Linux 3.8 Released 18 February, 2013 (70 days)

Linux 3.7 Released 10 December 2012 (71 days)

Linux 3.6 Released 30 September, 2012 (71 days)

Linux 3.5 Released 21 July, 2012 (62 days)

Linux 3.4 Released 20 May, 2012 (63 days)

Linux 3.3 Released 18 March, 2012 (74 days)

Linux 3.2 Released 4 January, 2012 (72 days)

Linux 3.1 Released 24 October, 2011 (95 days)

Linux 3.0 Released 21 July, 2011 (64 days)


Linux 2.6.39 Released 18 May, 2011 (65 days)

Linux 2.6.38 Released 14 March, 2011 (69 days)

Linux 2.6.37 Released 4 January, 2011 (76 days)

Linux 2.6.36 Released 20 October, 2010 (80 days)

Linux 2.6.35 Released 1 August, 2010 (76 days)

Linux 2.6.34 Released 16 May, 2010 (81 days)

Linux 2.6.33 Released 24 February, 2010 (83 days)

Linux 2.6.32 Released 3 December, 2009 (84 days)

Linux 2.6.31 Released 9 September, 2009 (92 days)

Linux 2.6.30 Released 9 June, 2009 (77 days)

Linux 2.6.29 Released 24 March, 2009 (89 days)

Linux 2.6.28 Released 25 December, 2008 (77 days)

Linux 2.6.27 Released 9 October, 2008 (88 days)

Linux 2.6.26 Released 13 July, 2008 (87 days)

Linux 2.6.25 Released 17 April, 2008 (84 days)

Linux 2.6.24 Released 24 January, 2008 (107 days)

Linux 2.6.23 Released 9 October, 2007 (93 days)

Linux 2.6.22 Released 8 July, 2007 (73 days)

Linux 2.6.21 Released 26 April, 2007 (80 days)

Linux 2.6.20 Released 5 February, 2007 (68 days)

Linux 2.6.19 Released 29 November, 2006 (70 days)

Linux 2.6.18 Released 20 September, 2006 (95 days)

Linux 2.6.17 Released 17 June, 2006 (88 days)

Linux 2.6.16 Released 20 March, 2006 (76 days)

Linux 2.6.15 Released 3 January, 2006 (68 days)

Linux 2.6.14 Released 27 October, 2005 (59 days)

Linux 2.6.13 Released 29 August, 2005 (73 days)

Linux 2.6.12 Released 17 June, 2005 (107 days)

Linux 2.6.11 Released 2 March, 2005 (68 days)

Linux 2.6.10 Released 24 December, 2004 (66 days)

Linux 2.6.9 Released 19 October, 2004 (66 days)

Linux 2.6.8 Released 14 August, 2004 (59 days)

Linux 2.6.7 Released 16 June, 2004 (37 days)

Linux 2.6.6 Released 10 May, 2004 (36 days)

Linux 2.6.5 Released 4 April, 2004 (24 days)

Linux 2.6.4 Released 11 March, 2004 (22 days)

Linux 2.6.3 Released 18 February, 2004 (14 days)

Linux 2.6.2 Released 4 February, 2004 (26 days)

Linux 2.6.1 Released 9 January, 2004 (22 days)

Linux 2.6.0 Released 18 December, 2003

LApTaC – Learn Appreciate Teach and Contribute

For a score and 5 years – my adult working life so far – I’ve had the good fortune to pursue things and areas that interest me deeply.

Systems and assembly programming on the DOS platform in the early ’90s, Kyokushin Karate on the side; UNIX, and that too at DEC (Digital Equipment Corp, now a part of HP), in the mid-’90s; consulting and corporate training on the Linux OS after that (still going strong), with long-distance running on the side. Now, these last couple of years: Linux training/consulting continues, product development (my own product, yeah!), managing a product team working on an analytics database project, and of course, distance running.

Yawn – yeah, I know: they interest me, not you. Nevertheless, a good ride, and still riding!
Which brings me to the title:

LApTaC :
Learn, Appreciate, Teach, and Contribute.

An oft-heard complaint, the lament of the modern person: “I don’t feel fulfilled in my life.”
Perhaps one ought to step back, reassess goals and priorities (striving too hard for how our society defines “success”?), and LApTaC in your life!

It’s true, you know – it never stops. Bored? Learn something new; we now know that age is certainly no barrier. That stuff we used to hear – “your brain cells die as you age and never regenerate” – turns out to be BS.

Enough said; appreciate what you have and the new things you learn everyday. I like Mark Manson’s practical and straightforward writing; read his essay “Stop Trying to be Happy”, among many others.

Also take a gander at this book: “18 Minutes: Find Your Focus, Master Distraction and Get the Right Things Done” by Peter Bregman.

A wise man once said, “If you want to see if you truly understand something, teach it”. Over a decade of training experience shows me this truth; also, the simpler you can explain it, the better you understand it. Jargon rarely, if ever, works.
Volunteer to teach something you know: the “fulfilment” payback you get from voluntary work cannot be overstated.

and, Contribute:

A passage from the book – “A Return to Love : Reflections on the Principles of a Course in Miracles” by Marianne Williamson – has become popular as an inspirational quote:

Our deepest fear is not that we are inadequate. Our deepest fear is that we are powerful beyond measure. It is our light, not our darkness, that most frightens us. We ask ourselves, who am I to be brilliant, gorgeous, talented, fabulous? Actually, who are you not to be? You are a child of God. Your playing small doesn’t serve the world. There’s nothing enlightened about shrinking so that other people won’t feel insecure around you. We are all meant to shine, as children do. We were born to make manifest the glory of God that is within us. It’s not just in some of us; it’s in everyone. And as we let our own light shine, we unconsciously give other people permission to do the same. As we’re liberated from our own fear, our presence automatically liberates others.
So go full steam ahead and LApTaC your beautiful life!


Interesting Numbers

This article delves into interesting numbers within (as of now) the following sections:

  • Networking
    • Numbers (with sheet screenshot)
    • Mitigation / Solutions
    • Resources
  • SLOCs – Source Lines Of Code
    • Cars
    • OS’s
  • Powers of 2  [Edit: 09 Jun 2015]



Networking

  • As a rule of thumb, one requires about 1 MHz of CPU power to drive 1 Mbps of data (put another way, 1 CPU cycle per bit of data)
  • Given the (heavy legacy baggage) fact that the standard Ethernet MTU (Maximum Transmission Unit) size is typically 1500 bytes:
    • A 10 Gbps network link running at wire speed must transfer over 800,000 packets per second
    • The table below enumerates the story for differing Ethernet packet sizes and the wire rate to be maintained

Screenshot of a sheet describing the relationship between Ethernet frame size and Line Rate to Maintain (click to enlarge)


Source for the material below: “Diving into Linux Networking Stack I”, M. J. Schultz

Thus, at a rate of 10 Gbps, for MTU-size packets, we must sustain a processing rate of approximately 1 packet per microsecond! (And that’s half-duplex; full-duplex effectively halves the available time!)

How is this possible?

Well, it’s not – not over sustained periods. For one, the interrupt load would be far too high for the processor to handle effectively (leading to what’s called “receive livelock”). For another, the IP (and above) protocol stack processing would also be hard put to sustain these rates.

Mitigation / Solutions

The solution is two-fold:

  • hardware interrupt mitigation is achieved via the NAPI technique (which many modern drivers use as the default processing mode, switching to interrupt mode only when there are no or few packets left to process)
  • Modern hardware NICs and operating systems use high performance offloading techniques (TSO / LRO / GRO). These essentially offload work from the host processor to the hardware NIC, and effectively allow large packet sizes as well.

TSO effectively lets us offload 64KB of data (to the hardware NIC for segmentation and processing). If the host did the usual TCP processing at typical MSS sizes, this works out to approximately (MTU-40)-sized segments ~= 1460 bytes.
Thus, with TSO, we get an over-40x saving (65536/1460 ≈ 44.9) on CPU utilization!


NIC Adapter   Time available between MTU-size (1538-byte) packets   Packets per second (pps)
10 Gbps       1,230 ns (1.23 us)                                    813,008 (~0.8 Mpps)
40 Gbps       120 ns                                                8,333,333 (~8 Mpps)
100 Gbps      ~48 ns                                                ~20,833,333 (nearly 21 Mpps)!

[Update: 25 May 2016]:

See this presentation by Jesper Dangaard Brouer, Principal Kernel Engineer, Red Hat, at DevConf, Feb 2016: Kernel network stack challenges at increasing speeds [ODP]



[Update: 09 Aug 2015]:

[Inputs below from this LWN article “Improving Linux Networking Performance”, Jan 2015]

Latency-sensitive workloads:
So, we’ve got approx 48 ns to process a packet on a 100 GbE-capable network adapter. Assuming we have a 3 GHz processor, that gives us:

  • ~ 200 cycles to process each packet
  • a cache miss will take about 32ns to resolve
  • an SKB on 64-bit requires around 4 cache lines, and they’re written to during packet handling
  • thus, more than 2 cache misses will wipe out the available time budget!
  • what makes it worse: critical code sections require locking
    • the Intel LOCK prefix instruction (used to implement atomic operations at the machine level via cmpxchg or similar) takes ~ 8.25 ns
    • thus a spin_lock/spin_unlock section will take at least 16 ns
  • System call cost:
    • with SELinux and auditing support: ~75 ns
    • without SELinux and auditing support: just under ~42 ns

“The (Linux) kernel, today, can only forward something between 1M (M=million) and 2M packets per core every second, while some of the bypass alternatives approach a rate of 15M packets per core per second.” – Source: “Improving Linux networking performance”, Jon Corbet, LWN, Jan 2015.



Resources

Presentation slides by Jesper Dangaard Brouer, Principal Kernel Engineer, Red Hat, at DevConf, Feb 2016: Kernel network stack challenges at increasing speeds [ODP]

Large Segmentation Offload (LSO) on Wikipedia

“Improving Linux networking performance”, LWN, Jon Corbet, Jan 2015

JLS2009: Generic receive offload

Linux and TCP Offload Engines [LWN]

Whitepaper: “Introduction to TCP Offload Engines” (Dell)

Whitepaper: “Boosting Data Transfer with TCP Offload Engine Technology” (Dell, Broadcom, MS; benchmarks displayed here)

“The Ethernet standard assumes it will take roughly 50 microseconds for a signal to reach its destination.” – Source: Basic-Networking-Tutorial

SLOCs – Source Lines Of Code

First, please view this brilliant infographic from the “Information is Beautiful” book (and website).
And here are the same numbers in a Google sheet!


The snippet below is quoted directly from “This Car Runs on Code”:

“The avionics system in the F-22 Raptor, the current U.S. Air Force frontline jet fighter, consists of about 1.7 million lines of software code. The F-35 Joint Strike Fighter, scheduled to become operational in 2010, will require about 5.7 million lines of code to operate its onboard systems. And Boeing’s new 787 Dreamliner, scheduled to be delivered to customers in 2010, requires about 6.5 million lines of software code to operate its avionics and onboard support systems.

These are impressive amounts of software, yet if you bought a premium-class automobile recently, ”it probably contains close to 100 million lines of software code,” says Manfred Broy, a professor of informatics at Technical University, Munich, and a leading expert on software in cars. All that software executes on 70 to 100 microprocessor-based electronic control units (ECUs) networked throughout the body of your car.”


Edit: 04 Jan 2017
“Car Software: 100M Lines of Code and Counting”
– article on LinkedIn.


Operating Systems

Source: Wikipedia article on SLOCs

… According to Vincent Maraia,[1] the SLOC values for various operating systems in Microsoft’s Windows NT product line are as follows:

Year   Operating System      SLOC (million)
1993   Windows NT 3.1        4–5 [1]
1994   Windows NT 3.5        7–8 [1]
1996   Windows NT 4.0        11–12 [1]
2000   Windows 2000          more than 29 [1]
2001   Windows XP            45 [2][3]
2003   Windows Server 2003   50 [1]

David A. Wheeler studied the Red Hat distribution of the Linux operating system, and reported that Red Hat Linux version 7.1[4] (released April 2001) contained over 30 million physical SLOC. He also extrapolated that, had it been developed by conventional proprietary means, it would have required about 8,000 man-years of development effort and would have cost over $1 billion (in year 2000 U.S. dollars).

A similar study was later made of Debian GNU/Linux version 2.2 (also known as “Potato”); this operating system was originally released in August 2000. This study found that Debian GNU/Linux 2.2 included over 55 million SLOC, and if developed in a conventional proprietary way would have required 14,005 man-years and cost $1.9 billion USD to develop. Later runs of the tools used report that the following release of Debian had 104 million SLOC, and as of year 2005, the newest release is going to include over 213 million SLOC.

One can find figures of major operating systems (the various Windows versions have been presented in a table above).

Year Operating System SLOC (Million)
2000 Debian 2.2 55–59[5][6]
2002 Debian 3.0 104[6]
2005 Debian 3.1 215[6]
2007 Debian 4.0 283[6]
2009 Debian 5.0 324[6]
2012 Debian 7.0 419[7]
2009 OpenSolaris 9.7
FreeBSD 8.8
2005 Mac OS X 10.4 86[8][n 1]
2001 Linux kernel 2.4.2 2.4[4]
2003 Linux kernel 2.6.0 5.2
2009 Linux kernel 2.6.29 11.0
2009 Linux kernel 2.6.32 12.6[9]
2010 Linux kernel 2.6.35 13.5[10]
2012 Linux kernel 3.6 15.9[11]

Powers of 2

Often, especially for us nerdy programmers, it’s a good idea to be familiar with the powers of 2. I won’t bore you with the “usual” ones (do it yourself IOW 🙂 ).

Quick summary of the unit prefixes:

Decimal (SI) prefixes:
1000^1  kB   kilobyte
1000^2  MB   megabyte
1000^3  GB   gigabyte
1000^4  TB   terabyte
1000^5  PB   petabyte
1000^6  EB   exabyte
1000^7  ZB   zettabyte
1000^8  YB   yottabyte

Binary (IEC) prefixes:
1024^1  KiB  kibibyte  (commonly called KB, kilobyte)
1024^2  MiB  mebibyte  (commonly MB, megabyte)
1024^3  GiB  gibibyte  (commonly GB, gigabyte)
1024^4  TiB  tebibyte
1024^5  PiB  pebibyte
1024^6  EiB  exbibyte
1024^7  ZiB  zebibyte
1024^8  YiB  yobibyte

For example, on an x86_64 running the Linux OS (kernel ver >= 2.6.x), the memory management layer divides the 64-bit process VAS (Virtual Address Space) into two regions:

  • a 128 TB region at the low end for Userland (this includes the text, data, library/memory mapping and stack segments)
  • a 128 TB region at the upper end for kernel VAS (the kernel segment)

How large is the entire VAS?
It’s 2^64 of course, which is 18,446,744,073,709,551,616 bytes !
Wow. What the heck’s that, you ask??
Ok easier: it’s 16 EB (exabytes)    🙂
(see the Summary Table below too).

From the Wikipedia page on Powers of 2 :

The first 96 powers of two
(sequence A000079 in OEIS)

2^0 = 1 | 2^16 = 65,536 | 2^32 = 4,294,967,296 | 2^48 = 281,474,976,710,656 | 2^64 = 18,446,744,073,709,551,616 | 2^80 = 1,208,925,819,614,629,174,706,176
2^1 = 2 | 2^17 = 131,072 | 2^33 = 8,589,934,592 | 2^49 = 562,949,953,421,312 | 2^65 = 36,893,488,147,419,103,232 | 2^81 = 2,417,851,639,229,258,349,412,352
2^2 = 4 | 2^18 = 262,144 | 2^34 = 17,179,869,184 | 2^50 = 1,125,899,906,842,624 | 2^66 = 73,786,976,294,838,206,464 | 2^82 = 4,835,703,278,458,516,698,824,704
2^3 = 8 | 2^19 = 524,288 | 2^35 = 34,359,738,368 | 2^51 = 2,251,799,813,685,248 | 2^67 = 147,573,952,589,676,412,928 | 2^83 = 9,671,406,556,917,033,397,649,408
2^4 = 16 | 2^20 = 1,048,576 | 2^36 = 68,719,476,736 | 2^52 = 4,503,599,627,370,496 | 2^68 = 295,147,905,179,352,825,856 | 2^84 = 19,342,813,113,834,066,795,298,816
2^5 = 32 | 2^21 = 2,097,152 | 2^37 = 137,438,953,472 | 2^53 = 9,007,199,254,740,992 | 2^69 = 590,295,810,358,705,651,712 | 2^85 = 38,685,626,227,668,133,590,597,632
2^6 = 64 | 2^22 = 4,194,304 | 2^38 = 274,877,906,944 | 2^54 = 18,014,398,509,481,984 | 2^70 = 1,180,591,620,717,411,303,424 | 2^86 = 77,371,252,455,336,267,181,195,264
2^7 = 128 | 2^23 = 8,388,608 | 2^39 = 549,755,813,888 | 2^55 = 36,028,797,018,963,968 | 2^71 = 2,361,183,241,434,822,606,848 | 2^87 = 154,742,504,910,672,534,362,390,528
2^8 = 256 | 2^24 = 16,777,216 | 2^40 = 1,099,511,627,776 | 2^56 = 72,057,594,037,927,936 | 2^72 = 4,722,366,482,869,645,213,696 | 2^88 = 309,485,009,821,345,068,724,781,056
2^9 = 512 | 2^25 = 33,554,432 | 2^41 = 2,199,023,255,552 | 2^57 = 144,115,188,075,855,872 | 2^73 = 9,444,732,965,739,290,427,392 | 2^89 = 618,970,019,642,690,137,449,562,112
2^10 = 1,024 | 2^26 = 67,108,864 | 2^42 = 4,398,046,511,104 | 2^58 = 288,230,376,151,711,744 | 2^74 = 18,889,465,931,478,580,854,784 | 2^90 = 1,237,940,039,285,380,274,899,124,224
2^11 = 2,048 | 2^27 = 134,217,728 | 2^43 = 8,796,093,022,208 | 2^59 = 576,460,752,303,423,488 | 2^75 = 37,778,931,862,957,161,709,568 | 2^91 = 2,475,880,078,570,760,549,798,248,448
2^12 = 4,096 | 2^28 = 268,435,456 | 2^44 = 17,592,186,044,416 | 2^60 = 1,152,921,504,606,846,976 | 2^76 = 75,557,863,725,914,323,419,136 | 2^92 = 4,951,760,157,141,521,099,596,496,896
2^13 = 8,192 | 2^29 = 536,870,912 | 2^45 = 35,184,372,088,832 | 2^61 = 2,305,843,009,213,693,952 | 2^77 = 151,115,727,451,828,646,838,272 | 2^93 = 9,903,520,314,283,042,199,192,993,792
2^14 = 16,384 | 2^30 = 1,073,741,824 | 2^46 = 70,368,744,177,664 | 2^62 = 4,611,686,018,427,387,904 | 2^78 = 302,231,454,903,657,293,676,544 | 2^94 = 19,807,040,628,566,084,398,385,987,584
2^15 = 32,768 | 2^31 = 2,147,483,648 | 2^47 = 140,737,488,355,328 | 2^63 = 9,223,372,036,854,775,808 | 2^79 = 604,462,909,807,314,587,353,088 | 2^95 = 39,614,081,257,132,168,796,771,975,168

Some selected powers of two

2^8 = 256
The number of values represented by the 8 bits in a byte, more specifically termed an octet. (The term byte is often defined as a collection of bits rather than the strict definition of an 8-bit quantity, as demonstrated by the term kilobyte.)
2^10 = 1,024
The binary approximation of the kilo-, or 1,000, multiplier, which causes a change of prefix. For example: 1,024 bytes = 1 kilobyte (or kibibyte).
This number has no special significance to computers, but is important to humans because we make use of powers of ten.
2^12 = 4,096
The hardware page size of an Intel x86 processor.
2^16 = 65,536
The number of distinct values representable in a single word on a 16-bit processor, such as the original x86 processors.[4]
The maximum range of a short integer variable in the C# and Java programming languages. The maximum range of a Word or Smallint variable in the Pascal programming language.
2^20 = 1,048,576
The binary approximation of the mega-, or 1,000,000, multiplier, which causes a change of prefix. For example: 1,048,576 bytes = 1 megabyte (or mebibyte).
This number has no special significance to computers, but is important to humans because we make use of powers of ten.
2^24 = 16,777,216
The number of unique colors that can be displayed in truecolor, which is used by common computer monitors.
This number is the result of using the three-channel RGB system, with 8 bits for each channel, or 24 bits in total.
2^30 = 1,073,741,824
The binary approximation of the giga-, or 1,000,000,000, multiplier, which causes a change of prefix. For example, 1,073,741,824 bytes = 1 gigabyte (or gibibyte).
This number has no special significance to computers, but is important to humans because we make use of powers of ten.
2^31 = 2,147,483,648
The number of non-negative values for a signed 32-bit integer. Since Unix time is measured in seconds since January 1, 1970, it will run out at 2,147,483,647 seconds, or 03:14:07 UTC on Tuesday, 19 January 2038, on 32-bit computers running Unix, a problem known as the year 2038 problem.
2^32 = 4,294,967,296
The number of distinct values representable in a single word on a 32-bit processor. Or, the number of values representable in a doubleword on a 16-bit processor, such as the original x86 processors.[4]
The range of an int variable in the Java and C# programming languages.
The range of a Cardinal or Integer variable in the Pascal programming language.
The minimum range of a long integer variable in the C and C++ programming languages.
The total number of IP addresses under IPv4. Although this is a seemingly large number, IPv4 address exhaustion is imminent.
2^40 = 1,099,511,627,776
The binary approximation of the tera-, or 1,000,000,000,000, multiplier, which causes a change of prefix. For example, 1,099,511,627,776 bytes = 1 terabyte (or tebibyte).
This number has no special significance to computers, but is important to humans because we make use of powers of ten.
2^50 = 1,125,899,906,842,624
The binary approximation of the peta-, or 1,000,000,000,000,000, multiplier. 1,125,899,906,842,624 bytes = 1 petabyte (or pebibyte).
2^60 = 1,152,921,504,606,846,976
The binary approximation of the exa-, or 1,000,000,000,000,000,000, multiplier. 1,152,921,504,606,846,976 bytes = 1 exabyte (or exbibyte).
2^64 = 18,446,744,073,709,551,616
The number of distinct values representable in a single word on a 64-bit processor. Or, the number of values representable in a doubleword on a 32-bit processor. Or, the number of values representable in a quadword on a 16-bit processor, such as the original x86 processors.[4]
The range of a long variable in the Java and C# programming languages.
The range of an Int64 or QWord variable in the Pascal programming language.
The total number of IPv6 addresses generally given to a single LAN or subnet.
One more than the number of grains of rice on a chessboard, according to the old story, where the first square contains one grain of rice and each succeeding square twice as many as the previous square. For this reason the number 2^64 − 1 is known as the “chess number”.
2^70 = 1,180,591,620,717,411,303,424
The binary approximation of the zetta-, or 1,000,000,000,000,000,000,000, multiplier, which causes a change of prefix. For example, 1,180,591,620,717,411,303,424 bytes = 1 zettabyte (or zebibyte).
2^86 = 77,371,252,455,336,267,181,195,264
2^86 is conjectured to be the largest power of two not containing a zero.[5]
2^96 = 79,228,162,514,264,337,593,543,950,336
The total number of IPv6 addresses generally given to a local Internet registry. In CIDR notation, ISPs are given a /32, which means that 128 − 32 = 96 bits are available for addresses (as opposed to network designation). Thus, 2^96 addresses.
2^128 = 340,282,366,920,938,463,463,374,607,431,768,211,456
The total number of IP addresses available under IPv6. Also the number of distinct universally unique identifiers (UUIDs).
2^333 =
The smallest power of 2 which is greater than a googol (10^100).
2^1024 ≈ 1.7976931348E+308
The maximum number that can fit in an IEEE double-precision floating-point format, and hence the maximum number that can be represented by many programs, for example Microsoft Excel.
2^57,885,161 = 581,887,266,232,246,442,175,100,…,725,746,141,988,071,724,285,952
One more than the largest known prime number as of 2013. It has more than 17 million digits.[6]

Again, from the Wikipedia page on Terabyte:


Illustrative usage examples

Examples of the use of terabyte to describe data sizes in different fields are:

  • Library data: The U.S. Library of Congress Web Capture team claims that as of March 2014 “the Library has collected about 525 terabytes of web archive data” and that it adds about 5 terabytes per month.[20]
  • Online databases: one large online genealogical database claims approximately 600 TB of genealogical data with the inclusion of US Census data from 1790 to 1930.[21]
  • Computer hardware: Hitachi introduced the world’s first one terabyte hard disk drive in 2007.[22]
  • Historical Internet traffic: In 1993, total Internet traffic amounted to approximately 100 TB for the year.[23] As of June 2008, Cisco Systems estimated Internet traffic at 160 TB/s (which, assuming a statistically constant rate, comes to 5 zettabytes for the year).[24] In other words, the amount of Internet traffic per second in 2008 exceeded all of the Internet traffic in 1993.
  • Social networks: As of May 2009, Yahoo! Groups had “40 terabytes of data to index”.[25]
  • Video: Released in 2009, the 3D animated film Monsters vs. Aliens used 100 TB of storage during development.[26]
  • Usenet: In October 2000, the Deja News Usenet archive had stored over 500 million Usenet messages which used 1.5 TB of storage.[27]
  • Encyclopedia: In January 2010, the database of Wikipedia consisted of a 5.87 terabyte SQL dataset.[28]
  • Climate science: In 2010, the German Climate Computing Centre (DKRZ) was generating 10,000 TB of data per year, from a supercomputer with 20 TB of memory and 7,000 TB of disk space.[29]
  • Audio: One terabyte of audio recorded at CD quality contains approx. 2,000 hours of audio. Additionally, one terabyte of compressed audio recorded at 128 kbit/s contains approx. 17,000 hours of audio.
  • The Hubble Space Telescope has collected more than 45 terabytes of data in its first 20 years of observations.[30]
  • The IBM computer Watson, against which Jeopardy! contestants competed in February 2011, has 16 terabytes of RAM.[31]


Linux : 23 years on

Happy 23rd Linux !

Yes, 25 August 1991 is when Linus posted that (famous?) email; today marks exactly 23 years since then!

"Hello everybody out there using minix -
I'm doing a (free) operating system (just a hobby, won't be big and
professional like gnu) for 386(486) AT clones. This has been brewing
since april, and is starting to get ready. ..."

(See the above link for the full text).

Here’s another post by Linus on Linux’s History.

“Some people have told me they don’t think a fat penguin really embodies the grace of Linux, which just tells me they have never seen an angry penguin charging at them in excess of 100 mph …”

Today, like every day, it’s pretty much business-as-usual: the Linux OS gallops ahead on a smooth trajectory to “world domination”.
Thank you Linus and Linux : May you live and prosper a Googol years!

Simple System Monitoring for a Linux Desktop

The Problem

What exactly is eating into my HDD / processor / network right now??

Yeah! On the (Linux) desktop, we’d like to know why things crawl along sometimes. Which process(es) are the culprits behind that disk activity, or hogging memory, or eating up network bandwidth?

Many tools exist that can help us pinpoint these facts. Sometimes, though, it’s just easier if someone shows us a quick easy way to get relevant facts; so here goes:

Continue reading Simple System Monitoring for a Linux Desktop

A Header of Convenience

Over the years, we tend to collect little snippets of code and routines that we use, like, refine and reuse.

I’ve done just that, for (mostly) user-space and kernel programming on the 2.6 / 3.x Linux kernels. Feel free to use it. Please do get back with any bugs you find, suggestions, etc.

License: GPL / LGPL

Click here to view the code!

There are macros / functions to:

  • make debug prints along with function name and line# info (via the usual printk() or trace_printk()) – only if DEBUG mode is On
    • [EDIT] : rate-limiting is turned Off by default (else we risk missing some prints); where that’s safe, rate-limited printk’s are preferable
  • dump the kernel-mode stack
  • print the current context (process or interrupt along with flags in the form that ftrace uses)
  • a simple assert() macro (!)
  • a cpu-intensive DELAY_LOOP (useful for test rigs that must spin on the processor)
  • an equivalent to usermode sleep functionality
  • a function to calculate the time delta given two timestamps (timeval structs)
  • convert decimal to binary, and
  • a few more.

Whew 🙂

Edit: removed the header listing inline here; it’s far more convenient to just view it online here.

Linux Tools for the serious Systems Programmer

Tools that help when developing code (systems programming) on the Linux OS: a compilation by Kaiwan N Billimoria.


Tool | Type | ARM support (on target)? | Notes
find/grep | Source code browsers | Y – busybox | Source; reqd on host dev system only
cscope | | NA |
ctags | | NA |
splint (prev LCLint) | Source code static analysis (FOSS) | NA |
Coverity / Klocwork / etc | Source code static analysis (commercial) | ? |
strace | Application trace | Y |
ltrace | | Y |
[f]printf | Application – simple instrumentation | Y | Code-based
My “MSG” and other macros | Header file | Y | Useful
gdb | Source-level debuggers | Y | Usually on host dev system only
ddd | | ? |
Insight | | ? |
ps | Process state | Y – busybox |
pgrep, pkill | | Y – busybox |
pstree | | ? |
top | | Y |
pidstat | | ? |
procfs | System state / performance tuning | |
vmstat | (generic) | Y |
dstat | | | Tip: dstat --time --top-io-adv --top-cpu --top-mem 5 (every 5s)
iotop, iostat, ionice | Disk I/O | Y – buildroot |
sar | | ? | package: sysstat
lsof | | ? |
Valgrind | Memory checkers and analysis | Y | ver 3.7 on buildroot; only for Cortex A8/A9 && kernel ver < 3.x; considered the best OSS memory checker suite
Electric Fence | | ? |
Dmalloc | | Y |
mtrace | | Y |
iftop | Network monitoring, etc | ? |
iptraf | | ? |
netstat | | Y – netstat-nat |
ethtool | | Y |
tcpdump | | Y |
wireshark | Ethernet, USB sniffer | N | GUI – on host
Also, BTW, here’s a nice link:

16 commands to check hardware information on Linux


Tool | Type | ARM support (on target)? | Notes
printk | Kernel – simple instrumentation | Y | Kernel code-based debugging techniques [note: recommend you use debugfs and not procfs for debug-related stuff]
My “MSG” and other macros | Header file | Y | Useful
procfs | Kernel analysis & tuning w/ sysctl | Y |
ioctl | | Y |
debugfs | Recommended | Y |
Magic SysRq | During development / system lockups | Y |
gdb with /proc/kcore | Kernel lookup | Y |
KGDB | Kernel development debugging | Y |
KProbes, JProbes | Non-intrusive kernel hooks | Y | V useful; for learning / debugging
SystemTap | Kernel scriptable tracing/probing instrumentation tool | ? | (AFAIK, layered on Kprobes)
Ftrace | Kernel trace framework | Y |
OProfile | Kernel and app profiler | ? |
LTTng | Linux Trace Toolkit next gen – instrumentation | ? |
Kdump, Kexec and Crash | Crash dump and analysis | Y | kexec, crash – on host
Perf / Perfmon2 | HW-based performance monitoring | Y (limited?) | Arch-independent
cpufreq | Power management | |
CGroups | Scheduler | Y |
proc – sysctl | | Y |
chrt | | Y – buildroot |
cpuset, taskset | | Y – buildroot |
sparse | Kernel-space static code analysis | NA – src | Reqd on dev host only
QEMU | Virtualization, open source | Y |
VirtualBox | | ? |
Tip: Using buildroot, enable the packages/features you want for your embedded target!
Kaiwan N Billimoria, kaiwanTECH.

A quick-ref pic from Brendan Gregg’s fantastic site on Linux Performance tools (and Linux performance monitoring in general):



Tech musings, hands-on, mostly Linux