Home Row Computing

Update August, 2013: Stick Shift is a product that implements this on Windows.

This post teaches you how to set up your computer so that you can generate any keystroke or key combination without taking your hands off home row (the 'asdf' row of keys). You can then use the arrow keys, page up/down, shortcuts, and more while in regular typing position. It works across applications, allowing you to type and move about much faster. I call it 'home row computing' and it's something I started doing after I learned touch typing.

I'll explain the idea and give step-by-step instructions for Windows systems. The first step is to do what power users have done all along: reclaim the Caps Lock key. It is a fine piece of keyboard real estate gone completely to waste, like a landfill in the middle of Manhattan. We must also do some key remapping:

Keyboard Remapping

The idea is to change Caps Lock so that it can be combined with other accessible keys to produce all of the faraway keys shown above, plus frequent key combinations and anything else you might want. When I say "map to home row" I mean mapping to keys that can be reached while your hands remain on home row. Nearby letter keys are fine too. For example, Caps Lock+j could become the Up arrow and Caps Lock+e could become Alt+F4.

It is common to turn Caps Lock into Control, but we don't actually want that because nearly all Control+X combinations are already taken by various programs, so we wouldn't be able to do much remapping. Some Control+X combinations are very high yield (e.g., the copy/paste/cut combos) so we'll remap those onto home row. I think the best substitute for Caps Lock is the "Apps Key". Since no software uses it as a shortcut ingredient, we can hijack it without fear. Also, its normal role is to open a right-click context menu, so that's a useful thing to have on home row as it helps you stay clear of the mouse.

So download SharpKeys, run it, and turn Caps Lock into the Application Key as shown below:

SharpKeys screenshot, remap Application Key onto Caps Lock

OK, then click "Write to Registry", log off and back on, and you're in business. The second step is to download the outstanding AutoHotKey to run our remapping script. AutoHotKey is a must-have for Windows power users, featured repeatedly on Lifehacker. Since it is a full-on automation and scripting engine, there are many ways to do things. However, its native key-remapping syntax (OldKey::RemappedKey) doesn't work for us here, since we want to map key combinations into single keys. But you don't have to worry about that: you can just download my ready-made script for either the qwerty (normal) keyboard layout or the Dvorak (freak) layout. Put it anywhere in your file system, then create a shortcut to the script in the Programs > Startup folder so it runs when you log on to Windows. By inspecting the script you'll be able to tweak it to your tastes. Here's what it does out of the box:

Home Row Computing Cheatsheet

Thanks to AutoHotKey's eliteness the layout works well in practice. The timing is not quirky at all and there are zero misfires. If you hold down Caps Lock, you can press the other keys repeatedly and they'll be remapped. Let go of Caps Lock and they're back to normal. The script also handles modifiers such as Alt and Shift being pressed along with the key combinations. It's pretty transparent. If you want to actually toggle caps lock, then Windows Key + Caps Lock will do the trick.

It is a joy to have navigation keys on the home row. It makes browsing, programming and writing much smoother. You can fly through dialogs. Having Esc nearby is not bad either. All in all, I can't imagine going back to a regular keyboard. Given AutoHotKey's power you can write scripts to handle key combinations so there are many possibilities. I adapted to this thing pretty fast; the symmetries between page up/up arrow, home/left and so on helped a bit. Again, it's trivial to pick your own bindings, take a look at other ideas for cursor movement and cook up your own scheme if you wish.

Some quick thoughts regarding text editors. When I first did this I was an Emacs user, so having a convenient Control key was crucial, but I still think it's better to turn Caps Lock into the Apps Key instead. Then you can pick and choose your macro bindings between Control and Caps Lock, and given AutoHotKey's power you have a lot of leeway. I have since switched to Vim (apostasy!!) and never looked back. Vim users will recognize the beauty of having arrow keys on the home row, but will probably barf at the exact key choices I used since hjkl is burned into their brains. Just edit the script. You'll like navigating without switching modes, and Esc on the home row is great for when you do switch. In Visual Studio this works seamlessly without interfering with any of the native shortcuts.

I've tried this out on Windows XP, Server 2003, Vista, and Server 2008, both 32- and 64-bit flavors. No problems that I know of, but use it at your own risk. The script works over PuTTY, so the keys are available on the command line and in Vim on all of the Unix boxes I use. If you know of similar approaches in other OSes, I'd love to hear about them in the comments. Hope this is useful!

Update: Amjith pointed me to XKeymacs, a tool that implements the same idea but for emacs keybindings (you could do a similar thing via AutoHotKey, but this is convenient for sure). Also, you guys in the comments are all in favor of Vim-style HJKL bindings, so if anyone makes a script to do that, I'd be happy to host it.

Update 2: Simon Scarfe has posted Vim-like bindings for qwerty here. See the comments for details. Thanks!

Update 3: Paul has posted a Linux implementation of this in a comment below. Sweet.


What Your Computer Does While You Wait

This post takes a look at the speed - latency and throughput - of various subsystems in a modern commodity PC, an Intel Core 2 Duo at 3.0 GHz. I hope to give a feel for the relative speed of each component and a cheatsheet for back-of-the-envelope performance calculations. I've tried to show real-world throughputs (the sources are posted as a comment) rather than theoretical maximums. Time units are nanoseconds (ns, 10^-9 seconds), milliseconds (ms, 10^-3 seconds), and seconds (s). Throughput units are in megabytes and gigabytes per second. Let's start with CPU and memory, the north of the northbridge:

Latency and throughput in an Intel Core 2 Duo computer, North Side

The first thing that jumps out is how absurdly fast our processors are. Most simple instructions on the Core 2 take one clock cycle to execute, hence a third of a nanosecond at 3.0 GHz. For reference, light only travels ~4 inches (10 cm) in the time taken by a clock cycle. It's worth keeping this in mind when you're thinking of optimization - instructions are comically cheap to execute nowadays.
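To make those two numbers concrete, here's a quick back-of-the-envelope check in Python, using the rounded 3.0 GHz clock and ~3×10^8 m/s for the speed of light:

```python
# Back-of-envelope: duration of one clock cycle at 3.0 GHz and how far
# light travels in that time. Rounded values, for illustration only.

clock_hz = 3.0e9                  # 3.0 GHz Core 2
cycle_s = 1.0 / clock_hz          # seconds per clock cycle
speed_of_light_m_s = 3.0e8        # ~3 x 10^8 m/s

cycle_ns = cycle_s * 1e9                        # ~0.33 ns per cycle
light_cm = speed_of_light_m_s * cycle_s * 100   # centimeters per cycle

print(f"one cycle: {cycle_ns:.2f} ns")      # ~0.33 ns
print(f"light travels: {light_cm:.0f} cm")  # ~10 cm, about 4 inches
```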

As the CPU works away, it must read from and write to system memory, which it accesses via the L1 and L2 caches. The caches use static RAM, a much faster (and more expensive) type of memory than the DRAM used for main system memory. The caches are part of the processor itself, and for the pricier memory we get very low latency. One way in which instruction-level optimization is still very relevant is code size. Due to caching, there can be massive performance differences between code that fits wholly into the L1/L2 caches and code that needs to be marshalled into and out of the caches as it executes.

Normally when the CPU needs to touch the contents of a memory region they must either be in the L1/L2 caches already or be brought in from the main system memory. Here we see our first major hit, a massive ~250 cycles of latency that often leads to a stall, when the CPU has no work to do while it waits. To put this into perspective, reading from L1 cache is like grabbing a piece of paper from your desk (3 seconds), L2 cache is picking up a book from a nearby shelf (14 seconds), and main system memory is taking a 4-minute walk down the hall to buy a Twix bar.
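The analogy works by mapping one clock cycle to roughly one second of human time. With approximate latencies of 3 cycles for L1, 14 cycles for L2, and ~250 cycles for main memory (the rough figures behind the analogy, not measurements), the conversion is mechanical:

```python
# The office analogy: one clock cycle ~ one second of human time.
# Latency figures below are approximations used for the analogy.

CYCLE_TO_SECONDS = 1  # the analogy's scale factor

latencies_cycles = {"L1 cache": 3, "L2 cache": 14, "main memory": 250}

for name, cycles in latencies_cycles.items():
    seconds = cycles * CYCLE_TO_SECONDS
    print(f"{name}: {cycles} cycles -> {seconds} s ({seconds / 60:.1f} min)")
# main memory works out to 250 s, i.e. the ~4-minute walk down the hall
```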

The exact latency of main memory is variable and depends on the application and many other factors. For example, it depends on the CAS latency and specifications of the actual RAM stick that is in the computer. It also depends on how successful the processor is at prefetching - guessing which parts of memory will be needed based on the code that is executing and having them brought into the caches ahead of time.

Looking at L1/L2 cache performance versus main memory performance, it is clear how much there is to gain from larger L2 caches and from applications designed to use it well. For a discussion of all things memory, see Ulrich Drepper's What Every Programmer Should Know About Memory (pdf), a fine paper on the subject.

People refer to the bottleneck between CPU and memory as the von Neumann bottleneck. Now, the front side bus bandwidth, ~10GB/s, actually looks decent. At that rate, you could read all of 8GB of system memory in less than one second or read 100 bytes in 10ns. Sadly this throughput is a theoretical maximum (unlike most others in the diagram) and cannot be achieved due to delays in the main RAM circuitry. Many discrete wait periods are required when accessing memory. The electrical protocol for access calls for delays after a memory row is selected, after a column is selected, before data can be read reliably, and so on. The use of capacitors calls for periodic refreshes of the data stored in memory lest some bits get corrupted, which adds further overhead. Certain consecutive memory accesses may happen more quickly but there are still delays, and more so for random access. Latency is always present.
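Those two front-side-bus figures are easy to sanity-check (a sketch using the ~10GB/s theoretical peak from the diagram):

```python
# Sanity-check the FSB numbers: at ~10 GB/s, how long would it take to
# read all 8 GB of RAM, and how many bytes move in 10 ns?
# (Theoretical peak bandwidth; real memory access never sustains this.)

fsb_bytes_per_s = 10e9   # ~10 GB/s front side bus
ram_bytes = 8e9          # 8 GB of system memory

time_full_read_s = ram_bytes / fsb_bytes_per_s   # 0.8 s
bytes_in_10ns = fsb_bytes_per_s * 10e-9          # 100 bytes

print(f"read all 8 GB: {time_full_read_s:.1f} s")
print(f"bytes moved in 10 ns: {bytes_in_10ns:.0f}")
```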

Down in the southbridge we have a number of other buses (e.g., PCIe, USB) and peripherals connected:

Latency and throughput in an Intel Core 2 Duo computer, South Side

Sadly the southbridge hosts some truly sluggish performers, for even main memory is blazing fast compared to hard drives. Keeping with the office analogy, waiting for a hard drive seek is like leaving the building to roam the earth for one year and three months. This is why so many workloads are dominated by disk I/O and why database performance can drive off a cliff once the in-memory buffers are exhausted. It is also why plentiful RAM (for buffering) and fast hard drives are so important for overall system performance.

While the "sustained" disk throughput is real in the sense that it is actually achieved by the disk in real-world situations, it does not tell the whole story. The bane of disk performance is the seek, which involves moving the read/write heads across the platter to the right track and then waiting for the platter to spin around to the right position so that the desired sector can be read. Disk RPMs refer to the speed of rotation of the platters: the faster the RPMs, the less time you wait on average for the rotation to bring you the desired sector, hence higher RPMs mean faster disks. A cool place to read about the impact of seeks is the paper where a couple of Stanford grad students describe the Anatomy of a Large-Scale Hypertextual Web Search Engine (pdf).

When the disk is reading one large continuous file it achieves greater sustained read speeds due to the lack of seeks. Filesystem defragmentation aims to keep files in continuous chunks on the disk to minimize seeks and boost throughput. When it comes to how fast a computer feels, sustained throughput is less important than seek times and the number of random I/O operations (reads/writes) that a disk can do per time unit. Solid state disks can make for a great option here.
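To see why RPMs and seeks dominate the "feel" of a disk, here's a rough model. The 7200 RPM and ~8 ms average seek time are assumed typical figures for a consumer drive of the era, not measurements from a specific disk:

```python
# Rough disk-latency model. Average rotational latency is the time for
# half a revolution; random access time is seek + rotational latency.
# 7200 RPM and ~8 ms average seek are assumed, typical figures.

rpm = 7200
avg_seek_s = 0.008                 # assumed ~8 ms average seek

rev_time_s = 60.0 / rpm            # one full revolution: ~8.33 ms
rot_latency_s = rev_time_s / 2     # average wait: ~4.17 ms

access_time_s = avg_seek_s + rot_latency_s
iops = 1.0 / access_time_s         # rough random I/O operations per second

print(f"rotational latency: {rot_latency_s * 1000:.2f} ms")
print(f"random access time: {access_time_s * 1000:.2f} ms")
print(f"roughly {iops:.0f} random IOPS")
```

This is why a faster-spinning drive feels snappier even when its sustained throughput is similar: the rotational term shrinks directly with RPM.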

Hard drive caches also help performance. Their tiny size - a 16MB cache in a 750GB drive covers only 0.002% of the disk - suggests they're useless, but in reality their contribution is to allow a disk to queue up writes and then perform them in one bunch, thereby allowing the disk to plan the order of the writes in a way that - surprise - minimizes seeks. Reads can also be grouped in this way for performance, and both the OS and the drive firmware engage in these optimizations.

Finally, the diagram has various real-world throughputs for networking and other buses. Firewire is shown for reference but is not available natively in the Intel X48 chipset. It's fun to think of the Internet as a computer bus. The latency to a fast website (say, google.com) is about 45ms, comparable to hard drive seek latency. In fact, while hard drives are 5 orders of magnitude removed from main memory, they're in the same magnitude as the Internet. Residential bandwidth still lags behind that of sustained hard drive reads, but the 'network is the computer' in a pretty literal sense now. What happens when the Internet is faster than a hard drive?

I hope this diagram is useful. It's fascinating for me to look at all these numbers together and see how far we've come. Sources are posted as a comment. I posted a full diagram showing both north and south bridges here if you're interested.


First Recorded Usage of "Hacker"

Here's the first known recorded usage of the word "hacker" in the tech sense, published in 1963 in MIT's The Tech newspaper:

First recorded usage of hacker

It was tracked down by Fred Shapiro, editor of The Yale Dictionary Of Quotations and author of a paper with a most hilarious and offensive name. The MIT article dispels the common notion that "hacker" was a purely white-hat term later corrupted by the media. The black-hat connotation was there early on; Richard Stallman was 10 years old when this was printed.

Most of what the article describes is now known as phone phreaking, though some of it sounds like war dialing. It is fascinating that this was taking place in '63, over four decades ago! The international phreaking scene was still strong through the late 90s, abusing home country direct lines with software like BlueBEEP ("when freedom is outlawed, only outlaws will be free"), plus cell phone cloning and whatnot. That's nearly 40 years of phreaking, though phone companies have since managed to stop widespread fraud.

A hacker Hacking

This of course doesn't "prove" that the black-hat meaning is the "true" meaning of hacker, just as "one who is employed doing mathematical calculations" is not the "true" meaning of computer. Language is fluid. The New Hacker's Dictionary has this to say in the word's definition:

  1. A person who enjoys exploring the details of programmable systems and how to stretch their capabilities, as opposed to most users, who prefer to learn only the minimum necessary.
  2. One who programs enthusiastically (even obsessively) or who enjoys programming rather than just theorizing about programming. (...)
  3. [deprecated] A malicious meddler who tries to discover sensitive information by poking around. Hence "password hacker", "network hacker". The correct term for this sense is cracker.

The white-hat definitions are popular among geeks, but I'm not so sure about the deprecation. For one thing, cracker has a precise and also popular meaning: one who removes copyright protection from software. Meanwhile, blackhats aren't exactly rolling over to surrender their language either. From the latest Phrack issue:

So no, I wasn't that kid that used to hang out at Radio Shack pulling apart electronic equipment and reassembling it to "see how it works." (...) that doesn't make you a "hacker" - it makes you a wannabe EE undergrad. (...) Hacking boxes makes you a "hacker" ! That's right! Write your local representatives at Wikipedia / urbandictionary / OED and let them know that hackers are people that gain unauthorized access/privileges to computerized systems!
The whole thing would offend most people, so no link. It's readily googable but be careful about browser exploits, you never know. As a neutral party, Wikipedia has sensible guidelines when it comes to controversial names:

A city, country, people or person, by contrast, is a self-identifying entity: it has a preferred name for itself. The city formerly called Danzig now calls itself Gdansk; the man formerly known as Cassius Clay now calls himself Muhammad Ali. These names are not simply arbitrary terms but are key statements of an entity's own identity. This should always be borne in mind when dealing with controversies involving self-identifying names. (...)

A number of objective criteria can be used to determine common or self-identifying usage: Is the name in common usage in English? (...) Is it the name used by the subject to describe itself or themselves?

These criteria apply squarely to both types of "hackers." Common usage? Check. Used to describe themselves? Definitely. The word is now hopelessly ambiguous, as it seems to have been from the start, puzzling outsiders. But it's always clear to hackers.


Richard Feynman's Modest Science

When I was 18 and newly arrived in the US, I used to wander around enjoying new features like the rule of law and great libraries everywhere. Once while bumming around in North Denver I went into the Regis University library determined to read about physics. I had tried that once before, back in my high school, with poor results. As a teenager I had been obsessed with "understanding" physics and chemistry, especially atomic and quantum theory. I didn't know enough math to study the subjects deeply, but I wanted a conceptual grasp, however incomplete, that was at least half-way consistent and clear.

My high school books and classes left me with the strong feeling that I simply did not get physics. Try as I might, I could not accept the bizarre results of quantum mechanics, wave-particle duality, or how Heisenberg's uncertainty principle could even be science. I was baffled. When I brought it up with teachers, they had ready-made analogies to "teach" what happened in this sub-atomic world. "Think of the solar system," "think of springs connected to each other," "well, it's like this, suppose you have..." These analogies didn't help at all. "You think too concretely, that's why you can't visualize it," a teacher told me.

So I'd sit there and try things, think nonverbally, think in wild shapes, somehow think differently to see if I could imagine a sub-atomic particle and "get it." No go. I wondered whether programming had perhaps damaged my mind by making it inflexible. I went to the school library and found a more advanced physics book, a bit tattered but no matter. I quit reading when I realized the book still assumed the existence of the ether. "Screw this," I thought. So I flipped off the science bit, kept to my computers, and carried on.

Feynman Lectures on Physics

But here I was in the USA, land of opportunity and well-stocked libraries. Looking in the physics section I saw "The Feynman Lectures on Physics" sitting there, three volumes. I had a vague idea of who Feynman was, so I picked up the books and went straight to Volume 3, Chapter 1, Quantum Behavior. In the very first page he comes right out and says:

Things on a very small scale behave like nothing that you have any direct experience about. They do not behave like waves, they do not behave like particles, they do not behave like clouds, or billiard balls, or weights on springs, or like anything that you have ever seen. (...)

Because atomic behavior is so unlike ordinary experience, it is very difficult to get used to, and it appears peculiar and mysterious to everyone--both to the novice and the experienced physicist. Even the experts do not understand it the way they would like to, and it is perfectly reasonable that they should not, because all of direct human experience and human intuition applies to large objects. We know how large objects will act, but things on a small scale just do not act that way.

I felt a rush of enthusiasm reading this. It was so humble and visceral and honest. This was science in a way I had never seen before, simultaneously more rigorous and human. That first page alone took a sledgehammer to my worldview and started rebuilding it. Perhaps childishly, I thought of the Hacker's Manifesto: "we've been spoon-fed baby food at school when we hungered for steak." I had just found one hell of a juicy steak. At one point Feynman asks students to imagine the various electromagnetic fields and waves in the classroom: coming from the earth's interior, carrying radio and TV signals, traveling from warm foreheads to the blackboard, and so on. Then he says:

I have asked you to imagine these electric and magnetic fields. What do you do? Do you know how? How do I imagine the electric and magnetic field? What do I actually see? What are the demands of the scientific imagination? Is it any different from trying to imagine that the room is full of invisible angels? No, it is not like imagining invisible angels. It requires a much higher degree of imagination (...). Why? Because invisible angels are understandable. (...) So you say, "Professor, please give me an approximate description of the electromagnetic waves, even though it may be slightly inaccurate, so that I too can see them as well as I can see almost-invisible angels. Then I will modify the picture to the necessary abstraction."

I'm sorry I can't do that for you. I don't know how. I have no picture of this electromagnetic field that is in any sense accurate. (...) So if you have some difficulty in making such a picture, you should not be worried that your difficulty is unusual.

Volume 2, pages 20-9 and 20-10

Surely you're joking - you don't know?? I could hardly believe what I was reading. I had been hoping for a better explanation - a masterful analogy of weights on springs that would allow me to really understand physics. Instead, here was a Nobel laureate telling me that he didn't really understand it either - not in the definite, make-believe fashion of high school science. Feynman lifted the veil for me - all my sanitized textbooks and uninspired teachers presented science with finality and devoid of context, as if the gods had handed down a few scientific models to us. Analogies that were meant to "help understand" reality had in fact supplanted it; it was not simplification, but a gross distortion of what science really is. This fake teaching would never say that atomic behavior is "peculiar and mysterious" because "human intuition applies to large objects." No, its entire aim was to pretend that science is not mysterious.

Feynman embraces the whole of science: its beauty, its methods, the history and relationships of its ideas, how our minds react to it, and above all how it stands before the ultimate judge, nature. He's at once fiercely empirical yet mindful of the crucial human context surrounding scientific ideas. The lectures are not only great technical writing but also a deep look into how we think about the world, into reason and the nature of knowledge. Of course, much of the work is to be done with paper, pencil, and math. Back then I didn't even know calculus, so I couldn't really follow all the equations. But the books still gave me what I was looking for, and then some.

Now I have an undergrad degree in math, which puts me roughly in the 18th century, but better equipped to learn on my own. Some day I hope to take time off and hit physics again. If you want to read more of his stuff, Feynman wrote an insightful essay on engineering and there's the classic Cargo Cult Science, both online. Amazon has the lectures along with other books, and so might your local library. :)


CPU Rings, Privilege, and Protection

You probably know intuitively that applications have limited powers in Intel x86 computers and that only operating system code can perform certain tasks, but do you know how this really works? This post takes a look at x86 privilege levels, the mechanism whereby the OS and CPU conspire to restrict what user-mode programs can do. There are four privilege levels, numbered 0 (most privileged) to 3 (least privileged), and three main resources being protected: memory, I/O ports, and the ability to execute certain machine instructions. At any given time, an x86 CPU is running in a specific privilege level, which determines what code can and cannot do. These privilege levels are often described as protection rings, with the innermost ring corresponding to highest privilege. Most modern x86 kernels use only two privilege levels, 0 and 3:

x86 Protection Rings

About 15 machine instructions, out of dozens, are restricted by the CPU to ring zero. Many others have limitations on their operands. These instructions can subvert the protection mechanism or otherwise foment chaos if allowed in user mode, so they are reserved to the kernel. An attempt to run them outside of ring zero causes a general-protection exception, like when a program uses invalid memory addresses. Likewise, access to memory and I/O ports is restricted based on privilege level. But before we look at protection mechanisms, let's see exactly how the CPU keeps track of the current privilege level, which involves the segment selectors from the previous post. Here they are:

x86 Segment Selectors
Segment Selectors - Data and Code

The full contents of data segment selectors are loaded directly by code into various segment registers such as ss (stack segment register) and ds (data segment register). This includes the contents of the Requested Privilege Level (RPL) field, whose meaning we tackle in a bit. The code segment register (cs) is, however, magical. First, its contents cannot be set directly by load instructions such as mov, but rather only by instructions that alter the flow of program execution, like call. Second, and importantly for us, instead of an RPL field that can be set by code, cs has a Current Privilege Level (CPL) field maintained by the CPU itself. This 2-bit CPL field in the code segment register is always equal to the CPU's current privilege level. The Intel docs wobble a little on this fact, and sometimes online documents confuse the issue, but that's the hard and fast rule. At any time, no matter what's going on in the CPU, a look at the CPL in cs will tell you the privilege level code is running with.

Keep in mind that the CPU privilege level has nothing to do with operating system users. Whether you're root, Administrator, guest, or a regular user, it does not matter. All user code runs in ring 3 and all kernel code runs in ring 0, regardless of the OS user on whose behalf the code operates. Sometimes certain kernel tasks can be pushed to user mode, for example user-mode device drivers in Windows Vista, but these are just special processes doing a job for the kernel and can usually be killed without major consequences.

Due to restricted access to memory and I/O ports, user mode can do almost nothing to the outside world without calling on the kernel. It can't open files, send network packets, print to the screen, or allocate memory. User processes run in a severely limited sandbox set up by the gods of ring zero. That's why it's impossible, by design, for a process to leak memory beyond its existence or leave open files after it exits. All of the data structures that control such things - memory, open files, etc - cannot be touched directly by user code; once a process finishes, the sandbox is torn down by the kernel. That's why our servers can have 600 days of uptime - as long as the hardware and the kernel don't crap out, stuff can run forever. This is also why Windows 95 / 98 crashed so much: it's not because "M$ sucks" but because important data structures were left accessible to user mode for compatibility reasons. It was probably a good trade-off at the time, albeit at high cost.

The CPU protects memory at two crucial points: when a segment selector is loaded and when a page of memory is accessed with a linear address. Protection thus mirrors memory address translation where both segmentation and paging are involved. When a data segment selector is being loaded, the check below takes place:

x86 Segment Protection

Since a higher number means less privilege, MAX() above picks the least privileged of CPL and RPL and compares it to the descriptor privilege level (DPL). If the DPL is higher or equal, then access is allowed. The idea behind RPL is to allow kernel code to load a segment using lowered privilege. For example, you could use an RPL of 3 to ensure that a given operation uses segments accessible to user mode. The exception is the stack segment register ss, for which all three of CPL, RPL, and DPL must match exactly.
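The check is simple enough to model in a few lines (a sketch of the rule, not the CPU's actual implementation):

```python
# Model of the x86 data segment privilege check: access is allowed when
# the least privileged of CPL and RPL (i.e. the numeric max) is still
# privileged enough for the descriptor. Higher number = less privilege.

def can_load_data_segment(cpl: int, rpl: int, dpl: int) -> bool:
    """Return True if a data segment with the given DPL may be loaded."""
    return max(cpl, rpl) <= dpl

# Kernel code (CPL 0) loading a user segment (DPL 3) with RPL 3: allowed.
assert can_load_data_segment(cpl=0, rpl=3, dpl=3)
# User code (CPL 3) trying to load a kernel segment (DPL 0): denied.
assert not can_load_data_segment(cpl=3, rpl=0, dpl=0)
```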

In truth, segment protection scarcely matters because modern kernels use a flat address space where the user-mode segments can reach the entire linear address space. Useful memory protection is done in the paging unit when a linear address is converted into a physical address. Each memory page is a block of bytes described by a page table entry containing two fields related to protection: a supervisor flag and a read/write flag. The supervisor flag is the primary x86 memory protection mechanism used by kernels. When it is on, the page cannot be accessed from ring 3. While the read/write flag isn't as important for enforcing privilege, it's still useful. When a process is loaded, pages storing binary images (code) are marked as read only, thereby catching some pointer errors if a program attempts to write to these pages. This flag is also used to implement copy on write when a process is forked in Unix. Upon forking, the parent's pages are marked read only and shared with the forked child. If either process attempts to write to the page, the processor triggers a fault and the kernel knows to duplicate the page and mark it read/write for the writing process.
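The two page-level checks can be sketched as a toy model. This is simplified: real page table entries carry more bits, and the rules interact with processor settings ignored here:

```python
# Toy model of the paging protection described above: a page table
# entry's supervisor and read/write flags gate each access.
# (Simplified; real PTEs have more fields and modes.)

def page_access_ok(supervisor: bool, writable: bool,
                   cpl: int, is_write: bool) -> bool:
    if supervisor and cpl == 3:    # supervisor pages are off-limits to ring 3
        return False
    if is_write and not writable:  # read-only pages reject writes
        return False
    return True

# Ring 3 reading a user-mode, read-only code page: fine.
assert page_access_ok(supervisor=False, writable=False, cpl=3, is_write=False)
# Ring 3 writing that page (e.g. a copy-on-write fault): trapped.
assert not page_access_ok(supervisor=False, writable=False, cpl=3, is_write=True)
# Ring 3 touching a kernel page: trapped.
assert not page_access_ok(supervisor=True, writable=True, cpl=3, is_write=False)
```

The copy-on-write trick falls out of the second case: the fault handler duplicates the page and flips the writable bit for the faulting process.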

Finally, we need a way for the CPU to switch between privilege levels. If ring 3 code could transfer control to arbitrary spots in the kernel, it would be easy to subvert the operating system by jumping into the wrong (right?) places. A controlled transfer is necessary. This is accomplished via gate descriptors and via the sysenter instruction. A gate descriptor is a segment descriptor of type system, and comes in four sub-types: call-gate descriptor, interrupt-gate descriptor, trap-gate descriptor, and task-gate descriptor. Call gates provide a kernel entry point that can be used with ordinary call and jmp instructions, but they aren't used much so I'll ignore them. Task gates aren't so hot either (in Linux, they are only used in double faults, which are caused by either kernel or hardware problems).

That leaves two juicier ones: interrupt and trap gates, which are used to handle hardware interrupts (e.g., keyboard, timer, disks) and exceptions (e.g., page faults, divide by zero). I'll refer to both as an "interrupt". These gate descriptors are stored in the Interrupt Descriptor Table (IDT). Each interrupt is assigned a number between 0 and 255 called a vector, which the processor uses as an index into the IDT when figuring out which gate descriptor to use when handling the interrupt. Interrupt and trap gates are nearly identical. Their format is shown below along with the privilege checks enforced when an interrupt happens. I filled in some values for the Linux kernel to make things concrete.

Interrupt Descriptor with Privilege Check

Both the DPL and the segment selector in the gate regulate access, while segment selector plus offset together nail down an entry point for the interrupt handler code. Kernels normally use the segment selector for the kernel code segment in these gate descriptors. An interrupt can never transfer control from a more-privileged to a less-privileged ring. Privilege must either stay the same (when the kernel itself is interrupted) or be elevated (when user-mode code is interrupted). In either case, the resulting CPL will be equal to the DPL of the destination code segment; if the CPL changes, a stack switch also occurs. If an interrupt is triggered by code via an instruction like int n, one more check takes place: the gate DPL must be at the same or lower privilege as the CPL. This prevents user code from triggering random interrupts. If these checks fail - you guessed it - a general-protection exception happens. All Linux interrupt handlers end up running in ring zero.
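The software interrupt check described above boils down to one comparison (a sketch; remember that lower privilege means a higher number, and hardware interrupts skip this gate DPL check):

```python
# Model of the "int n" privilege check: the gate's DPL must be at the
# same or lower privilege than CPL, i.e. numerically DPL >= CPL,
# otherwise the CPU raises a general-protection exception.

def int_n_allowed(cpl: int, gate_dpl: int) -> bool:
    """Can code running at privilege cpl trigger this gate via `int n`?"""
    return gate_dpl >= cpl

# A system-call style gate with DPL 3: usable from user mode (ring 3).
assert int_n_allowed(cpl=3, gate_dpl=3)
# A gate with DPL 0: `int n` from ring 3 faults with a GP exception.
assert not int_n_allowed(cpl=3, gate_dpl=0)
```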

During initialization, the Linux kernel first sets up an IDT in setup_idt() that ignores all interrupts. It then uses functions in include/asm-x86/desc.h to flesh out common IDT entries in arch/x86/kernel/traps_32.c. In Linux, a gate descriptor with "system" in its name is accessible from user mode and its set function uses a DPL of 3. A "system gate" is an Intel trap gate accessible to user mode. Otherwise, the terminology matches up. Hardware interrupt gates, however, are not set here but in the appropriate drivers.

Three gates are accessible to user mode: vectors 3 and 4 are used for debugging and checking for numeric overflows, respectively. Then a system gate is set up for the SYSCALL_VECTOR, which is 0x80 for the x86 architecture. This was the mechanism for a process to transfer control to the kernel, to make a system call, and back in the day I applied for an "int 0x80" vanity license plate :). Starting with the Pentium Pro, the sysenter instruction was introduced as a faster way to make system calls. It relies on special-purpose CPU registers that store the code segment, entry point, and other tidbits for the kernel system call handler. When sysenter is executed the CPU does no privilege checking, going immediately into CPL 0 and loading new values into the registers for code and stack (cs, eip, ss, and esp). Only ring zero can load the sysenter setup registers, which is done in enable_sep_cpu().

Finally, when it's time to return to ring 3, the kernel issues an iret or sysexit instruction to return from interrupts and system calls, respectively, thus leaving ring 0 and resuming execution of user code with a CPL of 3. Vim tells me I'm approaching 1,900 words, so I/O port protection is for another day. This concludes our tour of x86 rings and protection. Thanks for reading!


Memory Translation and Segmentation

This post is the first in a series about memory and protection in Intel-compatible (x86) computers, going further down the path of how kernels work. As in the boot series, I'll link to Linux kernel sources but give Windows examples as well (sorry, I'm ignorant about the BSDs and the Mac, but most of the discussion applies). Let me know what I screw up.

In the chipsets that power Intel motherboards, memory is accessed by the CPU via the front side bus, which connects it to the northbridge chip. The memory addresses exchanged in the front side bus are physical memory addresses, raw numbers from zero to the top of the available physical memory. These numbers are mapped to physical RAM sticks by the northbridge. Physical addresses are concrete and final - no translation, no paging, no privilege checks - you put them on the bus and that's that. Within the CPU, however, programs use logical memory addresses, which must be translated into physical addresses before memory access can take place. Conceptually address translation looks like this:

Memory address translation in x86 CPUs with paging enabled

This is not a physical diagram, only a depiction of the address translation process, specifically for when the CPU has paging enabled. If you turn off paging, the output from the segmentation unit is already a physical address; in 16-bit real mode that is always the case. Translation starts when the CPU executes an instruction that refers to a memory address. The first step is translating that logical address into a linear address. But why go through this step instead of having software use linear (or physical) addresses directly? For roughly the same reason humans have an appendix whose primary function is getting infected. It's a wrinkle of evolution. To really make sense of x86 segmentation we need to go back to 1978.

The original 8086 had 16-bit registers and its instructions used mostly 8-bit or 16-bit operands. This allowed code to work with 2^16 bytes, or 64K of memory, yet Intel engineers were keen on letting the CPU use more memory without expanding the size of registers and instructions. So they introduced segment registers as a means to tell the CPU which 64K chunk of memory a program's instructions were going to work on. It was a reasonable solution: first you load a segment register, effectively saying "here, I want to work on the memory chunk starting at X"; afterwards, 16-bit memory addresses used by your code are interpreted as offsets into your chunk, or segment. There were four segment registers: one for the stack (ss), one for program code (cs), and two for data (ds, es). Most programs were small enough back then to fit their whole stack, code, and data each in a 64K segment, so segmentation was often transparent.

Nowadays segmentation is still present and is always enabled in x86 processors. Each instruction that touches memory implicitly uses a segment register. For example, a jump instruction uses the code segment register (cs) whereas a stack push instruction uses the stack segment register (ss). In most cases you can explicitly override the segment register used by an instruction. Segment registers store 16-bit segment selectors; they can be loaded directly with instructions like MOV. The sole exception is cs, which can only be changed by instructions that affect the flow of execution, like CALL or JMP. Though segmentation is always on, it works differently in real mode versus protected mode.

In real mode, such as during early boot, the segment selector is a 16-bit number specifying the physical memory address for the start of a segment. This number must somehow be scaled, otherwise it would also be limited to 64K, defeating the purpose of segmentation. For example, the CPU could use the segment selector as the 16 most significant bits of the physical memory address (by shifting it 16 bits to the left, which is equivalent to multiplying by 2^16). This simple rule would enable segments to address 4 gigs of memory in 64K chunks, but it would increase chip packaging costs by requiring more physical address pins in the processor. So Intel made the decision to multiply the segment selector by only 2^4 (or 16), which in a single stroke confined memory to about 1MB and unduly complicated translation. Here's an example showing a jump instruction where cs contains 0x1000:

Real mode segmentation

Real mode segment starts range from 0 all the way to 0xFFFF0 (16 bytes short of 1 MB) in 16-byte increments. To these values you add a 16-bit offset (the logical address) between 0 and 0xFFFF. It follows that there are multiple segment/offset combinations pointing to the same memory location, and physical addresses fall above 1MB if your segment is high enough (see the infamous A20 line). Also, when writing C code in real mode a far pointer is a pointer that contains both the segment selector and the logical address, which allows it to address 1MB of memory. Far indeed. As programs started getting bigger and outgrowing 64K segments, segmentation and its strange ways complicated development for the x86 platform. This may all sound quaintly odd now but it has driven programmers into the wretched depths of madness.

In 32-bit protected mode, a segment selector is no longer a raw number, but instead it contains an index into a table of segment descriptors. The table is simply an array containing 8-byte records, where each record describes one segment and looks thus:

Segment descriptor

There are three types of segments: code, data, and system. For brevity, only the common features in the descriptor are shown here. The base address is a 32-bit linear address pointing to the beginning of the segment, while the limit specifies how big the segment is. Adding the base address to a logical memory address yields a linear address. DPL is the descriptor privilege level; it is a number from 0 (most privileged, kernel mode) to 3 (least privileged, user mode) that controls access to the segment.

These segment descriptors are stored in two tables: the Global Descriptor Table (GDT) and the Local Descriptor Table (LDT). Each CPU (or core) in a computer contains a register called gdtr which stores the linear memory address of the first byte in the GDT. To choose a segment, you must load a segment register with a segment selector in the following format:

Segment Selector

The TI bit is 0 for the GDT and 1 for the LDT, while the index specifies the desired segment descriptor within the table. We'll deal with RPL, Requested Privilege Level, later on. Now, come to think of it, when the CPU is in 32-bit mode registers and instructions can address the entire linear address space anyway, so there's really no need to give them a push with a base address or other shenanigan. So why not set the base address to zero and let logical addresses coincide with linear addresses? Intel docs call this "flat model" and it's exactly what modern x86 kernels do (they use the basic flat model, specifically). Basic flat model is equivalent to disabling segmentation when it comes to translating memory addresses. So in all its glory, here's the jump example running in 32-bit protected mode, with real-world values for a Linux user-mode app:

Protected Mode Segmentation

The contents of a segment descriptor are cached once they are accessed, so there's no need to actually read the GDT in subsequent accesses, which would kill performance. Each segment register has a hidden part to store the cached descriptor that corresponds to its segment selector. For more details, including more info on the LDT, see chapter 3 of the Intel System Programming Guide Volume 3a. Volumes 2a and 2b, which cover every x86 instruction, also shed light on the various types of x86 addressing operands - 16-bit, 16-bit with segment selector (which can be used by far pointers), 32-bit, etc.

In Linux, only 3 segment descriptors are used during boot. They are defined with the GDT_ENTRY macro and stored in the boot_gdt array. Two of the segments are flat, addressing the entire 32-bit space: a code segment loaded into cs and a data segment loaded into the other segment registers. The third segment is a system segment called the Task State Segment. After boot, each CPU has its own copy of the GDT. They are all nearly identical, but a few entries change depending on the running process. You can see the layout of the Linux GDT in segment.h and its instantiation is here. There are four primary GDT entries: two flat ones for code and data in kernel mode, and another two for user mode. When looking at the Linux GDT, notice the holes inserted on purpose to align data with CPU cache lines - an artifact of the von Neumann bottleneck that has become a plague. Finally, the classic "Segmentation fault" Unix error message is not due to x86-style segments, but rather invalid memory addresses normally detected by the paging unit - alas, topic for an upcoming post.

Intel deftly worked around their original segmentation kludge, offering a flexible way for us to choose whether to segment or go flat. Since coinciding logical and linear addresses are simpler to handle, they became standard, such that 64-bit mode now enforces a flat linear address space. But even in flat mode segments are still crucial for x86 protection, the mechanism that defends the kernel from user-mode processes and every process from each other. It's a dog eat dog world out there! In the next post, we'll take a peek at protection levels and how segments implement them.

Thanks to Nate Lawson for a correction in this post.


Lucky to be a Programmer

For the past few weeks I've been working with a fellow developer on a project that required an all-out programming effort. It's done now, so we're back to a regular schedule, but when people hear about the crazy hours they often say they're sorry. They really shouldn't be. I would never do this often, or for long periods, or without proper compensation if done for an employer, but the truth is that these programming blitzkriegs are some of my favorite periods in life. Under the right conditions, writing software is so intensely pleasurable it should be illegal.

Many programmers relate to this, but others are taken aback when they hear it. I think it's because institutions are so good at squeezing the fun out of everything. It's appalling for example how schools can take the most vibrant topics and mangle them into formulaic, mediocre slog. And so it is for programming. Many corporations turn an inherently rewarding experience into something people just barely stomach in exchange for a paycheck.

That's too bad. Few things are better than spending time in a creative haze, consumed by ideas, watching your work come to life, going to bed eager to wake up quickly and go try things out. I am not suggesting that excessive hours are needed or even advisable; a sane schedule is a must except for occasional binges. The point is that programming is an intense creative pleasure, a perfect mixture of puzzles, writing, and craftsmanship.

Programming offers intriguing challenges and ample room for invention. Some problems are investigative and reductionist: Why is this code running slowly? What on earth is causing that bug? Others are constructive, like devising algorithms and architectures. All of them are a delight if you enjoy analytical work, immersed in a world full of beasts like malware, routers, caches, protocols, databases, graphs, and numbers.

This analytical side is what most people associate with programming. It does make it interesting, like a complex strategy game. But in most software the primary challenge is communication: with fellow programmers via code and with users via interfaces. By and large, writing code is more essay than puzzle. It is shaping your ideas and schemes into a coherent body; it is seeking clarity, simplicity and conciseness. Both code and interfaces abound with the simple joy of creation.

Another source of pleasure is that under certain conditions, beauty arises in programming. It may sound like bullshit but it's real, the kind of thing that makes your day better. Take for example Euclid's 2-line proof that prime numbers are infinite. I think many would find it beautiful - so succinct and such a fascinating result. This is the beauty of math, cold and austere, and it pervades software. It is in clever algorithms like quicksort, in the sources of kernels and compilers, in elegant exploits and in the tricks we pull to solve everyday problems. When you see these solutions, be it famous algorithm or mundane trick, you smile and think "how smart" and it feels good. How noble in reason!

A non-math sort of beauty also exists in code, analogous to eloquence in discourse. It's present in well-factored software that does a lot with little code, in short and crisp methods, in well-done architectures. Some languages make this hard and not all programmers produce it, but it's a joy to read and work on such code. If you're working in an expressive language with coworkers whose code you enjoy, it happens often enough to brighten things up.

Now for craftsmanship. In a sense software is abstract - where does program behavior exist but in our minds? Yet we call it building software for a reason. Programs are shaped feature by feature, architectures start out as scaffolds and grow, user interfaces come together, bugs are fixed and hotspots are optimized to make things run fast. Software provides a deeply satisfying sense of craft. We build stuff out of pure ideas and then get to watch it working to solve real problems and make people a little better off. Or far better off, as the case may be.

Take Biology. Despite nearly 400 years of scientific revolution, Biology has been unable to deliver on crucial problems like effective cures for viral infections or cancer. Some of our best progress, like antibiotics, has been due to chance and random experimentation. You start a clinical trial for a hypertension drug and suddenly - whoah - all your subjects have hard-ons! Viagra is born. To be sure, chance plays a role in all endeavours, but Physics and Chemistry have a comprehensive theoretical basis powering systematic improvements, whereas Biology has been largely confined to kludges. Wanna treat cancer? Here, blast the patient with radiation and poison and hopefully the cancer will die first. They're brilliant kludges, and I'm happy to have them, but it's a far cry from the precision we've had elsewhere.

Software is changing that. Just barely 50 years ago the shape of DNA was being discovered, but now anyone can browse and download hundreds of complete genome sequences. Or look up thousands of genes (DLEC1 for a random example), complete with nucleotide sequence, amino-acid sequence for expressed proteins, literature mentioning the gene, you name it! Or you can search vast gene and protein databases for nucleotide or amino-acid sequences, perhaps after sequencing something in ever-cheaper devices, and get a comprehensive report on the match. It doesn't matter if they're exact, because the algorithm in BLAST, the standard sequence search tool, delivers partial matches across databases and species, scored by match likelihood. These advances will enable massive breakthroughs in medicine. Biology is entering a new era, like Physics in the 18th century, propelled by software.

Yea, sure, biologists have a minor role :P, but we in computing increasingly power major developments in science, culture, and business. When a third-world kid looks up a Wikipedia entry, it's our work too! We wrote the RFCs and the networking stacks, the browser and MediaWiki, the OSes and the HTTP servers. Not to mention a lot of the Wikipedia entries, but since a few were on company time I'll leave them aside. The influence of technologists goes beyond bits and bytes: it was a programmer who invented wikis and our community started blogs. A.J. Liebling pointed out correctly that "freedom of the press is guaranteed only to those who own one". It's a pity he's not around to watch our creations break down the stifling conformity and cozy subservience of professional journalism. Less glamorously but to great benefit our applications have delivered steep productivity gains to businesses across the economy. These are a few examples in a long list.

Three years ago, when I finished my undergrad (after being a programmer for many years), I was about to enter med school. At that point, a couple of negative experiences had me somewhat burned out on computer work. I'm happy I stuck with it. I'm still interested in biomedical research, but if I were to get involved I'd rather come in from the software angle, because frankly it's too much fun to pass on. My mom thinks I'm a typist but oh well.

If you find yourself stuck in a place that's killing your innate passion for technology, by all means, move the hell on! Don't stay put while your enthusiasm is slowly drained. It's hard to find motivated people to hire so you've got a major asset already; there are plenty of employers - and companies to be started - that will better suit you. For people who think they might like programming, your mileage may vary, but I highly recommend it as a career. Not only is the outlook bullish on the job front, but as the role of software grows in society we'll see more exciting and beneficial changes delivered by technology. I'm delighted to be along for the ride as constantly my art and craft I try to master.

PS: thanks for putting up with the irregular posting schedule. The plan is to stick to regular posting now that things have calmed down. And if you like the song, download the mp3 because the YouTube audio doesn't do it justice.


The Kernel Boot Process

The previous post explained how computers boot up right up to the point where the boot loader, after stuffing the kernel image into memory, is about to jump into the kernel entry point. This last post about booting takes a look at the guts of the kernel to see how an operating system starts life. Since I have an empirical bent I'll link heavily to the sources for Linux kernel at the Linux Cross Reference. The sources are very readable if you are familiar with C-like syntax; even if you miss some details you can get the gist of what's happening. The main obstacle is the lack of context around some of the code, such as when or why it runs or the underlying features of the machine. I hope to provide a bit of that context. Due to brevity (hah!) a lot of fun stuff - like interrupts and memory - gets only a nod for now. The post ends with the highlights for the Windows boot.

At this point in the Intel x86 boot story the processor is running in real-mode, is able to address 1 MB of memory, and RAM looks like this for a modern Linux system:

RAM contents after boot loader is done

The kernel image has been loaded to memory by the boot loader using the BIOS disk I/O services. This image is an exact copy of the file in your hard drive that contains the kernel, e.g. /boot/vmlinuz-2.6.22-14-server. The image is split into two pieces: a small part containing the real-mode kernel code is loaded below the 640K barrier; the bulk of the kernel, which runs in protected mode, is loaded after the first megabyte of memory.

The action starts in the real-mode kernel header pictured above. This region of memory is used to implement the Linux boot protocol between the boot loader and the kernel. Some of the values there are read by the boot loader while doing its work. These include amenities such as a human-readable string containing the kernel version, but also crucial information like the size of the real-mode kernel piece. The boot loader also writes values to this region, such as the memory address for the command-line parameters given by the user in the boot menu. Once the boot loader is finished it has filled in all of the parameters required by the kernel header. It's then time to jump into the kernel entry point. The diagram below shows the code sequence for the kernel initialization, along with source directories, files, and line numbers:

Architecture-specific Linux Kernel Initialization

The early kernel start-up for the Intel architecture is in file arch/x86/boot/header.S. It's in assembly language, which is rare for the kernel at large but common for boot code. The start of this file actually contains boot sector code, a leftover from the days when Linux could work without a boot loader. Nowadays this boot sector, if executed, only prints a "bugger_off_msg" to the user and reboots. Modern boot loaders ignore this legacy code. After the boot sector code we have the first 15 bytes of the real-mode kernel header; these two pieces together add up to 512 bytes, the size of a typical disk sector on Intel hardware.

After these 512 bytes, at offset 0x200, we find the very first instruction that runs as part of the Linux kernel: the real-mode entry point. It's in header.S:110 and it is a 2-byte jump written directly in machine code as 0x3aeb. You can verify this by running hexdump on your kernel image and seeing the bytes at that offset - just a sanity check to make sure it's not all a dream. The boot loader jumps into this location when it is finished, which in turn jumps to header.S:229 where we have a regular assembly routine called start_of_setup. This short routine sets up a stack, zeroes the bss segment (the area that contains static variables, so they start with zero values) for the real-mode kernel and then jumps to good old C code at arch/x86/boot/main.c:122.

main() does some housekeeping like detecting memory layout, setting a video mode, etc. It then calls go_to_protected_mode(). Before the CPU can be set to protected mode, however, a few tasks must be done. There are two main issues: interrupts and memory. In real-mode the interrupt vector table for the processor is always at memory address 0, whereas in protected mode the location of the interrupt vector table is stored in a CPU register called IDTR. Meanwhile, the translation of logical memory addresses (the ones programs manipulate) to linear memory addresses (a raw number from 0 to the top of the memory) is different between real-mode and protected mode. Protected mode requires a register called GDTR to be loaded with the address of a Global Descriptor Table for memory. So go_to_protected_mode() calls setup_idt() and setup_gdt() to install a temporary interrupt descriptor table and global descriptor table.

We're now ready for the plunge into protected mode, which is done by protected_mode_jump, another assembly routine. This routine enables protected mode by setting the PE bit in the CR0 CPU register. At this point we're running with paging disabled; paging is an optional feature of the processor, even in protected mode, and there's no need for it yet. What's important is that we're no longer confined to the 640K barrier and can now address up to 4GB of RAM. The routine then calls the 32-bit kernel entry point, which is startup_32 for compressed kernels. This routine does some basic register initializations and calls decompress_kernel(), a C function to do the actual decompression.

decompress_kernel() prints the familiar "Decompressing Linux..." message. Decompression happens in-place and once it's finished the uncompressed kernel image has overwritten the compressed one pictured in the first diagram. Hence the uncompressed contents also start at 1MB. decompress_kernel() then prints "done." and the comforting "Booting the kernel." By "Booting" it means a jump to the final entry point in this whole story, given to Linus by God himself atop Mountain Halti, which is the protected-mode kernel entry point at the start of the second megabyte of RAM (0x100000). That sacred location contains a routine called, uh, startup_32. But this one is in a different directory, you see.

The second incarnation of startup_32 is also an assembly routine, but it contains 32-bit mode initializations. It clears the bss segment for the protected-mode kernel (which is the true kernel that will now run until the machine reboots or shuts down), sets up the final global descriptor table for memory, builds page tables so that paging can be turned on, enables paging, initializes a stack, creates the final interrupt descriptor table, and finally jumps to the architecture-independent kernel start-up, start_kernel(). The diagram below shows the code flow for the last leg of the boot:

Architecture-independent Linux Kernel Initialization

start_kernel() looks more like typical kernel code, which is nearly all C and machine independent. The function is a long list of calls to initializations of the various kernel subsystems and data structures. These include the scheduler, memory zones, time keeping, and so on. start_kernel() then calls rest_init(), at which point things are almost all working. rest_init() creates a kernel thread passing another function, kernel_init(), as the entry point. rest_init() then calls schedule() to kickstart task scheduling and goes to sleep by calling cpu_idle(), which is the idle thread for the Linux kernel. cpu_idle() runs forever and so does process zero, which hosts it. Whenever there is work to do - a runnable process - process zero gets booted out of the CPU, only to return when no runnable processes are available.

But here's the kicker for us. This idle loop is the end of the long thread we followed since boot, it's the final descendant of the very first jump executed by the processor after power up. All of this mess, from reset vector to BIOS to MBR to boot loader to real-mode kernel to protected-mode kernel, all of it leads right here, jump by jump by jump it ends in the idle loop for the boot processor, cpu_idle(). Which is really kind of cool. However, this can't be the whole story otherwise the computer would do no work.

At this point, the kernel thread started previously is ready to kick in, displacing process 0 and its idle thread. And so it does, at which point kernel_init() starts running since it was given as the thread entry point. kernel_init() is responsible for initializing the remaining CPUs in the system, which have been halted since boot. All of the code we've seen so far has been executed in a single CPU, called the boot processor. As the other CPUs, called application processors, are started they come up in real-mode and must run through several initializations as well. Many of the code paths are common, as you can see in the code for startup_32, but there are slight forks taken by the late-coming application processors. Finally, kernel_init() calls init_post(), which tries to execute a user-mode process in the following order: /sbin/init, /etc/init, /bin/init, and /bin/sh. If all fail, the kernel will panic. Luckily init is usually there, and starts running as PID 1. It checks its configuration file to figure out which processes to launch, which might include X11, programs for logging in on the console, network daemons, and so on. Thus ends the boot process as yet another Linux box starts running somewhere. May your uptime be long and untroubled.

The process for Windows is similar in many ways, given the common architecture. Many of the same problems are faced and similar initializations must be done. When it comes to boot one of the biggest differences is that Windows packs all of the real-mode kernel code, and some of the initial protected mode code, into the boot loader itself (C:\NTLDR). So instead of having two regions in the same kernel image, Windows uses different binary images. Plus Linux completely separates boot loader and kernel; in a way this automatically falls out of the open source process. The diagram below shows the main bits for the Windows kernel:

Windows Kernel Initialization

The Windows user-mode start-up is naturally very different. There's no /sbin/init, but rather Csrss.exe and Winlogon.exe. Winlogon spawns Services.exe, which starts all of the Windows Services, and Lsass.exe, the local security authentication subsystem. The classic Windows login dialog runs in the context of Winlogon.

This is the end of this boot series. Thanks everyone for reading and for feedback. I'm sorry some things got superficial treatment; I've gotta start somewhere and only so much fits into blog-sized bites. There's more to come, though: my plan is to do regular "Software Illustrated" posts like this series along with other topics. Meanwhile, here are some resources:

  • The best, most important resource, is source code for real kernels, either Linux or one of the BSDs.
  • Intel publishes excellent Software Developer's Manuals, which you can download for free.
  • Understanding the Linux Kernel is a good book and walks through a lot of the Linux Kernel sources. It's getting outdated and it's dry, but I'd still recommend it to anyone who wants to grok the kernel. Linux Device Drivers is more fun, teaches well, but is limited in scope. Finally, Patrick Moroney suggested Linux Kernel Development by Robert Love in the comments for this post. I've heard other positive reviews for that book, so it sounds worth checking out.
  • For Windows, the best reference by far is Windows Internals by David Solomon and Mark Russinovich, the latter of Sysinternals fame. This is a great book, well-written and thorough. The main downside is the lack of source code.

[Update: In a comment below, Nix covered a lot of ground on the initial root file system that I glossed over. Thanks to Marius Barbu for catching a mistake where I wrote "CR3" instead of GDTR]


How Computers Boot Up

The previous post described motherboards and the memory map in Intel computers to set the scene for the initial phases of boot. Booting is an involved, hacky, multi-stage affair - fun stuff. Here's an outline of the process:

Things start rolling when you press the power button on the computer (no! do tell!). Once the motherboard is powered up it initializes its own firmware - the chipset and other tidbits - and tries to get the CPU running. If things fail at this point (e.g., the CPU is busted or missing) then you will likely have a system that looks completely dead except for rotating fans. A few motherboards manage to emit beeps for an absent or faulty CPU, but the zombie-with-fans state is the most common scenario based on my experience. Sometimes USB or other devices can cause this to happen: unplugging all non-essential devices is a possible cure for a system that was working and suddenly appears dead like this. You can then single out the culprit device by elimination.

If all is well the CPU starts running. In a multi-processor or multi-core system one CPU is dynamically chosen to be the bootstrap processor (BSP) that runs all of the BIOS and kernel initialization code. The remaining processors, called application processors (AP) at this point, remain halted until later on when they are explicitly activated by the kernel. Intel CPUs have been evolving over the years but they're fully backwards compatible, so modern CPUs can behave like the original 1978 Intel 8086, which is exactly what they do after power up. In this primitive power up state the processor is in real mode with memory paging disabled. This is like ancient MS-DOS where only 1 MB of memory can be addressed and any code can write to any place in memory - there's no notion of protection or privilege.

Most registers in the CPU have well-defined values after power up, including the instruction pointer (EIP) which holds the memory address for the instruction being executed by the CPU. Intel CPUs use a hack whereby even though only 1MB of memory can be addressed at power up, a hidden base address (an offset, essentially) is applied to EIP so that the first instruction executed is at address 0xFFFFFFF0 (16 bytes short of the end of 4 gigs of memory and well above one megabyte). This magical address is called the reset vector and is standard for modern Intel CPUs.
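A quick sanity check on those numbers. The sketch below (illustrative Python, obviously not anything the CPU runs) confirms that the reset vector sits 16 bytes below the 4 GB boundary, using the documented power-up values of the instruction pointer and its hidden base:

```python
# Reset vector: 16 bytes short of the end of the 32-bit address space.
FOUR_GB = 2 ** 32
RESET_VECTOR = FOUR_GB - 16
assert RESET_VECTOR == 0xFFFFFFF0

# After power up the instruction pointer holds 0xFFF0, and the hidden
# base address applied to it is 0xFFFF0000; together they produce the
# reset vector, even though real mode can normally reach only 1 MB.
hidden_base = 0xFFFF0000
ip = 0xFFF0
print(hex(hidden_base + ip))  # 0xfffffff0
```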

The motherboard ensures that the instruction at the reset vector is a jump to the memory location mapped to the BIOS entry point. This jump implicitly clears the hidden base address present at power up. All of these memory locations have the right contents needed by the CPU thanks to the memory map kept by the chipset. They are all mapped to flash memory containing the BIOS since at this point the RAM modules have random crap in them. An example of the relevant memory regions is shown below:

The CPU then starts executing BIOS code, which initializes some of the hardware in the machine. Afterwards the BIOS kicks off the Power-on Self Test (POST) which tests various components in the computer. Lack of a working video card fails the POST and causes the BIOS to halt and emit beeps to let you know what's wrong, since messages on the screen aren't an option. A working video card takes us to a stage where the computer looks alive: manufacturer logos are printed, memory starts to be tested, angels blare their horns. Other POST failures, like a missing keyboard, lead to halts with an error message on the screen. The POST involves a mixture of testing and initialization, including sorting out all the resources - interrupts, memory ranges, I/O ports - for PCI devices. Modern BIOSes that follow the Advanced Configuration and Power Interface (ACPI) specification build a number of data tables that describe the devices in the computer; these tables are later used by the kernel.

After the POST the BIOS wants to boot up an operating system, which must be found somewhere: hard drives, CD-ROM drives, floppy disks, etc. The actual order in which the BIOS seeks a boot device is user configurable. If there is no suitable boot device the BIOS halts with a complaint like "Non-System Disk or Disk Error." A dead hard drive might present with this symptom. Hopefully this doesn't happen and the BIOS finds a working disk allowing the boot to proceed.

The BIOS now reads the first 512-byte sector (sector zero) of the hard disk. This is called the Master Boot Record and it normally contains two vital components: a tiny OS-specific bootstrapping program at the start of the MBR followed by a partition table for the disk. The BIOS however does not care about any of this: it simply loads the contents of the MBR into memory location 0x7c00 and jumps to that location to start executing whatever code is in the MBR.

The specific code in the MBR could be a Windows MBR loader, code from Linux loaders such as LILO or GRUB, or even a virus. In contrast the partition table is standardized: it is a 64-byte area with four 16-byte entries describing how the disk has been divided up (so you can run multiple operating systems or have separate volumes in the same disk). Traditionally Microsoft MBR code takes a look at the partition table, finds the (only) partition marked as active, loads the boot sector for that partition, and runs that code. The boot sector is the first sector of a partition, as opposed to the first sector for the whole disk. If something is wrong with the partition table you would get messages like "Invalid Partition Table" or "Missing Operating System." This message does not come from the BIOS but rather from the MBR code loaded from disk. Thus the specific message depends on the MBR flavor.
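Since the partition table layout is standardized, it's easy to pick apart with a few lines of code. Here's a toy Python parser (for illustration only; the field offsets are the standard MBR ones, but the sample partition values at the end are made up):

```python
import struct

def parse_partition_table(mbr):
    """Parse the four 16-byte entries of a 512-byte MBR sector."""
    if len(mbr) != 512 or mbr[510:512] != b"\x55\xaa":
        raise ValueError("not a valid MBR: bad size or boot signature")
    entries = []
    for i in range(4):
        # Each entry: status byte, CHS start (skipped), type byte,
        # CHS end (skipped), LBA of first sector, number of sectors
        # (multi-byte fields are little-endian).
        status, ptype, lba_start, sectors = struct.unpack_from(
            "<B3xB3xII", mbr, 446 + i * 16)
        entries.append({
            "active": status == 0x80,  # the flag the MBR code looks for
            "type": ptype,             # e.g. 0x83 Linux, 0x07 NTFS
            "lba_start": lba_start,
            "sectors": sectors,
        })
    return entries

# Build a fake MBR with one active Linux partition and inspect it.
mbr = bytearray(512)
mbr[510:512] = b"\x55\xaa"
mbr[446:462] = struct.pack("<B3xB3xII", 0x80, 0x83, 2048, 1024000)
parts = parse_partition_table(bytes(mbr))
print(parts[0])  # active, type 0x83, starting at LBA 2048
```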

Boot loading has gotten more sophisticated and flexible over time. The Linux boot loaders LILO and GRUB can handle a wide variety of operating systems, file systems, and boot configurations. Their MBR code does not necessarily follow the "boot the active partition" approach described above. But functionally the process goes like this:

  1. The MBR itself contains the first stage of the boot loader. GRUB calls this stage 1.
  2. Due to its tiny size, the code in the MBR does just enough to load another sector from disk that contains additional bootstrap code. This sector might be the boot sector for a partition, but could also be a sector that was hard-coded into the MBR code when the MBR was installed.
  3. The MBR code, plus the code loaded in step 2, then reads a file containing the second stage of the boot loader. In GRUB this is GRUB Stage 2, and in Windows Server this is c:\NTLDR. If loading this second stage fails in Windows you'd get a message like "NTLDR is missing". The stage 2 code then reads a boot configuration file (e.g., grub.conf in GRUB, boot.ini in Windows). It then presents boot choices to the user or simply goes ahead in a single-boot system.
  4. At this point the boot loader code needs to fire up a kernel. It must know enough about file systems to read the kernel from the boot partition. In Linux this means reading a file like "vmlinuz-2.6.22-14-server" containing the kernel, loading the file into memory and jumping to the kernel bootstrap code. In Windows Server 2003 some of the kernel start-up code is separate from the kernel image itself and is actually embedded into NTLDR. After performing several initializations, NTLDR loads the kernel image from file c:\Windows\System32\ntoskrnl.exe and, just as GRUB does, jumps to the kernel entry point.

There's a complication worth mentioning (i.e., I told you this thing is hacky). The image for a current Linux kernel, even compressed, does not fit into the 640K of RAM available in real mode. My vanilla Ubuntu kernel is 1.7 MB compressed. Yet the boot loader must run in real mode in order to call the BIOS routines for reading from the disk, since the kernel is clearly not available at that point. The solution is the venerable unreal mode. This is not a true processor mode (I wish the engineers at Intel were allowed to have fun like that), but rather a technique where a program switches back and forth between real mode and protected mode in order to access memory above 1MB while still using the BIOS. If you read GRUB source code, you'll see these transitions all over the place (look under stage2/ for calls to real_to_prot and prot_to_real). At the end of this sticky process the loader has stuffed the kernel in memory, by hook or by crook, but it leaves the processor in real mode when it's done.

We're now at the jump from "Boot Loader" to "Early Kernel Initialization" as shown in the first diagram. That's when things heat up as the kernel starts to unfold and set things in motion. The next post will be a guided tour through the Linux Kernel initialization with links to sources at the Linux Cross Reference. I can't do the same for Windows ;) but I'll point out the highlights.

[Update: cleared up discussion of NTLDR.]


Motherboard Chipsets and the Memory Map

I'm going to write a few posts about computer internals with the goal of explaining how modern kernels work. I hope to make them useful to enthusiasts and programmers who are interested in this stuff but don't have experience with it. The focus is on Linux, Windows, and Intel processors. Internals are a hobby for me; I have written a fair bit of kernel-mode code, but haven't done so in a while. This first post describes the layout of modern Intel-based motherboards, how the CPU accesses memory, and the system memory map.

To start off let's take a look at how an Intel computer is wired up nowadays. The diagram below shows the main components of a motherboard, in dubious color taste:

Diagram for modern motherboard. The northbridge and southbridge make up the chipset.

As you look at this, the crucial thing to keep in mind is that the CPU doesn't really know anything about what it's connected to. It talks to the outside world through its pins but it doesn't care what that outside world is. It might be a motherboard in a computer but it could be a toaster, network router, brain implant, or CPU test bench. There are three main ways by which the CPU and the outside communicate: memory address space, I/O address space, and interrupts. We only worry about motherboards and memory for now.

In a motherboard the CPU's gateway to the world is the front-side bus connecting it to the northbridge. Whenever the CPU needs to read or write memory it does so via this bus. It uses some pins to transmit the physical memory address it wants to write or read, while other pins send the value to be written or receive the value being read. An Intel Core 2 QX6600 has 33 pins to transmit the physical memory address (so there are 2^33 choices of memory locations) and 64 pins to send or receive data (so data is transmitted in a 64-bit data path, or 8-byte chunks). This allows the CPU to physically address 64 gigabytes of memory (2^33 locations * 8 bytes) although most chipsets only handle up to 8 gigs of RAM.
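That arithmetic can be spelled out explicitly (a throwaway sketch; the pin counts are the ones quoted above for the QX6600):

```python
address_pins = 33   # pins carrying the physical memory address
data_pins = 64      # pins carrying the data being read or written

locations = 2 ** address_pins        # distinct addresses on the bus
chunk = data_pins // 8               # 64-bit data path = 8-byte chunks

print((locations * chunk) // 2**30)  # 64 (gigabytes addressable)
```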

Now comes the rub. We're used to thinking of memory only in terms of RAM, the stuff programs read from and write to all the time. And indeed most of the memory requests from the processor are routed to RAM modules by the northbridge. But not all of them. Physical memory addresses are also used for communication with assorted devices on the motherboard (this communication is called memory-mapped I/O). These devices include video cards, most PCI cards (say, a scanner or SCSI card), and also the flash memory that stores the BIOS.

When the northbridge receives a physical memory request it decides where to route it: should it go to RAM? Video card maybe? This routing is decided via the memory address map. For each region of physical memory addresses, the memory map knows the device that owns that region. The bulk of the addresses are mapped to RAM, but when they aren't the memory map tells the chipset which device should service requests for those addresses. This mapping of memory addresses away from RAM modules causes the classic hole in PC memory between 640KB and 1MB. A bigger hole arises when memory addresses are reserved for video cards and PCI devices. This is why 32-bit OSes have problems using 4 gigs of RAM. In Linux the file /proc/iomem neatly lists these address range mappings. The diagram below shows a typical memory map for the first 4 gigs of physical memory addresses in an Intel PC:
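To get a feel for the map, you can slice up a /proc/iomem-style listing programmatically. The listing in the toy parser below is made up but representative; the region names and addresses are assumptions, not taken from any specific machine:

```python
SAMPLE_IOMEM = """\
00000000-0009ffff : System RAM
000a0000-000bffff : Video RAM area
000c0000-000cffff : Video ROM
000f0000-000fffff : System ROM
00100000-cff8ffff : System RAM
e0000000-efffffff : PCI MMCONFIG
fffc0000-ffffffff : reserved"""

def parse_iomem(text):
    """Turn 'start-end : name' lines into (start, end, name) tuples."""
    regions = []
    for line in text.splitlines():
        span, name = line.split(" : ", 1)
        start, end = (int(x, 16) for x in span.strip().split("-"))
        regions.append((start, end, name))
    return regions

# Total RAM mapped below 4 GB in this sample: note the classic hole
# between 640KB (0xa0000) and 1MB, plus the big one near the top.
ram = sum(end - start + 1
          for start, end, name in parse_iomem(SAMPLE_IOMEM)
          if name == "System RAM")
print(f"{ram / 2**20:.0f} MB of RAM below 4 GB")  # ~3327 MB
```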

Memory layout for the first 4 gigabytes in an Intel system.

Actual addresses and ranges depend on the specific motherboard and devices present in the computer, but most Core 2 systems are pretty close to the above. All of the brown regions are mapped away from RAM. Remember that these are physical addresses that are used on the motherboard buses. Inside the CPU (for example, in the programs we run and write), the memory addresses are logical and they must be translated by the CPU into a physical address before memory is accessed on the bus.

The rules for translation of logical addresses into physical addresses are complex and they depend on the mode in which the CPU is running (real mode, 32-bit protected mode, and 64-bit protected mode). Regardless of the translation mechanism, the CPU mode determines how much physical memory can be accessed. For example, if the CPU is running in 32-bit mode, then it is only capable of physically addressing 4 GB (well, there is an exception called physical address extension, but ignore it for now). Since the top 1 GB or so of physical addresses are mapped to motherboard devices the CPU can effectively use only ~3 GB of RAM (sometimes less - I have a Vista machine where only 2.4 GB are usable). If the CPU is in real mode, then it can only address 1 megabyte of physical RAM (this is the only mode early Intel processors were capable of). On the other hand, a CPU running in 64-bit mode can physically access 64GB (few chipsets support that much RAM though). In 64-bit mode it is possible to use physical addresses above the total RAM in the system to access the RAM regions that correspond to physical addresses stolen by motherboard devices. This is called reclaiming memory and it's done with help from the chipset.

That's all the memory we need for the next post, which describes the boot process from power up until the boot loader is about to jump into the kernel. If you'd like to learn more about this stuff, I highly recommend the Intel manuals. I'm big into primary sources overall, but the Intel manuals in particular are well written and accurate. Here are some:

  • Datasheet for Intel G35 Chipset documents a representative chipset for Core 2 processors. This is the main source for this post.
  • Datasheet for Intel Core 2 Quad-Core Q6000 Sequence is a processor datasheet. It documents each pin in the processor (there aren't that many actually, and after you group them there's really not a lot to it). Fascinating stuff, though some bits are arcane.
  • The Intel Software Developer's Manuals are outstanding. Far from arcane, they explain beautifully all sorts of things about the architecture. Volumes 1 and 3A have the good stuff (don't be put off by the name, the "volumes" are small and you can read selectively).
  • Pádraig Brady suggested that I link to Ulrich Drepper's excellent paper on memory. It's great stuff. I was waiting to link to it in a post about memory, but the more the merrier.