Uninformed: Informative Information for the Uninformed

Vol 6 » 2007.Jan


Introduction

Software security has matured a lot over the past decade. It has gone from being an obscure problem that garnered little interest from corporations to something that has created an industry of its own. Corporations that once saw little value in investing resources in software security now have entire teams dedicated to rooting out security issues. The reason for this shift in attitude is surely multifaceted, but it could be argued that the greatest influence came from improvements to exploitation techniques that could be used to take advantage of software vulnerabilities. The refinement of these techniques made it possible for reliable exploits to be used by people with no knowledge of the underlying vulnerability. This shift effectively eliminated the already thin crutch of barrier-to-entry complacency that many corporations were guilty of leaning on.

Whether or not the refinement of exploitation techniques was indeed the turning point, the fact remains that there now exists an industry that has been spawned in the name of software security. Of particular interest for the purpose of this paper are the corporations and individuals within this industry that have invested time in researching and implementing solutions that attempt to tackle the problem of exploit prevention. As a result of this time investment, things like non-executable pages, address space layout randomization (ASLR), stack canaries, and other novel preventative measures are becoming commonplace in the desktop market. While there should be no argument that the mainstream integration of many of these technologies is a good thing, there's a problem.

This problem centers around the fact that the majority of these exploit prevention solutions to date have been slightly narrow-sighted in their implementations. In particular, these solutions generally focus on preventing exploitation in only one context: user-mode2.1. This narrow-sightedness is often defended on the grounds that kernel-mode vulnerabilities have been far less prevalent. Furthermore, kernel-mode vulnerabilities are considered by most to require a much more sophisticated attack when compared with user-mode vulnerabilities.

The prevalence of kernel-mode vulnerabilities could be interpreted in many different ways. The naive way would be to think that kernel-mode vulnerabilities really are few and far between. After all, this is code that should have undergone rigorous code coverage testing. A second interpretation might consider that kernel-mode vulnerabilities are more complex and thus harder to find. A third interpretation might be that there are fewer eyes focused on looking for kernel-mode vulnerabilities. While there are certainly other factors, the authors feel that the reality is probably best captured by the second and third interpretations.

Even if the lower prevalence is a result of the relative difficulty of exploiting kernel-mode vulnerabilities, that is still a poor excuse for exploit prevention solutions to simply ignore them. The past has already shown that exploitation techniques for user-mode vulnerabilities were refined to the point of creating increasingly reliable exploits. These increasingly reliable exploits were then incorporated into automated worms. What's so different about kernel-mode vulnerabilities? Sure, they are complicated, but so were heap overflows. The authors see no reason to expect that kernel-mode vulnerabilities won't also experience a period of revolutionary public advancements to existing exploitation techniques. In fact, this period has already started[5,2,1]. Still, most corporations seem content to lean on the same set of crutches, waiting for proof that a problem really exists. It's hoped that this paper can assist in the process of making it clear that kernel-mode vulnerabilities can be just as easy to exploit as user-mode vulnerabilities.

It really shouldn't come as a surprise that kernel-mode vulnerabilities exist. The intense focus put upon preventing the exploitation of user-mode vulnerabilities has caused kernel-mode security to lag behind. This lag is further complicated by the fact that developers who write kernel-mode software must generally have a completely different mentality relative to what most user-mode developers are accustomed to. This is true regardless of what operating system a programmer might be dealing with2.2. User-mode programmers who decide to dabble in writing device drivers for NT will find themselves in for a few surprises. The most apparent thing one would notice is that the old Windows Driver Model (WDM) and the newer Windows Driver Foundation (WDF) expose APIs that are completely different from what a user-mode developer would be familiar with. There are a number of standard C runtime artifacts that can still be used, but their use in device driver code stands out like a sore thumb2.3.

While the API being completely different is surely a big hurdle, there are a number of other gotchas that a user-mode programmer wouldn't normally find themselves worrying about. One of the most interesting limitations imposed upon device driver developers is the conservation of stack space. On modern derivatives of NT, kernel-mode threads are provided with only 3 pages (12288 bytes) of stack space. In user-mode, thread stacks will generally grow as large as 256KB2.4. Due to the limited amount of kernel-mode thread stack space, it should be rare to ever see a device driver consuming a large amount of space within a stack frame. Nevertheless, it was observed that the Intel Centrino drivers have multiple instances of functions that consume over 1 page of stack space. That's 33% of the available stack space wasted within one stack frame!
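The fragment below is a hypothetical sketch, not code from the Centrino drivers themselves, of what that kind of stack consumption looks like: a single local structure with a page-sized buffer eats a third of the 12KB kernel stack the moment the function is entered. The structure and function names are invented for illustration.

    #include <ntddk.h>

    typedef struct _SCAN_RESULTS {
        UCHAR RawBeacons[4096];   /* a full page of beacon data, held by value */
        ULONG BeaconCount;
    } SCAN_RESULTS;

    VOID ProcessScanResults(const UCHAR *Packet, ULONG PacketLength)
    {
        SCAN_RESULTS Results;     /* over one page of stack consumed right here */

        RtlZeroMemory(&Results, sizeof(Results));

        if (PacketLength > sizeof(Results.RawBeacons))
            PacketLength = sizeof(Results.RawBeacons);

        RtlCopyMemory(Results.RawBeacons, Packet, PacketLength);
        Results.BeaconCount = 1;

        /* ... parsing of the copied beacons would continue here ... */
    }

A couple of nested calls like this and the thread is flirting with exhausting its kernel stack, which means a bugcheck rather than a contained crash.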

Perhaps the most important of all of the differences is the extra care that must be taken when it comes to dealing with things like performance, error handling, and re-entrancy. These major elements are critical to ensuring the stability of the operating system as a whole. If a programmer is negligent in their handling of any of these things in user-mode, the worst that will happen is that the application will crash. In kernel-mode, however, a failure to properly account for any of these elements will generally affect the stability of the system as a whole. Even worse, security-related flaws in device drivers provide a point of exposure that can result in super-user privileges.
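To make the stakes concrete, the fragment below sketches a hypothetical METHOD_BUFFERED IOCTL handler; the IOCTL code, structure, and function are invented for illustration and are not taken from any shipping driver. The bug is the sort of negligent input handling described above: a length field supplied by user-mode is trusted without validation.

    #include <ntddk.h>

    #define IOCTL_SET_SSID \
        CTL_CODE(FILE_DEVICE_UNKNOWN, 0x800, METHOD_BUFFERED, FILE_ANY_ACCESS)

    typedef struct _SET_SSID_REQUEST {
        ULONG SsidLength;                 /* attacker-controlled length */
        UCHAR Ssid[32];
    } SET_SSID_REQUEST;

    NTSTATUS HandleSetSsid(PIRP Irp, PIO_STACK_LOCATION IrpSp)
    {
        SET_SSID_REQUEST *Request = Irp->AssociatedIrp.SystemBuffer;
        UCHAR Ssid[32];

        if (IrpSp->Parameters.DeviceIoControl.InputBufferLength < sizeof(*Request))
            return STATUS_BUFFER_TOO_SMALL;

        /* BUG: SsidLength is never checked against sizeof(Ssid), so any value
           larger than 32 overflows a kernel stack buffer. */
        RtlCopyMemory(Ssid, Request->Ssid, Request->SsidLength);

        return STATUS_SUCCESS;
    }

In a user-mode program the same mistake would take down a single process; here it hands an unprivileged caller a write past a kernel stack buffer, which is exactly the kind of exposure that ends in super-user privileges.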

From this very brief introduction, it is hoped that the reader will begin to realize that device driver development is a different world. It's a world that's filled with a greater number of restrictions and problems, where the implications of software bugs are much greater than one would normally see in user-mode. It's a world that hasn't yet received adequate attention in the form of exploit prevention technology, thus making it possible to improve and refine kernel-mode exploitation techniques. It should come as no surprise that such a world would be attractive to researchers and tinkerers alike.

This very attraction is, in fact, one of the major motivations for this paper. While the authors will focus strictly on the process used to identify and exploit flaws in wireless device drivers, it should be noted that other device drivers are equally likely to be prone to security issues. However, most other device drivers don't have the distinction of exposing a connectionless layer 2 attack surface to all devices in close proximity. Frankly, it's hard to get much cooler than that. That only happens in the movies, right?

To kick things off, the structure of this paper is as follows. In chapter 3, the steps used to find vulnerabilities in wireless device drivers, such as through the use of fuzzing, are described. Chapter 4 explains the process of actually leveraging a device driver vulnerability to execute arbitrary code and how the 3.0 version of the Metasploit Framework has been extended to make this trivial to deal with. Finally, chapter 5 provides three real world examples of wireless device driver vulnerabilities. Each real world example describes the trials and tribulations of the vulnerability starting with the initial discovery and ending with arbitrary code execution.