Multilevel security (MLS) is a technology to protect secrets from leaking between computer users when some are allowed to see those secrets and others are not. It is generally used in defense applications (the military and intelligence communities), since nobody else is nearly as paranoid about data leaking. A modern wrinkle on this is called cross domain systems (CDS), in which we speak of domains instead of levels and usually share data across computer networks rather than on individual computers.
Personally, I was introduced to MLS through my work on the LOCK trusted computing system in the early 1990s.
Note that some people like to spell it "multi-level security." I think the term is old enough that we can omit the hyphen.
Several years ago I was at a workshop sponsored by the Air Force to develop some new directions for information systems improvements. The workshop included both "end user" representatives from the Air Force and "R&D" representatives from laboratories and government contractors.
Discussions on MLS capabilities became rather heated. One vendor representative from the security working group declared the following in a plenary session:
"Don't ask for MLS. We've tried to give you MLS, but in fact you've never really wanted it or used it. But please, tell us what you do want!"
A voice in the back shouted, "MLS!"
That little incident reflects an important fact about MLS: it's an overloaded term that describes both an abstract security objective and a well-known mechanism that is supposed to achieve that objective, more or less. In her well-known paper on software safety, Nancy Leveson criticizes this type of labeling:
Labeling a technique, e.g., "software diversity" or "expert system," with the property we hope to achieve by it (and need to prove about it) is misleading and unscientific.
Unfortunately, we're stuck with the established terminology, so now we must focus on distinguishing between the two meanings.
Multilevel security (MLS) has posed a challenge to the computer security community since the 1960s. MLS sounds like a mundane problem in access control: allow information to flow freely between recipients in a computing system who have appropriate security clearances while preventing leaks to unauthorized recipients. However, MLS systems incorporate two essential features: first, the system must enforce these restrictions regardless of the actions of system users or administrators, and second, MLS systems strive to enforce these restrictions with incredibly high reliability. This has led developers to implement specialized security mechanisms and to apply sophisticated techniques to review, analyze, and test those mechanisms for correct and reliable behavior.
Despite this, MLS systems have rarely provided the degree of security desired by their most demanding customers in the military services, intelligence organizations, and related agencies. The high costs associated with developing MLS products, combined with the limited size of the user community, have also prevented MLS capabilities from appearing in commercial products.
Portions of this article also appear as Chapter 205 of the Handbook of Information Security, Volume 3, Threats, Vulnerabilities, Prevention, Detection and Management, Hossein Bidgoli, ed., ISBN 0-471-64832-9, John Wiley, 2006.
Many businesses and organizations need to protect secret information, and most can tolerate some leakage. Organizations that use MLS systems tolerate no leakage at all. Businesses may face legal or financial risks if they fail to protect business secrets, but they can generally recover afterwards by paying to repair the damage. At worst, the business goes bankrupt. Managers who take risks with business secrets might lose their jobs if secrets are leaked, but they are more likely to lose their jobs to failed projects or overrun budgets. This places a limit on the amount of money a business will invest in data secrecy.
The defense community, which includes the military services, intelligence organizations, related government agencies, and their supporting enterprises, cannot easily recover from certain information leaks. Stealth systems aren't stealthy if the targets know what to look for, and surveillance systems don't see things if the targets know what camouflage to use. Such failures can't always be corrected just by spending more money. Even worse, a system's weakness might not be detected until after a diplomatic or military disaster reveals it. During the Cold War, the threat of nuclear annihilation led military and political leaders to take such risks very seriously. It was easy to argue that data leakage could threaten a country's very existence. The defense community demanded levels of computer security far beyond what the business community needed.
We use the term multilevel because the defense community has classified both people and information into different levels of trust and sensitivity. These levels represent the well-known security classifications: Confidential, Secret, and Top Secret. Before people are allowed to look at classified information, they must be granted individual clearances that are based on individual investigations to establish their trustworthiness. People who have earned a Confidential clearance are authorized to see Confidential documents, but they are not trusted to look at Secret or Top Secret information any more than any member of the general public. These levels form the simple hierarchy shown in Figure 1. The dashed arrows in the figure illustrate the direction in which the rules allow data to flow: from "lower" levels to "higher" levels, and not vice versa.
The defense community was the first and biggest customer for computing technology, and computers were still very expensive when they became routine fixtures in defense organizations. However, few organizations could afford separate computers to handle information at every different level: they had to develop procedures to share the computer without leaking classified information to uncleared (or insufficiently cleared) users. This was not as easy as it might sound. Even when people "took turns" running the computer at different security levels (a technique called periods processing), security officers had to worry about whether Top Secret information may have been left behind in memory or on the operating system's hard drive. Some sites purchased computers to dedicate exclusively to highly classified work, despite the cost, simply because they did not want to take the risk of leaking information.
Multiuser systems, like the early timesharing systems, made such sharing particularly challenging. Ideally, people with Secret clearances should be able to work at the same time as others working on Top Secret data, and everyone should be able to share common programs and unclassified files. While typical operating system mechanisms could usually protect different user programs from one another, they could not prevent a Confidential or Secret user from tricking a Top Secret user into releasing Top Secret information via a Trojan horse.
A Trojan horse is software that performs an invisible function that the user would not have chosen to perform. For example, consider a multiuser system in which users have stored numerous private files and have used the system's access permissions to protect those files from prying eyes. Imagine that the author of a locally-developed word processing program has an unhealthy curiosity about others in the user community and wishes to read their protected files. The author can install a Trojan horse function in the word processing program to retrieve the protected files. The function copies a user's private files into the author's own directory whenever a user runs the word processing program.
Unfortunately, this is not a theoretical threat. The "macro" function in modern word processors like Microsoft Word allows users to create arbitrarily complicated software procedures and attach them to word processing documents. When another user opens a document containing the macro, the word processor executes the procedure defined by the macro. This was the basis of "macro viruses" like the Melissa virus of the late 1990s. A macro can perform all the functions required of a Trojan horse program, including copying files.
When a user runs the word processing program, the program inherits that user's access permissions to the user's own files. Thus the Trojan horse circumvents the access permissions by performing its hidden function when the unsuspecting user runs it. This is true whether the function is implemented in a macro or embedded in the word processor itself. Viruses and network worms are Trojan horses in the sense that their replication logic is run under the context of the infected user. Occasionally, worms and viruses may include an additional Trojan horse mechanism that collects secret files from their victims. If the victim of a Trojan horse is someone with access to Top Secret information on a system with lesser-cleared users, then there's nothing on a conventional system to prevent leakage of the Top Secret information. Multiuser systems clearly need a special mechanism to protect multilevel data from leakage.
The phrase "need to know" refers to a commonly-enforced rule in organizations that handle classified information. In general, a security clearance does not grant blanket permission to look at all information classified at that level or below. The clearance is really only the first step: people are only allowed to look at classified information that they need to know as part of the work they do.
In other words, if we give Janet a Secret clearance to work on a cryptographic device, then she has a need to know the Secret information related to that device. She does not have permission to study Secret information about spy satellites, or Secret cryptographic information that doesn't apply to her device. If she is using a multiuser system containing Secret information about her project, other cryptographic projects, and even spy satellites, then the system must prevent Janet from browsing information belonging to the other projects and activities. On the other hand, the system should be able to grant Janet permission to look at other materials if she really needs the information to do her job.
A computer's operating mode determines what access control mechanisms it needs. Dedicated systems might not require any mechanisms beyond physical security. Computers running at system high must have user-based access restrictions like those typically provided in Unix and in "professional" versions of Microsoft Windows. In multilevel mode, the system must prevent data from higher security levels from leaking to users who have lower clearances: this requires a special mechanism.
Typically, an MLS mechanism works as follows: Users, computers, and networks carry computer-readable labels to indicate security levels. Data may flow from "same level" to "same level" or from "lower level" to "higher level" (Figure 2). Thus, Top Secret users can share data with one another, and a Top Secret user can retrieve information from a Secret user. It does not allow data from Top Secret (a higher level) to flow into a file or other location visible to a Secret user (at a lower level). If data is not subject to classification rules, it belongs to the "Unclassified" security level. On a computer this would include most application programs and any computing resources shared by all users.
A direct implementation of such a system allows the author of a Top Secret report to retrieve information entered by users operating at Secret or Confidential and merge it with Top Secret information. The user with the Secret clearance cannot "read up" to see the Top Secret result, since the data only flows in one direction between Secret and Top Secret. Unclassified data can be made visible to all users.
It isn't enough to simply prevent users with lower clearances from reading data carrying higher classifications. What if a user with a Top Secret clearance stores some Top Secret data in a file readable by a Secret user? This causes the same problem as "reading up" since it makes the Top Secret data visible to the Secret user. Some may argue that Top Secret users should be trusted not to do such a thing. In fact, some would argue that they would never do it because it's a violation of the Espionage Act. Unfortunately, this argument does not take into account the risk of a Trojan horse.
For example, Figure 3 shows what could happen if an attacker inserts a macro function with a Trojan horse capability into a word processing file. The attacker has stored the macro function in a Confidential file and has told a Top Secret user to examine the file. When the user opens the Confidential file, the macro function starts running, and it tries to copy files from the Top Secret user's directory into Confidential files belonging to the attacker. This is called "writing down" the data from a higher security level to a lower one.
In a system with typical access control mechanisms, the macro will succeed since the attacker can easily set up all of the permissions needed to allow the Top Secret user to write data into the other user's files. Clearly, the system cannot enforce MLS reliably if Trojan horse programs can circumvent MLS protections. There is no way users can avoid Trojan horse programs with 100% reliability, as suggested by the success of e-mail viruses. An effective MLS mechanism needs to block "write down" attempts as well as "read up" attempts.
The most widely recognized approach to MLS is the Bell-LaPadula security model (Bell and La Padula, 1974). The model effectively captures the essentials of the access restrictions implied by conventional military security levels. Most MLS mechanisms implement Bell-LaPadula or a close variant of it. Although Bell-LaPadula has accurately defined an MLS capability that keeps data safe, it has not led to the widespread development of successful multilevel systems. In practice, developers have not been able to produce MLS mechanisms that work reliably with high confidence, and some important defense applications require a "write down" capability that renders Bell-LaPadula irrelevant (for example, see the later section "Sensor to Shooter").
In the Bell-LaPadula model, programs and processes (called subjects) try to transfer information via files, messages, I/O devices, or other resources in the computer system (called objects). Each subject and object carries a label containing its security level, that is, the subject's clearance level or the object's classification level. In the simplest case, the security levels are arranged in a hierarchy as shown earlier in Figure 1. More elaborate cases involve compartments, as described in a later section.
The Bell-LaPadula model enforces MLS access restrictions by implementing two simple rules: the simple security property and the *-property. When a subject tries to read from or write to an object, the system compares the subject's security label with the object's label and applies these rules. Unlike typical access restrictions on multiuser computing systems, these restrictions are mandatory: no users on the system can turn them off or bypass them. Typical multiuser access restrictions are discretionary, that is, they can be enabled or disabled by system administrators and often by individual users. If users, or even administrative users, can modify the access rules, then a Trojan horse can modify or even disable those rules. To prevent leakage by browsing users and Trojan horses, Bell-LaPadula systems always enforce the two properties.
The simple security property is obvious: it prevents people (or their processes) from reading data whose classification exceeds their security clearances. Users can't "read up" relative to their security clearances. They can "read down," which means that they can read data classified at or below the same level as their clearances.
The *-property prevents people with higher clearances from passing highly classified data to users who don't share the appropriate clearance, either accidentally or intentionally. User programs can't "write down" into files that carry a security level lower than that of the process in which they are running. This prevents Trojan horse programs from secretly leaking highly classified data. Figure 4 illustrates these properties: the dashed arrows show data being read in compliance with the simple security property, and the lower solid arrow shows an attempted "write down" being blocked by the *-property.
A system enforcing the Bell-LaPadula model blocks the word processing macro in either of two ways, depending on the security level at which the user runs the word processing program. In one case, shown in Figure 4, the user runs the program at Top Secret. This allows the macro to read the user's Top Secret files, but the *-property prevents the macro from writing to the attacker's Confidential files. When the process tries to open a Confidential file for writing, the MLS access rules prevent it. In the other case, the Top Secret user runs the program at the Confidential level, since the file is classified Confidential. The program's macro function can read and modify Confidential files, including files set up by the attacker, but the simple security property prevents the macro from reading any Secret or Top Secret files.
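To make the two rules concrete, here is a minimal sketch of how a reference monitor might apply them to the word-processing scenario above. The level names come from this article; the function names and the numeric encoding of the hierarchy are purely illustrative.

```python
# A minimal sketch of the two Bell-LaPadula rules for hierarchical levels only
# (no compartments). Level names follow the article; everything else is illustrative.

LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

def may_read(subject_level: str, object_level: str) -> bool:
    """Simple security property: no 'read up'."""
    return LEVELS[subject_level] >= LEVELS[object_level]

def may_write(subject_level: str, object_level: str) -> bool:
    """*-property: no 'write down'."""
    return LEVELS[subject_level] <= LEVELS[object_level]

# Case 1: the user runs the word processor (and its macro) at Top Secret.
print(may_read("Top Secret", "Top Secret"))      # True  - macro can read the user's files
print(may_write("Top Secret", "Confidential"))   # False - *-property blocks the leak

# Case 2: the user runs the word processor at Confidential.
print(may_write("Confidential", "Confidential")) # True  - macro can write the attacker's files
print(may_read("Confidential", "Top Secret"))    # False - simple security blocks the read
```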
Although the hierarchical security levels like Top Secret are familiar to most people, they are not the only restrictions placed on information in the defense community. Organizations apply hierarchical security levels (Confidential, Secret, or Top Secret) to their data according to the damage that might be caused by leaking that data. Some organizations add other markings to classified material to further restrict its distribution. These markings go by many names: compartments, codewords, caveats, categories, and so on, and they serve many purposes. In some cases, the markings indicate whether or not the data may be shared with particular organizations, enterprises, or allied countries. In many cases these markings give the data's creator or owner more control over the data's distribution. Each marking indicates another restriction placed on the distribution of a particular classified data item. People can only receive the classified data if they comply with all restrictions placed on the data's distribution.
The Bell-LaPadula model refers to all of these additional markings as compartments. A security level may include compartment identifiers in addition to a hierarchical security level. If a particular file's security level includes one or more compartments, then the user's security level must also include those compartments or the user won't be allowed to read the file.
A system with compartments generally acquires a large number of distinct security levels: one for every legal combination of a hierarchical security level with zero or more compartments. The interrelationships between these levels form a directed graph called a lattice. Figure 5 shows the lattice for a system that contains Secret and Top Secret information with compartments Ace and Bar.
The arrows in the lattice show which security levels can read data labeled with other security levels. If the user Cathy has a Top Secret clearance with access to both compartments Ace and Bar, then she has permission to read any data on the system (assuming its owner has also given her "read" permission to that data). We determine the access rights associated with other security labels by following arrows in Figure 5.
If Cathy runs a program with the label Secret Ace, then the program can read data labeled Unclassified, Secret, or Secret Ace. The program can't read data labeled Secret Bar or Secret Ace Bar, since its security label doesn't contain the Bar compartment. Figure 5 illustrates this: there is no path to Secret Ace that comes from a label containing the Bar compartment. Likewise, the program can't read Top Secret data because it is running at the Secret level, and no Top Secret labels lead to Secret labels.
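The lattice relationship can be expressed as a simple "dominance" test: one label dominates another when its hierarchical level is at least as high and its compartment set contains all of the other label's compartments. Below is an illustrative sketch of that test applied to Cathy's Secret Ace process; the compartment names follow the article, while the code structure is an assumption for illustration only.

```python
# A sketch of label dominance in a lattice with compartments.
LEVELS = {"Unclassified": 0, "Secret": 2, "Top Secret": 3}

def dominates(level_a, comps_a, level_b, comps_b):
    """True if label A may read ('read down' to) data carrying label B."""
    return LEVELS[level_a] >= LEVELS[level_b] and set(comps_b) <= set(comps_a)

# Cathy's process runs at Secret Ace.
print(dominates("Secret", {"Ace"}, "Secret", set()))           # True:  plain Secret
print(dominates("Secret", {"Ace"}, "Unclassified", set()))     # True:  Unclassified
print(dominates("Secret", {"Ace"}, "Secret", {"Bar"}))         # False: missing Bar
print(dominates("Secret", {"Ace"}, "Secret", {"Ace", "Bar"}))  # False: missing Bar
print(dominates("Secret", {"Ace"}, "Top Secret", {"Ace"}))     # False: level too low
```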
A high security clearance like Cathy's shows that a particular organization is willing to trust Cathy with certain types of classified information. It is not a blank check that grants access to every resource on a computer system. MLS access rules always work in conjunction with the system's other access rules.
Systems that enforce MLS access rules always combine them with conventional, user-controlled access permissions. If a Secret or Confidential user blocks access to a file by other users, then a Top Secret user can't read the file either. The "need to know" rule means that classified information should only be shared among individuals who genuinely need the information. Individual users are supposed to keep classified information protected from arbitrary browsing by other users. Higher security clearances do not grant permission to arbitrarily browse: access is still restricted by the need to know requirement.
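The combination of rules can be sketched very simply: access is granted only if both the mandatory MLS check and the discretionary "need to know" check allow it. The rule comes from the article; the data structures below are invented for illustration.

```python
# Mandatory MLS check AND discretionary access-list check, combined.
LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

def may_read(user_level, user_name, file_level, file_acl):
    mls_ok = LEVELS[user_level] >= LEVELS[file_level]  # mandatory: no read up
    dac_ok = user_name in file_acl                     # discretionary: need to know
    return mls_ok and dac_ok

# A Top Secret user still can't read a Secret file unless its owner granted access.
print(may_read("Top Secret", "cathy", "Secret", {"janet", "bob"}))   # False
print(may_read("Top Secret", "cathy", "Secret", {"cathy", "bob"}))   # True
```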
Users who have Top Secret and higher clearances don't automatically acquire administrative or "super user" status on multilevel computer systems, even if they are cleared for everything on the computer. In a way, the Top Secret security level actually restricts what the user can do: a program running at Top Secret can't install an unclassified application program, for example. Many administrative tasks, like installing application programs and other shared resources, must take place at the unclassified level. If an administrator installs programs while running at the Top Secret level, then the programs will be installed with Top Secret labels. Users with lower clearances wouldn't be authorized to see the programs.
Despite strong support from the military community and a strong effort by computing vendors and computer security researchers, MLS mechanisms failed to provide the security and functionality required by the defense community. First, security researchers and MLS system developers found it to be extremely difficult, and perhaps impossible, to completely prevent information flow between different security levels in an MLS system. We will explore this problem further in the next section on "Assurance." A second problem was the virus threat: enforcing MLS information flow does nothing to prevent a virus introduced at a lower clearance level from propagating into higher clearance levels. Finally, the end user community found a number of cases in which the Bell-LaPadula model of information flow did not entirely satisfy their operational and security needs.
Self-replicating software like computer viruses became a minor phenomenon among the earliest home computer users in the late 1970s. While viruses hadn't yet attracted much attention from MLS researchers and developers, MLS systems caught the interest of the pioneering virus researcher Fred Cohen (1990, 1994). In 1984 he demonstrated that a virus inserted at the unclassified level of a system that implemented the Bell-LaPadula model could rapidly spread throughout all security levels of the system. This particular infestation did not reflect a bug in the MLS implementation. Instead, it indicated a flaw in the Bell-LaPadula model, which strives to allow information flows from low to high while preventing flows from high to low. Viruses represent a security threat that exploits an information flow from low to high, so MLS protection based on Bell-LaPadula gives no protection against it.
Viruses represented one case in which the Bell-LaPadula model did not meet the end users' operational and security needs. Additional cases emerged as end users gained experience with MLS systems. One problem was that the systems tended to collect a lot of "overclassified" information. Whenever a user created a document at a high security level, the document would have to retain that security level even if the user removed all sensitive information in order to create a less-classified or even unclassified document. In essence, end users often needed a mechanism to "downgrade" information so its label reflected its lowered sensitivity.
The downgrading problem became especially important as end users sought to develop "sensor to shooter" systems. These systems would use highly classified intelligence data to produce tactical commands to be sent to combat units whose radios received information at the Secret level or lower (see the later section on "Sensor to Shooter"). In practice, systems would address the downgrading problem by installing privileged programs that bypassed the MLS mechanism to downgrade information. While this served as a convenient patch to correct the problem, it also showed that practical systems did not entirely rely on the Bell-LaPadula mechanisms that had cost so much to build and validate. This further eroded the defense community's interest in MLS based on Bell-LaPadula products.
Members of the defense community identified the need for MLS-capable systems in the 1960s, and a few vendors implemented the basic features (Weissman 1969, Hoffman 1973, Karger and Schell 1974). However, government studies of the MLS problem emphasized the danger of relying on large, opaque operating systems to protect really valuable secrets (Ware 1970, Anderson 1972). Operating systems were already notorious for unreliability, and these reports highlighted the threat of a software bug allowing leaks of highly sensitive information. The recommended solution was to achieve high assurance through extensive analysis, review, and testing.
High assurance would clearly increase vendors' development costs and lead to higher product costs. This did not deter the US defense community, which foresaw long-term cost savings. Karger and Schell (1974) repeated an assertion that MLS capabilities could save the US Air Force alone $100,000,000 a year, based on computing costs at that time.
Every MLS device poses a fundamental question: does it really enforce MLS, or does it leak information somehow? The first MLS challenge is to develop a way to answer that question. We can decompose the problem into two more questions: first, how do we state precisely what it means for a system to enforce MLS, and second, how do we evaluate whether a particular system actually enforces it?
The first question was answered by the development of security models, like the Bell-LaPadula model summarized earlier. A true security model provides a formal, mathematical representation of MLS information flow restrictions. The formal model makes the enforcement problem clear to non-programmers. It also makes the operating requirement clear to the programmers who implemented the MLS mechanisms.
To address the evaluation question, designers needed a way to prove that the system's MLS controls indeed work correctly. By the late 1960s, this had become a really serious challenge. Software systems had become much too large for anyone to review and validate: Brooks (1975) reported that IBM had over a thousand people working on its groundbreaking operating system, OS/360, and in The Mythical Man-Month he described the difficulties of building such a large-scale software system. Project size wasn't the only challenge in building reliable and secure software: smaller teams, like the team responsible for the Multics security mechanisms, could not detect and close every vulnerability (Karger and Schell, 1974).
The security community developed two sets of strategies for evaluating MLS systems: strategies for designing a reliable MLS system and strategies to prove the MLS system works correctly. The design strategies emphasized a special structure to ensure uniform enforcement of data access rules, called the reference monitor. The design strategies further required that the designers explicitly identify all system components that played a role in enforcing MLS; those components were defined as being part of the trusted computing base, which included all components that required high assurance.
The strategies for proving correctness relied heavily on formal design specifications and on techniques to analyze those designs. Some of these strategies were a reaction to ongoing quality control problems in the software industry, but others were developed as an attempt to detect covert channels, a largely unresolved weakness in MLS systems.
During the early 1970s, the US Air Force commissioned a study to develop feasible strategies for constructing and verifying MLS systems. The study pulled together significant findings by security researchers at that time into a report, called the Anderson report (1972), which heavily influenced subsequent US government support of MLS systems. A later study (Nibaldi 1979) identified the most promising strategies for trusted system development and proposed a set of criteria for evaluating such systems.
These proposals led to published criteria for developing and evaluating MLS systems called the Trusted Computer System Evaluation Criteria (TCSEC), or "Orange Book" (Department of Defense, 1985a). The US government established a process by which computer system vendors could submit their products for security evaluation. A government organization, the National Computer Security Center (NCSC), evaluated products against the TCSEC and rated the products according to their capabilities and trustworthiness. For a product to achieve the highest rating for trustworthiness, the NCSC needed to verify the correctness of the product's design.
To make design verification feasible, the Anderson report recommended (and the TCSEC required) that MLS systems enforce security through a "reference validation mechanism" that today we call the reference monitor. The reference monitor is the central point that enforces all access permissions. Specifically, a reference monitor must have three features: it must mediate every access to protected data, it must be protected from tampering, and it must be small and simple enough to be thoroughly analyzed and tested.
Operating system designers had by that time recognized the concept of an operating system kernel: a portion of the system that made unrestricted accesses to the computer's resources so that other components didn't need unrestricted access. Many designers believed that a good kernel should be small for the same reason as a reference monitor: it's easier to build confidence in a small software component than in a large one. This led to the concept of a security kernel: an operating system kernel that incorporated a reference monitor. Layered atop the security kernel would be supporting processes and utility programs to serve the system's users and the administrators. Some non-kernel software would require privileged access to system resources, but none would bypass the security kernel. The combination of the computer hardware, the security kernel, and its privileged components made up the trusted computing base (TCB) - the system components responsible for enforcing MLS restrictions. The TCB was the focus of assurance efforts: if it worked correctly, then the system would correctly enforce the MLS restrictions.
The computer industry has always relied primarily on system testing for quality assurance. However, the Anderson report recognized the shortcomings of testing by repeating Dijkstra's observation that tests can only prove the presence of bugs, not their absence. To improve assurance, the report made specific recommendations about how MLS systems should be designed, built, and tested. These recommendations became requirements in the TCSEC, particularly for products intended for the most critical applications:
These additional tasks did not replace conventional product development techniques. Instead, they were combined with the accepted "best practices" used in conventional computer system development. These practices tended to follow a "waterfall" process (Boehm, 1981; Department of Defense, 1985b): first, the builders develop a requirements specification, from that they develop the top-down design, then they implement the product, and finally they test the product against the requirements. In the idealized process for developing an MLS product, the requirements specification focuses on testable functions and measurable performance capabilities, while the policy model captures security requirements that can't be tested directly. Figure 6 shows how these elements worked together to validate an MLS product's correct operation.
Product development has always been expensive. Many development organizations, especially smaller ones, try to save time and money by skipping the planning and design steps of the waterfall process. The TCSEC did not demand the waterfall process, but its requirements for highly assured systems imposed significant costs on development organizations. Both the Nibaldi study and the TCSEC recognized that not all product developers could afford to achieve the highest levels of assurance. Instead, the evaluation process identified a range of assurance levels that a product could achieve. Products intended for less-critical activities could spend less money on their development process and achieve a lower standard of assurance. Products intended for the most critical applications, however, were expected to meet the highest practical assurance standard.
Shortly after the Anderson report appeared, Lampson (1973) published a note which examined the general problem of keeping information in one program secret from another, a problem at the root of MLS enforcement. Lampson noted that computer systems contain a variety of channels by which two processes might exchange data. In addition to explicit channels like the file system or interprocess communications services, there are covert channels that can also carry data between processes. These channels typically exploit operating system resources shared among all processes. For example, when one process takes exclusive control of a file, it prevents other processes from accessing the file; when one process uses up all the free space on the hard drive, other processes can "see" this activity.
Since MLS systems could not achieve their fundamental objective (to protect secrets) if covert channels were present, defense security experts developed techniques to detect such channels. The TCSEC required a covert channel analysis of all MLS systems except those achieving the lowest assurance levels.
In general, there are two categories of covert channels: storage channels and timing channels. A storage channel transmits data from a "high" process to a "low" one by writing data to a storage location visible to the "low" one. For example, if a Secret process can see how much memory is left after a Top Secret process allocates some memory, the Top Secret process can send a numeric message by allocating or freeing an amount of memory equal to the message's numeric value. The covert channel works by having the "high" process set the contents of a storage location (the size of free memory) to a value that the "low" process can read.
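The free-memory example can be sketched in a few lines. The shared counter below stands in for the operating system's free-memory statistic; everything here is hypothetical and only illustrates how a number crosses security levels without any explicit message.

```python
# Illustrative storage channel: the "high" process encodes a number in how much
# memory it allocates; the "low" process recovers it from the visible free-memory figure.

TOTAL_MEMORY = 1_000_000
allocated = 0  # shared state visible to all processes

def high_send(secret_number: int) -> None:
    """Top Secret process: allocate exactly `secret_number` units."""
    global allocated
    allocated = secret_number

def low_receive() -> int:
    """Secret process: infer the number from the amount of free memory."""
    free = TOTAL_MEMORY - allocated
    return TOTAL_MEMORY - free

high_send(42)          # the "high" side signals the value 42
print(low_receive())   # the "low" side reads 42 without any explicit message
```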
A timing channel is one in which the "high" process communicates with the "low" one by varying the timing of some detectable event. For example, the Top Secret process might instruct the hard drive to visit particular disk blocks. When the Secret process tries to use the hard drive itself, the Top Secret process's disk activity causes varying delays in the Secret program's own accesses. The Top Secret program can systematically impose delays on the Secret program's disk activities, and thus transmit information through the pattern of those delays. Wray (1991) describes a covert channel based on hard drive access speed, and also uses the example to show how ambiguous the two covert channel categories can be.
The fundamental strategy for seeking covert channels is to inspect all shared resources in the system, decide if any could yield an effective covert channel, and measure the bandwidth of whatever covert channels are uncovered. While a casual inspection by a trained analyst may often uncover covert channels, there is no guarantee that a casual inspection will find all such channels. Systematic techniques help increase confidence that the search has been comprehensive. An early technique, the shared resource matrix (Kemmerer, 1983; 2002), can analyze a system from either a formal or informal specification. While the technique can detect covert storage channels, it cannot detect covert timing channels. An alternative approach, noninterference, requires formal policy and design specifications (Haigh and Young, 1987). This technique locates both timing and storage channels by proving theorems to show that processes in the system, as described in the design specification, can't perform detectable ("interfering") actions that are visible to other processes in violation of MLS restrictions.
To be effective at locating covert channels, the design specification must accurately model all resource sharing that is visible to user processes in the system. Typically, the specification focuses its attention on the system functions made available to user processes: system calls to manipulate files, allocate memory, communicate with other processes, and so on. The development program for the LOCK system (Saydjari, Beckman, and Leaman, 1989; Saydjari, 2002), for example, included the development of a formal design specification to support a covert channel analysis. The LOCK design specification identified all system calls, described all inputs and outputs produced by these calls, including error results, and represented the internal mechanisms necessary to support those capabilities. The LOCK team used a form of noninterference to develop proofs that the system enforced MLS correctly (Fine, 1994).
As with any flaw detection technique, there is no way to confirm that all flaws have been found. Techniques that analyze the formal specification will detect all flaws in that specification, but there is no way to conclusively prove that the actual system implements the specification perfectly. Techniques based on less-formal design descriptions are also limited by the quality of those descriptions: if the description omits a feature, there's no way to know if that feature opens a covert channel. At some point there must be a trade-off between the effort spent on searching for covert channels and the effort spent searching for other system flaws.
In practice, system developers have found it almost impossible to eliminate all covert channels. While evaluation criteria encourage developers to eliminate as many covert channels as possible, the criteria also recognize that practical systems will probably include some channels. Instead of eliminating the channels, developers must identify them, measure their possible bandwidth, and provide strategies to reduce their potential for damage. While not all security experts agree that covert channels are inevitable (Proctor and Neumann, 1992), typical MLS products contain covert channels. Thus, even the approved MLS products contain known weaknesses.
How does assurance fit into the process of actually deploying a system? In theory, one can plug a computer in and throw the switch without knowing anything about its reliability. In the defense community, however, a responsible officer must approve all critical systems before they can go into operation, especially if they handle classified information. Approval rarely occurs unless the officer receives appropriate assurance that the system will operate correctly. In the US defense community, this approval process has three major elements: certification, accreditation, and product evaluation.
In military environments, a highly-ranked officer, typically an admiral or general, must formally grant approval (accreditation) before a critical system goes into operation. Accreditation shows that the officer believes the system is safe to operate, or at least that the system's risks are outweighed by its benefits. The decision is based on the results of the system's certification: a process in which technical experts analyze and test the system to verify that it meets its security and safety requirements. The certification and accreditation process must meet certain standards (Department of Defense, 1997). Under rare, emergency conditions an officer could accredit a system even if there are problems with the certification.
Certification can be very expensive, especially for MLS systems. Tests and analyses must show that the system is not going to fail in a way that will leak classified information or interfere with the organization's mission. Tests must also show that all security mechanisms and procedures work as specified in the requirements. Certification of a custom-built system often involves design reviews and source code inspections. This work requires a lot of effort and special skills, leading to very high costs.
The product evaluation process heralded by the TCSEC was intended to provide off-the-shelf computing equipment that reliably enforced MLS restrictions. Although organizations could implement, certify, and accredit custom systems enforcing MLS, the certification costs were hard to predict and could overwhelm the project budget. If system developers could use off-the-shelf MLS products, their certification costs and project risks would be far lower. Certifiers could rely on the security features verified during evaluation, instead of having to verify a product's implementation themselves.
Product evaluations assess two major aspects: functionality and assurance. A successful evaluation indicates that the product contains the appropriate functional features and meets the specified level of assurance. The TCSEC defined a range of evaluation levels to reflect increasing levels of compliance with both functional and assurance requirements. Each higher evaluation level either incorporated the requirements of the next lower level or superseded particular requirements with a stronger requirement. Alphanumeric codes indicated each level, with D being lowest and A1 being highest: D (minimal protection), C1 and C2 (discretionary protection), B1, B2, and B3 (mandatory, labeled protection with increasing assurance), and A1 (verified design).
Although the TCSEC defined a whole range of evaluation levels, the government wanted to encourage vendors to develop systems that met the highest levels. In fact, one of the pioneering evaluated products was SCOMP, an A1 system constructed by Honeywell (Fraim, 1983). Very few other vendors pursued an A1 evaluation. High assurance caused high product development costs; one project estimated that the high assurance tasks added 26% to the development effort's labor hours (Smith, 2001). In the fast-paced world of computer product development, that extra effort can cause delays that make the difference between a product's success or failure.
To date, no commercial computer vendor has offered a genuine "off the shelf" MLS product. A handful of vendors have implemented MLS operating systems, but none of these were standard product offerings. All MLS products were expensive, special-purpose systems marketed almost exclusively to military and government customers. Almost all MLS products were evaluated to the B1 level, meeting minimum assurance standards. Thus, the TCSEC program failed on two levels: it failed to persuade vendors to incorporate MLS features into their standard products, and it failed to persuade more than a handful of vendors to produce products that met the "A1" requirements for high assurance.
A survey of security product evaluations completed by the end of 1999 (Smith, 2000) noted that only a fraction of security products ever pursued evaluation. Most products pursued "medium assurance" evaluations, which could be sufficient for a minimal (B1) MLS implementation.
TCSEC evaluations were discontinued in 2000. The handful of modern MLS products are evaluated under the Common Criteria (Common Criteria Project Sponsoring Organizations, 1999), a set of evaluation criteria designed to address a broader range of security products.
The most visible failure of MLS technology is its absence from typical desktops. As Microsoft's Windows operating systems came to dominate the desktop in the 1990s, Microsoft made no significant move to implement MLS technology. Versions of Windows have earned a TCSEC C2 evaluation and a more-stringent EAL-4 evaluation under the Common Criteria, but Windows has never incorporated MLS. The closest Microsoft has come to offering MLS technology has been its "Palladium" effort announced in 2002. The technology focused on the problem of digital rights management - restricting the distribution of copyrighted music and video - but the underlying mechanisms caught the interest of many in the MLS community because of potential MLS applications. The technology was slated for incorporation in a future Windows release codenamed "Longhorn," but was dropped from Microsoft's plans in 2004 (Orlowski, 2004).
Arguably several factors have contributed to the failure of the MLS product space. Microsoft demonstrated clearly that there was a giant market for products that omit MLS. Falling computer prices also played a role: sites where users typically work at a couple of different security levels find it cheaper to put two computers on every desktop than to try to deploy MLS products. Finally, the sheer cost and uncertainty of MLS product development undoubtedly discourage many vendors. It is hard to justify the effort to develop a "highly secure" system when it's likely that the system will still have identifiable weaknesses, like covert channels, after all the costly, specialized work is done.
As computer costs fell and performance soared during the 1980s and 1990s, computer networks became essential for sharing work and resources. Long before computers were routinely wired to the Internet, sites were building local area networks to share printers and files. In the defense community, multilevel data sharing had to be addressed in a networking environment. Initially, the community embraced networks of cheap computers as a way to temporarily sidestep the MLS problem. Instead of tackling the problem of data sharing, many organizations simply deployed separate networks to operate at different security levels, each running in system high mode.
This approach did not help the intelligence community. Many projects and departments needed to process information carrying a variety of compartments and code words. It simply wasn't practical to provide individual networks for every possible combination of compartments and code words, since there were so many to handle. Furthermore, intelligence analysts often spent their time combining information from different compartments to produce a document with a different classification. In practice, this work demanded an MLS desktop and often required communications over an MLS network.
Thus, MLS networking took two different paths in the 1990s. Organizations in the intelligence community continued to pursue MLS products. This reflected the needs of intelligence analysts. In networking, this called for labeled networks, that is, networks that carried classification labels on their traffic to ensure that MLS restrictions were enforced.
Many other military organizations, however, took a different path. Computers in most military organizations tended to cluster into networks handling data up to a specified security level, operating in system high mode. This choice was not driven by an architectural vision; it was more likely the effect of the desktop networking architecture emerging in the commercial marketplace combined with existing military computer security policies. Ultimately, this strategy was named multiple single levels (MSL) or multiple independent levels of security (MILS).
The fundamental objective of a labeled network is to prevent leakage of classified information. The leakage could occur through eavesdropping on the network infrastructure or by delivering data to an uncleared destination. This yielded two different approaches to labeled networking. The more complex approach used cryptography to keep different security levels separate and to prevent eavesdropping. The simpler approach inserted security labels into network traffic and relied on a reference monitor mechanism installed in network interfaces to restrict message delivery.
In practice, the cryptographic hardware and key management processes have often been too expensive to use in certain large scale MLS network applications. Instead, sites have relied on physical security to protect their MLS networks from eavesdropping. This has been particularly true in the intelligence community, where the proliferation of compartments and codewords has made it impractical to use cryptography to keep security levels separate.
Within such sites, the network infrastructure is physically isolated from any contact except by people with Top Secret clearances supported by special background investigations. Network wires are protected from tampering, though not from sophisticated attacks that might tempt uncleared outsiders. MLS access restrictions rely on security labels embedded in network messages. If cryptography is used at all, its primary purpose is to protect the integrity of security labels.
Standard traffic using the Internet Protocol (IP) does not include security labels, but the Internet community developed standards for such labels, beginning with the IP Security Option (IPSO) (St. Johns, 1988). The US Defense Intelligence Agency developed this further when implementing a protocol for the DOD Intelligence Information System (DODIIS). The protocol, called DODIIS Network Security for Information Exchange (DNSIX), specified both the labeling and the checking process to be used when passing traffic through a DNSIX network interface (LaPadula, LeMoine, Vukelich, and Woodward, 1990). To increase the assurance of the resulting system, the specification included a design description for the checking process; the design had been verified against a security model that described the required MLS enforcement.
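The essential check performed by a labeled network interface can be sketched as follows. This is only an illustration of the general idea (deliver a datagram only if the destination is cleared for its label); it does not reproduce the actual IPSO or DNSIX message formats or checking rules.

```python
# Illustrative check at a labeled network interface: deliver only if the
# receiving interface's accreditation level dominates the packet's label.

LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

def deliver(packet_label: str, interface_max_level: str) -> bool:
    """Deliver only if the interface is cleared for the packet's label."""
    return LEVELS[interface_max_level] >= LEVELS[packet_label]

print(deliver("Secret", "Top Secret"))   # True:  flow from lower to higher is allowed
print(deliver("Top Secret", "Secret"))   # False: delivery would leak Top Secret data
```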
In the US, MLS cryptographic techniques were exclusively the domain of the National Security Agency (NSA), since it set the standards for encrypting classified information. Traditional NSA protocols encrypted traffic at the link level, carrying the traffic without security labels. During the 1980s and 1990s the NSA started a series of programs to develop cryptographic protocols for handling labeled, multilevel data, including the Secure Data Network System (SDNS), the Multilevel Network System Security Program (MNSSP), and the Multilevel Information System Security Initiative (MISSI).
These programs yielded Security Protocol 3 (SP3) and the Message Security Protocol (MSP). SP3 protects messages at the network protocol layer (layer 3) and has been used in gateway encryption devices for the DOD's Secret IP Router Network (SIPRNET), which shares classified information at the Secret level among approved military and defense organizations. However, SP3 is a relatively old protocol and will probably be superseded by a variant of the IP Security Protocol (IPSEC) that has been adapted for the defense community, called the High Assurance IP Interface Specification (HAIPIS). MSP protects messages at the application level and was originally designed to encrypt e-mail. The Defense Message System, the DOD's evolving secure e-mail system, uses MSP.
Obviously an MLS network uses encryption to protect traffic against eavesdropping. In addition, MLS protocols can use cryptography to enforce MLS access restrictions. The FIREFLY protocol illustrates this. Developed by the NSA for multilevel telephone and networking protocols, FIREFLY uses public-key cryptography to negotiate encryption keys to protect traffic between two entities. Each FIREFLY certificate contains the clearance level of its owner, and the protocol compares the levels when negotiating keys. If the two entities are using secure telephones, then the protocol yields a key whose clearance level matches the lower of the two entities. If the two entities are establishing a computer network connection, then the negotiation succeeds only if the clearance levels match.
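The level comparison described above can be sketched briefly. The cryptographic exchange itself is omitted; the code only illustrates the rule the article attributes to FIREFLY, and the function and mode names are assumptions for illustration.

```python
# Sketch of the FIREFLY-style level comparison during key negotiation.
LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

def negotiate(level_a: str, level_b: str, mode: str):
    if mode == "telephone":
        # secure telephones settle on the lower of the two clearance levels
        return min(level_a, level_b, key=LEVELS.get)
    if mode == "network":
        # a network connection is established only if the levels match
        return level_a if level_a == level_b else None
    raise ValueError("unknown mode")

print(negotiate("Top Secret", "Secret", "telephone"))  # Secret
print(negotiate("Top Secret", "Secret", "network"))    # None (negotiation fails)
print(negotiate("Secret", "Secret", "network"))        # Secret
```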
Despite the shortage of MLS products, the defense and intelligence communities dramatically expanded their use of computer systems during the 1980s and 1990s. Instead of implementing MLS systems, most organizations chose to deploy multiple computer networks, each dedicated to a security level they needed. This eliminated the risks of multiuser data sharing at multiple security levels by eliminating the sharing itself. When necessary, less-classified data was copied one-way onto servers on a higher-classified network from a removable disk or tape volume.
To simplify data sharing in the MILS environment, many organizations have implemented devices to transfer data from networks at one security level to networks at other levels. These devices generally fall into three categories: multilevel servers, one-way guards, and downgrading guards.
In a multilevel server, computers on a network at a lower security level can store information on a server, and computers on networks at higher levels can visit the same server and retrieve that information. Vendors provide a variety of multilevel servers, including web servers, database servers, and file servers. While such systems are popular with some defense organizations, others avoid them. Most server products achieve relatively low levels of assurance, which suggests that attackers might find ways to leak information through them from a higher network to a lower one.
One-way guards implement a one-way data transfer from a network at a lower security level to a network at a higher level. The simplest implementations rely on hardware restrictions to guarantee that traffic flows in only one direction. For example, conventional fiber optic network equipment supports bidirectional traffic, but it's not difficult to construct fiber optic hardware that only contains a transmitter on one end and a receiver on the other. Such a device can transfer data in one direction with no risk of leaking data in the other. An obvious shortcoming is that there is no efficient way to prevent congestion since the low side has no way of knowing when or if its messages have been received. More sophisticated devices like the NRL Pump (Kang, Moskowitz, and Lee, 1996) avoid this problem by implementing acknowledgements using trusted software. However, devices like the Pump can suffer from the same shortcoming as MLS servers: there are very few trustworthy operating systems on which to implement trusted MLS software, and most achieve relatively low assurance. The trustworthiness of the Pump will often be limited by the assurance of the underlying operating system.
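The pump idea can be sketched with a small buffer: the low side hands a message to a trusted buffer and receives an acknowledgement from the buffer itself, never from the high side. This is a simplified illustration only; it ignores the timing-channel mitigation that a real device such as the NRL Pump must provide, and the class and method names are invented.

```python
from collections import deque

class OneWayPump:
    """Simplified one-way transfer buffer between a low network and a high network."""

    def __init__(self, capacity: int = 8):
        self.buffer = deque()
        self.capacity = capacity

    def low_send(self, message: str) -> bool:
        """Low side: the only feedback is an ack (or back-pressure) from the buffer."""
        if len(self.buffer) >= self.capacity:
            return False
        self.buffer.append(message)
        return True

    def high_receive(self):
        """High side: drains messages; nothing flows back to the low side."""
        return self.buffer.popleft() if self.buffer else None

pump = OneWayPump()
print(pump.low_send("unclassified status report"))  # True (acknowledged by the buffer)
print(pump.high_receive())                          # delivered on the high network
```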
Downgrading guards are important because they address a troublesome side-effect of MILS computing: users often end up with overclassified information. A user on a Secret system may be working with both Confidential and Secret files, and it is simple to share those files with other users on Secret systems. However, he faces a problem if he needs to provide the Confidential file to a user on a Confidential system: how does he prevent Secret information from leaking when he tries to provide a clean copy of the Confidential file? There is no simple, reliable, and foolproof way to do this, especially when using commercial desktop computers.
The same problem often occurs in e-mail systems: a different user on a Top Secret network may wish to send an innocuous but important announcement to her colleagues on a Secret network. She knows that the recipients are authorized to receive the message's contents, but how does she ensure that no Top Secret information leaks out along with the e-mail message? The problem also appears in military databases: the database may contain information at a variety of security levels, but not all user communities will be able to handle data at the same level as the whole database. To be useful, the data must be sanitized and then released to users at lower classification levels. When a downgrading guard releases information from a higher security level to a lower one, the downgrading generally falls into one of three categories: manual review and release, automated release, and automated review.
The traditional technique was manual review and release. A site would train an operator to identify classified information that should not be released, and the operator would manually review all data passing through the guard. This strategy proved impractical because it has become very difficult to reliably scan files for sensitive information. Word processors like Microsoft Word tend to retain sensitive information even after the user has attempted to remove it from a file (Byers, 2004). Another problem is steganography: a subverted user on the high side of the guard, or a sophisticated piece of subverted software, can easily embed large data items in graphic images or other digital files so that a visual review won't detect their presence. In addition to the problem of reliable scanning, there is a human factors problem: few operators would remain effective in this job for very long. Military security officers tell of review operators falling into a mode in which they automatically approve everything without review, partly to maintain message throughput and partly out of boredom.
The automated release approach was used by the Standard Mail Guard (SMG) (Smith, 1994). The SMG accepted text e-mail messages that had been reviewed by the message's author, explicitly labeled for release, and digitally signed using the DMS protocol. The SMG would verify the digital signature and check the signer's identity against the list of users authorized to send e-mail through the guard. The SMG would also search the message for words associated with classified information that should not be released and block messages containing any such words. Authorized users could also transmit files through the guard by attaching them to e-mails. The attached files had to be reviewed and then "sealed" using a special application program: the SMG would verify the presence of the seal before releasing an attached file.
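As a rough illustration of the checks described above, the sketch below combines an authorized-sender list with a simple "dirty word" search. It is not the SMG's actual logic or configuration; the sender address and word list are placeholders, and a real guard verifies the digital signature cryptographically rather than receiving a flag.

AUTHORIZED_SENDERS = {"analyst@high.example.mil"}     # placeholder release list
DIRTY_WORDS = {"codeword", "umbra"}                   # placeholder review list

def release_ok(sender: str, signature_valid: bool, body: str) -> bool:
    if not signature_valid:                # the digital signature must verify
        return False
    if sender not in AUTHORIZED_SENDERS:   # the signer must be on the release list
        return False
    words = {w.strip(".,;:!?").lower() for w in body.split()}
    return not (words & DIRTY_WORDS)       # block any message containing a flagged word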
The automated review approach has been used by several guards since the 1980s, primarily to release database records from highly classified databases to networks at lower security levels. Many of these guards were designed to automatically review highly formatted force deployment data. The guards were configured with detailed rules on how to check the fields of the database records so that data was released at the correct security level. In some cases the guards were given instructions on how to sanitize certain database fields to remove highly classified data before releasing the records.
While guards and multilevel servers provide a clear benefit by enabling data sharing between system-high networks running at different clearance levels, they also pose problems. The most obvious problem is that they put all of the MLS eggs in one basket: the guard centralizes MLS protection in a single device that undoubtedly draws the interest of attackers. Downgrading guards pose a particular concern since there are many ways that a Trojan Horse on the "high" side of a guard may disguise sensitive information so that it passes successfully through the guard's downgrading filters. For example, the Trojan could embed classified information in an obviously unclassified image using steganography. Another problem is that the safest place to attach a label to data is at the data's point of origin: guards are less likely to label data correctly because they are removed from the data's point of origin (Saydjari, 2004). Guards that use an automated release mechanism may be somewhat less prone to this problem if the guard bases its decision on a cryptographically-protected label provided at the data's point of origin. However, this benefit can be offset by other risks if the guard or the labeling process is hosted on a low-assurance operating system.
During the 1991 Gulf War, the defense community came to appreciate the value of classified satellite images in planning attacks on enemy targets. The only complaint was that the imagery couldn't be delivered as fast as the tactical systems could take advantage of it (Federation of American Scientists, 1997). In the idealized state of the art "sensor to shooter" system, analysts and mission commanders select targets electronically from satellite images displayed on workstations, and they send the targeting information electronically to tactical units (see Figure 7). Clearly this involves at least one downgrading step, since tactical units probably won't be cleared to handle satellite intelligence. So far, no general-purpose strategy has emerged for handling automatic downgrading of this kind. In practice, downgrading mechanisms are approved for operation on a case-by-case basis.
The following is a list of nonmilitary applications that bear some similarities to the MLS problem. While this suggests that there may someday be a commercial market for MLS technology, a closer look suggests this is unlikely. As noted earlier, MLS systems address a level of risk that doesn't exist in business environments. Buyers of commercial systems do not want to spend the money required to assure correct MLS enforcement. This is illustrated by examining the following MLS-like business applications.
Despite the failures and frustrations that have dogged MLS product developments for the past quarter century, end users still call for MLS capabilities. This is because the problem remains: the defense community needs to share information at multiple security levels. Most of the community solves the problem by working on multilevel data in a system high environment and dealing with downgrading problems on a piecemeal basis. While this solves the problem in some situations, it isn't practical in others, like sensor to shooter applications.
The classic strategies intended to yield MLS products failed in several ways. First, the government's promotion of product evaluations failed when vendors found that MLS capabilities did not significantly increase product sales. The concept of deploying a provably secure system failed twice: first, when vendors found how expensive and uncertain evaluations could be, especially at the highest levels, and second, when security experts discovered how intractable the covert channel problem could be. Finally, the few MLS products that did make their way to market languished when end users realized how narrowly the products solved their security and sharing problems. The principal successes in MLS today are based on guard and trusted server products.
covert channel - in general, an unplanned communications channel within a computer system that allows violations of its security policy. In an MLS system, this is an information flow that violates MLS restrictions.
evaluation - the process of analyzing the security functions and assurance evidence of a product by an independent organization to verify that the functions operate as required and that sufficient assurance evidence has been provided to have confidence in those functions.
multiple independent levels of security (MILS) - a networking and desktop computing environment which assigns dedicated, system-high resources for processing classified information at different security levels. Users in a MILS environment may have two or more desktop computers, each dedicated to work at a particular security level.
security model - an unambiguous, often formal, statement of the system's rules for achieving its security objectives, such as protecting the confidentiality of classified information from access by uncleared or insufficiently cleared users.
Anderson, J.P. (1972). Computer Security Technology Planning Study Volume II, ESD-TR-73-51, Vol. II. Bedford, MA: Electronic Systems Division, Air Force Systems Command, Hanscom Field. Available at: http://csrc.nist.gov/publications/history/ande72.pdf (Date of access: August 1, 2004).
Bell, D.D. and L.J. La Padula (1974). Secure Computer System: Unified Exposition and Multics Interpretation, ESD-TR-75-306. Bedford, MA: ESD/AFSC, Hanscom AFB. Available at: http://csrc.nist.gov/publications/history/bell76.pdf (Date of access: August 1, 2004).
Byers, S (2004). Information leakage caused by hidden data in published documents. IEEE Security and Privacy 2 (2) 23-27. Available at: http://www.computer.org/security/v2n2/byers.htm (Date of access: October 1, 2004).
Cohen, F.C. (1990) Computer Viruses. Computer Security Encyclopedia. Available at: http://www.all.net/books/integ/encyclopedia.html (Date of access: February 20, 2005).
Common Criteria Project Sponsoring Organizations (1999). Common criteria for information technology security evaluation, version 2.1. Available at: http://csrc.nist.gov/cc/Documents/CC%20v2.1%20-%20HTML/CCCOVER.HTM (Date of access: October 1, 2004).
Department of Defense (1997). DOD Information Technology Security Certification and Accreditation, DOD Instruction 5200.40. Washington, DC: Department of Defense. Available at: http://www.dtic.mil/whs/directives/corres/pdf/i520040_123097/i520040p.pdf (Date of access: October 1, 2004).
Department of Defense (1985a). Trusted Computer System Evaluation Criteria (Orange Book), DOD 5200.28-STD. Washington, DC: Department of Defense. Available at: http://www.radium.ncsc.mil/tpep/library/rainbow/index.html#STD520028 (Date of access: October 1, 2004).
Federation of American Scientists (1997). Imagery Intelligence: FAS Space Policy Project - Desert Star. Available at: http://www.fas.org/spp/military/docops/operate/ds/images.htm (Date of access: August 1, 2004).
Karger, P.A. and R.R. Schell (1974). MULTICS Security Evaluation, Volume II: Vulnerability Analysis, ESD-TR-74-193, Vol. II. Bedford, MA: Electronic Systems Division, Air Force Systems Command, Hanscom Field. Available at http://csrc.nist.gov/publications/history/karg74.pdf (Date of access: August 1, 2004).
Nibaldi, G.H., (1979). Proposed Technical Evaluation Criteria for Trusted Computer Systems, M79-225. Bedford, MA: The Mitre Corporation. Available at: http://csrc.nist.gov/publications/history/niba79.pdf (Date of access: August 1, 2004).
Orlowski, A. (2004). MS Trusted Computing back to drawing board. The Register, May 6, 2004. Available at: http://www.theregister.co.uk/2004/05/06/microsoft_managed_code_rethink/ (Date of access: August 1, 2004).
Proctor, N.E., and Neumann, P.G. (1992). Architectural implications of covert channels. Proceedings of the Fifteenth National Computer Security Conference pp. 28-43. Available at: http://www.csl.sri.com/users/neumann/ncs92.html (Date of access: November 15, 2004).
St. Johns, M. (1988). Draft Revised IP Security Option, RFC 1038. Available at: http://www.ietf.org/rfc/rfc1038.txt (Date of access: October 1, 2004).
Saydjari, O.S. (2002). LOCK: an historical perspective. Proceedings of the 2002 Annual Computer Security Applications Conference. Available at: http://www.acsac.org/2002/papers/classic-lock.pdf (Date of access: November 15, 2004).
Smith, R.E. (2005). Observations on multi-level security. Web pages available at http://www.smat.us/crypto/mls/index.html (Date of access: October 31, 2005).
Smith, R.E. (2001). Cost profile of a highly assured, secure operating system. ACM Transactions on Information System Security 4 pp. 72-101. A draft version is available at http://www.smat.us/crypto/docs/Lock-eff-acm.pdf (Date of access: February 20, 2005).
Smith, R.E. (2000). Trends in government endorsed security product evaluations, Proceedings of the 23rd National Information Systems Security Conference. Available at: http://www.smat.us/crypto/evalhist/evaltrends.pdf. (Date of access: February 20, 2005).
Smith, R.E. (1994). Constructing a high assurance mail guard. Proceedings of the 17th National Computer Security Conference 247-253. Available at: http://www.smat.us/crypto/docs/mailguard.pdf (Date of access: February 20, 2005).
Ware, W.H. (1970) Security Controls for Computer Systems (U): Report of Defense Science Board Task Force on Computer Security. Santa Monica, CA: The RAND Corporation. Available at: http://csrc.nist.gov/publications/history/ware70.pdf (Date of access: August 1, 2004).
Weissman, C. (1969). Security controls in the ADEPT-50 time-sharing system. Proceedings of the 1969 Fall Joint Computer Conference. Reprinted in L.J. Hoffman (ed.), Security and Privacy in Computer Systems (pp. 216-243). Los Angeles: Melville Publishing Company, 1973.
Multilevel security (MLS) is an overloaded term that describes both an abstract security objective and a well-known mechanism that is supposed to achieve that objective, more or less.
Click here for a general introduction to MLS.
A system or device achieves the objective of being "multilevel secure" if it can handle information at a variety of sensitivity levels without disclosing information to an unauthorized person. In a perfect world, this yields two more specific objectives: the system enforces information flow restrictions so that data never reaches users who lack the appropriate clearance, and the system sanitizes data so that it can be safely released to users at lower levels.
The information flow problem is very hard to solve through automated access restrictions. The sanitization problem is almost impossible to solve through automated access restrictions.
A device enforces MLS information flow if it can't be induced (accidentally or intentionally) to release information to the wrong person. The U.S. government has a body of laws and regulations that make it illegal to share classified information with people who do not possess the appropriate security clearance. In theory, an "MLS device" will automatically enforce those restrictions. Some defense officials refer to MLS devices as felony boxes since they can automatically commit felonies if they malfunction.
While it might seem easy to implement such a thing just by setting up the right access restrictions on files, this approach isn't reliable enough for large-scale use. For one thing, it's hard to reliably establish and review the permission settings for hundreds or even thousands of files and expect to have them all set correctly at all times. Another problem is that a malicious user could leak enormous amounts of classified information with a few simple keystrokes. Moreover, an innocent user could be tricked into releasing comparable amounts of classified information if subjected to a virus or other malicious software.
The sanitization problem is made difficult by two things. First, it assumes that software applications don't hide data from their users, and that simply isn't true. A Microsoft Word file carries all sorts of information, including reams of text that its owner may have tried to remove. There are even circumstances where cut-and-paste may copy deleted information from one Word file to another. The second problem is that it requires a certain level of intellect to distinguish more-sensitive information from less-sensitive information. While classifying authorities may sometimes attempt to make it obvious which data is Top Secret, which is Secret, and which is unclassified, it is often impossible in practice to build an automated tool to identify sensitivity simply on the basis of the unmarked material.
In practice, the devices we generally recognize as implementing "MLS" are based on a hierarchical, lattice-based access control model developed for the Multics operating system in 1975. In this model, the system assigns security classification labels to processes it runs and to files and other system-level objects, particularly those containing data. The mechanism enforces two rules: a process may read a file only if the process's label is at or above the file's label ("no read up"), and a process may write a file only if the file's label is at or above the process's label ("no write down").
The mechanism is also referred to as mandatory access control because it is always enforced and users cannot disable or bypass it. One of the reasons why you can't reliably enforce MLS (the objective) with conventional access control mechanisms is that such mechanisms are usually under the complete control of a file's owner. Thus the owner can accidentally or intentionally violate the access control rules simply by changing the permissions on a sensitive file. The MLS mechanism is mandatory, which means that individual users can't really control it themselves. Once a file is labeled "top secret," the MLS mechanism won't permit the user to share it with merely "secret" users or to mingle its contents with that of "secret" files. Once "top secret" information is mixed with "secret" information, the aggregate becomes "top secret."
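The sketch below illustrates those two rules for a purely hierarchical set of labels, with no categories or compartments. It is a simplified illustration of the model, not the Multics or any product implementation.

LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top secret": 3}

def dominates(a: str, b: str) -> bool:
    # True if label a is at or above label b in the hierarchy
    return LEVELS[a] >= LEVELS[b]

def may_read(process_label: str, file_label: str) -> bool:
    # "no read up": the process must dominate the file it reads
    return dominates(process_label, file_label)

def may_write(process_label: str, file_label: str) -> bool:
    # "no write down": the file must dominate the process writing it
    return dominates(file_label, process_label)

assert may_read("top secret", "secret")        # reading down is allowed
assert not may_read("secret", "top secret")    # reading up is blocked
assert not may_write("top secret", "secret")   # writing down is blocked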
The original motivation for building MLS mechanisms was the perception that it would make applications development safe from serious security risks. The notion was that applications programmers would be saved from the bother of having to worry about security labels. Existing applications would be able to handle labeled data safely and correctly without reprogramming. Even more important, programmers would not be able to sneak data between security levels by writing clever programs: the MLS protection mechanism would prevent any attempt to violate the most important security constraints on the system.
Although this basic MLS mechanism may seem simple, it proved very difficult to implement effectively. Several systems were developed that enforced the security rules, but experimenters quickly found ways to bypass the mechanism and leak data from high classification levels to low ones. The techniques were often called covert channels. These channels used operating system resources to transmit data between higher classified processes and lower classified processes. For example, the higher process might systematically modify a file name or other file system resource (like the free space count for the disk or for main memory) that is visible to the lower process in order to transmit data.
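Here is a deliberately crude sketch of such a storage channel: a process at a higher level signals bits to a process at a lower level by creating or removing a file whose existence both can observe. The file path and timing are invented for the illustration, and the loose synchronization would need work in practice; real channels exploited subtler resources such as free-space counts.

import os, time

FLAG = "/tmp/covert_flag"    # any resource both processes can observe
SLOT = 0.5                   # seconds per transmitted bit

def send(bits):              # run by the higher-level process
    for b in bits:
        if b:
            open(FLAG, "w").close()     # file present encodes a 1
        elif os.path.exists(FLAG):
            os.remove(FLAG)             # file absent encodes a 0
        time.sleep(SLOT)

def receive(count):          # run by the lower-level process
    received = []
    for _ in range(count):
        received.append(1 if os.path.exists(FLAG) else 0)
        time.sleep(SLOT)
    return received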
Attempts to block covert channels led to very expensive operating system development efforts. The resulting systems were hard to use in practice and often had performance problems. While it's never been conclusively proven that you can't build a high performance system that avoids or blocks covert channels, there are no working examples.
The high cost of MLS systems was also driven by the high cost of security product evaluations performed by the US government.
Although MLS (the objective) remains a requirement for many military and government systems, the objective is not met by MLS (the mechanism). The fundamental problem is that the mechanism only allows data to flow "upwards" in terms of sensitivity. So, automatic software can easily take unclassified data, mix it with secret data to make more secret data, and then with top secret data to produce more top secret data. There are a few applications in the intelligence community where this is a very useful property.
However, this upward flow works against modern notions of "information dominance" and "sensor to shooter" information flow. The modern concept is that technical intelligence assets should identify targets, pass the information to mission planners, who assemble a mission, and pass the mission details to tactical assets, who in turn share details with support and maintenance assets. The problem is that technical intelligence, mission planning, tactical assets, and support assets tend to operate at decreasing security levels. The flow of information goes the exact opposite of what the MLS mechanism allows.
This article by Rick Smith is licensed under a Creative Commons Attribution 3.0 United States License.
The LOCK project (short for LOgical Coprocessing Kernel) developed a "trusted computing system" that implemented multilevel security. LOCK was intended to exceed the requirements for an "A1" system as defined by the old Trusted Computing System Evaluation Criteria (a.k.a. the TCSEC or "Orange Book").
The project was modestly successful in that we actually deployed a couple dozen systems in military command centers. A major design feature of LOCK was type enforcement, a fine-grained access control mechanism that could tie access restrictions to the programs being run.
The work was performed at Secure Computing Corporation in Minnesota.
LOCK technology, particularly type enforcement, lives on in two 'children' that still exist:
Here are links to papers about LOCK and the Standard Mail Guard (the deployed version of LOCK).
I wrote the following message as part of a discussion on the old Firewalls mailing list in 1996. The message was part of a discussion on the use of MLS technology to protect Internet servers from attack. The basic concepts still apply in some ways, though the threats have evolved in many other ways.
The message opens with a summary of my background in MLS (through 1996, anyway) motivated by some questions raised in the discussion. Then it explains why, based on classical MLS reasoning, you can't use MLS to enforce separation between different Internet servers that share a common network interface. The article ends by explaining how an MLS-based Internet server can enforce the separation once we change our assumptions about how MLS works. An unspoken assumption of this discussion is that a "firewall" might be a large scale device that hosts a variety of "secure" services, like e-mail and DNS. Only a handful of firewalls (notably Secure Computing's Sidewinder) do anything like this today.
Someday I'd like to reconstruct the entire discussion from my own archives and from copies on the Internet, since it was a particularly satisfying one. It led to a paper I presented at ACSAC later that year. Click here to see the paper (PDF).
FROM: Rick Smith
DATE: 01/31/1996 15:09:50
SUBJECT: Re: Mandatory protection (was: product selection)
I think we've covered most of the issues so far in the Type Enforcement (TE) versus Multilevel Security (MLS) discussion pretty well, but there are two remaining issues that need clearing up.
I don't think the unresolved topics arise from ignorance or a simple failure to communicate; we have a genuine and fully unintended culture clash.
The first is a matter of credibility. Since the relevance of anything else I say probably hinges on this, I'll start here:
"Does Rick Smith have a clue regarding MLS?"
There are several people at the National Computer Security Center and the MISSI Program Office that would be astonished by this question. Before moving to firewalls I was a key designer and the lead systems engineer on the SNS Mail Guard, one of the few MLS systems that comes close to being a turnkey device (I bring this up as evidence and not as a topic of Firewalls discussion - comment privately if you must). I've also done a variety of other MLS related analysis, design, and implementation tasks. So I do have some credentials.
But my background is entirely in high assurance MLS systems. Those are systems where MLS has only one meaning: obsessive protection of confidentiality in accordance with the Bell-LaPadula access control rules. Labels define barriers to information disclosure, and nothing in the platform architecture or services is permitted to compromise confidentiality. My statements on what MLS systems can and can't do are based on the implications of highly assured confidentiality, not on some "strawman" MLS notion nor on "misconfigured" MLS systems.
That's where the culture clash comes in. My colleagues in this discussion are using B1 MLS systems. These are systems where confidentiality protection is not pursued to such an extreme. This is *not* intended as a put-down, especially in the firewalls environment. Firewalls don't need obsessively strong confidentiality. They need integrity protection. That's why we put TE in Sidewinder and left out MLS -- we see MLS as a confidentiality mechanism and that's not what we needed. But if you're using MLS for mandatory protection and don't have an obsessively strong confidentiality objective, then the picture changes a bit.
Here's how this relates to the last open technical issue:
"Can MLS systems protect Internet servers from one another?"
I've always recognized that MLS systems can impose mandatory protection barriers between processes by using levels, categories, and compartments, but I still concluded "No." This is based on my view of high assurance MLS obsessed with confidentiality. The argument goes as follows:
I suspect our misunderstandings are tied to statement 3) above. On Sidewinder we can associate TCP/IP port numbers with separately labeled domains in the TE system. The only way you can get a similar result in an MLS system is to associate TCP/IP port numbers with MLS confidentiality labels. For example, the B1 system might define a category or compartment label for "Mail" and restrict Port 25 traffic to processes with the Mail label. If so, this changes how statement 3) is phrased, and completely changes the conclusion.
The problem is, you can't assign MLS labels that way if you're obsessed with confidentiality. I can think of three reasons immediately as to why not:
1) Port numbers aren't confidentiality labels and aren't intended to be. There's no reason to believe that any other system you're communicating with is going to keep traffic separate according to port numbers. This means you can't depend on confidentiality, the prime objective. This leads to the next reason:
2) Treating a port number like a label establishes an uncontrolled data channel between processes with different labels, independent of the label enforcement rules. This is because there's nothing in the TCP/IP operating concept to prevent one such process from opening a connection directly or indirectly to a process on the same system associated with a different port number, bypassing the MLS barriers between differently labeled processes.
3) You can almost certainly construct a nifty timing channel between two processes that have different labels/ports and share the same TCP/IP stack. So even if the rest of the network behaves, there are ways to circumvent the labels.
But the bottom line answer to the question, in the context of *firewalls* and the irrelevance to them of a high assurance obsession with confidentiality, appears to be "Yes, If."
IF the vendor puts in the trusted code to associate different port numbers with different MLS process labels, THEN their firewall *can* enforce mandatory MLS protection between Internet servers. It's not clear that a firewall is "misconfigured" if this degree of protection is omitted, but a thorough implementation really should include it. So, if you're buying an MLS based firewall, look for this feature.
smith at secure computing corporation
This article by Rick Smith is licensed under a Creative Commons Attribution 3.0 United States License.
I first encountered the term cross domain security (or cross domain systems, or cross domain solutions, or just CDS) at a workshop in the late 1990s. We were discussing the problem of how to share information with coalition forces even though different countries had different, treaty-based access to US defense information. Even worse, there were coalitions that contained countries who were not on the best of terms (like Japan and Korea).
These days the term has often replaced MLS in the defense community. Some argue that the term has changed in hopes that the community can lower the assurance requirements, thus putting information at risk. Time will tell if this is indeed true.
Starting in the 1980s, the US government established a program to evaluate the security of computer operating systems. Since then, other governments have established similar programs. In the late 1990s, major governments agreed to recognize a Common Criteria for security product evaluations. Since then, the number of evaluations has skyrocketed.
The following figure summarizes the number of government endorsed security evaluations completed every year since the first was completed in 1984. The different colors represent different evaluation criteria, with "CC" representing today's Common Criteria.
Starting in 1999, I have occasionally run projects where I have tracked down every report I could find of a security product evaluation. My first project led to some preliminary results.
At the last National Information Systems Security Conference (23rd NISSC) in October, 2000, I presented a paper (PDF) that surveyed the trends shown in the previous 16 years of formal computer security evaluations. I also produced a summary page of those results.
In 2006, I ran another survey that yielded the chart above and a paper (PDF) reviewing current trends. This was published in Information Systems Security (v. 16, n. 4) in 2007. The work was done with the help of several undergrads at the University of St. Thomas.
For additional insight, I'd suggest looking at Section 23.3.2 of Ross Anderson's book Security Engineering, which describes the process from the UK point of view. Ross isn't impressed with the way the process works in practice; while the process may be somewhat more stringent in the US, the US process simply produces different failure modes.
In the US, cryptographic products are certified under the FIPS 140 process, administered by the National Institute of Standards and Technology (NIST). Evaluation experts are quick to point out that the process and intentions are different between FIPS 140 and the Common Criteria. For the end user, they may appear to yield a similar result: a third-party assessment of a security device or product. In practice, different communities have different requirements. In the US and Canada, there is no substitute for an up-to-date FIPS 140 certification when we look at cryptographic products. Other countries or communities may acknowledge FIPS 140 certifications or they may require Common Criteria certifications.
In any case, security evaluations and certifications simply illustrate a form of due diligence. They do not guarantee the safety of a device or system. In 2009, for example, researchers found that many self-encrypting USB drives contained an identical, fatal security flaw. All of the drives had completed a FIPS 140 evaluation that did not highlight the failure.
This article by Rick Smith is licensed under a Creative Commons Attribution 3.0 United States License.
For a more recent view of this topic, visit my Security Evaluations page.
In 1999, I tracked down every report I could find of a product that had completed a published, formal security evaluation in accordance with trusted systems evaluation criteria. This led to some preliminary results. At the last National Information Systems Security Conference (23rd NISSC) in October, 2000, I presented a paper (PDF) that surveyed the trends shown in the previous 16 years of formal computer security evaluations.
I collected all of my data in an Excel 97/98 spreadsheet that contained an entry for every evaluation I could find through the end of 1999. At the moment the spreadsheet includes the reported evaluations by the United States (TCSEC/NCSC and Common Criteria), United Kingdom (ITSEC and Common Criteria), Australia, and whatever evaluations were reported from Canada, France, and Germany by the US, UK, and Australian sites. I am not convinced that this is every published evaluation that took place, but it's every report I could find.
For additional insight, I'd suggest looking at Section 23.3.2 of Ross Anderson's book Security Engineering, which describes the process from the UK point of view. Ross isn't impressed with the way the process works in practice; while the process may be somewhat more stringent in the US, the US process simply produces different failure modes.
I would be thrilled if anyone interested in a weird research project would use my spreadsheet as a starting point to further analyze the phenomenon of security evaluations. There are probably other facts to be gleaned from the existing data, or other information to be collected. As noted, I stopped collecting data at the end of the last century.
Whenever your browser establishes a “secure” connection to a web site, it encrypts the data. Traditionally, the browser and site use a stream cipher called Rivest Cipher #4 (RC4), although some sites use newer techniques.
Stream ciphers use a deceptively simple mechanism: you combine the plaintext data, bit by bit, with “key” bits, using the exclusive or operation. This is often abbreviated xor, and denoted by ⊕ - a circle with a cross.
A conventional stream cipher like RC4 consists of three parts: a secret shared ahead of time by the sender and recipient, an algorithm that expands that secret into a long, random-looking stream of bits, and the xor operation that combines the bit stream with the message.
RC4, for example, can use 128 bits of shared, secret data to generate a random-looking bit stream. This bit stream is then combined, bit by bit, with the message being sent.
When Alice sends a message to Bob, encryption happens as follows. Ahead of time, Alice and Bob share their secret. When Alice has a message to send to Bob, she uses the shared secret and the RC4 cipher to encrypt it. Upon receipt of the encrypted message, Bob uses the shared secret and the RC4 cipher to decrypt it. This is much more convenient than a one-time pad, which requires a separate shared secret equal to the size of every message sent.
The process for generating the bit stream is the heart of the technique, and usually referred to as the cryptographic algorithm. Even if an eavesdropper (call him Peeping Tom) happens to see part of this bit stream, he should not be able to predict other parts of the bit stream. Ideally, Tom would need a copy of the shared secret in order to recover the message. There should be no way to recover the message that's easier than trying to guess the 128-bit secret through trial and error.
This makes things much simpler for Alice and Bob. Before sending a message, they share a 128-bit secret. When Alice sends her message, she starts up the RC4 algorithm, feeds it her key, and encrypts her message, bit by bit, using xor. Upon receipt, Bob runs RC4, enters his copy of the shared secret, and gets back the same bit stream. He decrypts the message by applying xor to the message and the bit stream.
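The sketch below shows the whole pattern in Python: RC4's published key schedule and keystream generator, plus the xor step. It is a teaching sketch under the assumptions above, not production code; RC4 is no longer considered safe for new designs.

def rc4_keystream(key: bytes):
    # Key-scheduling algorithm: mix the secret key into a 256-entry table
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation: emit one keystream byte per step
    i = j = 0
    while True:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        yield S[(S[i] + S[j]) % 256]

def rc4_crypt(data: bytes, key: bytes) -> bytes:
    # xor each message byte with the next keystream byte; running the
    # same function again decrypts, because x ^ k ^ k == x
    ks = rc4_keystream(key)
    return bytes(b ^ next(ks) for b in data)

secret = b"sixteen byte key"                      # 128 bits of shared secret
ciphertext = rc4_crypt(b"Send Cash", secret)
assert rc4_crypt(ciphertext, secret) == b"Send Cash"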
The security of a stream cipher depends on the quality of the algorithm, but it also depends on proper use. In particular, neither Alice nor Bob should ever use that shared secret again to send a message. In fact, if Bob replies to Alice's message, he must use a different shared secret. If he uses the same shared secret, he will encrypt his message with the same bit stream that Alice used. Then Peeping Tom can retrieve both messages scrambled together, as shown here.
The exclusive or operation - a logical function applied to binary bits, like AND, OR, and NOT - is a fundamental encryption technique. It is often used in stream ciphers, which are widely used in web browsers when connecting to secure web servers.
When used properly, this technique provides strong protection. In fact, it is the basis for the one-time pad, the only provably uncrackable encryption. However, this protection is easily eroded if the cipher is not used correctly.
Xor is a trivial operation for computer logic to perform (click here for the details). The operation often appears as a built-in machine instruction so that software can perform it in a single machine operation.
If Alice wants to send a secret message to her friend Bob, she takes the sequence of bits in the message (the plain text) and a sequence of bits known only by her and Bob - the key. To encrypt, she combines the plain text and the key, bit by bit, using xor.
In a one-time pad, Alice and Bob must use a different set of secret, randomly generated bits for every message they exchange.
In a stream cipher, Alice and Bob share a much smaller number of secret bits and use them to generate a long, hard-to-guess sequence of bits. The stream cipher relies on a cryptographic algorithm to generate that long sequence from a small, shared secret. This generated sequence is then combined with the message using xor.
Below we have the handwritten message "Send Cash" embedded in a 128 by 128 bit image. Black indicates no color, so the black text in the image contains zero bits, and the white space contains one bits. For a key, we have collected a 128 by 128 matrix of random bits. In fact, the bits come from a web site, random.org, that uses radio noise to generate random data for experiments like this. We will combine the two matrices using xor:
Message "Send Cash"
When we apply xor bit-by-bit to the two matrices, we get the following 128 by 128 matrix of encrypted bits:
Encrypted "Send Cash"
Yes, this looks like nothing more than a mottled gray block, and it doesn't look a lot different from the gray-block image of the encryption key. If we look closely, the actual bits are different. Here is a closeup of the upper-left corner of the key bits and the encrypted bits. Most of the bits are identical in the two closeups. The bits in the lower-right closeup are different.
In the closeup, most of the image is the same in both the key and the ciphertext. The plaintext "Send Cash" message consists of white space (one bits) except where the black letters (zero bits) appear. The xor operation combines black bits in the key with white bits in the message to yield black bits in the encrypted message.
The closeup's lower right corner captures part of the letter "S" in the message. The black plaintext combines with black key bits to yield white. White key bits are also reversed by the black plaintext.
Even though the key image and the encrypted message look similarly gray, they contain different bits. That difference hides the encrypted message.
These example images are 128 by 128 bit maps in GIF format. If you have a program that can read GIF files, save them as bit maps, and apply the xor operation bit-by-bit, then you can easily repeat this operation. The examples here were processed by Matlab, a commercial package, but there are numerous other packages that can reproduce the example.
Download these images:
If you apply the xor operation to the first two bit maps, you produce the third. If you combine the key with the encrypted message (k ⊕ e), you will reproduce the original "Send Cash" message.
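If you don't have Matlab handy, the following Python sketch does the same thing using the Pillow imaging library. The file names are whatever you saved the downloads as; I've assumed key.gif and encrypted.gif for the inputs and recovered.gif for the output.

from PIL import Image

def load_bits(path):
    img = Image.open(path).convert("1")          # force 1-bit black/white
    return [1 if p else 0 for p in img.getdata()], img.size

def xor_images(path_a, path_b, out_path):
    a, size = load_bits(path_a)
    b, _ = load_bits(path_b)
    out = Image.new("1", size)
    out.putdata([255 * (x ^ y) for x, y in zip(a, b)])   # 255 is a white pixel
    out.save(out_path)

# key xor encrypted message reproduces the original "Send Cash" bitmap
xor_images("key.gif", "encrypted.gif", "recovered.gif")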
The following table shows how the xor operation transforms individual bits. Let m be a bit from the plain text message, and k be a bit from the key. The ⊕ column shows the resulting bit.

m  k  m ⊕ k
0  0    0
0  1    1
1  0    1
1  1    0
We can also describe the xor operation in terms of traditional logic operations AND, OR, and NOT. Here we use "C" programming language notation: NOT = !, AND = &, OR = |.
(!m & k) | (m & !k)
To decrypt a message, Bob takes his own copy of the key bits, and applies the same xor transformation to the message, bit by bit.
The one-time pad is the only encryption technique that has been mathematically proven to be uncrackable. While hard to use, it has often been the choice for highly sensitive traffic. Soviet spies used one-time pads in the 1940s and -50s. The Washington-Moscow "hot line" also uses one-time pads. However, the technique is hard to use correctly.
To use a one-time pad, Alice and Bob must produce a huge number of random bits and share them secretly. When Alice has a message to send to Bob, she retrieves a number of random bits equal to the length of her message, and uses them as the message’s key. She applies the exclusive or operation (xor) to the key and the message to produce the encrypted message.
The key must be exactly the same size as the message. The key must also consist of completely random bits that are kept secret from everyone except Alice and Bob.
When Bob receives the message, he retrieves the same bits from his copy of the random bit collection. He must retrieve the same random bits in exactly the same order that Alice used them. Then Bob uses the sequence of random bits to decrypt the message. He applies the xor operation to the message and the key to retrieve the plain text.
When properly used, it is mathematically impossible to crack a message encrypted by a one-time pad. This was first described by Claude Shannon in the 1940s as part of the development of information theory. A one-time pad is impossible to crack because knowledge of the cipher text does not reduce uncertainty about the contents of the original, plain text message.
One-time pads are not generally practical:
Web browsers and servers use conventional stream ciphers like RC4 instead of one-time pads because they are much easier to use and provide very strong, if not provably impenetrable, security.
When Soviet spies used one-time pads, they used a decimal number code instead of binary bits. In binary, the xor operation is essentially an "add without carry" in which we discard the overflow: in particular, 1 + 1 = 0. In a decimal code, add without carry just discards the overflow, as in 7 + 7 = 4, or 8 + 8 = 6. Decryption used the opposite "subtract without borrow" with the same set of digits used as the key.
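As a small worked example of that decimal arithmetic, the sketch below encrypts a row of message digits with a row of key digits using add without carry, then recovers them with subtract without borrow. The digits are invented for the illustration.

def add_mod10(message_digits, key_digits):
    # "add without carry": add each pair of digits and discard the overflow
    return [(m + k) % 10 for m, k in zip(message_digits, key_digits)]

def sub_mod10(cipher_digits, key_digits):
    # "subtract without borrow": the matching decryption step
    return [(c - k) % 10 for c, k in zip(cipher_digits, key_digits)]

msg = [3, 1, 4, 1, 5]          # coded message digits
key = [7, 7, 8, 8, 9]          # one-time key digits, used once and then destroyed
ct = add_mod10(msg, key)       # yields [0, 8, 2, 9, 4]
assert sub_mod10(ct, key) == msg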
The one-time pads were printed up in tiny books, and the spies would discard pages of numbers as they were used in messages. Marcus Ranum has a photo of such a book in his One-Time PAD FAQ.
There is a lot of confusion about one-time pads (see the article). Some vendors use the term simply because one-time pads are provably secure, and they hope that the name by itself will convey impenetrability to their product. Such products are called snake oil in the crypto community. Even worse, some people on the Net will try to explain one-time pads and get it wrong. The hardest thing for many people is the notion of entropy or true randomness. No, you don't get random numbers from Excel!
Here are some other cipher terms that often appear in conjunction with one-time pads:
Ranum, M. (1995) One-Time-Pad (Vernam's Cipher) Frequently Asked Questions. web site.
Shannon, C. (1949) Communication Theory of Secrecy Systems. Bell System Technical Journal 28 (4): 656–715.
It's amazing how subtle a one-time pad really is. On one level it's deceptively simple: you simply match up the text of your message with a collection of "random bits" you share with the recipient. To decrypt, the recipient matches up a copy of those "random bits" to retrieve the message.
The trick is in the definition of "random bits."
If all the characters come from a truly unpredictable source, then you have a one-time pad. And, if you really want to use a one-time pad, you must share as many random bits as you imagine you will ever need for messages. That's a lot of random bits!
No shortcuts are allowed. If you try to 'compress' the random bits, or 'reconstruct' them using an algorithm, then it's no longer a one-time pad. If you get them from any sort of structured source then, again, it's not a one-time pad.
Another essential feature: the collection of random bits must not be shared with anyone except the intended senders and recipients of the messages. If other people can find the set of random bits - for example, if it's based on a published text of some sort - then it's not secret enough for a true one-time pad. Someone might get away with using it for a while, but it's not a really secure approach.
Moreover, the bits must never be used for more than one message. One-time pads have been cracked many times in practice, usually because the random bits were used to encrypt more than one message.
Let me run over some inaccurate examples presented in various web sites as one-time pads. I'll skip the "snake oil" encryption products that inaccurately claim unbreakability by calling themselves one-time pads. There are enough bogus examples without them.
Several web pages claim that the spy in Ken Follet's novel The Key to Rebecca uses a one-time pad. The book describes how the spy used Daphne du Maurier's classic novel Rebecca as a codebook to encrypt his messages. One particularly mistaken web site claims that the codebook was "du Maurier's Rebecca of Sunnybrook Farm" (a book actually by author Kate Douglas Wiggin).
To be fair, that particular web site provides a fine description of the encryption process, even if the process is mislabeled. Each message uses the next page in the novel as its key. To encrypt a message, the spy would do an 'add without carry' of the characters in the message with corresponding characters taken from that page of the book. To decrypt, the recipient at the Wehrmacht would take the corresponding page of the novel and use the opposite 'subtract without borrow' operation to recover the plain text message.
However, this does not describe a one-time pad. This is simply a Vigenère cipher for which the key is taken from a book.
It is not a one-time pad for the simple reason that the key itself - the text of the novel Rebecca - is not random. The key consists of English prose text which itself has numerous patterns. The key will retain detectable patterns when combined with a plain text message in German.
If the book Rebecca consisted entirely of randomly generated characters, then it would come closer to being a one-time pad, though it would be far less entertaining as a mystery-romance novel.
A few web sites claim that you can use a well-known music CD, MP3 recording, or other media file as the random bits for a one-time pad.
It should be obvious what the problem is: if the track contains random noise, then it might be more appropriate for a one-time pad. Music and other entertainment media files, by definition, contain patterns: chords, refrains, voiced words, images, etc. If random noise were entertaining, we wouldn't have needed to actually broadcast radio signals in the previous century: people would have just listened to the static between stations.
The other problem with these examples is that they all use prepackaged data as the "random data." Even if the prepackaged data were boring collections of random characters, or audio/visual static, the packages would be available to third parties. Published data is, by definition, not secret. No matter how random it might be, we eliminate the theoretical secrecy of the one-time pad by using data that is available to people besides Alice and Bob.
Take a look at the following image. You should see two different 'messages' here.
This same mistake let American cryptanalysts decode thousands of Soviet spy messages in the 1940s and -50s. The decoded messages helped uncover espionage at the Manhattan Project. The Soviets made the mistake of reusing the keys for their one-time pads.
The mistake has also cropped up with stream ciphers used on computer networks. If you use the same stream of bits to encrypt two or more different messages, an attacker can eliminate the encryption by combining the two messages. Particularly notorious examples include the tragically misnamed Wired Equivalent Privacy (WEP) in 802.11 products, and Microsoft's first implementation of the Point to Point Tunneling Protocol (PPTP).
So why does this happen?
The problem is based on the behavior of the add without carry operation used in one-time pads, which in the digital world involves the exclusive-or operation (called "xor" for short - click here for an explanation).
If used properly, xor is an effective way to encrypt data. However, it leaks data if not used carefully. The leakage arises from the binary logic of the combination (click here for an explanation).
In a different example, we worked with a 128 x 128 image containing this message:
"Send Cash" message
We encrypted this image by applying the xor operation. We used a random 128 by 128 bit map for the encryption key. This yielded the following gray block:
Encrypted "Send Cash"
Let's use the same encryption key to encrypt another 128 by 128 bit mapped image:
Smiley image XOR Encryption Key
Both the key and the encrypted data yield images that look like similar gray blocks. This is how it should be: a matrix randomly scattered with 0 and 1 bits should look gray. The encryption key and the encrypted data should look random, with no distinct patterns.
Thus, the encrypted Smiley yields another gray block:
Now, let's combine the two encrypted images using xor:
Encrypted Smiley XOR Encrypted "Send Cash"
Again, each looks like nothing more than a gray block. A closer look (as in this other example) will show that individual bits may differ.
The xor operation eliminates the key from both images, and leaves us with the images themselves:
In real-world cases like Venona, WEP, and PPTP, we aren't usually encrypting images. However, the underlying plain text, whether literally text or encoded network protocol data, has distinctive patterns. Skilled cryptanalysts can identify these patterns and can extract the two messages from the mixed-up data.
These example images are 128 by 128 bit maps in GIF format. If you have a program that can read GIF files, save them as bit maps, and apply the xor operation bit-by-bit, then you can easily repeat this operation.
The examples here were processed by Matlab, a commercial package, but there are numerous other packages that can reproduce the example. Download these images:
Apply the xor operation to the first two bit maps to produce the third. Combine the key with the encrypted message (k ⊕ e) to reproduce the original "Send Cash" message.
Use the same process on the Smiley to produce its encrypted form. When you combine the two encrypted messages, you end up with the overlaid images.
If you use the same encryption key for two different messages, then an eavesdropper can eliminate the encryption key (and thus, the encryption) by applying xor to the encrypted messages by themselves. In other words,
Let a, b be plaintext messages,
and let A, B be corresponding encrypted messages,
with k as the key;
If a ⊕ k = A, and b ⊕ k = B,
then a ⊕ b = A ⊕ B.
You can work this out from the description of xor provided on this other page. In terms of fundamental digital operations AND (&), OR (|) and NOT (~), the xor operation is defined as follows:
a ⊕ k = (~a & k) | (a & ~k)
If you substitute this definition in the equations above, you find that combining the encrypted messages yields the same result as combining the plain text messages.
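Here is a quick numerical check of that identity in Python, using two short byte strings and a randomly generated keystream that is (deliberately, for the demonstration) reused for both.

import os

a = b"Send Cash"
b = b"Smiley :)"                              # second message, same length
k = os.urandom(len(a))                        # one keystream, wrongly reused

A = bytes(x ^ y for x, y in zip(a, k))        # first encrypted message
B = bytes(x ^ y for x, y in zip(b, k))        # second encrypted message

lhs = bytes(x ^ y for x, y in zip(a, b))      # a xor b
rhs = bytes(x ^ y for x, y in zip(A, B))      # A xor B
assert lhs == rhs                             # the key has dropped out entirely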
Borisov, N., Goldberg, I., Wagner, D. (2001) Intercepting Mobile Communications: The Insecurity of 802.11. 7th Annual International Conference on Mobile Computing and Networking (ACM SIGMOBILE). Rome, Italy, July.
Benson, R. (2001) The Venona Story. web page. National Security Agency, Center for Cryptologic History. The NSA's web site has a whole section devoted to the Venona story.
Schneier, B., Mudge. (1998) Cryptanalysis of Microsoft's Point to Point Tunneling Protocol (PPTP). Proceedings of the 5th ACM Conference on Communications and Computer Security. November.
Smith, R. (1997) Internet Cryptography. Addison-Wesley.
While writing Elementary Information Security, I wanted simple and obvious reasons to introduce various obscure security topics. Initially I wrote a series of stories about those famous cryptographic protagonists, Bob and Alice.
The actual stories never made it into the textbook, so I'm posting them here.
I am providing these stories through a Creative Commons "Attribution" license. This means that anyone may republish these stories as long as they indicate that I am the original author. Please note the source as "Rick Smith of Cryptosmith."
These stories by Rick Smith of Cryptosmith are licensed under a Creative Commons Attribution 3.0 United States License.
Bob slid his ID card through the reader. There was a click as the dormitory’s front door unlocked. He fished into his pocket and pulled out the key to his suite as he ran up the stairs.
As the door swung open, he felt a chill of annoyance. Someone was sitting in front of his computer. Bob’s own room was so crowded with hockey and lacrosse equipment that he’d moved his computer into the common area. He didn’t recognize the guy at the computer, but then he hadn’t really met all of his suite mates. He marched over to the table and glared.
The guy looked up and said, “Hey.” He resumed his typing.
“That’s not your computer, is it?” Bob kept his voice perfectly level.
“I’m Kevin,” the guy said, holding out his hand.
Bob ignored the outstretched hand. “Okay, I’m Bob. The computer belongs to me. If you don’t have your own, I know there’s a public computer lab somewhere downstairs.” Bob folded his arms and continued to glare.
“Okay then,” Kevin closed the application and sauntered into his room.
Bob sat down at the computer, a “tower” system that ran Microsoft Windows, and his heart sank. The desktop background should have been the picture from the Championship last spring. Now it was that boring blue again. Even worse, the notes he’d typed up earlier from class were gone. He needed those notes for the English paper he had to write.
He strode over to Kevin’s room. “What did you do to my computer? The background pattern is changed. Where did my files go?”
Kevin looked up and said, “Nothing, man. That’s the way I found it.”
The suite door opened. Bob turned to see a young man sporting a goatee and wire rimmed glasses.
Pointing to his computer, he asked the newcomer, “Did you mess with my computer?”
The young man stared at him. After a pause, he asked, “And who are you?”
Bob glared and folded his arms. He was proud of his biceps. “I’m Bob. I live here. Now answer the question.”
“I live here too.” The young man peered at the cheap monitor and tiny system unit with obvious distaste. “It’s just sitting there. It’s totally unprotected. Even so, why would I bother?” Unstrapping his expensive messenger bag, he pulled out an ultrathin Apple laptop and strode to his room.
Muttering under his breath, Bob sat down at his (his!) computer. After rummaging around, he found another copy of his background picture. He also found a partial copy of the file he’d typed up earlier. Maybe he could get the rest of the notes from that geek girl in his English class.
There was supposed to be a place in Windows where you could activate passwords. Bob didn’t want to bother with passwords, but he also didn’t want people messing up his computer. Classes would be hard enough as it was. He thought about the geek girl again. Alice. He typed in the password twice to set it up.
Then Bob typed up a small notice. He printed it out, cut it to size, and carefully taped it to his monitor.
If you touch this computer, you’re dead.
This story highlights several details about dormitory life and the ways people react to it. These provide a framework in which we can think about identifying and assessing some simple but reasonably specific security problems. In this story, we learn about Bob's particular security problems and his initial attempts to solve them. Later stories involve other members of the school community.
Chapter 1 opens with a discussion of different security strategies: rule-based, relativistic, and rational. In the first case, we apply security because we are told what security to use. For example, many people use a collection of letters, digits, and punctuation in passwords because their site requires it and not because they are resisting a recognized threat. In the second case, we apply security to make ourselves appear less of a target than someone else. For example, if our neighbor has a short, wooden fence around the yard, we might put up a taller fence with a locking gate. A rational process uses a systematic analysis of the security situation to select security measures.
Given these three strategies, which does Bob use to protect his computer?
Bob balanced the two coffee cups atop a textbook. The other hand carried a jelly donut. He sat down gingerly and handed over the smaller cup. “A tall latte, as requested.”
Alice grabbed the cup and took a long, slow sip. Bob winced at the sound. His eye caught the hem of her denim skirt.
She peered at him over the cup’s rim. “So what happened?”
“I password protected my computer, you know? When I leave it for more than 5 minutes, or hit the ‘Lock’ icon, it demands a password to get back in.”
“That’s a good start.” Alice idly picked up Bob’s freshly printed essay for English as he continued.
“I came back after dinner, and he’d changed my desktop pattern again. And he renamed my Word files to names like ‘hacked1’ and ‘hacked2.’ I yelled at my suite mates, but I can’t tell who did it.”
The English essay now had Alice’s full attention.
“So, what do you think?” Bob tried to keep the edge out of his voice. He wondered if the girl had heard anything he said.
Alice looked up and put the paper down. “The first paragraph sucks, but the instructor will love the part about the trees.”
Bob snatched the paper away. “Fine. Here’s what happened to my computer. I came back from dinner, and...”
Alice interrupted him. “It's a matter of control,” she said, as she took off her glasses and gave Bob her full attention. “What happens to your computer if I take away the hard disk?”
“It won't start. That's where all my software is. And my papers.”
“Right. But what if I tell the computer to start from a CD?”
Bob smiled. “Anyone in particular? Dar Williams? Madonna? Billie Holiday?”
Alice continued, “Okay, let’s say you put a new operating system on your PC. You get it on a CD and you start it from there. The existing OS on your hard drive never runs. Now do you get it?”
“You think they replaced my operating system?” Bob felt uncomfortable, as if someone had returned his lost wallet, and all the cards were alphabetized.
“Probably not. But they could run a different OS and use it to change other things on your hard drive. Another OS can easily change file names, and probably your desktop pattern. They just run the OS off the CD. It's their OS, so they can be administrator and do whatever they want.”
Bob looked up suddenly. “Can we make the PC only load from the hard drive?”
Alice’s eyes narrowed and she put her glasses back on. “Maybe you should start by asking why people are messing with you.”
Alice introduces Bob to the practical implications of a broken Chain of Control. For Bob's computer to enforce his password protection, the computer must bootstrap the hard drive's operating system. The Chain of Control is supposed to go like this: the BIOS runs first, the BIOS boots the operating system installed on the hard drive, and that operating system enforces Bob's password protection.
A knowledgeable attacker can direct the BIOS to boot from a CD, DVD, or perhaps even a USB drive. This interrupts the Chain of Control and redirects it to software under the attacker's control.
Bob unlocked the door of the suite. He tried to hold it open while Alice squeezed past him. Her backpack grazed the door, pushing it away from Bob’s hand.
Bob followed her in. Alice was already talking to one of his suite mates, a woman wearing a black jumpsuit. She was holding an impossibly thick book.
“Yeah,” Alice continued, “It’s no cookbook. Singh talks more ‘how it works’ than ‘how to use it.’”
“Back here,” Bob said. He typed control-alt-delete and entered his password in the box that popped up.
“We need to talk some time,” the jumpsuit woman said.
“You bet,” replied Alice. “Good to meet you, Tina.” The woman stood and left the common room to Bob and Alice.
Alice scanned the room. She noted the suite door, a bathroom door, a picture window, and three doors to private rooms. She sat on the couch, taking the spot Tina just left.
Bob was concentrating on his computer. “Everything seems okay.” This time. He sighed. “Do you want to take a look?”
“What are you trying to protect?”
Bob furrowed his brow. “What kind of question is that? I’m trying to protect my computer!” He paused, then let his voice drop, “Aren’t you going to look at this?”
“I can see fine from here. You have a single room, right? And the others are doubles?”
“So you’re using all of your private room, plus a chunk of the common space. The threat is the ‘disgruntled suite mate.’ Now, you can either get into an arms race with your suite mates on this, or you can defuse things by offering to share your computer.”
Bob paused. He was perfectly in his rights to use a piece of the common space, wasn’t he? And he was perfectly in his rights to exclude people from his computer. After all, it was pretty small and there wasn’t much worth sharing. But Alice had a point.
“How do we share it without other people messing with my stuff?”
Alice smiled. “That’s the easy part.”
Alice asks the key question: What are you trying to protect?
All security planning and architecture begins and ends with that question. First we figure out what we want to protect. Then we figure out how. Finally we test the system to see if the measures actually provide the protection we expect.
Alice performs a threat-based assessment of the security problem: who is threatening Bob's computer, and why? How do we eliminate the threat?
The right answer might not be an arms race in which we pile on additional protections. In this case, Alice suggests that Bob defuse the problem by letting his suite mates share his computer.
Bob sat uncomfortably in the chair and tried to concentrate. The piece of paper contained a long list of numbers. It was the third list he’d had to memorize. The first two were easy. This was tough.
“Turn the paper over.” The voice came through a scratchy speaker in the wall. It was the pencil-necked guy that had shown him in. Bob glanced at the cheap one-way mirror that covered most of the wall.
“Now, I will recite the list to you. If I recite a number correctly, say ‘Yes.’ If I recite a wrong number, say ‘No.’”
After the slightest pause, the speaker rapidly shouted numbers. Bob yelled “Yes!” and “No!” a couple of times, but finally stopped when he realized he was totally left behind.
He felt sweat dampening the back of his shirt.
“Okay, we’re done!” The speaker squawked, and then the door opened. The pencil-necked guy motioned him out.
“So what are you doing with the results?” Bob tried to sound off-hand. He hated to lose.
“I’ll probably put the results on a poster for the next research show.”
Bob thought of the sports page, with individual player results blown up poster-sized. In a low voice, he said, “I’d prefer you didn’t publish my results.”
Pencil-neck smirked. “Don’t worry, I’m not allowed to publish names!”
Bob walked away, feeling foolish. He stopped short, hearing a female voice shout his name. He turned to see Tina walking towards him.
“Do you have a partner for the sociology project? I have a survey question worked out. We can combine the results on the computer in the suite.” Tina stopped to take a breath.
Bob paused, then said, “Sure, we can use my computer. But we have to keep the results secret. That Professor Chalkley was very firm. The Feds might come down on us otherwise.”
Bob wasn’t sure why the FBI would give a rip about his stupid class project, but that’s what they said.
“I’ll take care of that,” Tina said. “Just give me my own login and password.”
Bob popped the USB drive into the socket. As soon as the “Computer” window appeared, he right clicked on the drive icon, tracked down the menu, and selected “Format.”
“Dangerous,” said Alice, hovering over his shoulder.
“It’s faster,” Bob announced. His fingers were on autopilot as he deftly clicked the “Quick Format” box.
Alice shook her head. “It’s a bad habit. What if you format the wrong drive?”
Bob snorted. “What do you think I am, a doofus?” He clicked “Start” and dismissed the warning alert almost without seeing it.
Chattering erupted from the external hard drive on his desk. The “busy” lamp winked several times and went silent. The computer emitted a soft tone, and an alert popped up saying “Format Complete.”
Bob stared at the hard drive and clenched his teeth. “That had all my photos.”
Alice took pity and said, “Maybe they’re still there. It takes longer than that to wipe out an entire hard drive.”
“What do you mean?”
“All a ‘quick format’ does is wipe out some data at the very start of the hard drive. If you have a lot of files and folders, most of them might still be there.” Alice paused, then yelled, “Kevin!”
A head popped out of a bedroom. “Yeah?”
“Do you still have that recovery CD?”
“Yeah, just a sec.” Kevin disappeared into his room, and returned with a CD. It was marked with skull and crossbones.
Bob stared at the CD. “What’s that?”
“Just a bunch of disk utilities.” He elbowed the others aside and popped the disk into the CD drive. He restarted the system and typed the BIOS password.
Bob suppressed a gasp as Kevin rebooted to the CD.
After a minute of flashing text and incomprehensible color bars, an alien desktop appeared on the display. Then Kevin turned to Bob.
Alice replied, “He did a fast format on the external drive. It was FAT formatted both before and after.”
“Okay.” Kevin smiled. “FAT shouldn’t be a problem.” He started a command shell, typed a few commands, and finally entered one last one.
The hard drive started clicking again.
“Leave it for a while and we’ll see what it finds.” Kevin fell into a sort of trance, staring at the display.
“But what’s it doing?” Bob demanded. He didn’t like the idea of running some weird stuff on his computer.
“If it gets your files back, do you really care what it is?” Alice asked. He couldn’t argue with the irritation in her voice.
“The quick format doesn’t really erase your drive,” Kevin whispered, as he watched patterns of text form on the screen. “The actual files are still there. All you did was erase the starting point we use to find them.”
Bob started to relax. “You know,” he said, “this is as bad as anything a hacker might do.”
“As long as the hacker doesn’t get your bank account,” Kevin whispered, almost to himself.
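Kevin's recovery trick works because a quick format rewrites only the file system's bookkeeping at the start of the drive; the clusters that actually hold Bob's photos are untouched. One simple way to get them back is file carving: scan the raw drive for the signatures of known file types and copy out whatever lies between them. Here is a minimal sketch of that idea in Python; the disk image name is hypothetical, and real recovery tools handle fragmented files far more carefully.

    # File carving sketch: a quick format leaves file contents in place,
    # so we can hunt for JPEG markers directly in the raw sectors.
    JPEG_SOI = b"\xff\xd8\xff"      # JPEG start-of-image signature
    JPEG_EOI = b"\xff\xd9"          # JPEG end-of-image marker

    def carve_jpegs(image_path, out_prefix="recovered"):
        data = open(image_path, "rb").read()
        count = 0
        start = data.find(JPEG_SOI)
        while start != -1:
            end = data.find(JPEG_EOI, start)
            if end == -1:
                break
            with open(f"{out_prefix}_{count}.jpg", "wb") as out:
                out.write(data[start:end + 2])
            count += 1
            start = data.find(JPEG_SOI, end)
        return count

    # carve_jpegs("usb_drive.img")   # hypothetical raw image of the formatted drive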
“Yes, I set up the BIOS password.” Bob said. “And the separate admin user. And all that other crap. But they still got in.”
Bob felt defeated. He didn’t like that.
“And when did you last use the admin login?” Alice was starting to sound like a prosecutor on a really bad courtroom show.
“How should I know?” Bob whined. “Isn’t there something else we can do?”
“Well,” said Alice, “they couldn’t bypass the BIOS, and you haven’t left the thing logged in.” Alice paused for a moment.
“I’m sick of passwords. Could we try electrodes in the chair, maybe?”
Alice ignored Bob’s outburst and moved over to the computer. “Here, let me drive.” She nudged Bob away until she was in sole possession of keyboard and mouse.
She lifted up the keyboard. No Post-its with passwords underneath. She felt around under the desk. Nothing but fossilized gum.
Alice’s eyes roamed around the suite. She started typing. Bob looked over and noticed she wasn’t logged in. What was she typing? Passwords?
Bob felt rising irritation. He hated doing nothing. He hated being on the bench. He got up and strode purposefully up and down the room. Then he stopped in front of the computer. He liked the way Alice’s hair draped over her shoulder. Alice paused, then started typing again. Then she growled. Well, part growl and part groan. Bob glanced at the screen. She had logged in.
Alice guessed his password. Bob felt a flush under his skin. She should be flattered by his choice. But she was mad.
“What a stupid password. Am I the only woman who visits you?”
Bob tried to sound unruffled, “Let’s say you’re the only one who helps me with my computer.” There was just a touch of smugness in that.
“Well, it wouldn’t take a psychic to guess it, then,” Alice replied acidly.
Bob sought a relaxed tone of voice and asked, “What’s the big deal? I needed a password and it’s a good one. Nice and short and I’m not going to forget it, am I?”
“A good password is something hard to guess, Mister Einstein. This one was probably the first name your suite mates guessed. They won’t know your dog’s name, but they’ll know the names of everyone who visits. Especially the women.”
Blam, bleep, bam. Electronic sounds emerged from the computer. Bob narrowed his eyes. Tense wrinkles formed across his forehead. But his wrist moved smoothly, tracking the alien shapes, dispatching them one by one with a series of deft clicks.
Bob didn’t play a lot of computer games, but this one was fun. It made a nice break while writing up homework assignments. A few days earlier, he found it running on his computer when he came back from class. Now he played whenever he sat down.
As soon as the screen cleared, Bob closed the window with an air of satisfaction. The desktop pattern appeared, showing Bob at a high school awards ceremony. Not as cool as the Championship photo, but Alice insisted. She said the desktop photo for Suitemates had to be different from Bob’s personal login. So Bob chose the ceremony pic.
Gazing briefly at the desktop windows, Bob froze. He looked closer at the files in the Suitemates “Documents” folder. It looked exactly like his own: there were the same Word files, with the same names as his own homework files. He looked closer at an Excel file, titled “g3survey.xls.” It looked just like the file containing their confidential survey results.
Bob opened the file with a quick double click. Yes, it was a copy of their data file. What was it doing here? Bob right clicked on it and chose “Delete.” To make sure, he clicked on the Recycle Bin itself and selected “Empty Recycle Bin.”
He looked up. Large, blue eyes were staring at him.
“Oh. Eve. Didn’t hear you come in.” Bob’s glance lingered briefly on a curl of his suitemate’s hair that grazed her cheek. How could such a pretty girl always give him the creeps?
Eve’s lips betrayed a little smile. “Did you zap lots of aliens?” Bob looked sheepish and said, “Yeah, and then I found an unprotected copy of my secret survey file.”
A snort arose from a doorway. It was Bruce.
“You’re in trouble if Professor Chalkley finds out,” he said. “She was very specific about protecting those files!”
“I did protect them,” Bob argued. “We set up the separate users and set up all the right file protections. There’s no way to crack that without breaking Windows.”
Bob paused, then asked, “You guys don’t know how to break Windows do you?”
“Broke one with a softball,” admitted Eve.
“You need to encrypt your files,” declared Bruce. “Do it right and they’re completely safe, even if someone breaks Windows.”
“And how do I do that?” Bob asked. Bruce’s condescension annoyed him, but he didn’t need trouble with his psych professor.
“That will be $5.36,” the cashier said.
Bob slapped his back pocket and sighed. In his mind’s eye he could see his wallet. It was sitting comfortably on his desk, in his bedroom. He fished in another pocket and brought out three crumpled dollar bills. He tried not to look toward his table, and the girl sitting there. He looked the other way and saw Kevin.
“Hey, Kevin,” Bob said, trying to sound friendly.
“Hey,” Kevin replied. He gave a noncommittal smile.
Bob’s voice dropped, “Er, could you lend me a few bucks?”
“Forgot your wallet?” Kevin asked, his voice normal.
“Yeah, yeah, like three dollars? Okay?”
Kevin opened his wallet and with a flourish pulled out three crisp dollar bills.
“Should we make it five?” he asked.
Bob grabbed the bills. “No, this is fine, thanks, man.”
Three days later...
“That’s weird,” Bob muttered.
“Now what?” Bruce was impatient to get to work.
“I, er, owe Kevin some money. I write little files to remind me of stuff like that. But the file says $7 and I was sure it was only three dollars. I can almost see him waving the bills around.”
“Okay, then, something must have happened to the file. Could he have changed it?”
“No, that can’t be it. I encrypted it with RC4. All it said was ‘I owe Kevin $3.’”
“You didn’t use one of your famous passwords, did you?”
Bob flushed, and said, “No way. This was something not even you would guess.”
Bruce thought for a moment. “You didn’t happen to show him the file, did you?”
“Yes,” Bob admitted.
“That’s it. It’s trivial to change the contents of a stream encrypted file. That is, it’s trivial if you know what the file says. It’s simple binary logic.”
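Bruce's “simple binary logic” is the malleability of stream ciphers: the ciphertext is just the plaintext XORed with a keystream, so anyone who knows the original plaintext can XOR in the difference between the old and new messages without ever learning the key. Here is a minimal sketch using a toy RC4 implementation; the key is a stand-in, and the point is the forgery, not the cipher.

    def rc4_keystream(key, length):
        # Key-scheduling algorithm (KSA)
        S = list(range(256))
        j = 0
        for i in range(256):
            j = (j + S[i] + key[i % len(key)]) % 256
            S[i], S[j] = S[j], S[i]
        # Pseudo-random generation algorithm (PRGA)
        i = j = 0
        out = []
        for _ in range(length):
            i = (i + 1) % 256
            j = (j + S[i]) % 256
            S[i], S[j] = S[j], S[i]
            out.append(S[(S[i] + S[j]) % 256])
        return bytes(out)

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    key = b"not even you would guess"       # stand-in for Bob's secret password
    old_pt = b"I owe Kevin $3"
    ct = xor(old_pt, rc4_keystream(key, len(old_pt)))

    # Kevin saw the file, so he knows old_pt. Changing it to a new message
    # only requires XORing the ciphertext with (old XOR new); no key needed.
    new_pt = b"I owe Kevin $7"
    forged = xor(ct, xor(old_pt, new_pt))

    assert xor(forged, rc4_keystream(key, len(forged))) == new_pt

This is why stream-encrypted data normally needs a separate integrity check, such as a message authentication code, before anyone relies on its contents.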
As Kevin entered the suite, Bob was concentrating on his computer. Kevin moved quietly, not wanting to awaken Bob’s interest.
Bob glanced up, warily, and then returned to staring at his screen. He didn’t look hostile at all. He looked nervous.
Kevin straightened up and sauntered over to the computer. “What’s up?” he asked.
Bob tensed up. Then he sighed. “Okay, I’m trying to look at the directories on this USB drive. And nothing makes sense.”
Kevin looked at the display. The dump window showed nothing but random data. No readable text. No zeroes, even.
“That’s not a directory.”
“But it has to be,” Bob said. “I pulled out the numbers from the boot block, added them up, and this is where I landed.”
Kevin looked at the numbers scrawled on a torn piece of paper.
“That’s crazy,” he said. “FAT never puts the root directory in the middle of the drive. Let’s see the boot block.”
Bob clicked and typed. The first sector displayed.
“That’s not a boot block. For one thing, there’s no volume label. There should at least be blanks there.” Kevin pointed at the display. “Try reformatting and do it again.”
Bob looked sheepish. “I can’t do that.”
“It’s not my drive.” Bob gestured to the bulletin board. Two USB drives now hung from paperclips. A third paperclip stood empty. “I just wanted to see if I could do what everyone else is doing.”
“Not quite everyone else,” Kevin whispered.
“Am I using the right tools? I couldn’t even get the thing to mount.” Kevin shook his head.
“You’re looking at an encrypted drive,” he explained.
“What do you mean?” Bob asked.
“Instead of encrypting files one at a time, this encrypts everything on the drive. It’s safer in some ways, but not as flexible in others.”
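Kevin is describing volume (or full-disk) encryption. Conceptually, each fixed-size sector is encrypted on its own, using the volume key together with the sector number, so any sector can be read or rewritten in place without per-file bookkeeping. The sketch below only illustrates that per-sector structure; real products use purpose-built modes such as AES-XTS rather than this hash-derived keystream, which is not a secure construction.

    import hashlib

    SECTOR_SIZE = 512

    def sector_keystream(volume_key, sector_number):
        # Derive a sector-sized keystream from the volume key and sector number.
        stream = b""
        counter = 0
        while len(stream) < SECTOR_SIZE:
            stream += hashlib.sha256(
                volume_key
                + sector_number.to_bytes(8, "big")
                + counter.to_bytes(4, "big")).digest()
            counter += 1
        return stream[:SECTOR_SIZE]

    def encrypt_sector(volume_key, sector_number, plaintext):
        ks = sector_keystream(volume_key, sector_number)
        return bytes(p ^ k for p, k in zip(plaintext, ks))

    # Decryption is the same XOR, so the driver can translate sector reads
    # and writes on the fly once it holds the volume key.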
This is a collection of notes intended to introduce the fundamentals of file systems. This section summarizes the challenges of using hard drives and the general objectives of file systems. Subsections introduce simple file systems that are for the most part obsolete today.
When faced with a large expanse of hard drive space, one has three problems:
Over the decades, different file systems have produced different solutions to these problems. Usually the differences can be traced back to the following, sometimes mutually exclusive, objectives:
As hard drives have grown in capacity, file systems have grown in complexity. Still, the systems' weird features usually trace their origins back to the problems being solved or the particular objectives being pursued.
If we look back into ancient history, when semi-trailer-sized behemoths were being out-evolved by refrigerator-sized creatures in university computer labs, we find many comprehensible file systems.
Forth (circa 1970-85, maybe later)
The Forth programming system was developed in the late 1960s by Chuck Moore. It provided a very powerful, text based mechanism for controlling a computer and writing programs when RAM and hard drive space were extremely tight. Early implementations were routinely restricted to 8KB of RAM. Some early implementations relied exclusively on diskette drives that stored less than a half a megabyte of data.
Starting in the 1970s, typical Forth systems treated hard drives as consisting of a linear set of numbered blocks, each 1KB in size. The first block on the drive (block 0) contained the bootstrap program to get Forth started, and a small number of subsequent blocks might also contain binary executable code that was loaded into RAM when Forth started.
Following the blocks of executable code, the remaining hard drive blocks generally contained ASCII text and were referred to by number. If a programmer needed to modify part of a Forth program, he would edit the hard drive block that contained that program, and refer to the block by its number.
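Because blocks were identified only by number, "file access" in Forth amounted to simple arithmetic: block N lives at byte offset N times 1024. A minimal sketch follows, with a hypothetical disk image standing in for the drive.

    BLOCK_SIZE = 1024    # Forth's traditional 1 KB block

    def read_block(device_path, block_number):
        # Block N simply lives at byte offset N * 1024; there is no directory.
        with open(device_path, "rb") as dev:
            dev.seek(block_number * BLOCK_SIZE)
            return dev.read(BLOCK_SIZE)

    def write_block(device_path, block_number, data):
        assert len(data) <= BLOCK_SIZE
        with open(device_path, "r+b") as dev:
            dev.seek(block_number * BLOCK_SIZE)
            dev.write(data.ljust(BLOCK_SIZE, b" "))   # pad text blocks with blanks

    # source = read_block("forth_disk.img", 20)   # the idea behind Forth's "LOAD 20"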
Here is an assessment of Forth's file system in the context of the eight concepts noted above:
Boston University (BU) developed its own timesharing system in the 1970s for its IBM 360 and 370 mainframes. The system was based on the batch-oriented Remote Access Computing System (RAX) developed by IBM. McGill University also participated in RAX development, but their version was renamed "McGill University System for Interactive Computing" (MUSIC). Although many of the details are lost in the mists of time, both systems used some text processing tools developed at BU.
At the time, IBM had developed a few timesharing systems, but they were generally expensive and slow. IBM's standard operating systems for the 360 series had a file system; files were referred to as data sets. To put matters as charitably as possible, IBM's data set support was not suited to the dynamic nature of file access in timesharing environments. Frankly, it was a beast. So RAX really needed its own file system.
In accordance with the traditions of IBM data processing, a RAX file looked more-or-less like a deck of punched cards. Files consisted of "records" that carried individual lines of text. Unlike punched cards, trailing blanks were omitted and the individual records (lines) could vary in length. More significantly, files were either read sequentially in a single pass, or written sequentially in a single pass. There wasn't any notion of random access or of modifying the middle of a file without rewriting the whole thing. While RAX did support random access to hard drive files, the function was limited to specially allocated files (standard IBM data sets, actually) and used special operations that were only available to assembly language programmers.
Each file had a unique name and was 'owned' by the user that created it. Users could modify the permissions on files to share them with other users.
The RAX system's timesharing hours were generally limited to daytime and evenings. Overnight, the CPU was rebooted with IBM's OS/360 or OS/VS1 to run batch jobs. Thus, the RAX hard drives had to be compatible with IBM's native file system, such as it was. The RAX library was implemented inside a collection of IBM data sets, each data set serving as a pool of disk blocks to use in library files. These disk blocks were called space sets and contained 512 bytes each.
A complete RAX library file name contained two parts: an 8-character index name and an 8-character file name. While this gave the illusion of there being a hierarchical file system, there was no true 'root' directory. All files not used by the RAX system programming staff resided in the "userlib" index; if no index name was given, RAX searched in userlib. The directory arrangement apparently worked as follows:
There were a small number of IBM data sets that served as library directories (indexes). A file's index name selected the appropriate data set to search for that file's directory entry. These index files were apparently set up using IBM's Indexed Sequential Access Method (ISAM). Such files were specially formatted to use a feature of the IBM disk hardware. Each data block in the file contained a key field along with space for a library file's directory entry. The "key" part contained the file name. The IBM disk hardware could be told to scan the data set until it found the record whose key contained that name, and then it would retrieve the corresponding data. This put the burden of directory searching on the hard drive, and freed up the CPU to work on other tasks.
The directory entry contained the usual timestamps (date created, accessed, modified, etc.), ownership information, access permissions, size, and a pointer to the first space set in the file.
Once the system knew the location of the file's first space set, it could retrieve the file's contents sequentially. A space set address was a 32-bit number containing two fields: a lib file number and a space set number within that lib file.
Remember that the library consisted of numerous data sets that served as pools of data blocks. These pools were called lib files, and were numbered sequentially. The data blocks, or space sets, were numbered sequentially inside each lib file.
Files within the RAX library were implemented as a list of linked space sets. The first four bytes of each space set carried the pointer to the next one in the file. The pointer bytes were managed automatically by the system's read and write operations; they were invisible to user programs. The net result was that user programs perceived space sets as containing only 508 bytes, since 4 bytes were used for the link pointer.
A single library file could contain space sets from many different lib files. Since each lib file tended to represent a contiguous set of disk space, file retrieval was most efficient when all space sets came from the same lib file. In practice, however, a file would incorporate space sets from whichever lib file had the most available.
Free space was managed within individual lib files. Each lib file kept a linked list of free space sets. Space sets from deleted files were added back to the free list in the appropriate lib file.
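Reading a RAX library file therefore meant walking a linked list of space sets. The sketch below treats a space set address as an opaque number handed to a low-level read routine, and assumes a zero link marks the end of a file; the real end-of-file convention and byte ordering are not recorded here.

    SPACE_SET_SIZE = 512
    LINK_SIZE = 4

    def read_library_file(read_space_set, first_address):
        # read_space_set(address) must return one raw 512-byte space set.
        data = bytearray()
        address = first_address
        while address != 0:                   # assumption: a zero link ends the file
            raw = read_space_set(address)
            link = int.from_bytes(raw[:LINK_SIZE], "big")   # byte order assumed
            data += raw[LINK_SIZE:]           # 508 usable bytes per space set
            address = link
        return bytes(data)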
Here is a review of the eight issues listed above:
The PDP-11 computer, built by Digital Equipment Corporation (DEC) in the late 20th century, was a classic machine of the minicomputer era. At the time of the -11's introduction, DEC really had no idea what to do about software for its machines, and wasn't even sure what was appropriate in the way of operating systems. Over the next several years, DEC (and others) introduced a flotilla of operating systems for the PDP-11. Here are DEC's contributions:
The RT-11 operating system came in several flavors, but all shared the same, simple file structure. The file system consisted of a single directory configured at a fixed location at the start of the disk volume. The directory could consist of multiple non-contiguous "segments" as defined in the directory's header.
Searching for files was very simple: when the system booted, the boot code would search the RT-11 directory for the name "MONITR.SYS" which indicated the system image to load and run.
An RT-11 directory entry consisted of the following:
When creating a directory, the operator could configure it to contain extra space for application-specific file data. It would be the application's responsibility to update the additional data in a file's entry.
To create a new file, even temporarily, RT-11 had to allocate space by updating the directory. If the program specified a particular file size, RT-11 would try to fit that size. If the program did not specify a file size, RT-11 would allocate half of the largest block of free space available on the hard drive.
Once RT-11 had determined the amount of space for the new file, and its intended location on the hard drive, it would usually allocate the hard drive space by creating two adjacent directory entries. The first entry would be for the new file, and would contain the number of blocks desired for the new file. The second entry would be for any free space left over after allocating the desired amount of space for the file.
When first opened, a file was generally marked as "tentative" in the directory, and updated to "permanent" when and if the creating program actually closed the file.
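In other words, allocation was mostly a matter of splitting one directory entry into two. A minimal sketch of that bookkeeping follows; for simplicity it always carves space from the largest free region, which glosses over the "try to fit that size" behavior described above.

    def allocate(directory, name, size_requested=None):
        # The directory is a list of entries, each a file or a run of free blocks.
        index, largest = max(
            ((i, e) for i, e in enumerate(directory) if e["type"] == "free"),
            key=lambda pair: pair[1]["blocks"])
        # With no requested size, RT-11 grabbed half of the largest free region.
        size = size_requested if size_requested else largest["blocks"] // 2
        if size > largest["blocks"]:
            raise OSError("not enough contiguous free space")
        leftover = largest["blocks"] - size
        new_file = {"type": "tentative", "name": name, "blocks": size}
        directory[index] = new_file                 # first entry: the new file
        if leftover:
            directory.insert(index + 1, {"type": "free", "blocks": leftover})
        return new_file

    # directory = [{"type": "free", "blocks": 100}]
    # allocate(directory, "PAPER.TXT")   # takes 50 blocks, leaves a 50-block free entry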
Here is an assessment of RT-11's file system in the context of the eight concepts (Challenges and Objectives) noted earlier:
The RT-11 file system's simplicity led to its being used in various other applications. For example, the microcode disk used by the original VAX hardware used an RT-11 file system. The LOCK trusted computing base used the RT-11 file system for accessing files on its embedded SIDEARM processor.
This is an extended, less-edited version of an article appearing in IEEE Security and Privacy in December 2012. This version specifically identifies all of the textbooks I reviewed while looking at information security design principles.
Here is the citation for the published article:
Smith, R.E., "A Contemporary Look at Saltzer and Schroeder's 1975 Design Principles," IEEE Security & Privacy, vol. 10, no. 6, pp. 20-25, Nov.-Dec. 2012.
The information security community has a rich legacy of wisdom drawn from earlier work and from sharp observations. Not everyone is old enough or fortunate enough to have encountered this legacy first-hand by working on groundbreaking developments. Many of us receive it from colleagues or through readings and textbooks.
The Multics time-sharing system (Figure 1 - photo by Tom Van Vleck) was an early multi-user system that put significant effort into ensuring security. In 1974, Jerome Saltzer wrote an article outlining the security mechanisms in the Multics system (Saltzer, 1974). The article included a list of five “design principles” he saw reflected in his Multics experience. The following year, Saltzer and Michael Schroeder expanded the article into a tutorial titled “The Protection of Information in Computer Systems” (Saltzer and Schroeder, 1975). The first section of the paper introduced “basic principles” of information protection, including the triad of confidentiality, integrity, and availability, and a set of design principles.
Over the following decades, these principles have occasionally been put forth as guidelines for developing secure systems. Most of the principles found their way into the DOD's standard for computer security, the Trusted Computer System Evaluation Criteria (NCSC, 1985). The Saltzer and Schroeder design principles were also highlighted in security textbooks, like Pfleeger's Security in Computing (Pfleeger, 1989), the first edition of which appeared in 1989.
Different writers use the term principle differently. Some apply the term to a set of precisely worded statements, like Saltzer and Schroeder's 1975 list. Others apply it in general to a collection of unidentified but fundamental concepts. This paper focuses on explicit statements of principles, like the 1975 list. The principles were concise and well stated on the whole. Many have stood the test of time and are reflected in modern security practice. Others have not.
In 2008, after teaching a few semesters of introductory information security, I started writing my own textbook for the course. The book was designed to cover all topics required by selected government and community curriculum standards.
Informed by an awareness of Saltzer and Schroeder’s design principles, but motivated primarily by the curriculum requirements, the textbook, titled Elementary Information Security, produced its own list of basic principles (Smith, 2012). This review of design principles arises from the mismatch between the classic list and this more recent list. The review also looks at other efforts to codify general principles, both by standards bodies and by other textbook authors, including a recent textbook co-authored by Saltzer himself (Saltzer and Kaashoek, 2009).
Saltzer and Schroeder's 1975 paper listed eight design principles for computer security, and noted two additional principles that seemed relevant if more general.
Economy of mechanism – A simple design is easier to test and validate.
Fail-safe defaults – Figure 2 shows a physical example: outsiders can't enter a store via an emergency exit, and insiders may only use it in emergencies. In computing systems, the safe default is generally “no access” so that the system must specifically grant access to resources. Most file access permissions work this way, though Windows also provides a “deny” right. Windows access control list (ACL) settings may be inherited, and the “deny” right gives the user an easy way to revoke a right granted through inheritance. However, this also illustrates why “default deny” is easier to understand and implement, since it's harder to interpret a mixture of “permit” and “deny” rights.
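As a small illustration of the fail-safe default, an access check can be written so that it returns "no access" unless an explicit permit rule matches, with any explicit deny overriding permits. This sketch assumes a simple flat rule list rather than any particular operating system's ACL format.

    def access_allowed(rules, user, right):
        decision = False                      # the safe default: no access
        for rule in rules:
            if rule["user"] == user and rule["right"] == right:
                if rule["effect"] == "deny":
                    return False              # an explicit deny always wins
                if rule["effect"] == "permit":
                    decision = True
        return decision

    # rules = [{"user": "bob", "right": "read", "effect": "permit"}]
    # access_allowed(rules, "tina", "read")   # False: nothing granted her access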
Complete mediation – Access rights are completely validated every time an access occurs. Systems should rely as little as possible on access decisions retrieved from a cache. Again, file permissions tend to reflect this model: the operating system checks the user requesting access against the file's ACL. The technique is less evident when applied to email, which must pass through separately applied packet filters, virus filters, and spam detectors.
Open design – Baran (1964) argued persuasively in an unclassified RAND report that secure systems, including cryptographic systems, should have unclassified designs. This reflects recommendations by Kerckhoffs (1883) as well as Shannon's maxim: “The enemy knows the system” (Shannon, 1949). Even the NSA, which resisted open crypto designs for decades, now uses the Advanced Encryption Standard to encrypt classified information.
Separation of privilege – A protection mechanism is more flexible if it requires two separate keys to unlock it, allowing for two-person control and similar techniques to prevent unilateral action by a subverted individual. The classic examples include dual keys for safety deposit boxes and the two-person control applied to nuclear weapons and Top Secret crypto materials. Figure 3 (courtesy of the Titan Missile Museum) shows how two separate padlocks were used to secure the launch codes for a Titan nuclear missile.
Least privilege – Every program and user should operate while invoking as few privileges as possible. This is the rationale behind Unix “sudo” and Windows User Account Control, both of which allow a user to apply administrative rights temporarily to perform a privileged task.
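As a small illustration of least privilege, a Unix-style service might hold root only long enough to claim a privileged port and then discard it; everything after that point runs with ordinary user rights. The user and group IDs below are placeholders, and the script must be started as root for the calls to succeed.

    import os
    import socket

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("0.0.0.0", 80))      # the only step that actually needs root
    server.listen(5)

    os.setgid(1000)                   # drop group privileges first
    os.setuid(1000)                   # then give up root for good

    # From here on, a flaw in the request-handling code is exploited with
    # an ordinary user's privileges rather than root's.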
Least common mechanism – Users should not share system mechanisms except when absolutely necessary, because shared mechanisms may provide unintended communication paths or means of interference.
Psychological acceptability – This principle essentially requires the policy interface to reflect the user's mental model of protection, and notes that users won't specify protections correctly if the specification style doesn't make sense to them.
There were also two principles that Saltzer and Schroeder noted as being familiar in physical security but applying “imperfectly” to computer systems:
Work factor – Stronger security measures pose more work for the attacker. The authors acknowledged that such a measure could estimate the difficulty of trial-and-error attacks on randomly chosen passwords. However, they questioned its relevance since there often existed “indirect strategies” to penetrate a computer by exploiting flaws. “Tiger teams” in the early 1970s had systematically found flaws in software systems that allowed successful penetration, and there was not yet enough experience to apply work factor estimates effectively.
Compromise recording – The system should keep records of attacks even if the attacks aren't necessarily blocked. The authors were skeptical about this, since the system ought to be able to prevent penetrations in the first place. If the system couldn't prevent a penetration or other attack, then it was possible that the compromise recording itself may be modified or destroyed.
Today, of course, most analysts and developers embrace these final two design principles. The arguments underlying complex password selection reflect a work factor calculation, as do the recommendations on choosing cryptographic keys. Compromise recording has become an essential feature of every secure system in the form of event logging and auditing.
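As a small illustration, the work factor behind password-complexity advice is just counting: an average trial-and-error attack searches half the password space. The guessing rate below is an assumption for the sake of the arithmetic, not a measured figure.

    def average_crack_time(alphabet_size, length, guesses_per_second):
        search_space = alphabet_size ** length
        return (search_space / 2) / guesses_per_second   # seconds, on average

    # Eight characters drawn from 94 printable ASCII symbols, at an assumed
    # billion guesses per second:
    seconds = average_crack_time(94, 8, 1e9)
    print(f"{seconds / 86400:.0f} days on average")      # roughly 35 days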
Today, security principles arise in several contexts. Numerous bloggers and other on-line information sources produce lists of principles. Many are variants of Saltzer and Schroeder, including the list provided in the Open Web Application Security Project's wiki (OWASP, 2012). Principles also arise in information security textbooks, more often in the abstract sense than in the concrete. Following recommendations in the report Computers at Risk (NRC, 1991), several standards organizations also took up the challenge of identifying a standard set of security principles.
Most textbook authors avoid making lists of principles. This is clear from a review of twelve textbooks published over the past ten years. This is even true of textbooks that include the word “Principles” in the title. Almost every textbook recognizes the principle of least privilege and usually labels it with that phrase. Other design principles, like separation of privilege, may be described with a different adjective. For example, some sources characterize separation of privilege as a control, not a principle.
Pfleeger and Pfleeger (2003) presents its own set of four security principles. They are, briefly, easiest penetration, weakest link, adequate protection, and effectiveness. These principles apply to a broader level of security thinking than Saltzer and Schroeder's design principles. However, the text also reviews Saltzer and Schroeder's principles in detail in Section 5.4.
The remaining few textbooks that specifically discuss design principles generally focus on the 1975 list. The textbook by Smith and Marchesini (2008) discusses the design principles in Chapter 3. The two textbooks by Bishop (2003, 2005) also review the design principles in Chapters 13 and 12, respectively.
Following Computers at Risk, standards organizations were motivated to publish lists of principles. The OECD published a list of nine guidelines in 1992 that established the tone for a set of higher-level security principles:
Accountability, Awareness, Ethics, Multidisciplinary, Proportionality, Integration, Timeliness, Reassessment, and Democracy.
In its 1995 handbook, “An Introduction to Computer Security,” NIST presented the OECD list and also introduced a list of “elements” of computer security (NIST, 1995). Following the OECD's lead, this list presented very high level guidance, addressing the management level instead of the design or technical level. For example, the second and third elements are stated as follows:
“Computer Security is an Integral Element of Sound Management”
“Computer Security Should Be Cost-Effective”
The following year, NIST published its own list of “Generally Accepted Principles and Practices for Securing Information Technology Systems” (Swanson and Guttman, 1996). The overriding principles drew heavily from the elements listed in the 1995 document. The second and third elements listed above also appeared as the second and third “Generally Accepted Principles.”
The OECD list also prompted the creation of an international organization that published “Generally Accepted System Security Principles” (GASSP) in various revisions between 1996 and 1999 (I2SF, 1999). This was intended to provide high-level guidance for developing more specific lists of principles, similar to those used in the accounting industry. The effort failed to prosper.
Following the 1999 publication, the sponsoring organization apparently ran out of funding. In 2003, the Information System Security Association tried to restart the GASSP process and published the “Generally Accepted Information Security Principles” (ISSA, 2004), a cosmetic revision of the 1999 document. This effort also failed to prosper.
In 2001, a team at NIST tried to produce a more specific and technical list of security principles. This became “Engineering Principles for Information Technology Security” (Stoneburner, et al, 2004). The team developed a set of thirty-three separate principles. While several clearly reflect Saltzer and Schroeder, many are design rules that have arisen from subsequent developments, notably in networking. For example:
Principle 20: Isolate public access systems from mission critical resources.
Principle 33: Use unique identities to ensure accountability.
While these new principles captured newer issues and concerns than the 1975 list, they also captured assumptions regarding system development and operation. For example, Principle 20 assumes that the public will never have access to “mission critical resources.” However, many companies rely heavily on Internet sales for revenue. They must clearly ignore this principle in order to conduct those sales.
When we examine curriculum standards, notably those used by the US government to certify academic programs in information security, we find more ambiguity. All six of the curriculum standards refer to principles in an abstract sense. None actually provide a specific list of principles, although a few refer to the now-abandoned GASSP. A few of Schroeder and Saltzer's design principles appear piecemeal as concepts and mechanisms, notably least privilege, separation of privilege (called “segregation of duties” in NSTISSC, 1994), and compromise recording (auditing).
The Information Assurance and Security IT 2008 curriculum recommendations (ACM and IEEE, 2008) identify design principles as an important topic, and provide a single example: “defense in depth.” This is a restatement of NIST's Principle 16.
Co-authors Saltzer and Kaashoek published the textbook Principles of Computer System Design in 2009 (Saltzer and Kaashoek, 2009). The book lists sixteen general design principles and several specific principles, including six security-specific principles. Here is a list of principles that were essentially inherited from the 1975 paper:
Here are new – or newly stated – principles compared to those described in 1975:
Neither of the uncertain principles listed in 1975 made it into this revised list. Despite this, event logging and auditing is a fundamental element of modern computer security practice. Likewise, work factor calculations continue to play a role in the design of information security systems. Pfleeger and Pfleeger highlighted “weakest link” and “easiest penetration” principles that reflect the work factor concept. However, there are subtle trade-offs in work factor calculations that may make it a poor candidate for stating as a concise and easy-to-apply principle.
The textbook Elementary Information Security presents a set of eight basic information security principles. While many directly reflect principles from Saltzer and Schroeder, they also reflect more recent terminology and concepts. The notion of “basic principles” stated as brief phrases seems like a natural choice for introducing students to a new field of study.
The textbook's contents were primarily influenced by two curriculum standards. The first was the “National Training Standard for Information System Security Professionals,” (NSTISSC, 1994). While this document clearly showed its age, it remains the ruling standard for general security training under the US government's Information Assurance Courseware Evaluation (IACE) Program (NSA, 2012). In February, 2012, the IACE program certified the textbook as covering all topics required by the 1994 training standard. The second curriculum standard is the “Information Technology 2008 Curriculum Guidelines” (ACM and IEEE Computer Society, 2008). The textbook covers all topics and core learning outcomes recommended in the Information Assurance and Security section of the Guidelines.
To fulfill their instructional role, each principle needed to meet certain requirements. Each needed to form a memorable phrase related to its meaning, with preference given to existing, familiar phrases. Each had to reflect the current state of the practice, and not simply a “nice to have” property. Each had to be important enough to appear repeatedly as new materials were covered. Each principle was introduced when it played a significant role in a new topic, and no sooner. Students were not required to learn and remember a set of principles that they didn't yet understand or need.
This yielded the following eight principles:
Continuous Improvement - continuously assess how well we achieve our objectives and make changes to improve our results. Modern standards for information security management systems, like ISO 27001, are based on continuous improvement cycles. Such a process also implicitly incorporates compromise recording from 1975 and “design for iteration” from 2009. Introduced in Chapter 1, along with a basic six-step security process to use for textbook examples and exercises.
Least Privilege - provide people or other entities with the minimum number of privileges necessary to allow them to perform their role in the system. This literally repeats one of the 1975 principles. Introduced in Chapter 1.
Defense in Depth - build a system with independent layers of security so that an attacker must defeat multiple independent security measures for the attack to succeed. This echoes “least common mechanism” but seeks to address a separate problem. Defense in depth is also a well-known alternative for stating NIST's Principle 16. Introduced in Chapter 1.
Open Design - build a security mechanism whose design does not need to be secret. This also repeats a 1975 principle. Introduced in Chapter 2.
Chain of Control - ensure that either trustworthy software is being executed, or that the software's behavior is restricted to enforce the intended security policy. This is an analogy to the “chain of custody” concept in which evidence must always be held by a trustworthy party or be physically secured. A malware infection succeeds if it can redirect the CPU to execute its code with enough privileges to embed itself in the computer and spread. Introduced in Chapter 2.
Deny by Default – grant no accesses except those specifically established in security rules. This is a more-specific variant of Saltzer and Schroeder's “fail safe defaults” that focuses on access control. The original statement is less specific, so it applies in safety and control problems. Introduced in Chapter 3.
Transitive Trust - If A trusts B, and B trusts C, then A also trusts C. In a sense this is an inverted statement of “least common mechanism,” but it states the problem in a simpler way for introductory students. Moreover, this is already a widely-used term in computer security. Introduced in Chapter 4.
Separation of Duty – decompose a critical task into separate elements performed by separate individuals or entities. This reflects the most common phrasing in the security community. Some writers phrase it as “segregation of duty” or “separation of privilege.” Introduced in Chapter 8.
The textbook's list focused on memorable phrases that were widely accepted in the computer security community. Principles introduced in earlier chapters always resurface in examples in later chapters. In retrospect, the list is missing at least one pithy and well-known maxim: “Trust, but verify.” The book discusses the maxim in Chapter 13, but does not tag it as a basic principle.
For better or worse, three of the 1975 principles do not play a central role in modern information security practice. These are simplicity, complete mediation, and psychological acceptability. We examine each below.
There is no real market for simplicity in modern computing. Private companies release product improvements to entice new buyers. The sales bring in revenues to keep the company operating. The company remains financially successful as long as the cycle continues. Each improvement, however, increases the underlying system's complexity. Much of the free software community is caught in a similar cycle of continuous enhancement and release. Saltzer and Kaashoek (2009) call for “sweeping simplifications” instead of overall simplicity, reflecting this change.
Complete mediation likewise reflects a sensible but obsolete view of security decision making. Network access control is spread across several platforms, no one of which makes the whole decision. A packet filter may grant or deny access to packets, but it can't detect a virus-infected email at the packet level. Instead it forwards email to a series of servers that apply virus and spam checks before releasing the email to the destination mailbox. Even then, the end user might apply a digital signature check to perform a final verification of the email's contents.
Psychological acceptability, or the “principle of least astonishment,” is an excellent goal, but it is honored more in the breach than in the observance. The current generation of “graphical” file access control interfaces provides no more than rudimentary control over low-level access flags. It takes a sophisticated understanding of the permissions already in place to understand how a change in access settings might really affect a particular user's access.
Only a handful of Saltzer and Schroeder's original 1975 design principles have stood the test of time. Nonetheless, this represents a memorable success. Kerckhoffs, a 19th century French cryptographic expert, published a list of principles for hand-operated cipher systems, some of which we still apply to cryptosystems today. But most experts only recognize a single principle as “Kerckhoffs's Principle,” and that is his view on open design: a cryptosystem should not rely on the secrecy of its design, since the system may be captured by the enemy. In addition to open design, both the principle of least privilege and the principle of separation of privilege appeared on the 1975 list and are still widely recognized by security experts.
Perhaps lists of principles belong primarily in the classroom and not in the workplace. The short phrases are easy to remember, but they may promote a simplistic view of technical problems. Students need simplicity to help them build an understanding of a more complex reality.
ACM and IEEE Computer Society, 2008, Information Technology 2008 Curriculum Guideline, http://www.acm.org/education/curricula/IT2008%20Curriculum.pdf, (retrieved March 1, 2012).
Bishop, 2003. Computer Security: Art and Science, Boston: Addison-Wesley.
Bishop, 2005. Introduction to Computer Security, Boston: Addison-Wesley.
I2SF, 1999. “Generally Accepted System Security Principles” International Information Security Foundation.
ISSA, 2004. “Generally Accepted Information Security Principles,” Information System Security Association.
Kerckhoffs, Auguste, 1883. “La cryptographie militaire,” Journal des sciences militaires IX.
NCSC, 1985. Trusted Computer System Evaluation Criteria, Ft. Meade, MD: National Computer Security Center.
NIST, 1995, “An Introduction to Computer Security,” NIST SP 800-12, Gaithersburg, MD: National Institute of Standards and Technology.
NSA, 2012. “IA Courseware Evaluation Program – NSA/CSS,” web page, National Security Agency. http://www.nsa.gov/ia/academic_outreach/iace_program/index.shtml (retrieved Feb 29, 2012).
NRC, 1991. Computers at Risk: Safe Computing in the Information Age, Washington: National Academy Press. http://www.nap.edu/openbook.php?record_id=1581 (retrieved Feb 29, 2012).
NSTISSC, 1994. “National training standard for information security (INFOSEC) professionals,” NSTISSI 4011, Ft. Meade, MD: National Security Telecommunications and Information Systems Security Committee.
OWASP, 2012, “Category: Principle - OWASP,” web page, Open Web Application Security Project, https://www.owasp.org/index.php/Category:Principle (retrieved Feb 29, 2012).
Pfleeger, Charles, 1997. Security in Computing 2nd ed., Wiley.
Pfleeger, Charles, and Shari Pfleeger, 2003. Security in Computing 3rd ed. ,Wiley.
Saltzer, Jerome, 1974. “Protection and the control of information sharing in Multics,” CACM 17(7), July, 1974.
Saltzer, Jerome, and Kaashoek, 2009. Principles of Computer System Design, Wiley.
Saltzer, Jerome, and Schroeder, 1975. “The protection of information in computer systems,” Proc IEEE 63(9), September, 1975.
Shannon, 1949. “Communication Theory of Secrecy Systems,” Bell System Technical Journal 28(4).
Smith, Sean, and Marchesini, 2008. The Craft of System Security, Addison-Wesley.
Smith, Richard, 2012. Elementary Information Security, Burlington, MA: Jones and Bartlett.
Stoneburner, Gary, Clark Hayden, and Alexis Feringa, 2004. “Engineering Principles for Information Technology Security,” SP 800-27 A, Gaithersburg, MD: National Institute of Standards and Technology.
Swanson, Marianne, and Barbara Guttman, 1996. “Generally Accepted Principles and Practices for Securing Information Technology Systems,” SP 800-14, Gaithersburg, MD: National Institute of Standards and Technology.
Forouzan, 2008. Cryptography and Network Security, McGraw-Hill.
Gollmann, 2006. Computer Security 2nd ed., Wiley.
Newman, 2010. Computer Security: Protecting Web Resources, Jones and Bartlett.
Stallings, 2003, Network Security Essentials, Prentice-Hall.
Stallings, 2006. Cryptography and Network Security, Prentice-Hall.
Stallings and Brown, 2008. Computer Security: Principles and Practice, Prentice-Hall.
Stamp, 2006. Information Security: Principles and Practice, Wiley.
Whitman and Mattord, 2005. Principles of Information Security 2nd ed., Thomson.
Visit the textbook site eisec.us for the latest supporting materials and study guides.
Draft materials are provided below:
Jones & Bartlett Learning, November, 2011.
The only textbook verified by the US Government to conform fully to the Committee on National Security Systems' national training standard for information security professionals (NSTISSI 4011).
This comprehensive, accessible Information Security text is ideal for the one-term, undergraduate college course. The book integrates risk assessment and security policy throughout, since security systems work best at achieving the goals they are designed to meet, and security policy ties real-world goals to security mechanisms. Early chapters discuss individual computers and small LANs, while later chapters deal with distributed site security and the Internet. Cryptographic topics follow the same progression, starting on a single computer and evolving to Internet-level connectivity. Mathematical concepts are defined throughout the text, and tutorials with mathematical tools are provided to ensure students grasp the information at hand.
See below for sample contents. Sample chapters are also available from the publisher's site.
1.1. The Security Landscape
1.2. Process Example: Bob’s Computer
1.4. Identifying Risks
1.5. Prioritizing Risks
1.6. Ethical Issues in Security Analysis
1.7. Security Example: Aircraft Hijacking
2.1. Computers and Programs
2.2. Programs and Processes
2.3. Buffer Overflow and The Morris Worm
2.4. Access Control Strategies
2.5. Keeping Processes Separate
2.6. Security Policy and Implementation
2.7. Security Plan: Process Protection
3.1. The File System
3.2. Executable Files
3.3. Sharing and Protecting Files
3.4. Security Controls for Files
3.5. File Security Controls
3.6. Patching Security Flaws
3.7. Process Example: The Horse
3.8. Chapter Resources
4.1. Controlled Sharing
4.2. File Permission Flags
4.3. Access Control Lists
4.4. Microsoft Windows ACLs
4.5. A Different Trojan Horse
4.6. Phase Five: Monitoring The System
4.7. Chapter Resources
5.1. Phase Six: Recovery
5.2. Digital Evidence
5.3. Storing Data on a Hard Drive
5.4. FAT: An Example File System
5.5. Modern File Systems
5.6. Input/Output and File System Software
6.1. Unlocking a Door
6.2. Evolution of Password Systems
6.3. Password Guessing
6.4. Attacks on Password Bias
6.5. Authentication Tokens
6.6. Biometric Authentication
6.7. Authentication Policy
7.1. Protecting the Accessible
7.2. Encryption and Cryptanalysis
7.3. Computer-Based Encryption
7.4. File Encryption Software
7.5. Digital Rights Management
8.1. The Key Management Challenge
8.2. The Reused Key Stream Problem
8.3. Public-key Cryptography
8.4. RSA: Rivest-Shamir-Adleman
8.5. Data Integrity and Digital Signatures
8.6. Publishing Public Keys
9.1. Securing a Volume
9.2. Block Ciphers
9.3. Block Cipher Modes
9.4. Encrypting a Volume
9.5. Encryption in Hardware
9.6. Managing Encryption Keys
10.1. The Network Security Problem
10.2. Transmitting Information
10.3. Putting Bits on a Wire
10.4. Ethernet: A Modern LAN
10.5. The Protocol Stack
10.6. Network Applications
11.1. Building Information Networks
11.2. Combining Computer Networks
11.3. Talking Between Hosts
11.4. Internet Addresses in Practice
11.5. Network Inspection Tools
12.1. “Smart” Versus “Dumb” Networks
12.2. Internet Transport Protocols
12.3. Names on the Internet
12.4. Internet Gateways and Firewalls
12.5. Long Distance Networking
13.1. The Challenge of Community
13.2. Management Processes
13.3. Enterprise Issues
13.4. Enterprise Network Authentication
13.5. Contingency Planning
14.1. Communications Security
14.2. Crypto Keys on a Network
14.3. Crypto Atop the Protocol Stack
14.4. Network Layer Cryptography
14.5. Link Encryption on 802.11 Wireless
14.6. Encryption Policy Summary
15.1. Internet Services
15.2. Internet Email
15.3. Email Security Problems
15.4. Enterprise Firewalls
15.5. Enterprise Point of Presence
16.1. Hypertext Fundamentals
16.2. Basic Web Security
16.3. Dynamic Web Sites
16.4. Content Management Systems
16.5. Ensuring Web Security Properties
17.1. Secrecy In Government
17.2. Classifications and Clearances
17.3. National Policy Issues
17.4. Communications Security
17.5. Data Protection
17.6. Trustworthy Systems
The goal of this textbook is to introduce college students to information security. Security often involves social and organizational skills as well as technical understanding. To solve practical security problems, we must balance real-world risks and rewards against the cost and bother of available security techniques. The text uses continuous process improvement to integrate these elements.
Security is a broad field. Some students may excel in the technical aspects, while others may shine in the more social or process-oriented aspects. Many successful students fall between these poles. The text offers opportunities for all types of students to excel.
If we want a solid understanding of security technology, we must look closely at the strengths and weaknesses of underlying information technology itself. This requires a background in computer architecture, operating systems, and computer networking. It’s hard for a typical college student to achieve breadth and depth in these subjects and still have time to really study security.
Instead of leaving a gap in students’ understanding, this book provides introductions to essential technical topics. Chapter 2 explains the basics of computer operation and instruction execution. This prepares students for a description of process separation and protection, which illustrates the essential role of operating systems in enforcing security.
Chapter 5 introduces file systems and input/output in modern operating systems. This lays a foundation for forensic file system analysis. It also shows students how a modern operating system organizes a complex service. This sets the stage for Chapter 10’s introduction to computer networking and protocol software.
Introducing Continuous Process Improvement
The text organizes security problem-solving around a six-phase security process. Chapter 1 introduces the process as a way of structuring information about a security event, and presents a simple approach to risk analysis. Chapter 2 introduces security policies as a way to state security objectives, and security controls as a way to implement a policy. Subsequent chapters introduce system monitoring and incident response as ways to assess system security and improve it.
Each step in the process builds on earlier steps. Each step also provides a chance to assess how well our work addresses our security needs. This is the essence of continuous process improvement.
In order to give students an accurate view of process improvement, the text introduces document structures that provide cross references between different steps of the process. We use elements of each earlier phase to construct information in the following phase, and we often provide a link back to earlier data to ensure complete coverage. While this may seem like nit-picking in some cases, it allows mastery of essential forms of communication in the technical and professional world.
When used as a textbook, the material is intended for lower division undergraduates, or for students in a two-year community college program. Students should have completed high school mathematics. Typical students should have completed an introductory computing or programming course.
Instructors may want to use this book for either a one- or two-semester course. A one-semester course would usually cover one chapter a week; the instructor may want to combine a couple of earlier chapters or skip the final chapter. Some institutions may find it more effective to teach the material over a full year. This gives students more time to work with the concepts and to cover all topics in depth.
Following the style of my earlier books, this text focuses on diagrams and practical explanations to present fundamental concepts. This makes the material clearer to all readers and makes it more accessible to the math-phobic reader. Many concepts, particularly in cryptography, can be clearly presented in either a diagram or in mathematical notation. This text uses both, with a bias towards diagrams.
Many fundamental computing concepts are wildly abstract. This is also true in security, where we sometimes react to illusions perceived as risks. To combat this, the text incorporates a series of concrete examples played out by characters with names familiar to those who read cryptographic papers: Bob, Alice, and Eve. They are joined by additional classmates named Tina and Kevin, since different people have different security concerns.
The material in this text fulfills curriculum requirements published by the US government and the Association for Computing Machinery (ACM). In particular, the text covers all required topics for training information systems security professionals under the Information Assurance Courseware Evaluation Program (NSTISSI #4011) established by the US National Security Agency (NSA). The text also provides substantial coverage of the required topics for training senior system managers (CNSSI #4012) and for system administrators (CNSSI #4013).
The text also covers the core learning outcomes for information security education published in the ACM’s “IT 2008” curricular recommendations for Information Technology education. As a reviewer and contributor to the published recommendations, the author is familiar with their guidelines.
Students who are interested in becoming a Certified Information System Security Professional (CISSP) may use this book as a study aid for the examination. All key areas of the CISSP Common Body of Knowledge are covered in this text. Certification requires four or five years of professional experience in addition to passing the exam.
Information security is a fascinating but abstract subject. This text introduces students to real-world security problem solving, and incorporates security technology into the problem-solving process. There are lots of “how to” books that explain how to harden a computer system.
Many readers and most students need more than a “how to” book. They need to decide which security measures are most important, or how to trade off between alternatives. Such decisions often depend on what assets are at stake and what risks the computer’s owner is willing to take.
The practitioner’s most important task is the planning and analysis that helps choose and justify a set of security measures. In my own experience, and in that of my most capable colleagues, security systems succeed when we anticipate the principal risks and design the system to address them. In fact, once we have identified what we want for security (our policy requirements), we don’t need security gurus to figure out what we get (our implementation). We just need capable programmers, testers, and administrators.
As each chapter unfolds, we encounter certain key terms indicated in bold italics. These highlight essential vocabulary used in the information security community. Successful students will recognize these terms.
The Resources section at the end of each chapter lists the key terms and provides review and exercises. The review questions help students confirm that they have absorbed the essential concepts. Some instructors may want to use these as recitation or quiz questions. The problems and exercises give students the opportunity to solve problems based on techniques presented in the text.
Here is an incomplete collection of reading materials associated with the textbook. Visit the Elementary Infosec site for a complete set of reading and study materials.
Some links lead to on-line articles published by professional societies like the ACM (Association for Computing Machinery) or IEEE (Institute of Electrical and Electronic Engineers). Serious computing experts often join one or both of these, and sign up for electronic library subscriptions. Many college and university libraries may also provide free access to these for students and faculty.
Visit the Quizlet web site to find study aids for the chapter's acronyms and terms by following the links below. Many students prefer to study the acronyms first.
The following blogs provide readable reports and commentary on information security.
In Japan, the term kaizen embodies the continuous improvement process.
During World War II, military enterprises poured vast resources into various technical projects, notably radar, codebreaking, and the atomic bomb. Those successes encouraged the peacetime military to pursue other large-scale technical projects. A typical project would start with a very large budget and a vague completion date a few years in the future. But in practice, many of these projects vastly exceeded both the budget and the predicted timetable.
A handful of projects, notably the Polaris Missile project, achieved success while adhering closely to their initial budget and schedule estimates. Pressure on defense budgets led the US DOD to identify features of the successful projects so they might be applied to future work. This was the genesis of systems engineering. The DOD's Defense Acquisition University has produced an introduction to systems engineering (PDF format) in the defense community.
The International Council on Systems Engineering (INCOSE) provides a more general view of systems engineering. NASA also provides on-line training materials on their Space Systems Engineering site.
Chapter 1 introduces the first three of eight basic principles of information security.
Different authorities present different lists of principles. International standards bodies, including NIST in the US, tend to produce very general lists of principles, reflecting notions such as "be safe," "keep records," and other generalizations (for example, see NIST's SP 800-14: "Generally Accepted Principles and Practices for Securing Information Technology Systems"). These principles represent basic truths about security, but few are stated in a way that helps one make security decisions.
Saltzer and Schroeder produced a now-classic list based on experience with the Multics time sharing system in the 1970s: "The Protection of Information in Computer Systems," Proceedings of the IEEE 63, 9 (September, 1975). Some of these principles reflect features of the Multics system while others reflect some well-known shortcomings with most systems of that time. Copies exist online at Saltzer's own web site and at the University of Virginia.
There is also a Cryptosmith blog post that compares the textbook's list of principles with those in Saltzer and Schroeder.
A high-level security analysis provides a brief summary of a security situation at a given point in time. The next section provides a checklist for writing a high-level security analysis.
The risk assessment processes noted in the textbook are all available online:
The fundamental reference for everything related to the events of September 11, 2001, is the Final Report of the National Commission on Terrorist Attacks Upon the United States, a.k.a. "The 9/11 Commission Report," published in 2004.
Following 9/11, the BBC published a brief history of airline hijackings. The 2003 Centennial of Flight web site provides a more general summary of violent incidents in aviation security. In 2007, New York Magazine published a more detailed hijacking time-line in conjunction with breaking news on the D. B. Cooper hijacking case. The US FBI web site contains a lot of information about the D. B. Cooper case, including a 2007 update.
The textbook presents a high-level security analysis as a short writing exercise that summarizes a security situation. The analysis generally describes a situation at a particular point in time. For example, the 9/11 discussion in the textbook describes air travel security before 9/11. The analysis describes the six phases of the security process:
Here is a checklist of the basic properties of a high-level analysis:
Note that a complete security plan will also cover the six phases, but it is not limited to this length. A complete plan covers each phase thoroughly.
Visit the Quizlet web site to find study aids for the chapter's acronyms and terms by following the links below. Many students prefer to study the acronyms first.
Here is a 26-minute video of middle school students visiting a "walk through" computer to learn the basics of computer operation. It was taken from the PBS TV series "Newton's Apple" in 1990. Although the technology is over 20 years old, the fundamental components remain the same, except for speed, size, and capacity.
There are, of course, countless images and videos available through on-line searching that show specific elements of computer systems.
Students who have not yet studied these topics in detail will want to visit web sites that provide an introduction to binary and hex. YouTube user Ryan of Aberdeen has created a video tutorial (9 minutes). There are also written tutorials:
A faculty member at NC State University maintains a site that provides an overview of the Morris worm.
Eugene Spafford (aka spaf) wrote a report describing the worm, its operations, and its effects (PDF), shortly after the incident.
Eichin and Rochlis of MIT published a report of the worm incident from the MIT perspective (PDF). This was presented at the IEEE Symposium on Security and Privacy the following year.
In 1990, Peter Denning published a book that brought together several papers on the Morris worm and other security issues emerging at that time, titled Computers Under Attack: Intruders, Worms and Viruses.
Spafford also maintains an archive of worm-related information at Purdue University.
Auguste Kerckhoffs' original paper on cryptographic system design recommended that cryptographic systems be published and that secrecy should reside entirely in a secret key. The paper was published in French in 1883. Portions of Kerckhoffs' paper are available on-line including partial English translations.
Claude Shannon's "Communication Theory of Secrecy Systems" (Bell System Technical Journal, vol. 28, no. 4, 1949) contains his assumption "the enemy knows the system being used" (italics his). Bell Labs provides general information about Shannon's work and publications.
Eric Raymond published a famous essay on the benefits of open design and of sharing program source code in general, called "The Cathedral and The Bazaar." This essay has inspired many members of the Open Source community.
Butler Lampson introduced the access matrix in his 1971 paper, "Protection." Lampson has posted a copy of his paper on-line in several formats.
This is, unfortunately, a lot harder than it seems. As RAM has grown smaller and I/O has grown more complex, motherboard components have changed dramatically in size and appearance. Here are suggestions on identifying key features in older and newer motherboards.
The most reliable way to identify a motherboard's contents is to locate a copy of its installation manual. These are usually posted on the web by the motherboard's manufacturer. Most boards clearly include the manufacturer's name and the board's model number.
If the manufacturer and model number aren't obvious, it may be possible to identify the motherboard using Google Images. Enter the word "motherboard" as a search term along with other textual items on the board. Compare the images displayed with the color and layout of the motherboard in question. Keep in mind everything should match when you find the correct board. Missing or misplaced features indicate that the boards don't match. A popular motherboard may appear many times, but most images will lead to pages that indicate the manufacturer and model. In some cases, the image may lead directly to the manufacturer's own pages.
Visit the Quizlet web site to find study aids for the chapter's acronyms and terms by following the links below. Many students prefer to study the acronyms first.
Security consultant Fred Cohen performed much of the pioneering analysis of computer viruses. His web site contains several useful articles on virus technology. Even though some of the material is 30 years old, the basic technical truths remain unchanged.
Some anti-virus vendors provide summaries of current anti-virus and malware activities:
Here is a list of malware briefly described in the textbook, plus links to in-depth reports on each one. Check recent news: security experts occasionally make progress in eradicating one or another of these, but the botnets sometimes recover. Many of these are PDFs.
Videos: Ralph Langner, a German expert in control systems security, gave a TED talk describing Stuxnet (~11 minutes). Bruce Dang of Microsoft also gave a detailed presentation about Stuxnet (75 minutes) at a conference.
Butler Lampson introduced the access matrix in his 1971 paper, "Protection" (PDF). Lampson has posted a copy on-line in several formats.
Although most modern systems use resource-oriented permissions to control access rights, there are a few cases that use capabilities, which associate rights with active entities like programs and users. Jack Dennis and Earl Van Horn of MIT introduced the notion of capabilities in their 1965 report "Programming Semantics for Multiprogrammed Computers," which was published in Communications of the ACM in 1966.
Marc Stiegler has posted an interesting introduction to capability-based security that ties it to other important security concepts. The EROS OS project has also posted an essay that explains capability-based security. For thorough coverage of capability-based architecture circa 1984, see Henry Levy's book Capability-Based Computer Systems. He has posted it on-line.
Microsoft has posted an article that describes access control on Windows files and on the Windows "registry," a special database of system-specific information.
Electrical engineers have relied on state diagrams for decades to help design complicated circuits. The technique is also popular with some software engineers, though it rarely finds its way into courses on computer programming. Any properly-constructed state diagram may be translated into a state table that provides the same information in a tabular form. Tony Kuphaldt's free on-line textbook Lessons in Electric Circuits explains state machines in the context of electric circuits in Volume IV, Chapter 11: Sequential Circuits-Counters.
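To make the connection between diagrams and tables concrete, here is a minimal Python sketch; the coin-operated turnstile, its states, and its events are hypothetical, chosen only for illustration of how a state diagram's transitions become a state table:

    # State table for a hypothetical coin-operated turnstile:
    # (current state, input event) -> next state
    state_table = {
        ("locked",   "coin"): "unlocked",
        ("locked",   "push"): "locked",
        ("unlocked", "push"): "locked",
        ("unlocked", "coin"): "unlocked",
    }

    state = "locked"
    for event in ["push", "coin", "push"]:
        state = state_table[(state, event)]
        print(event, "->", state)   # push -> locked, coin -> unlocked, push -> locked

Each row of the dictionary corresponds to one arrow in the equivalent state diagram, which is why the two forms carry the same information.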
Upper-level computer science students may encounter state diagrams in a course on automata theory, in which they use such diagrams to represent deterministic finite automata. Such mechanisms can handle the simplest type of formal language, a regular grammar. Most people encounter regular grammars as regular expressions, an arcane syntax used to match text patterns when performing complicated search-and-replace operations in text editors.
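For readers who have not met regular expressions before, here is a minimal Python sketch of a pattern-based search-and-replace; the sample text and date format are made up purely for illustration:

    import re

    # Rewrite dates of the form MM/DD/YYYY as YYYY-MM-DD using a regular expression.
    text = "Logged on 09/11/2001, reviewed on 10/02/2001."
    fixed = re.sub(r"(\d{2})/(\d{2})/(\d{4})", r"\3-\1-\2", text)
    print(fixed)   # Logged on 2001-09-11, reviewed on 2001-10-02.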
Students introduced to modern structured design techniques using the Unified Modeling Language (UML) often use state machine diagrams, also called statecharts. On-line tutorials about UML state machines appear at Kennesaw State University and the Agile Modeling web site.
In the US, there are several organizations that track and report on information security vulnerabilities. Many of these organizations provide email alerts and other data feeds to keep subscribers up to date on emerging vulnerabilities. Some organizations provide their services to particular communities (e.g. government or military organizations, or customers of a vendor's products) while others provide reports to the public at large.
The SANS Internet Storm Center also provides a variety of on-line news feeds and reports, as well as a continuously-updated "Infocon Status" to indicate unusual changes in the degree of malicious activity on the Internet. Visit the Internet Storm Center for further information on current vulnerabilities and malicious Internet activity.
In 2000, Arbaugh, Fithen, and McHugh wrote an article describing a life-cycle model of information security vulnerabilities titled "Windows of Vulnerability: A Case Study Analysis", (IEEE Computer 33, December 2000). The authors have posted a copy of the article online (PDF).
The library at Stanford posted a brief history of the Trojan War. Although Homer's Iliad tells the story of the Trojan War, it says very little about the Greek trickery that led to the city's fall. The story is more the province of Virgil's Aeneid.
In the 1970s, Guy Steele at MIT started collecting bits of jargon used in the computer community. This yielded "The Jargon File," which Steele maintained for several years until it was passed on to Eric Raymond. According to the Jargon File, the term Trojan horse entered the computing lexicon via Dan Edwards of MIT and the NSA.
US-CERT has published a two-page guide on how to deal with a Trojan horse or virus infection on a computer (PDF).
There are numerous on-line tutorials on Unix and/or Linux file permissions, including ones provided by:
ACLs first appeared in the Multics timesharing system, as described in the paper "A General-Purpose File System For Secondary Storage," by R. Daley and Peter Neumann (Proc. 1965 Fall Joint Computer Conference) and on the Multicians web site.
Since ACLs could provide very specific access restrictions, they became recommended features of high-security systems. When the US DOD developed the "Trusted Computer System Evaluation Criteria," (PDF) (a.k.a. the TCSEC or Orange Book) ACLs were an essential feature of higher security systems. Modern security products are evaluated against the Common Criteria.
While traditional Unix systems did not have ACLs, more advanced versions of Unix incorporated them, partly to meet high security requirements like those in the Orange Book. This led to the development of POSIX ACLs as part of a proposed POSIX 1003.1e standard. The standards effort was abandoned, but several Unix-based systems did incorporate POSIX ACLs. Here are examples:
The ACL user interface on Mac OS-X is very simple. In fact, the OS-X ACLs are based on POSIX ACLs and may incorporate more sophisticated settings and inheritance rules than we see in the Finder's "Information" display. These features are available through special ACL options of the chmod shell command. One developer has produced an application called Sandbox that provides a more extensive GUI for managing the ACLs.
It can be challenging to find accurate online information about Windows ACLs, because the computer-based access controls are often confused with network-based access controls. The MS Developer Network provides general information about ACLs.
Researchers at CMU evaluated the Windows XP version of ACLs in a series of experiments documented in "Improving user-interface dependability through mitigation of human error," Intl J. Human-Computer Studies 63 (2005) 25-50, by Maxion and Reeder.
Here is a summary of memory size names and their corresponding address sizes. Many people memorize this type of information naturally through working with computer technology over time or during a professional career.
If you want to memorize these values, visit the Quizlet page. The page tests your knowledge of the smaller sizes (K, M, G, T), how these sizes are related (i.e. a terabyte is a thousand billion bytes), and how they relate to memory sizes (a TB needs an address approximately 40 bits long).
Here is a simple shortcut for estimating the number of bits required to address storage of a given size.
10³ ≈ 2¹⁰
To put this into practice, we do the following:
Let's work out an example with a terabyte: a trillion-byte memory.
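Since the example is simple arithmetic, a minimal Python sketch of the shortcut may help; the size names follow the K/M/G/T progression noted above, and the only assumption is that each factor of one thousand corresponds to roughly ten bits, per the 10³ ≈ 2¹⁰ rule:

    import math

    # The shortcut: 10^3 is roughly 2^10, so every factor of one thousand in a
    # memory size adds about ten bits to the address.
    for name, power_of_ten in [("kilobyte", 3), ("megabyte", 6), ("gigabyte", 9), ("terabyte", 12)]:
        size = 10 ** power_of_ten
        estimate = 10 * (power_of_ten // 3)    # shortcut estimate, in bits
        exact = math.ceil(math.log2(size))     # exact number of bits to address 'size' bytes
        print(f"{name}: shortcut {estimate} bits, exact {exact} bits")

For a terabyte (10¹² bytes, four factors of one thousand) the shortcut gives 4 × 10 = 40 bits, which matches the exact answer.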
Cryptographers develop new hash functions every few years because cryptanalysts and mathematicians find weaknesses in the older ones. Valerie Aurora provides a graphic illustration of this.
This section provides details not otherwise addressed by the main text.
Two educational standards in information system security refer to closely-related models of information system security. First, we have a US government training standard:
Second, we have an academic curriculum standard:
1.1 The Security Landscape
Not Just Computers Any More
1.1.1 Making Security Decisions
1.1.2 The Security Process
1.1.3 Continuous Improvement: A Basic Principle
The Roots of Continuous Improvement
1.2 Process Example: Bob’s Computer
1.3 Assets and Risk Assessment
Fine Points of Terminology
1.3.1 What Are We Protecting?
1.3.2 Security Boundaries
Least Privilege: A Second Basic Principle
Example: Boundaries in a Dorm
Analyzing the Boundary
The Insider Threat
1.3.3 Security Architecture
Defense In Depth: A Third Basic Principle
1.3.4 Risk Assessment Overview
1.4 Identifying Risks
1.4.1 Threat Agents
1.4.2 Security Properties, Services, and Attacks
1.5 Prioritizing Risks
1.5.1 Example: Risks to Alice’s Laptop
Step 1: Identify Computing Assets
Step 2: Identify Threat Agents and Potential Attacks
Step 3: Estimate the Likelihood of Individual Attacks
Step 4: Estimate the Impact of Attacks over Time
Step 5: Calculate the Impact of Each Attack
1.5.2 Other Risk Assessment Processes
1.6 Ethical Issues in Security Analysis
Laws, Regulations, and Codes of Conduct
1.6.1 Searching for Vulnerabilities
1.6.2 Sharing or Publishing Vulnerabilities
1.7 Security Example: Aircraft Hijacking
1.7.1 Hijacking: A High-Level Analysis
1.7.2 September 11, 2001
1.8.1 Review Questions
1.8.2 Exercises and Problems
2.1 Computers and Programs
Parallel Versus Serial Wiring
2.1.2 Program Execution
Separating Data and Control
2.2 Programs and Processes
2.2.1 Switching Between Processes
Observing Active Processes
2.2.2 The Operating System
2.3 Buffer Overflow and The Morris Worm
The ‘finger’ program
2.3.1 The ‘finger’ Overflow
The Worm Released
2.3.2 Security Alerts
2.4 Access Control Strategies
2.4.1 Puzzles and Patterns
Open Design: A Basic Principle
Cryptography and Open Design
Pattern-based Access Control
2.4.2 Chain of Control: Another Basic Principle
Controlling the BIOS
Subverting the Chain of Control
2.5 Keeping Processes Separate
Evolution of Personal Computers
Security on Personal Computers
Operating System Security Features
2.5.1 Sharing a Program
2.5.2 Sharing Data
2.6 Security Policy and Implementation
Constructing Alice’s Security Plan
Writing a Security Policy
2.6.1 Analyzing Alice’s Risks
2.6.2 Constructing Alice’s Policy
2.6.3 Alice’s Security Controls
Alice’s Backup Procedure
2.7 Security Plan: Process Protection
Policy for Process Protection
Functional Security Controls
The Dispatcher’s Design Description
The Design Features
The Dispatching Procedure
Security Controls for Process Protection
2.8.1 Review Questions
2.8.2 Problems and Exercises
3.1 The File System
File and Directory Path Names
3.1.1 File Ownership and Access Rights
File Access Rights
Initial File Protection
3.1.2 Directory Access Rights
3.2 Executable Files
3.2.1 Execution Access Rights
Types of Executable Files
3.2.2 Computer Viruses
3.2.3 Macro Viruses
3.2.4 Modern Malware: A Rogue’s Gallery
Conficker, also called Downadup
3.3 Sharing and Protecting Files
3.3.1 Policies for Sharing and Protection
Underlying System Policy
User Isolation Policy
User File Sharing Policy
3.4 Security Controls for Files
3.4.1 Deny by Default: A Basic Principle
The opposite of Deny by Default
3.4.2 Managing Access Rights
Capabilities in Practice
3.5 File Security Controls
3.5.1 File Permission Flags
System and Owner Access Rights in Practice
3.5.2 Security Controls to Enforce Bob’s Policy
3.5.3 States and State Diagrams
3.6 Patching Security Flaws
The Patching Process
Security Flaws and Exploits
Windows of Vulnerability
3.7 Process Example: The Horse
3.7.1 Troy: A High-Level Analysis
3.7.2 Analyzing the Security Failure
3.8 Chapter Resources
3.8.1 Review Questions
4.1 Controlled Sharing
Tailored File Security Policies
Bob’s Sharing Dilemma
4.1.1 Basic File Sharing on Windows
4.1.2 User Groups
4.1.3 Least Privilege and Administrative Users
Administration by Regular Users
User Account Control on Windows
4.2 File Permission Flags
4.2.1 Permission Flags and Ambiguities
4.2.2 Permission Flag Examples
Security Controls for the File Sharing Policy
4.3 Access Control Lists
Modern ACL Implementations
4.3.1 POSIX ACLs
4.3.2 Macintosh OS-X ACLs
4.4 Microsoft Windows ACLs
4.4.1 Denying Access
Determining Access Rights
Building Effective ACLs
4.4.2 Default File Protection
Moving and Copying Files
4.5 A Different Trojan Horse
A Trojan Horse Program
Transitive Trust: A Basic Principle
4.6 Phase Five: Monitoring The System
Catching an intruder
4.6.1 Logging Events
A Log Entry
The Event Logging Mechanism
Detecting Attacks by Reviewing the Logs
4.6.2 External Security Requirements
Laws, Regulations, and Industry Rules
External Requirements and the Security Process
4.7 Chapter Resources
4.7.1 Review Questions
5.1 Phase Six: Recovery
Incidents and Damage
5.1.1 The Aftermath of an Incident
Fault and Due Diligence
5.1.2 Legal Disputes
Resolving a Legal Dispute
5.2 Digital Evidence
The Fourth Amendment
5.2.1 Collecting Legal Evidence
Collecting Evidence at The Scene
Securing the Scene
Documenting the Scene
5.2.2 Digital Evidence Procedures
Authenticating a Hard Drive
5.3 Storing Data on a Hard Drive
Magnetic Recording and Tapes
Hard Drive Fundamentals
5.3.1 Hard Drive Controller
5.3.2 Hard Drive Formatting
High Level Format
5.3.3 Error Detection and Correction
Cyclic Redundancy Checks
Error Correcting Codes
5.3.4 Hard Drive Partitions
Partitioning to Support Older Drive Formats
Partitioning in Modern Systems
Partitioning and Fragmentation
Hiding Data with Partitions
5.3.5 Memory Sizes and Address Variables
Address, Index, and Pointer Variables
Memory Size Names and Acronyms
Estimating the Number of Bits
5.4 FAT: An Example File System
5.4.1 Boot Blocks
5.4.2 Building Files from Clusters
An Example FAT File
FAT Format Alternatives
5.4.3 FAT Directories
Long File Names
Undeleting a File
5.5 Modern File Systems
File System Design Goals
Conflicting File System Objectives
5.5.1 Unix File System
5.5.2 Apple’s HFS Plus
5.5.3 Microsoft’s NTFS
5.6 Input/Output and File System Software
File System Software
5.6.1 Software Layering
5.6.2 A Typical I/O Operation
Part A: Call the operating system
Part B: OS constructs the I/O operation
Part C: The driver starts the actual I/O device
Part D: The I/O operation ends
5.6.3 Security and I/O
Restricting the devices themselves
Restricting Parameters in I/O Operations
File Access Restrictions
5.7.1 Review Questions
6.1 Unlocking a Door
6.1.1 Authentication Factors
6.1.2 Threats and Risks
Attack Strategy: Low Hanging Fruit
6.2 Evolution of Password Systems
Password Hashing in Practice
6.2.1 One-way Hash Functions
Modern Hash Functions
A Cryptographic Building Block
6.2.2 Sniffing Credentials
6.3 Password Guessing
DOD Password Guideline
Off-line Password Cracking
6.3.1 Password Search Space
6.3.2 Truly Random Password Selection
6.3.3 Cracking Speeds
6.4 Attacks on Password Bias
Bias and Entropy
6.4.1 Biased Choices and Average Attack Space
Average Attack Space
Biased Password Selection
Measuring Likelihood, not Certainty
Making Independent Guesses
Example: 4-digit luggage lock
6.4.2 Estimating Language-Based Password Bias
Klein’s Password Study
6.5 Authentication Tokens
Passive Authentication Tokens
6.5.1 Challenge-Response Authentication
Another Cryptographic Building Block
Direct Connect Tokens
6.5.2 One-time Password Tokens
A Token’s Search Space
Average Attack Space
Attacking One-Time Password Tokens
Guessing a Credential
6.5.3 Token Vulnerabilities
6.6 Biometric Authentication
6.6.1 Biometric Accuracy
6.6.2 Biometric Vulnerabilities
6.7 Authentication Policy
6.7.1 Weak and Strong Threats
Effect of Location
6.7.2 Policies for Weak Threat Environments
A Household Policy
A Workplace Policy: Passwords Only
A Workplace Policy: Passwords and Tokens
6.7.3 Policies for Strong and Extreme Threats
Passwords Alone for Strong Threats
Passwords Plus Biometrics
Passwords Plus Tokens
Constructing the Policy
6.7.4 Password Selection and Handling
Strong But Memorable Passwords
The Strongest Passwords
6.8.1 Review Questions
7.1 Protecting the Accessible
7.1.1 Process Example: The Encrypted Diary
7.1.2 Encryption Basics
Categories of Encryption
A Process View of Encryption
Shared Secret Keys
7.1.3 Encryption and Information States
Illustrating Policy with a State Diagram
Proof of Security
7.2 Encryption and Cryptanalysis
7.2.1 The Vigenère Cipher
7.2.2 Electromechanical Encryption
7.3 Computer-Based Encryption
The Data Encryption Standard
The Advanced Encryption Standard
Predicting Cracking Speeds
7.3.1 Exclusive Or: A Crypto Building Block
7.3.2 Stream Ciphers: Another Building Block
Generating a Key Stream
An Improved Key Stream
7.3.3 Key Stream Security
Pseudo-Random Number Generators
The Effects of Ciphertext Errors
7.3.4 The One-time Pad
Soviet Espionage and One-time pads
Practical One-time pads
7.4 File Encryption Software
7.4.1 Built-in File Encryption
7.4.2 Encryption Application Programs
Ensuring Secure File Encryption
Protecting the Secret Key
7.4.3 Erasing a Plaintext File
Risks That Demand Overwriting
Preventing Low Level Data Recovery
Erasing Optical Media
7.4.4 Choosing a File Encryption Program
Software Security Checklist
File Encryption Security Checklist
Cryptographic Product Evaluation
7.5 Digital Rights Management
The DVD Content Scrambling System
7.6.1 Review Questions
8.1 The Key Management Challenge
Levels of Risk
Key Sharing Procedures
Distributing New Keys
8.1.2 Using Text for Encryption Keys
Taking Advantage of Longer Passphrases
Software Checklist for Key Handling
8.1.3 Key Strength
8.2 The Reused Key Stream Problem
8.2.1 Avoiding Reused Keys
Changing the Internal Key
Combining the Key with a Nonce
Software Checklist for Internal Keys Using Nonces
8.2.2 Key Wrapping: Another Building Block
Key Wrapping and Cryptoperiods
Software Checklist for Wrapped Keys
8.2.3 Separation of Duty: A Basic Principle
Separation of Duty with Encryption
8.2.4 DVD Key Handling
8.3 Public-key Cryptography
Attacking Public Keys
8.3.1 Sharing a Secret: Diffie-Hellman
Perfect Forward Secrecy
Variations of Diffie-Hellman
8.3.2 Diffie-Hellman: The Basics of the Math
8.3.3 Elliptic Curve Cryptography
8.4 RSA: Rivest-Shamir-Adleman
8.4.1 Encapsulating Keys with RSA
8.4.2 An Overview of RSA Mathematics
Brute Force Attacks on RSA
The Original Challenge
The Factoring Problem
Selecting a Key Size
Other Attacks on RSA
8.5 Data Integrity and Digital Signatures
8.5.1 Detecting Malicious Changes
One-way Hash Functions
8.5.2 Detecting a Changed Hash Value
8.5.3 Digital Signatures
8.6 Publishing Public Keys
8.6.1 Public-Key Certificates
8.6.2 Chains of Certificates
Web of Trust
Trickery with Certificates
8.6.3 Authenticated Software Updates
8.7.1 Review Questions
9.1 Securing a Volume
9.1.1 Risks To Volumes
Discarded Hard Drives
9.1.2 Risks and Policy Trade-offs
Identifying Critical Data
Policy for Unencrypted Volumes
Policy for Encrypted Volumes
9.2 Block Ciphers
Building a Block Cipher
The Effect of Ciphertext Errors
9.2.1 Evolution of DES and AES
DES and Lucifer
The Development of AES
9.2.2 The RC4 Story
RC4 Leaking, Then Cracking
9.2.3 Qualities of Good Encryption Algorithms
Explicitly designed for encryption
Security does not rely on its secrecy
Available for analysis
Subjected to analysis
No practical weaknesses
Choosing an Encryption Algorithm
9.3 Block Cipher Modes
9.3.1 Stream Cipher Modes
9.3.2 Cipher Feedback Mode
9.3.3 Cipher Block Chaining
9.4 Encrypting a Volume
Choosing a Cipher Mode
Hardware Versus Software
9.4.1 Volume Encryption in Software
Files as Encrypted Volumes
9.4.2 Adapting an Existing Mode
Drive Encryption with Counter Mode
Constructing the Counter
An Integrity Risk
Drive Encryption with CBC Mode
Integrity Issues with CBC Encryption
9.4.3 A “Tweakable” Encryption Mode
9.4.4 Residual Risks
Looking for plaintext
9.5 Encryption in Hardware
Recycling the Drive
9.5.1 The Drive Controller
9.5.2 Drive Locking and Unlocking
9.6 Managing Encryption Keys
9.6.1 Key Storage
Working key storage in hardware
Persistent Key Storage
Managing removable keys
9.6.2 Booting an Encrypted Drive
9.6.3 Residual Risks to Keys
Eavesdrop on the encryption process
Sniffing keys from swap files
Cold boot attack
Recycled Password Attack
The “Master Key” Risk
9.7.1 Review Questions
10.1 The Network Security Problem
10.1.1 Basic Network Attacks and Defenses
Example: Sharing Eve’s Printer
10.1.2 Physical Network Protection
Protecting External Wires
10.1.3 Host and Network Integrity
Botnets in Operation
The Insider Threat
10.2 Transmitting Information
10.2.1 Message Switching
10.2.2 Circuit Switching
10.2.3 Packet Switching
Mix and Match Network Switching
10.3 Putting Bits on a Wire
Synchronous versus Asynchronous Links
10.3.1 Wireless Transmission
Frequency, Wavelength, and Bandwidth
AM and FM Radio
Radio Propagation and Security
10.3.2 Transmitting Packets
Network Efficiency and Overhead
10.3.3 Recovering a Lost Packet
10.4 Ethernet: A Modern LAN
10.4.1 Wiring a Small Network
10.4.2 Ethernet Frame Format
MAC Addresses and Security
10.4.3 Finding Host Addresses
Addresses from Keyboard Commands
Addresses from Mac OS
Addresses from Microsoft Windows
10.4.4 Handling Collisions
10.5 The Protocol Stack
10.5.1 Relationships Between Layers
10.5.2 The OSI Protocol Model
The Orphaned Layers
10.6 Network Applications
Network Applications and Information States
10.6.1 Resource Sharing
10.6.2 Data and File Sharing
Delegation: A Security Problem
10.7.1 Review Questions
11.1 Building Information Networks
Network Topology: Evolution of the Phone Network
11.1.1 Point-to-Point Network
11.1.2 Star Network
11.1.3 Bus Network
11.1.4 Tree Network
11.2 Combining Computer Networks
Traversing Computer Networks
The Internet Emerges
11.2.1 Hopping Between Networks
Routing Internet Packets
11.2.2 Evolution of Internet Security
Protecting the ARPANET
Early Internet Attacks
Early Internet Defenses
11.2.3 Internet Structure
Starting an ISP
11.3 Talking Between Hosts
Socket API capabilities
11.3.1 IP Addresses
IP Version 6
11.3.2 IP Packet Format
11.3.3 Address Resolution Protocol
The ARP Cache
11.4 Internet Addresses in Practice
IPv4 Address Classes
11.4.1 Addresses, Scope, and Reachability
11.4.2 Private IP Addresses
Assigning Private IP Addresses
Dynamic Host Configuration Protocol
11.5 Network Inspection Tools
11.5.1 Wireshark Examples
Address Resolution Protocol
11.5.2 Mapping a LAN with nmap
The Nmap Network Mapper Utility
Use Nmap with Caution
11.6.1 Review Questions
12.1 “Smart” Versus “Dumb” Networks
The End-to-End Principle
12.2 Internet Transport Protocols
User Datagram Protocol
End-to-End Transport Protocols
12.2.1 Transmission Control Protocol
Sequence and Acknowledgement Numbers
12.2.2 Attacks on Protocols
Internet Control Message Protocol
12.3 Names on the Internet
The Name Space
12.3.1 Domain Names in Practice
Using a Domain Name
12.3.2 Looking Up Names
12.3.3 DNS Protocol
Resolving a Domain Name Via Redirection
12.3.4 Investigating Domain Names
12.3.5 Attacking DNS
DOS Attacks on DNS Servers
DOS Attacks and DNS Resolvers
DNS Security Improvements
12.4 Internet Gateways and Firewalls
12.4.1 Network Address Translation (NAT)
Configuring DHCP and NAT
12.4.2 Filtering and Connectivity
12.4.3 Software-based Firewalls
12.5 Long Distance Networking
12.5.1 Older technologies
Analog broadcast networks
Circuit-switched telephone systems
Analog-based digital networks
Analog two-way radios
12.5.2 Mature technologies
Dedicated digital network links
12.5.3 Evolving technologies
Optical fiber networks
Bidirectional satellite communications
12.6.1 Review Questions
13.1 The Challenge of Community
13.1.1 Companies and Information Control
Reputation: Speaking with One Voice
Companies and Secrecy
Need To Know
13.1.2 Enterprise Risks
Insiders and Outsiders
13.1.3 Social Engineering
Thwarting Social Engineering
13.2 Management Processes
13.2.1 Security Management Standards
Evolution of Management Standards
13.2.2 Deployment Policy Directives
13.2.3 Management Hierarchies and Delegation
Profit Centers and Cost Centers
Implications for Information Security
13.2.4 Managing Information Resources
Managing Information Security
13.2.5 Security Audits
13.2.6 Information Security Professionals
Information Security Training
Information Security Certification
13.3 Enterprise Issues
Education, Training, and Awareness
13.3.1 Personnel Security
Employee Life Cycle
Administrators and Separation of Duty
13.3.2 Physical Security
Information System Protection
13.3.3 Software Security
Software Development Security
Repeatability and Traceability
Formalized Coding Activities
Avoiding Risky Practices
Software-based access controls
13.4 Enterprise Network Authentication
13.4.1 Direct Authentication
13.4.2 Indirect Authentication
Properties of Indirect Authentication
13.4.3 Off-Line Authentication
13.5 Contingency Planning
13.5.1 Data Backup and Restoration
Full Versus Partial Backups
File-Oriented Synchronized Backups
File-Oriented Incremental Backups
RAID as Backup
13.5.2 Handling Serious Incidents
Examining a Serious Attack
13.5.3 Disaster Preparation and Recovery
Business Impact Analysis
Contingency Planning Process
13.6.1 Review Questions
14.1 Communications Security
14.1.1 Crypto By Layers
Link Layer Encryption and 802.11 Wireless
Network Layer Encryption and IPsec
Socket Layer Encryption with SSL/TLS
Application Layer Encryption with S/MIME or PGP
14.1.2 Administrative and Policy Issues
Internet Site Access
14.2 Crypto Keys on a Network
The Default Keying Risk
Key Distribution Objectives
Key Distribution Strategies
Key Distribution Techniques
14.2.1 Manual Keying: A Building Block
14.2.2 Simple Rekeying
New Keys Encrypted With Old
14.2.3 Secret-Key Building Blocks
Key Distribution Center (KDC)
Shared Secret Hashing
14.2.4 Public-Key Building Blocks
Secret Sharing with Diffie-Hellman
Wrapping a Secret with RSA
14.2.5 Public-Key versus Secret-Key Exchanges
Choosing Secret-Key Techniques
Choosing Public-Key Techniques
14.3 Crypto Atop the Protocol Stack
Privacy Enhanced Mail
Pretty Good Privacy
Adoption of Secure Email and Application Security
14.3.1 Transport Layer Security - SSL and TLS
The World Wide Web
Secure Sockets Layer/Transport Layer Security
14.3.2 SSL Handshake Protocol
14.3.3 SSL Record Transmission
Message Authentication Code
Application Transparency and End-to-End Crypto
14.4 Network Layer Cryptography
Components of IPsec
14.4.1 The Encapsulating Security Payload (ESP)
ESP Packet Format
14.4.2 Implementing a VPN
Private IP Addressing
Bundling Security Associations
14.4.3 Internet Key Exchange (IKE) Protocol
14.5 Link Encryption on 802.11 Wireless
Wi-Fi Protected Access: WPA and WPA2
14.5.1 Wireless Packet Protection
Decryption and Validation
14.5.2 Security Associations
Establishing the Association
Establishing the Keys
14.6 Encryption Policy Summary
Apply Encryption Automatically
14.7.1 Review Questions
15.1 Internet Services
Traditional Internet Applications
15.2 Internet Email
Message Formatting Standards
The To: Field
The From: Field
15.2.1 Email Protocol Standards
POP3: An Example
Port Number Summary
15.2.2 Tracking an Email
#1: From UC123 to USM01
#2: From USM01 to USM02
#3: From USM02 to MMS01
#4: From MMS01 to MMS02
15.2.3 Forging an Email Message
15.3 Email Security Problems
Classic Financial Fraud
Evolution of Spam Prevention
MTA Access Restriction
Filtering on Spam Patterns
Tracking a Phishing Attack
15.3.3 Email Viruses and Hoaxes
Email Chain Letters
Virus Hoax Chain Letters
15.4 Enterprise Firewalls
Evolution of Internet Access Policies
A Simple Internet Access Policy
15.4.1 Controlling Internet Traffic
15.4.2 Traffic Filtering Mechanisms
15.4.3 Implementing Firewall Rules
Example of Firewall Security Controls
Additional Firewall Mechanisms
Firewall Rule Proliferation
15.5 Enterprise Point of Presence
Internet Service Providers
Intrusion Prevention Systems
Data Loss Prevention Systems
15.5.1 POP Topology
Single Firewall Topology
Bastion Host Topology
15.5.2 Attacking an Enterprise Site
15.5.3 The Challenge of Real-Time Media
16.1 Hypertext Fundamentals
Formatting: Hypertext Markup Language
Cascading Style Sheets
Hypertext Transfer Protocol
Retrieving data from other files or sites
16.1.1 Addressing Web Pages
Hosts and Authorities
Default Web Pages
16.1.2 Retrieving a Static Web Page
Building a Page from Multiple Files
Web Servers and Statelessness
Web Directories and Search Engines
Crime via Search Engine
16.2 Basic Web Security
Client Policy Issues
Policy Motivations and Objectives
Internet Policy Directives
Strategies to Manage Web Use
The Tunneling Dilemma
Firewalling HTTP Tunnels
16.2.1 Security for Static Web Sites
16.2.2 Server Authentication
Mismatched Domain Name: May be Legitimate
Untrusted Certificate Authority: Difficult to Verify
Expired Certificate: Possibly Bogus, Probably Not
Revoked Certificate: Always Bogus
Invalid Digital Signature: Always Bogus
16.2.3 Server Masquerades
Bogus Certificate Authority
Misleading Domain Name
Stolen Private Key
Tricked certificate authority
16.3 Dynamic Web Sites
Web Forms and POST
16.3.1 Scripts on the Web
Client Side Scripts
Client Scripting Risks
“Same Origin” Policy
16.3.2 States and HTTP
16.4 Content Management Systems
16.4.1 Database Management Systems
Structured Query Language
16.4.2 Password Checking: A CMS Example
Logging In to a Web Site
An Example Login Process
16.4.3 Command Injection Attacks
A Password-Oriented Injection Attack
Inside the Injection Attack
Resisting Web Site Command Injection
16.5 Ensuring Web Security Properties
Serve confidential data
Collect confidential data
16.5.1 Web Availability
16.5.2 Web Privacy
17.1 Secrecy In Government
Hostile Intelligence Services
17.1.1 The Challenge of Secrecy
The Discipline of Secrecy
Secrecy and Information Systems
Exposure and Quarantine
17.1.2 Information Security and Operations
Intelligence and Counterintelligence
17.2 Classifications and Clearances
Legal Basis for Classification
Minimizing the Amount of Classified Information
17.2.1 Security Labeling
Sensitive But Unclassified
17.2.2 Security Clearances
17.2.3 Classification Levels in Practice
Working with classified information
Higher levels have greater restrictions
17.2.4 Compartments and Other Special Controls
Sensitive Compartmented Information
Example of SCI processing
Special Access Programs
Special Intelligence Channels
Enforcing Access to Levels and Compartments
17.3 National Policy Issues
Federal Information Security Management Act
NIST Standards and Guidance for FISMA
Personnel roles and responsibilities
Threats, vulnerabilities, and countermeasures
17.3.1 Facets of National System Security
Life Cycle Procedures
17.3.2 Security Planning
System Life Cycle Management
Security System Training
17.3.3 Certification and Accreditation
NIST Risk Management Framework
17.4 Communications Security
Key Leakage Through Spying
17.4.1 Cryptographic Technology
Classic Type 1 Crypto Technology
17.4.2 Crypto Security Procedures
Controlled Cryptographic Items
Key Management Processes
Data Transfer Device
Electronic Key Management System
17.4.3 Transmission Security
17.5 Data Protection
Media Sanitization and Destruction
17.5.1 Protected Wiring
17.6 Trustworthy Systems
Trusted Systems Today
17.6.1 Integrity of Operations
Achieving Nuclear High Assurance
17.6.2 Multilevel Security
Rule- and Identity-Based Access Control
Other Multilevel Security Problems
17.6.3 Computer Modes of Operation
System High Mode
Compartmented or Partitioned Mode
17.7.1 Review Questions
This section provides brief articles on certain topics relevant to CNSS training standards.