Weaving a Web of Trust

Rohit Khare and Adam Rifkin

This paper is a working draft. It is significantly different from the archival version that appeared unrefereed in the summer 1997 issue of the World Wide Web Journal (Volume 2, Number 3, Pages 77-112). Comments are welcome.


Abstract

To date, "World Wide Web Security" has been publicly associated with debates over cryptographic technology, protocols, and public policy. This narrow focus can obscure the wider challenge of building trusted Web applications. Since the Web aims to be an information space that reflects not just human knowledge but also human relationships, it will soon reflect the full complexity of trust relationships among people, computers, and organizations. Within the computer security community, Trust Management has emerged as a new philosophy for protecting open, decentralized systems, in contrast to traditional tools for securing closed systems. Trust Management is an essential approach, because the Web crosses many trust boundaries that old-school computer security cannot even begin to handle.

In this paper, we consider how this philosophy could be applied to the Web. We introduce the fundamental principles, principals, and policies of Trust Management, as well as Web-specific pragmatic issues. In so doing, we develop a taxonomy for how trust assertions can be specified, justified, and validated. We demonstrate the value of this framework by considering the trust questions faced by the designers of applications for secure document distribution, content filtering, electronic commerce, and downloadable-code systems. We conclude by sketching the limits to automatable Trust Management, demonstrating how trust on the Web will adapt to the trust rules of human communities and vice versa.

1. Motivation

Clara Customer fires up her favorite Web browser one morning and connects to her Bank to pay her rent. The Bank's computer duly opens up an encrypted session, and Clara fills out the payment form from her landlord. Later that morning, Bob Banker has to approve the transaction.

Should Bob move the money from Clara to her landlord? The Bank uses Secure Sockets Layer (SSL) [Freier et al., 1996] or IETF Transport Layer Security (TLS) [Dierks and Allen, 1997] to encrypt the whole conversation (to the strongest degree allowed by law), so no attacker can scramble Clara's payment order. Since the cryptography checks out, the transaction is secure, right?

Wrong. Although cryptography has been well-studied [Schneier, 1996] and well-publicized, traditional secure applications use many other implicit rules as well. For example, we might assume that the transaction can be initiated only from a secure location, such as an Automated Teller Machine (ATM). Or, we might assume that the authorized user has a unique physical token to establish her identity, such as an ATM card. Or, we might assume that the transaction takes place in real-time, rather than being faked later.

These implicit assumptions do not hold for a Web-based 'virtual ATM'. Although custom-built secure applications can enforce specific rules such as "Secret-level users should not be able to read Top Secret documents," the cryptography exists solely as a tool for verifying these assumptions.

Unfortunately, many users and designers behave as though strong cryptography can paper over security holes in the World Wide Web. The Web is powerful precisely because it is such a generic platform for developing applications such as this "virtual ATM" scenario. Often, intoxicated with the power and ease of building Web gateways, we overlook the consequences of the transition. Since developers no longer control both ends of the connection as they did in traditional client-server systems, many implicit rules are invalidated by the shift to open Web clients.

Bob Banker's job is actually much harder in the Web-wide world than it was in the world of ATMs; for example:

These are just some of the questions Bob faces before trusting Clara; as tough as they are, they pale in comparison to Clara's challenge of determining whether or not to trust the Bank! After all, "How do [banks] convey trustworthiness without marble?" [The Economist, 1997]

The key word in that sentence is "trust." Instead of asking the cryptographic question, "Has this instruction been tampered with?", we can learn more by asking the more appropriate question, "Does the Bank trust this instruction?" In fact, the trust questions raised in this scenario are far more profound than the cryptographic ones:

Cryptography cannot directly answer these "why" questions. Tinkering deep in the bowels of our favorite protocols cannot even guide us to the right "why" questions, since they will depend on the semantics of the application itself. However, by thinking carefully about the trust each party grants, we can translate these "why" questions into "how" questions which cryptography can answer (using encryption, certification authorities, and digital signatures):

Since the Bank's very raison d'être is to be a trusted broker for all parties, this banking scenario is actually fairly simple: the Bank manages the trust. Suppose we venture into murkier territory:

Sara Surfer hears about a whiz-bang new financial applet from Jesse Jester. She hops over to FlyByNight.Com and downloads their latest and greatest auto-stock-picker.

What is Sara's next move? Can she trust the applet to suggest good stock trades? Can she trust it to read her portfolio database? Can she trust it to make stock trades on her behalf? Can she be sure the applet will not maliciously reformat her hard drive? Can she be confident that the applet will not leak her private financial data to the world? Should she decide to run this applet, because Jesse endorsed it, because she trusts FlyByNight.Com, or because some certifying agency claims FlyByNight.Com is a good developer? Can she even be sure an attacker has not modified the applet while she was downloading it?

Like the Banking scenario, this financial applet scenario is about protecting computers and the data contained on them. Sara's private data can be recovered by keeping logs, running regular audits, and maintaining backups.

What if, instead, the trust scenario involved something that could not be undone? For example, suppose the trust question is, "Do I trust my child to see this content?" Matters of sex, religion, and violence are also Trust Management challenges on the World Wide Web. The Platform for Internet Content Selection (PICS) content-labeling system [Resnick and Miller, 1996] represents just one credential in a complicated "trust calculation" that a Web browser might need to do to determine if a parent trusts that some Web page is appropriate for his child. The page might be suitable because it is from a certain author, because it is rated (by some trusted agency) above a certain threshold, or because it is not on a black-list.

Each of these scenarios revolves around assertions and judgment calls from several parties, and not from cryptographic proofs alone. Welcome to the frontier of Trust Management [Blaze et al., 1996], a relatively new approach to the classic challenges of computer security that generalizes existing access control list and capability-granting approaches. Whether one favors the new, generic authorization tools Trust Management proposes, such as PolicyMaker [Blaze et al., 1996b] and REFEREE [Chu et al., 1997], or continues to build special-purpose secure applications, the philosophy behind Trust Management is an accessible and powerful one. Asking the basic question "Who has to trust whom for what to take this action?" will clarify security issues, not just for cryptographers, but for webmasters, application developers, business people, and consumers.

In this paper, we consider the potential of Trust Management within the context of the World Wide Web. Because the Web is as much a social phenomenon as a technical one, it provides an ideal laboratory for several competing trust models. After all, trust relationships bottom out to human relationships, represented by people, computers, and organizations. Section 2 maps out some of the fundamental principles for codifying trust; section 3 describes tools for vouchsafing the principals (people, computers, and organizations) concerned; section 4 discusses several common trust policies; and section 5 presents the pragmatics of managing trust in the Web. In section 6, we offer several application areas that could potentially drive the deployment of Trust Management tools: secure document distribution, content filtering, electronic commerce, and downloadable-code systems. As more and more trust relationships are mirrored on the Web, we will run into the limits of trust management and learn how real-world relationships might change to adapt to those limits, as discussed in section 7. As we adopt this philosophy in designing Web security technology and new applications, we will weave a web of trust together, as summarized in section 8.

2. Principles

Oh! What a tangled web we weave,
When first we practice to deceive...

-- Sir Walter Scott

Perhaps we can learn as much about Trust Management from breaking trust as from establishing it: a "tangled web" blooms like cracks in shattered glass around the central lie. For example, "If Joe would lie to me about bumping my car in the lot, can I trust him to work up the departmental budget?" Although humans are naturally suspicious, our personal "Trust Management" schemes are often vague, which can foil our plans when those schemes must be executed by our notoriously literal-minded assistants. Likewise, holistic judgments about integrity and honesty of character are not so easily conveyed to computers. On the other hand, digital Trust Management can be much more exacting and dogged about establishing the authority to take a certain action. This sort of rigorous automation is promising because, more often than not, we are led astray by good feelings toward other people...

2.1 Be Specific

"When I use a word," Humpty Dumpty said, in a rather scornful tone, "it means just what I choose it to mean --- neither more nor less."
-- Lewis Carroll, Through the Looking Glass

This leads us to the most fundamental principle behind Trust Management: Be specific. Who, precisely, is in the trusted group, and what exact actions do we trust them to take? Blanket statements such as "I trust my Bank" are woefully inadequate for useful analysis --- and real life bears that out, because banks never offer such contracts! A direct-deposit agreement, for example, specifies precisely the account into which the employer will forward funds, the times payments will be made, and the expected Bank recourses in case of error. We are used to making broad claims such as, "I trust my husband absolutely" --- but for what? To purchase the right groceries? To feed the cats? To remain faithful? Without specifically quantifying the bounds of our trust relationships, we cannot expect those bonds to be worth anything.

Admittedly, it is difficult to be specific with the crude tools available on the World Wide Web today. A typical HTTP server might offer to "protect" some URLs by requesting a username/password challenge before granting access. Even putting aside the absurd weakness of sending passwords in the clear for BASIC authentication in HTTP, this kind of access privilege is still overbroad. It requires the server administrator to be vigilant about what content is in the "protected" and open areas; it does not typically restrict the range of methods available (GET, PUT, POST, and so on) in the protected area; and it does not establish the identity or credentials of each user. Essentially, the security policy here can talk only about the "form" of a request (that is, its method and URL), but not the "substance", or contents, of that request. In the "virtual ATM" scenario in section 1, one can imagine that the legacy banking application would have detailed security instructions about who could view statements and who could modify them. If all the Web server can see are CGI-generated HTML pages [Gundavaram, 1996], it cannot do much to secure them based on the meaning of the information represented within the confines of the firewall.
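
As an illustration, consider a minimal sketch (in Python; the rule table and names are hypothetical) of the only kind of policy such a server can express --- one keyed entirely to the form of a request, its method and URL prefix:

    # A rule table keyed on form alone: URL prefix -> permitted methods.
    PROTECTED_PREFIXES = {"/statements/": {"GET"}}

    def allow(method, url, authenticated):
        for prefix, methods in PROTECTED_PREFIXES.items():
            if url.startswith(prefix):
                # Everything under the prefix is treated alike; the server
                # cannot tell a customer's own statement from anyone else's.
                return authenticated and method in methods
        return True  # anything outside a protected prefix is open

    assert allow("GET", "/statements/clara.html", authenticated=True)
    assert not allow("PUT", "/statements/clara.html", authenticated=True)

Nothing in such a rule can reference the substance of the request, which is precisely the shortcoming described above.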

Better yet, consider what happens on the client side. Web surfers eagerly swat aside the "Show alert each time unsecured data is sent?" dialog box. Such surfers receive no guidance that the alert might be a vital warning before sending credit card data, yet useless noise for other Web transactions. Such inattentiveness while Web surfing causes Web security experts to issue blanket warnings such as, "Think very carefully before accepting a digitally signed program; how competent and trustworthy is the signer?" [Felten, 1997].

It is often arduous to be specific with existing downloadable code systems [Wallach et al., 1997]. The user might desire the functionality of the statement, "I trust the Foolodex applet to read and index .foo files on my disk, but never to write to my disk." Unfortunately, the user has little chance to enforce such a clause, however useful, since today's typical ActiveX and Java environments do not expose such specific management interfaces. Although this particular effect could be simulated by a sandbox leveraging operating system-level file protection, doing so compounds the room for error. A similarly unimplementable policy is, "Jane is not allowed to access .foo files directly, but only by using the Foolodex applet." In fact, even the operating system-level "file read" permission might be too broad for comfort; for example, perhaps a user wants his boss to be able to read only the work-related fields of his Foolodex.
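
If such an interface did exist, the desired clause might look like the following sketch (Python; the applet and file names are hypothetical), with writes denied simply because no rule grants them:

    # Explicit grants: (principal, action) -> predicate over file paths.
    POLICY = {
        ("Foolodex", "read"): lambda path: path.endswith(".foo"),
        # No ("Foolodex", "write") entry: writing is denied by default.
    }

    def permitted(principal, action, path):
        rule = POLICY.get((principal, action))
        return bool(rule and rule(path))

    assert permitted("Foolodex", "read", "contacts.foo")
    assert not permitted("Foolodex", "write", "contacts.foo")
    assert not permitted("Foolodex", "read", "budget.xls")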

Given the difficulty of implementing specific rules with today's "security", the principle of specificity seems foolishly pedantic. However, when we turn to realize Trust Management concepts, even with today's insufficient tools, the benefits of rigorous planning are evident. For example, by segregating users into different categories and levels, and by separating differently secured information and functions into different URL-prefixes, a webmaster can implement site security reliably with HTTP servers. Deploying the improved Digest authentication scheme [Franks et al., 1997], HTTP/1.1 [Fielding et al., 1997] can increase the server's confidence that it is conversing with an approved user with a fresh request, not a replay. Cutting-edge certificate-based authentication and the "client-authenticate" modes of SSL and TLS can help webmasters manage groups of people and also help them delegate authority. Furthermore, as Trust Management technology advances, specific trust declarations will become more useful.

Unfortunately, we are not used to factoring our actions into different sets and granting trust to only certain domains; the human mind does not perpetually ask itself the explicit who/what/when trust question, "Does X have permission to do Y during time period Z?" PolicyMaker and REFEREE, for example, make this compartmentalization explicit, because the only questions one can ask the system are of the form "Does X have permission to do Y to resource Z?" --- not "Do I trust this applet?" but "Can I allow this applet written by this person to check my financial data for the rest of this month?" We will explore the general who/what/when trust question throughout the rest of this paper. Whereas section 2 (on principles) is about general principles for the whole statement, section 3 (on principals) will investigate rules about the Xs in the aforementioned trust question. Continuing this train of thought, section 4 (on policies) is about what kinds of Ys are in the system and how Ys are granted; and section 5 (on pragmatics) pulls these points together to describe how Xs, Ys, and trust assertions are represented, transported, and deployed on the Web.
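
A sketch of that explicit query (Python; the grant table and its entries are hypothetical), carrying the who, the what, and the when:

    from datetime import date

    # (who, what, resource) -> expiration of the grant.
    GRANTS = {
        ("applet:stock-picker", "check", "portfolio.db"): date(1997, 8, 31),
    }

    def query(who, what, resource, on):
        expiry = GRANTS.get((who, what, resource))
        return expiry is not None and on <= expiry

    assert query("applet:stock-picker", "check", "portfolio.db", date(1997, 8, 1))
    assert not query("applet:stock-picker", "check", "portfolio.db", date(1997, 9, 1))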

We note that there is another dimension of trust which is difficult to specify exactly for computers: confidence in the trust statement itself. In reality, all forms of trust come in shades of gray. Unfortunately, computers cannot bridge the gap between different shades so gracefully; instead, they model trust with a few binary switches that yield discrete intermediate shades. For example, the PGP Web of Trust [Garfinkel, 1994] is based on introductions from trusted friends. When a keyring accepts a friend's key, the keyring owner determines whether to always, sometimes, or never automatically trust introductions to strangers through that friend. Here, the fuzzy confidence we might have had in real life must be reduced to those three levels (always, sometimes, or never); in addition, that confidence spans two dimensions: whether the friend is a reliable go-between and whether she is computer-savvy enough to protect her own key.

2.2 Trust Yourself

This above all: to thine own self be true.
-- William Shakespeare, Hamlet, Act I, scene iii

A second principle of Trust Management is that each principal is the master of its own domain. All trust begins and ends with the self: Trust no one but yourself! This is not a plea for existential philosophy --- it is a mechanical constraint which ties up loose ends. Any trust decision should be logically derived from the axioms you yourself believe prima facie. For example, even the simplest assertion that "My credit card number is xxxx" cannot be taken for granted as external truth. To be certain, you implicitly work backwards: "I believe that CreditCorp told me that xxxx is my number; and I believe CreditCorp is who it says because my Bank stands behind them; and I trust that my Bank said so, because its public key matches the one I was given when I opened my account." Although you may sometimes derive shaky axioms, those axioms are still a priori beliefs that belong to you. Over time, "My credit card number is xxxx" might become an axiom of its own, as you use it a few times to buy widgets, thereby building confidence in it as an entry in and of itself.
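
The derivation is mechanical enough to sketch (Python; the statements are hypothetical): a statement is believed only if it is an axiom or is vouched for by something that, recursively, chains back to one.

    AXIOMS = {"my Bank's public key is K"}        # believed prima facie
    ASSERTIONS = {                                # statement -> what backs it
        "CreditCorp is legitimate": "my Bank's public key is K",
        "my credit card number is xxxx": "CreditCorp is legitimate",
    }

    def believed(statement):
        if statement in AXIOMS:
            return True
        backing = ASSERTIONS.get(statement)
        return backing is not None and believed(backing)

    assert believed("my credit card number is xxxx")  # chains to your axiom
    assert not believed("Mallory is my bank")         # no chain, no belief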

A similar rationale holds for trusting Internet domain names: we do not blindly accept that xent.w3.org is bound to 18.29.0.61 because some deity has designated it from a mountaintop; rather, we trust the mapping because a sysadmin decided to trust the local Domain Name System server, which in turn relies on other DNS servers up the line, ending in the root server authorized by the Internet Assigned Numbers Authority, care of Jon Postel. Working inexorably down the trust chain, each Internet user must choose to trust Jon or not; we cannot foist that choice off on someone else.

This principle was inspired by two recent public key management proposals which directly mandate it. To set the stage, recall that the basic role of a public key infrastructure is to validate the binding of a principal's name to a key. Rather than require each user to maintain a vast table of name-key pairs for all other users, most Public Key Infrastructure (PKI) proposals allow Certification Authorities (CAs) to intermediate the table. The Telephone Company is a good analogy: we trust the White Pages as a name-to-phone number mapping because the company has validated each telephone user and stands behind the directory. We can also call directory service for up-to-date information if the directory becomes stale.

Unfortunately, most public key infrastructure proposals require users to trust some external, omnipotent deity to sit atop the pyramid of CAs. After all, which Telephone Company should we ultimately trust, and who gave it a unique name in the first place? Whether the legitimacy comes from a "root CA" (like the government), or from popular approval (see section 4), two recent proposals both enshrine the rule that you have to be your own meta-root. In the Simple Distributed Security Infrastructure (SDSI) [Lampson and Rivest, 1996], each person is the root of his own namespace; users explicitly choose to delegate authority for identifying CreditCorp employees to Credit.Com. The Simple Public Key Infrastructure (SPKI) [Ellison, 1996] relies on explicit authorization chains that form directed cycles beginning and ending at the user. SPKI's Simple Public Key Certificates [Ellison, 1997] help establish relationships by assigning authorization attributes to digital principals. Work is currently underway to merge SDSI and SPKI into SDSI 2.0, providing a unified treatment of certificates, a coherent treatment of names both for individuals and for groups, an algebra of tags for describing permissions and attributes, and a flexible way to denote cryptographic keys. Other solutions, like the PGP web of trust [Garfinkel, 1994], have also been founded upon the principle of trust yourself.

2.3 Be Careful

Trust in haste, Regret at leisure.
-- poster on the wall of Mr. Lime's Office at Information Retrieval, Brazil

We close out our triad of trust principles with some common sense: Be careful. Rigorously justify every single trust decision in your application. By logically proving each trust decision, you can automatically explain each action the application takes. Any time you short-circuit that logic, the consequences can spread immediately; after all, permit just one Trojan Horse applet past the firewall, and the security perimeter is toast. There is no substitute for careful planning and execution; without meticulous scrutiny, even the most logical policies can be correctly implemented yet still leave the system wide open for attack.

For example, the World Wide Web Consortium has three levels of protected access to its web site: Public, Member, and Team. Digital Equipment Corporation is a Member, so every machine in the Digital.Com domain has Member access. One of them is Scooter, the Web spider that fetches documents for Alta Vista's Web search index. Unfortunately, since "Scooter" is a Member, Alta Vista ended up reporting search hits that revealed Member-only information to the Public! Even though W3C was quite specific about its policy --- we trust each Member to disseminate Member information within the organization, but not to redistribute it to the public --- without being careful, security holes can propagate. Our Web tools cannot implement the specific not-for-redistribution policy today, so sensitive information was exposed. In this case, "careful execution" required synchronizing policies in several completely different systems: our password database, file system, HTTP servers, and Robot Exclusion configuration file [Koster, 1996].

3. Principals

The three principles underlying Trust Management discussed in section 2 imply that each decision should be backed with a direct inference chain from the axioms all the way to the request. We now note that there are only three kinds of beads we can string together to create these rosaries: statements by people, computers, and organizations. People are represented by their names, and can make trust assertions using digital signatures. Computers and other devices can mechanically verify the integrity of data transmissions, thereby vouching for their addresses. And, organizations can represent collections of people or computers by granting memberships and credentials with digitally signed certificates that intermediate the binding of names and addresses.

To make a trust decision to "run this ActiveX applet" using the original Microsoft Authenticode scheme [Microsoft, 1997], we rely on statements made by each type of principal. The first step is a machine-to-machine verification: "I got this applet from another, known computer and it arrived uncorrupted, encrypted under a random session key chosen by both." Then, we find a digital signature attached, implying the approval of the author, which is a personal or corporate name. In this case, the personal name binding is intermediated by two Certifying Authorities --- the VeriSign and Microsoft organizations --- and their respective certificates.

The third and final step is dissecting the certificate granted by VeriSign for two credential bits: one bit flipped to indicate that the author has signed an Applet Publisher Pledge, the other bit flipped to the setting "author is a commercial enterprise." Thus, because Authenticode had a policy to run any commercial publisher's applet, permission is granted. In the end, these three principals correspond to the three basic reasons for granting permission to take an action: it could depend on which person is asking, what computer is asking, or which organization vouches for the requestor.
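
Reduced to its logic (a Python sketch, with the cryptographic checks abstracted into booleans), the original Authenticode policy is a conjunction of the three steps:

    from dataclasses import dataclass

    @dataclass
    class Credential:                  # the two bits in VeriSign's certificate
        signed_pledge: bool            # author signed the Applet Publisher Pledge
        commercial_publisher: bool     # author is a commercial enterprise

    def run_permitted(integrity_ok, signature_ok, cert):
        return (integrity_ok                   # step 1: arrived uncorrupted
                and signature_ok               # step 2: author's signature checks
                and cert.signed_pledge         # step 3: credential bits
                and cert.commercial_publisher)

    assert run_permitted(True, True, Credential(True, True))
    assert not run_permitted(True, True, Credential(True, False))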

3.1 People

I don't trust [Lando Calrissian] either, but he is my friend.
-- Han Solo, The Empire Strikes Back

Ultimately, trust reflects our belief that someone will act in a certain way, based both on our past history and on the expectations of others. Trust is a faith placed in humans, even though sometimes this faith may be manifested in such a way as to appear that we are only trusting some device. Even a bank ATM's failures are ultimately backed by a Board of Directors. Perhaps this observation is just a trivial converse of the fact that you can sue only people, not computers!

As a consequence, most computer security systems "bottom out" in a definition of human beings. People are ultimately accountable for computers' actions, because people have persistent identities as names that are meaningful in the real world. Although an application designer's first instinct is to reduce a noble human being to a mere account number for the computer's convenience, at the root of that account number is always a human identity. This has two implications: first, that the name should be established as a permanent identifier outside any particular application; and second, that people are "born" as a tabula rasa --- that is, people cannot be automatically trusted for anything.

The canonical mechanism for establishing a name is issuing a digital certificate. The beauty of public-key cryptography compared to its predecessors is the ease with which we can create and distribute these identities. We create a secret key, entrusted to a human user, and then immediately use it to sign a declaration that "Waldo has this public key." Then, as long as the user protects his secret key, he can speak as Waldo on the Internet. Of course, based on the principle trust only yourself enunciated in section 2.2, Waldo is the only person who believes in his new identity so far! To convince other people that he speaks for Waldo, he must get them to trust his identity certificate. Whereas he could go around "door-to-door" (so to speak), organizations can simplify the acceptance process among the entire community of people who trust that organization, as described in section 3.3.
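
The mechanics are simple enough to sketch with a modern library (Python's cryptography package; the 1997-era tooling differed, but the shape is the same): create a key pair, then sign a declaration binding a name to the public key.

    from cryptography.hazmat.primitives.asymmetric import ed25519

    secret_key = ed25519.Ed25519PrivateKey.generate()   # entrusted to Waldo
    public_key = secret_key.public_key()

    declaration = b"Waldo has this public key"
    signature = secret_key.sign(declaration)

    # Anyone can verify the declaration is intact and was signed by the
    # key-holder (verify() raises an exception otherwise)...
    public_key.verify(signature, declaration)
    # ...but nothing here compels anyone to believe the key-holder is Waldo.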

Organizations are also critical for assigning credentials to people. Another way of putting this is that people need to adopt a role within an application. For example, Clara Beekham has no privileges at the Bank; indeed, there is not even any reason for the Bank to believe her name is such! By contrast, when Clara Beekham reveals herself as Clara Customer from section 1, she gains the privileges of a bank-account holder. In between, Clara and the Bank may have set up another explicit, named identity to act as a proxy for the role of bank-account holder: the bank-account number. The Bank agrees to set up an account number for her human identity under her signature; in the future, Clara, the Bank, and merchants cashing her checks all agree to refer to the account number instead. Later, Clara can delegate this role with its specific capabilities (for example, she could entrust her lawyer to act as Clara Customer).

It is worth noting that there is a faction in the computer security community that dissents strongly from the view that humans are principals. The alternative credo is Keys are principals; that is, the most basic element is the private key, and nothing else can be reliably determined about the key-holder. This is absolutely true, since the cryptography of a digital signature can prove only "someone had access to the message and the secret key" --- and not what the fingerprints or pen-and-ink signatures could say about a specific person [DesAutels et al., 1997]. This credo is very useful when formally verifying secure systems, but we choose to set it aside in this paper, because trust can be exchanged only among people, and not between people and keys. Furthermore, pseudonymous identities are a clever compromise to reconcile these alternatives.

3.2 Computers

Artoo Deetoo, you know better than to trust a strange computer!
-- See-Threepio, The Empire Strikes Back

Computers are where the going gets weird. Most of the ideas in this paper, by contrast, are fairly generic comments about the human social phenomenon of trust. The Bank, for example, uses the same principles to safeguard its internal humans-and-paperwork procedures as it does for a Web-based virtual ATM. Computers alter this equation by substituting the explicit power of cryptography for the implicit power of psychology. This does not address the fundamental question present from the outset, though: how can we even trust computers? Or, in the terms of classical computer security, what is the Web's Trusted Computing Base (TCB) [Department of Defense / NCSC, 1985]? Indeed, many of the challenges raised in section 1 stem from the fact that the Bank used to control its own, trusted hardware at the client: the ATM. The Web client replacing it is more appealing in terms of flexibility and accessibility, but less powerful and definitely not to be trusted automatically. Even before considering concerted, malicious attacks, there are too many operational points of failure on a random customer's personal computer, such as buggy operating systems, network sniffers, and viruses.

Naturally, we have well-developed tools for establishing trust within the computing devices themselves. Although these mechanical principals cannot be sued in court, they can certainly establish a reliable identity through number crunching and cryptography. Various protocols exist to verify remotely that a correspondent computer has a working clock, a good random number generator [Park and Miller, 1988], and a unique address. Building on these primitive operations, we entrust these devices to take some limited actions through their peripheral devices: print a document, place a phone call, dispense money, or fire a (supposedly therapeutic) X-ray beam at a patient [Leveson and Turner, 1993]. Note that these examples are "irreversible" actions that affect the real world, which happen to be the most sensitive and protected actions "computers" can take. Traditional security tools such as logging and auditing can be used to detect and roll back fraud "inside the box" after the fact, in lieu of the strict trust calculations that protect "out of the box" actions.

Many approaches to computer security do not distinguish between people and computers, since both merely represent a key pair. As long as the human or device protects its secret, either can play the same roles in various cryptographic protocols. From the perspective of Trust Management, though, it is important to qualify precisely for what, ultimately, you can trust a principal. In our opinion, the potential legal and moral liabilities are the critical difference. No matter how many tests you inflict on a remote computer, you can trust it only as a device. You can trust that it is connected to a working X-ray beam, but you can never trust it to decide on its own to fire it: that is a decision best reserved to people (and organizations, as described in section 3.3).

"Cloned" cellular phone fraud is a classic example of conflating trust in computers with people. Today's cellular networks already use a "secure" challenge to identify each telephone in a network's coverage area based on that telephone's supposedly-unique fingerprint (serial number). This proves the device's address --- but the network allows calls to be placed (and billed) to the owner, because the operators assume that this authentication is enough to prove that it is actually the subscriber's own phone. In fact, only a personal secret --- a PIN --- can prove a one-to-one owner relationship; without a PIN to close the loop with the subscriber, the system can fail as soon as the address-authentication can be cracked. Unfortunately for the U.S. cellular industry, this proved all too easy for some moderately skilled hackers. When the Europeans led the development of the General System for Mobile (GSM) phones as a successor to the U.S.-led Analog Mobile Phone Standard (AMPS), they designed authentication that reaches beyond the phone to a smart card inserted into the unit by the owner [Brookson, 1994]. A smart card establishes the proper trust in the subscriber because the smart card's address can substitute for the subscriber's name, since it really is a unique, unclonable physical token.

Checksums, channel security, clock-challenge authentication, and other related techniques can prove only that we are talking to a working device with a consistent address --- an identity limited to the scope of the immediate application. People and their delegates have names --- permanent identities of beings (a la Rene Descartes) that go on living beyond any particular application. Establishing an SSL connection, for example, creates a secret, ironclad connection between two endpoints, but only as computers, and only for this session. Many Web applications today make an implicit (and mistaken) assumption that an SSL connection is a secret between two people or organizations. In fact, SSL can prove that you are talking to one machine, perhaps even the same machine to which you were talking a moment ago, but nothing in SSL alone can prove you are talking to AirlineCo. You have to establish trust separately that public key P is the sole property of AirlineCo --- that some computer is really the authorized delegate of some person. As for proving P is AirlineCo's key --- that remains the task of the last part of the principals triad, organizations.

3.3 Organizations

Badges? We don't need no steenkin' badges!
-- Blazing Saddles parodying Treasure of the Sierra Madre

Just as people are separate from computers because their human identities have wider and more permanent scopes than transient devices, organizations are a third force because their existences have even larger scope. Organizations are critical to the trust relationships in our everyday lives and on the Web, precisely because they outlive their individual members. Western legal systems even institutionalized this notion through the process of incorporation, literally transforming the incorporeal body into a legal entity in the eyes of the court. In fact, in many technical ways, organizations are treated exactly like people: they both use the same kinds of certificates and they both need to protect their private keys.

The critical difference between organizations and people is that organizations can issue credentials that tie together people and computers --- with enough scale and a high enough level of abstraction to make the world manageable. Imagine what a nightmare it would be if a user had to set up pairwise mutual authentication with every possible telephone caller with whom he or she might ever speak. The Telephone Company makes the entire enterprise possible because it brokers the transaction. Both the caller and the callee trust the Telephone Company to correctly connect their calls, to protect the integrity of their addresses (phone numbers), and to protect the binding of people to phones. In one fell swoop, N(N-1) one-to-one trust relationships collapse to 2N thanks to this organization. This materially reduces the effective transaction cost of making a call --- the same Nobel-prizewinning economic insight that justifies the existence of any organization. Life is easier, cheaper, and faster with intermediaries. Would anyone rather buy a 747 from the thousands of parts manufacturers instead of one-stop-shopping from Boeing?
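
The arithmetic is worth a moment (a trivial sketch):

    N = 1000                     # parties in the system
    pairwise = N * (N - 1)       # every party vouching directly for every other
    brokered = 2 * N             # each party and the broker vouching for each other
    print(pairwise, brokered)    # 999000 versus 2000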

Credentials issued by organizations are powerful because the whole is more than the sum of its parts. Driver's licenses, black belts, Consumer Reports scores, and bond ratings are all familiar credentials, trusted precisely because they are backed by universal standards that presumably are not subject to the whims of individuals. Look no further than the trust placed in sovereign governments as it compares with the trustworthiness of individual politicians...

Establishing unbroken trust chains between people and devices requires credentials to link them together. The cell-phone smart card that prevents fraud in section 3.2 works only because the Telephone Company stands behind the unique link between the card and the subscriber. By issuing a smart card, the credential takes a physical form, which in turn elevates the computer-to-computer trust between components of the phone system ("This is the unique phone") into person-to-organization trust ("This call is billable to a known subscriber").

Of course, individuals can also grant credentials to other people and devices. In fact, there may be a legion of such structures that make sense only to that individual for bookkeeping within their own trusted environment ("I trust my dog to bring back the right newspaper"). As soon as one person has to share these facts with another, though, the seed of an organization has been planted. Settling a dispute with the neighbors may rely on the newspaper's subscriber label to establish ownership, and a service center may need to attest that your system is virus-free. As soon as more than one person or computer has to share trust, organizations will surely follow. As soon as we scale up to community identity, the very nature of the actor changes from person to organization, which completes our triad of principals.

4. Policies

Our policy is, when in doubt, do the right thing.
-- Roy Ash, Office of Management and Budget director under Richard Nixon

We can decide to permit or deny various actions by stringing together a proof from several principals' statements. For each action, there are specific policies that govern what statements suffice as justification. Trust Management engines like PolicyMaker and REFEREE literally take a subject, an action, statements about the subject, and a policy controlling the action as input. Some policies are simple ("Approve the loan if any officer above the rank of Branch Manager says so"), whereas others are not ("Approve the loan only if the assets are above A, debt is below B%, and returns are C% after taxes"), and some are just too ethereal to be expressed mechanically ("Approve the loan if the applicant is the Manager's secret high-school sweetheart"). In the realm of computer security, Trust Management tools are restricted to the first two kinds of policies: automatable rules about chaining together statements from principals.

In many computer-security applications, the rules are compiled in, so they are harder to recognize. The initial release of Microsoft Authenticode, as described in section 3, had a built-in policy that requires the various cryptographically proven "facts" about each principal to be combined into a sentence of a certain form. While it is easier to verify the correct implementation of a hardcoded policy ("Does this code ever leak Top Secret information to Secret users?"), that inflexibility can be harmful in the market --- as Microsoft learned with its policy sentences which favored commercial software publishers.

Since the Web encompasses such a wide range of applications, we should expect to see many, many different kinds of security policies. Sometimes a user will be in control, as when deciding what types of applets to execute; other times a publisher will be in control to enforce intellectual property rights (for example, "Do not display this font without a digitally signed proof-of-purchase"). Nevertheless, we can make a stab at systematizing a framework for thinking about policies, and categorizing the most popular approaches from traditional computer security.

A textbook treatment of authorization control centers around a simple table with the principals along one axis, objects along the other, and permissible actions at the intersections. Table 1 below is a sample access authorization matrix for a Bank with three kinds of principals acting on three different kinds of objects.

                    Credit Line          Savings Account      Vault
    Branch Manager  Create, Read/Write   Create, Read/Write   Deposit, Withdraw
    Teller          Read                 Read/Write           Deposit
    Guard           None                 None                 Withdraw

Table 1. A sample Bank access authorization matrix.
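
Transcribed directly into data (a Python sketch), table 1 and its lookup are:

    MATRIX = {
        "Branch Manager": {"Credit Line":     {"Create", "Read", "Write"},
                           "Savings Account": {"Create", "Read", "Write"},
                           "Vault":           {"Deposit", "Withdraw"}},
        "Teller":         {"Credit Line":     {"Read"},
                           "Savings Account": {"Read", "Write"},
                           "Vault":           {"Deposit"}},
        "Guard":          {"Credit Line":     set(),
                           "Savings Account": set(),
                           "Vault":           {"Withdraw"}},
    }

    def may(principal, action, obj):
        return action in MATRIX.get(principal, {}).get(obj, set())

    assert may("Teller", "Deposit", "Vault")
    assert not may("Guard", "Read", "Savings Account")

Real systems rarely store the full matrix explicitly; the styles below are different ways of factoring it.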


Though the complete table represents one specific way to manage a Bank, it can be expressed in several equivalent ways by focusing on the principals, objects, or actions in it.

Principals
We can characterize the privileges granted to individual users and groups of users by explicitly listing each of the actions those individuals are permitted to apply to each object. A typical way to implement this is by labeling the principals with a strictly ordered security classification, and then labeling each object and action by the minimum-level or maximum-level authorized user.
Objects
By focusing on the individual information resources, we can label each permissible action with an explicit list of authorized users. This popular approach attaches an access-control list (ACL) to each object of interest.
Actions
Finally, we can choose to manage the actions taken within the system. "Capability security" labels each action or group of actions with a token. Then, principals and objects are granted specific capabilities, which may or may not be further delegated onward. Several programming environments enforce this style of security.
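
For concreteness, here is one fact from table 1 --- a Teller may deposit to the Vault --- rendered in each of the three styles (a sketch; no single style alone reproduces the whole table):

    # Principal-centric: ordered clearances on principals and actions.
    CLEARANCE = {"Guard": 1, "Teller": 2, "Branch Manager": 3}
    REQUIRED = {("Vault", "Deposit"): 2}
    assert CLEARANCE["Teller"] >= REQUIRED[("Vault", "Deposit")]

    # Object-centric: an access-control list attached to the object.
    ACL = {"Vault": {"Deposit": {"Teller", "Branch Manager"}}}
    assert "Teller" in ACL["Vault"]["Deposit"]

    # Action-centric: a capability token granted to the principal.
    CAPABILITIES = {"Teller": {"vault-deposit"},
                    "Branch Manager": {"vault-deposit"}}
    assert "vault-deposit" in CAPABILITIES["Teller"]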

These three ways of expressing policies also define three styles for determining policies. Choosing a language also influences your trust model: authorization based on who you are, what you have, or what you can do, respectively. The next three subsections explain the ramifications of these choices, including possible policies for the Bank in table 1.

4.1 Principal-Centric Policies

In popular culture, principal-centric security is far and away the most dramatic (and dramatically fallible) way to protect secrets. Spy novel conventions have familiarized us all with the model of sorting characters into "Confidential," "Secret," and "Top Secret" bins and compartmentalizing all information on a need-to-know basis. Agents could then read information at a lesser or equal clearance level, but write reports only at their own level (or above, leading to the comic consequence of not being cleared to read one's own reports). This model can be extended to many more actions than read or write; for example, we can require minimum or maximum clearances for entering locked rooms or firing nuclear weapons.

In the mid-1970s, computer security theorists began to formalize this model and proved many of the basic insights of the field. Dorothy Denning rigorously analyzed the flow of information in such systems [Denning, 1976] to prove that, for instance, Top Secret information could never leak out through a Secret report. Such proofs are possible as long as there is a well-formed lattice --- a kind of hierarchy --- among all the clearances [Denning, 1982]. Around the same time, Bell and LaPadula published their famous model of secure computing systems along these lines [Bell and LaPadula, 1976]. Over the years, these principles were enshrined in the U.S. Department of Defense's (as well as many other countries') requirements for trusted computer systems. Real operating systems evolved along with the theory, from Bell and LaPadula's original work with Multics to the 1980s' Orange Book certified A1-level secure computers [Department of Defense / NCSC, 1985] and the 1990s' revised standards [NIST/NSA, 1992].
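
The two Bell-LaPadula rules ("no read up," "no write down") fit in a few lines (a Python sketch over a totally ordered set of clearances):

    LEVELS = {"Confidential": 1, "Secret": 2, "Top Secret": 3}

    def may_read(subject_level, object_level):
        return LEVELS[subject_level] >= LEVELS[object_level]   # no read up

    def may_write(subject_level, object_level):
        return LEVELS[subject_level] <= LEVELS[object_level]   # no write down

    assert may_read("Top Secret", "Secret")        # reading down is fine
    assert not may_read("Secret", "Top Secret")    # no read up
    assert not may_write("Top Secret", "Secret")   # no write down: no leaks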

In our banking scenario in table 1, we can easily identify a few levels of clearances. The Branch Manager has the combined power of the Teller and the Guard plus others. These are three separate roles because the principle of least privilege suggests that each role should be restricted to the minimum authority to do its job.

In summary, the Bank which chooses an identity-centric model tends towards a policy that only certain people can be trusted, and enforces it by checking the clearance of each principal who proposes some action on an object.

4.2 Object-Centric Policies

A locked vault represents a simple security policy: anyone who has the combination has free rein. Without a guard to check clearances or limit the actions of entrants, that is indeed its only policy. In this case, trusted access to one object depends on control of another, or sometimes a set of other objects, as in the case of a safe-deposit box that requires two separate keys, turned simultaneously by the customer and the bank.

Cryptography moves us beyond the physical realm by substituting numbers for keys --- even if those numbers are stored back on smart cards. The principle remains the same, though. Consider the savings account in table 1. An object-centric security policy might specify that one key is required to access any particular account, and another key to create one in the first place. The overall security policy is then enforced by giving the former key to Tellers and Managers, but reserving the latter key for Managers alone.

Sometimes the objects are not unique, or even secrets. In the PGP web of trust, personal acquaintances sign each other's keys to establish identity within their own ad-hoc community. When a random correspondent sends you a PGP message for the first time, you need to establish trust that the signature key is the claimed person's --- the standard identity certification problem discussed in section 3.1. The default PGP policy is to rummage through your pile of (already-trusted) friends' keys (that is, the objects you have) to construct an unbroken path ("Bob vouches for Clara; Clara vouches for Sara"); if this path construction is successful, it then has one more new binding to store away.
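
That default policy is a plain graph search (a Python sketch; the names echo the example above):

    VOUCHES = {"Bob": {"Clara"}, "Clara": {"Sara"}}   # signer -> keys signed
    TRUSTED = {"Bob"}                                 # keys already on your ring

    def path_exists(target):
        frontier, seen = set(TRUSTED), set()
        while frontier:
            key = frontier.pop()
            if key == target:
                return True
            seen.add(key)
            frontier |= VOUCHES.get(key, set()) - seen
        return False

    assert path_exists("Sara")          # Bob vouches for Clara, Clara for Sara
    assert not path_exists("Mallory")   # no unbroken path, no trust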

Finally, access to an object does not always mean free rein. The vault, for example, could have a deposit-only slot within its door, each access route with a separate key. Microsoft's Component Object Model (COM) [Chappell, 1996] is an example that implements a similar differentiation in software. The savings account, if implemented as an ActiveX object, could have two interfaces, one for creation and another for read/write operations. Rather than using a generic pointer to the memory where the account data is stored, COM uses handles that indicate which interfaces' actions are accessible. These handles are then passed between program components as needed; any part of the program that has the handle can take those actions on the target object (see section 4.3 for further discussion of this example).
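
In software, the split looks like the following sketch (plain Python standing in for COM; the class names are hypothetical): whoever holds a handle gets exactly that interface's actions, and no more.

    class SavingsAccount:
        def __init__(self):
            self.balance = 0

    class ReadWriteHandle:                 # the everyday interface
        def __init__(self, account):
            self._account = account
        def read(self):
            return self._account.balance
        def write(self, amount):
            self._account.balance = amount

    class CreationHandle:                  # the privileged interface
        def create(self):
            return SavingsAccount()

    manager = CreationHandle()             # Managers hold both handles
    account = manager.create()
    teller = ReadWriteHandle(account)      # Tellers hold only this one
    teller.write(100)
    assert teller.read() == 100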

Handles, keys, and combinations are the essence of object-centric security policies. With this approach, security managers invest trust in a small number of objects, and then carefully control their distribution. In the case of a Bank with its vaults and locked ledgers, this model respects a policy that only certain objects are trusted, and enforces it by checking that a principal has an object representing permission to execute actions on another object.

4.3 Action-Centric Policies

The Community Boating program in Boston will let anyone who can swim 75 yards borrow a sailboat and take their chances on the muddy Charles. Oh, there is a membership fee, and courses, and additional certification for solo ratings and particular craft, but the basic offer suffices to illustrate the nature of action-centric policies. In this case, the ability to swim out of harm's way is seen as sufficient to entrust a principal with access to a boat.

Upon further inspection, though, swimming is not one of the actions traditionally applied to boating. Why not tests in knot-tying, basic physics, and navigation instead? After all, those are part of the range of actions on a boat, just as reading and writing deposit entries and cash management are activities within a bank. One would imagine, just as principal-centric policies ascribe trust levels to each user and object-centric policies ascribe trust policies to each artifact, that action-centric policies should attach a list of trusted users and applicable objects to each and every action.

On closer inspection, though, identity-based policies evolved immediately into labeled compartments which grouped together many similar principals. Object-centric policies are most reasonable to manage where there are a few types of objects with many instances (for example, one key to access the ledgers for a thousand accounts). By extension, action-centric policies make the most sense when the myriad actual actions in a table can be simplified to a handful of broad capabilities. In fact, the choice of policy approach depends mainly on the ease of abstraction along each axis for the application at hand, as discussed in section 4.4.

Our Bank could collapse its actions in table 1 into three categories: Account Bookkeeping, Account Creation, and Vault Access (which might also be split into deposit and withdrawal access). Bookkeeping is the abstract ability to read and write transactions to the ledger for an account --- either savings or a credit line. Account Creation, naturally, would cover the means for creating any account, and this kind of generalization is especially useful as the list of objects changes over time (say, by adding checking accounts). The security manager would take care to assign these capabilities in the right combinations to each principal, and also configure each object to demand evidence of same before executing an action. This model represents a policy that only certain actions are trusted, and enforces it by checking the permissibility of any action by any principal to any object.

This all may work fine for our Bank, but it still does not explain how a swimming test proves seaworthiness (or Charlesworthiness, as the case may be). Even Community Boating would admit that the ability to swim does not directly justify any of the constituent skills of sailing. Policy, however, is in the eye of the beholder. In this case, it may not be obvious at first glance that CB's policy is not to ensure that members can sail, but to ensure that members will not be injured. To fulfill the latter policy, proving the ability to swim across the Charles eliminates most of the risk, so CB grants trusted access to boats.

Of course, CB might entrust you with a boat, but you might not trust yourself without those additional sailing capabilities. This is a small reminder that, in any security analysis, there may be several principals' policies at work. To safely entrust yourself at the helm, you might also take some of those free classes and qualify for that solo rating. This demonstrates how often secure systems conflate capabilities with organizational credentials. Board-licensed beauticians, doctorates in computer science, and certified public accountants can all testify to the value of a trusted credential. Sometimes, however, the trust may have more to do with the clout of the issuing organization, as would be the case with computer science departments or the local electricians' union.

These examples all illuminate the philosophy of action-centric security: execution privileges independent of the particular principal or object involved. Sometimes it can be difficult to differentiate a capability from the two other kinds of permissions assigned to principals or objects. The ability to purchase and consume alcohol is not assigned to individual people, since there is no registration process. It also is not a property of alcoholic beverages themselves, because bottles do not have any access controls. That capability is intrinsic to anyone over 21 years old in the U.S.

4.4 Implementing Policies

As different as these three styles of policymaking can seem, they are equally powerful. The implementation details of any particular system usually drive designers toward one of the three approaches. The limitations of today's Web security tools also bias many applications' policies. Before considering the Web, let's walk through one more real-world example of Trust Management: how we entrust people to hurtle through long narrow spaces at high speed, cocooned in massive explosive devices aimed at each other. That is, driving an automobile:

  1. Angela wants to let only certain people drive her car. Angela's car is labeled with an access-control list of named people, or categories of approved people from a hierarchy (friends, coworkers, and in-laws). Angela's car has an ignition with a driver's license reader and a fingerprint scanner to prove identity.
  2. Brenda will let anyone with her keys drive her car. Brenda might carefully check people out before handing over the keys, but the car itself does not care who turns the ignition.
  3. Carol is a member of the Wyoming Citizen's Car Collective. WC3 owns all of the cars in the state; all it takes to drive off with one is a valid license and insurance policy in the smart card slot.

The pragmatic details of car ownership and operation deem Angela's and Carol's policies to be fantasies. Brenda's world reflects society's best balance of privacy, flexibility, and safety in view of the implementation costs. Angela uses centralized identity registration and complex programmable car-computers, and yet she still permits unsafe drivers (since she uses the license only as an identity token). Carol's state does not have private cars, but is quite flexible (if there is a car to be found!) and safe, because she uses licenses as capabilities. Brenda's system, however, is the simplest to implement: each car comes with a key, and that is that. She can delegate the car to anyone she wants at any time, even unsafe drivers. Society complements her policy with its own rules (and police) to check safety and to recover stolen cars.

Precisely because each of the three policy approaches can yield different flavors of trusted systems, implementation concerns are the deciding factor favoring a primary policy framework for an application. The goal is to write the simplest, most compact policies, since they are easier to validate and extend as the system evolves. This often translates into choosing policies centered along the simplest axis to characterize. Our Bank has a myriad of products and many different kinds of processes, but only a few levels of trusted personnel; hence an identity-centric policy may be cheapest to implement and audit. It can also be important to consider which axis needs to be most dynamic, and avoid that one; driving access is extended to many principals, but the target is a single class of object, a car. A third concern is efficiency: it takes less effort to do an on-the-spot swimming test than an exhaustive background check, so Community Boating opts for an action-centric policy. Conversely, the police's action-centric policy does not assess driving talent during a traffic stop; it checks the validity of a capability token (license) that was "precomputed" during a driver's test.

Real organizations with real policies never end up neatly within this taxonomy, though. Since policy is not the access matrix alone, but rather, the reasons why those decisions were made, two equivalent systems can be described using radically different policies. The identity-centric policies "Any Manager can approve a credit line" and "Any Mormon can approve a credit line" might both work for Bob the Mormon manager, but the former would be more directly responsive to the Bank's mission than the latter. Occam's Razor slices in favor of the policy which best upholds the principles outlined in section 2.

For a Bank with an identity-centric scheme, "be specific" implies precisely labeling all people, objects, and actions with their proper clearances and specifically checking identity on each access. Trouble often lurks when policy styles overlap: a hotel chain was once rebuked by a credit card company for using its cards (which merely prove the ability to pay) to open hotel door locks (incorrectly using them as proof of identity). Returning to a previous example, the Bank's policy must trust only the Bank's own clearances and controls. Finally, the Bank's operations must be carefully audited and enforced in depth. After all, even a Manager still needs a unique key to open a vault and can withdraw money only under video surveillance.

Even today's state-of-the-art HTTP servers do not have the full expressive range described in this section, though. They are limited to identity-centric access controls that sort humans and computers into groups by their passwords or IP addresses, respectively. These groups are then permitted certain actions (for example, "GET but not DELETE") for certain objects (for instance, "Only the /Drafts/ directory"). Web servers inherited this model from their underlying operating systems' file protections. The classic computer security solution for this was to have users log in, use those user IDs to label ownership of files, and then label each file with permitted actions (for example, "Only the owner can write"). As a result, today's Web does not have the rich semantics for action-centric or object-centric security. When every transaction looks like a POST to a program in the /cgi-bin/ directory, it is difficult to separate the capability to deposit from the capability to transact a withdrawal. When every resource is just an opaque stream of bits, it is difficult to protect "all files containing next year's budget figures." Soon, though, the Web will evolve beyond its roots as a file-transfer tool, and cast off such pragmatic limits.
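To make the limitation concrete, the following minimal sketch (in Python; the groups, paths, and rules are all invented for illustration) captures roughly the identity-centric check such a server performs. Note how every decision reduces to group membership plus a directory prefix, with no room for action- or object-centric reasoning:

```python
# A rough sketch (invented names) of the identity-centric check a
# typical HTTP server of this era performs: principals are sorted into
# groups, and each directory lists the methods each group may use.

GROUPS = {
    "editors": {"clara", "bob"},       # humans, grouped by password login
    "mirrors": {"192.168.5.17"},       # computers, grouped by IP address
}

ACCESS = {
    "/Drafts/": {"editors": {"GET", "PUT"}, "mirrors": {"GET"}},
    "/Public/": {"editors": {"GET"}, "mirrors": {"GET"}},
}

def allowed(principal, method, path):
    """True if some group containing `principal` may apply `method`
    to the directory enclosing `path`; otherwise default-deny."""
    for directory, rules in ACCESS.items():
        if path.startswith(directory):
            return any(principal in GROUPS.get(group, set()) and
                       method in methods
                       for group, methods in rules.items())
    return False

print(allowed("clara", "PUT", "/Drafts/budget.html"))     # True
print(allowed("clara", "DELETE", "/Drafts/budget.html"))  # False
```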

We received some helpful comments from Ross Anderson, which we include here:

Banks do not use Bell-LaPadula. The model that best describes their activities is Clark-Wilson. See

"A Comparison of Commercial and Military Computer Security Policies," in Proceedings of the 1987 IEEE Symposium on Security and Privacy, pages 184-194.

or any reasonable computer security textbook, such as Amoroso's. Also, in medicine, one needs a different model again. See "Security in Clinical Information Systems," at http://www.cl.cam.ac.uk/users/rja14/#Med

On a more general level, I think that you both overstate and understate the complexity of trust management. In the great majority of cases, the customer will have a direct relationship with the bank or merchant, just as is the case now, and there will be no need for trusted third parties or complex authentication structures; the merchant's bank will validate whatever token the customer was issued by his bank, and the banks will talk to each other on a private network.

However, there are some applications such as medicine where the trust relationships are extremely complicated, simply because the trust that already exists is collegiate rather than bureaucratic, and essentially local, while still possessing complex interlocking hierarchies.

In short, the way to make progress here is to look in detail at real applications, not to try and conjure with vague system level concepts.

5. Pragmatics

Doveryay, no proveryay...
Trust, but verify.
-- Ronald Reagan, quoting Gorbachev quoting Lenin, on signing the 1987 Intermediate Nuclear Forces treaty

The mechanics of Web security are complicated by the open, distributed nature of its (un)trusted computing base. The topological shift away from a single secure node to a network of separately administered domains is driving the development of innovative Web-specific Trust Management protocols, formats, and tools.

These efforts will result in facilities for efficient, automatable trust calculations --- worldwide! --- thanks to common programming interfaces, which in turn will affect how we identify principals, how we assign labels to resources on the Web, and how we codify policies to manage both.

5.1 Identifying Principals

The first step in constructing a secure system is usually to identify the system's users, authorized or otherwise. In a few rare systems, we can forgo this step by presuming physical security instead, but by default we associate principals' names or addresses with cryptographic secrets. Passwords are a common solution for closed systems, such as login programs. Public-key cryptography is a far more secure way of managing such secrets, but the Web's radically decentralized trust model is catalyzing new systems for identifying the people, computers, and organizations holding such keys.

A digital certificate [Kohnfelder, 1978] records the binding between a cryptographic key and its owner as a signed assertion. It is the missing link between the mathematics, which computers can verify, and the principals, who are controlled by policy. The challenge is in deciding who should sign that assertion and why. The traditional answer is that a Certificate Authority (CA) vouches for the binding, after investigating all of the germane details. With a certificate declaring "Joe Doaks' public key is 42, signed, UC Irvine", anyone else who also trusts UC Irvine will trust that messages encrypted by 42 are Joe Doaks'. Someone outside of the UC Irvine community could instead consult the next level of verification, searching for a Certificate Authority assertion declaring "The State of California trusts UC Irvine, whose public key is 23, to certify students' identities." The result is a classical hierarchical pyramid of CAs, an approach promoted by ISO's X.509 certificate format, X.400 addressing, and X.500 directory service.
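A toy model may help fix the idea. In this Python sketch, "signatures" are modeled as bare assertions rather than real cryptography, and the names and key numbers simply follow the example above:

```python
# Toy model of hierarchical certification. "Signatures" are modeled as
# bare assertions rather than real cryptography; names and key numbers
# follow the invented example in the text.

CERTS = [
    # (subject, subject's public key, issuer who signs the binding)
    ("Joe Doaks", 42, "UC Irvine"),
    ("UC Irvine", 23, "State of California"),
]

def key_for(subject, trusted_roots):
    """Climb the CA pyramid until an already-trusted issuer is found."""
    for subj, key, issuer in CERTS:
        if subj == subject:
            if issuer in trusted_roots or key_for(issuer, trusted_roots):
                return key
    return None

# A stranger to UC Irvine who does trust the State of California:
print(key_for("Joe Doaks", {"State of California"}))  # 42
```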

The utility of a Certificate Authority is directly proportional to its reach: the size of the community willing to trust that CA. Climbing up the pyramid, CAs have ever-greater reach, but with less specificity; UC Irvine can certify students, but the State of California may be able to certify only generic "citizens." This increases the CA's liability, as discussed below, leading to the reductionist nomination of sovereign governments as top-level CAs, with bilateral international cross-certification.

There is an opposite alternative: what if everyone just certified themselves ("self-signed certificates"), and others introduced themselves one-on-one, just like in real life? The PGP web of trust is a living example of such an anarchic certification system. The same pitfall resurfaces in a complementary guise; in this case, the absence of a central trusted broker makes it very difficult to scale to large user groups such as "all UC Irvine students."

These scalability concerns are inevitable because general-purpose identity certification violates the principles of Trust Management discussed in section 2. Without knowing the application at hand, a person or organization vouching for Joe Doaks' identity cannot be specific about the degree of trust involved. Does it mean that we can trust that anyone using the key 42 can now check out a library book, enter a locked dormitory, or accept a student loan? If someone tricks the certifier, who is liable, and for how much? Second, relying on hierarchical CAs weakens the principle of trusting yourself, since it requires blanket trust in very large-scale CAs, with corresponding conflicts-of-interest. If an MCI-er wants to talk privately to a Bell-head without being overheard by a Sprint-ee, he would be stuck constructing a certificate chain routed through some common parent CA such as VeriSign. This might be acceptable even if all three companies are VeriSign's clients, but what if VeriSign is a competitor? Third, the logistical challenges of centralized certification make it difficult to be careful using these certificates. To verify any certificate, a validator must rummage through every CA's Certificate Revocation Lists (CRL) to be sure the key has not been cracked or canceled. As the number of certified users scales upward, the frequency of changes to the database, invalidated keys, and so on, will increase, too, creating a logistical bottleneck.

The two new decentralized PKI proposals discussed in section 2.2, SDSI and SPKI, are better suited to the Web; furthermore, they respect the principles of Trust Management. As a result, they may actually be able to scale alongside the Web. First, they use application-specific certificates which identify exactly for what each key is authorized. Second, both systems literally construct a trust chain that must loop back to the user, instead of being diverted to some omnipotent CA (in fact, these systems directly inspired the trust yourself principle). The path might hop between individuals, like PGP, or between organizations, like X.509, but the final link must lead back directly to the user. Finally, both systems offer simple, real-time certificate validation, which makes perfect sense for an online Web. This decision also leverages the shift towards online directory services for locating users in the world wide haystack, such as the Lightweight Directory Access Protocol (LDAP) [Howes and Smith, 1995].
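The distinguishing property is easy to sketch: validation succeeds only if a chain of application-specific delegations can be traced from the verifier's own key to the requester, with no appeal to an external root. The keys and action names below are invented for illustration:

```python
# Sketch of the SDSI/SPKI-style rule: authorization holds only if a
# chain of application-specific delegations leads from the verifier's
# own key to the requester. The keys and action names are invented.

DELEGATIONS = [
    # (grantor key, grantee key, authorized action)
    ("my-key", "librarian-key", "checkout-books"),
    ("librarian-key", "joe-key", "checkout-books"),
]

def authorized(verifier, requester, action):
    """Breadth-first search outward from the verifier's own key; no
    external root CA is ever consulted."""
    frontier = {verifier}
    while True:
        step = {g2 for g1, g2, a in DELEGATIONS
                if g1 in frontier and a == action}
        if requester in step:
            return True
        if not (step - frontier):
            return False   # chain exhausted without reaching requester
        frontier |= step

print(authorized("my-key", "joe-key", "checkout-books"))  # True
```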

Moving to the decentralized Web will have other pragmatic implications for managing principals. The sheer scale of some applications, combined with the insecurity of client operating systems, may force the deployment of smart cards or biometric sensors (such as fingerprints or voiceprints) to protect keys. Of course, the Web can assimilate these innovations only if popular standards emerge as a common denominator for representing keys, certificates, and identities. The Web's key evolutionary advantage, though, is that we can recursively invoke the Web to describe these artifacts using labels.

5.2 Labeling Resources

The second step in constructing a secure system is describing the authorization matrix given in table 1. As described in section 4, three primary styles are available. Principal-centric implementations need to label each principal with its clearance level, and to label every object and action with its maximum or minimum clearance. Action-centric and object-centric implementations similarly label each with the required capability or access key. Whichever style of policy we choose, every element in the system ends up associated with security metadata that enforces access limits.
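The three styles can be seen as three readings of one and the same matrix. In the following Python sketch (with invented principals, objects, and actions), the same authorization data is re-indexed by principal, by object, and as capability tokens:

```python
# One invented access matrix, read three ways: by principal (clearance
# labels), by object (access lists), and as capability tokens.

MATRIX = {
    ("teller",  "account"): {"deposit"},
    ("manager", "account"): {"deposit", "withdraw"},
    ("manager", "vault"):   {"open"},
}

by_principal, by_object = {}, {}
for (who, what), actions in MATRIX.items():
    by_principal.setdefault(who, []).append((what, actions))  # clearances
    by_object.setdefault(what, []).append((who, actions))     # access lists

# Capability view: possession of an (action, object) token is what
# matters, not the holder's identity.
capabilities = {(action, what)
                for (who, what), actions in MATRIX.items()
                for action in actions}

print(by_principal["manager"])   # everything a manager is cleared for
print(by_object["vault"])        # everyone the vault's label admits
print(("withdraw", "account") in capabilities)  # True
```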

Traditionally, secure computing systems wire these critical bits directly into data structures and files. For example, in UNIX-like operating systems with principal-centric policies, every running process is limited to the privileges of the owners recorded in the process table. Every file on disk is stamped with its user and group ownership, as well as an access control list declaring users' read, write, and execute permissions. Today's Web servers are a thin wrapper around these underlying security tools, as discussed in section 4.4.

According to the principle of specificity from section 2.1, we often need to label Web resources more explicitly than these tools allow. Rather than accepting an identity certificate per se, we may need to restrict it to a particular role. Rather than just offering blanket "execute" access, an applet may have particular restrictions attached to its runtime environment (see section 6.4). Also, rather than implicitly grouping protected objects by their place in the file system's directory hierarchy, we may want to directly label "medical records" wherever they may lurk.

While we could build mission-specific Web clients and servers with more embedded classification bits, a more flexible solution would be to use separate security labels with general-purpose label handling. Each of the conditions just mentioned could be captured as a separate statement bound by URL to a Web resource. Furthermore, the conditions themselves could be categorized into a systematic scale that is readable by machines [Khare, 1997]. The result then seems indistinguishable from our null hypothesis: more mission-specific classification bits, except now they are pulled out into a label.

There are three critical differences that indicate that security labels are better suited to the Web than are traditional security attributes. First, the label can reflectively be considered a Web resource of its own: the label has a name, and thus it can be made available in several ways. The resource's owner can embed a label within the resource or send one along with it; alternatively, labels can be obtained from entirely separate third parties. Second, the scales, or rating schemas, can reflectively be considered Web resources. That is to say, the grammar of a machine-readable label can itself be fetched from the Web for use. It, in turn, can be further described by hypermedia Web documents, bridging the gap between machine-readable and human-readable information. Finally, labels are a safe choice because even though the security attributes are "floating around" separately from the resources, labels can be securely bound to Web resources by name and by hash. As a result of these three differences, labels are an excellent tool for trust management systems [Khare, 1996; Blaze et al., 1997].
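A minimal sketch of the third point, binding by name and by hash, follows; the URL, schema address, and rating vocabulary are all invented, and a real deployment would use a standard label format such as PICS:

```python
# Sketch of binding a label to a resource both by name and by hash, so
# the label can safely "float around" apart from the page it describes.
# The URL, schema address, and rating vocabulary are all invented.

import hashlib

page = b"<html>...full page contents...</html>"

label = {
    "for":    "http://www.example.org/story.html",   # bound by name
    "sha":    hashlib.sha256(page).hexdigest(),      # bound by hash
    "schema": "http://ratings.example.org/v1",       # self-describing scale
    "rating": {"violence": 0, "language": 2},
}

def label_matches(label, url, content):
    """A free-floating label applies only if both bindings still hold."""
    return (label["for"] == url and
            label["sha"] == hashlib.sha256(content).hexdigest())

print(label_matches(label, "http://www.example.org/story.html", page))       # True
print(label_matches(label, "http://www.example.org/story.html", b"edited"))  # False
```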

Security labels must address several pragmatic pitfalls to adapt to the Web, though. These concerns motivate convergence on a single metadata platform for a range of related Web applications. For example, language-negotiation and content-negotiation can affect which variant of a Web resource a user receives. It may be acceptable to ignore this effect to rate "pages from Playboy.Com have sexual content", but a digitally signed assertion "(c) 1997 Playboy Enterprises" requires an exact digital hash of a particular page, English or French, JPEG or PNG. W3C and its partners are developing a common link syntax for specifying such variants based on the eXtensible Markup Language (XML) [Bray and Sperberg-McQueen, 1997]. Another benefit of reusing a common metadata format is its support for collections of labels, such as a combined manifest declaring copyright on all four page variants at once.

XML is also an excellent choice for generic metadata, because it embodies the philosophy of self-description. Just as PICS labels can be described in terms of their schemas, XML tags are part of a Document Type Definition (DTD). Rather than putting a part number in bold, for example, XML purchase orders could use the <PART> tag, and explain what that tag means elsewhere. The strategy of interleaving machine-readable self-documentation and human-readable policy background is an excellent solution for encouraging automatable assertions and policies.
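For instance, a purchase order along these lines might look like the following fragment (the tag names are invented, and the schema URL stands in for a published DTD), shown here being parsed with Python's standard XML library:

```python
# A self-describing purchase order in the spirit of the text; the tag
# names are invented, and the schema URL stands in for a published DTD.
# Parsed here with Python's standard XML library.

import xml.etree.ElementTree as ET

order = """
<PURCHASE-ORDER schema="http://example.org/po.dtd">
  <PART>7742-B</PART>
  <QUANTITY>12</QUANTITY>
</PURCHASE-ORDER>
"""

root = ET.fromstring(order)
# The machine reads structure, not typography: <PART> is a part number
# because the (human-readable) documentation behind the schema URL says so.
print(root.find("PART").text, root.find("QUANTITY").text)  # 7742-B 12
```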

As hinted at in section 5.1, though, Web documentation of the Web itself is the key. The need for self-reference is most acute at the boundaries of machine understanding, where the human judgment of policies comes into play.

5.3 Codifying Policies

The final step in implementing a secured system is populating the security access-matrix with authorization decisions according to some policy. This stage is the least amenable to standardization in practice, because Web design philosophy upholds the credo famously enshrined by the X Consortium: Mechanism, Not Policy. As a generic platform, the Web should be flexible enough to accommodate a wide range of applications with varying trust policies. One way to achieve that is not to talk about policy at all; rather, policy decisions are isolated as purely local "black boxes." The complementary strategy is to use unified trust management systems at those endpoints. Such Trust Management tools are an effective way to adapt to different policies, especially to integrate policies set by different sources.

The three policy styles discussed in section 4 are only broad outlines. In practice, policies can recombine elements from all three, as well as incorporate completely independent criteria like the time of day. We can also expect policies to be codified in a variety of languages. One PICS filtering policy may be a simple numerical threshold; another may weigh the opinions of several ratings in different systems; and yet a third may incorporate a Turing-complete content analyzer.

Language-independence is only one half of achieving Mechanism, Not Policy. The other key to preserving the flexibility of Web tools is to exchange assertions only on the wire, and not policy. For example, in the payment-selection system developed for the Joint Electronic Payments Initiative (JEPI), both sides exchange lists of payment instruments they could accept or will reject. Both merchants and consumers have their own policies for their payment preferences: lower commissions on one; frequent flier miles with another; and no purchases above $2 with a third. The alternative JEPI design would have been to exchange preference lists (that is, policies); this solution is more restrictive, since it requires classifying all possible axes of preference in advance, permanently constraining the range of policies. JEPI's actual implementation is more verbose, but it can adapt to whichever rules eventually emerge for electronic commerce.
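A sketch of this assertions-on-the-wire style follows; the instrument names are invented, and each side's policy is reduced to simple set membership (JEPI's actual protocol is, of course, richer):

```python
# Sketch of the assertions-on-the-wire style: only instrument lists are
# exchanged; the policies that produced them stay private on each side.
# Instrument names and preferences are invented.

def merchant_offer():
    # Derived from a private policy (say, lowest commission first)
    # that never goes over the wire.
    return ["e-check", "visa", "micropay"]

def consumer_pick(offer, will_accept, will_reject):
    for instrument in offer:
        if instrument in will_accept and instrument not in will_reject:
            return instrument
    return None   # negotiation fails

# The consumer's private policy, already reduced to accept/reject sets:
choice = consumer_pick(merchant_offer(),
                       will_accept={"visa", "e-check"},
                       will_reject={"micropay"})
print(choice)  # e-check
```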

The REFEREE trust management system incorporates these lessons. It does not use a single policy language; instead, users load interpreters for as many languages as needed. This mix-and-match approach works because there is a single, unified API for all trust decisions: given a pile of facts, a target, a proposed action, and its policy, we can determine if it is always, sometimes, or never allowed. Another benefit of an integrated policy engine is the ease with which multiple authorities' policies can be composed. The decision of whether a Web page is appropriate for a child could be the intersection of the controls of the publisher, the parents, and the child's school. Negotiating the level of encryption across a national boundary is not just up to the correspondents, since both governments may impose limits as well.
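That API can be sketched as a single tri-valued entry point. In the following Python sketch, the "policies" are plain callables standing in for REFEREE's loadable interpreters, and composition takes the most restrictive answer (all names are invented):

```python
# Sketch of a REFEREE-like trust-decision API: one tri-valued entry
# point, with several authorities' policies composed by taking the most
# restrictive answer. The policies are invented Python callables
# standing in for loadable interpreters.

NEVER, SOMETIMES, ALWAYS = 0, 1, 2

def decide(facts, action, policies):
    """The most restrictive authority wins, mirroring an intersection
    of permissions."""
    return min(policy(facts, action) for policy in policies)

publisher = lambda facts, action: ALWAYS if facts.get("rated") else SOMETIMES
parents   = lambda facts, action: NEVER if facts.get("violence", 0) > 2 else ALWAYS
school    = lambda facts, action: ALWAYS

facts = {"rated": True, "violence": 1}
print(decide(facts, "view-page", [publisher, parents, school]))  # 2 = ALWAYS
```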

5.4 Automating the World Wide Web of Trust

Trust Management is actually an opportunity to return more control to the user as a trust-granter: depending on your policy, you can seek out, collect, and manipulate more kinds of data in pursuit of a decision. Rather than using a closed content selection system like Antarctica Online's policy ("All our content is okay, trust us"), users have the full power of PICS labels, from multiple sources, according to different systems, with personalized policies. Decentralized principal identification, the integration of security attributes with the Web metadata, and policy flexibility, all advance the goal of automatable trust management through machine-readable assertions.

All of these projected shifts in the pragmatics of Web security will ultimately justify developers' investments to make verifiable, quantifiable, machine-readable trust assertions. These efforts will be repaid as we move toward a more "mechanized" vision of Trust Management, since the APIs to tools like PolicyMaker and REFEREE reflect this way of thinking. In the next section, we investigate some applications that leverage this vision.

6. Applications

In theory, there is no difference between theory and practice. But, in practice, there is.
-- Jan L.A. van de Snepscheut

The World Wide Web is emerging as a platform for a new wave of secure applications that highlight the need for "Trust Management" thinking to supplement cryptography. These applications were traditionally supported within closed, secure environments, but must now cope with an open, distributed Web. Furthermore, there is an accompanying shift away from closed, known user communities to open, publicly-accessible service models, complicating the security analysis. The Web has been portrayed as a passive "library" of information so far --- this new generation of "trusted" applications will move malls, banks, and city halls onto the Web, too.

In the following sections, we discuss four kinds of applications that are already capturing the imagination of Web developers. By extending these archetypal systems to the Web, developers will also have to work further up the value chain to convince end-users that these will be trustworthy systems. This places a premium on usability: making their built-in security policies comprehensible to end-users. Taken together, these examples also provide a compelling synergy argument for implementing a unified Trust Management interface.

Web-based Trust Management is changing peoples' perception of the problem, both by clarifying relationships and by pointing to new technology. Web applications must leap across tall organizational trust boundaries when they become open, so more actors need to establish interdependent trust in system components as systems grow outward to encompass citizens, parents, customers, and end-users. This fosters a symbiotic coevolution between systems and their clients.

6.1 Secure Document Distribution

The publication of a Presidential press release could be almost as complicated as the story of how a bill becomes a law. First, there is a proposal, drafted by the Press Secretary's staffers. Secrecy is of paramount importance; even the existence of possible activity on an issue is a sensitive matter. An ongoing editing cycle can draw, ad hoc, upon the entire White House staff. Once it has the Secretary's approval, a final draft is reviewed by other Cabinet-level officials. Finally, the press release can be made available on www.whitehouse.gov only after the President has affixed his digital signature.

This is a classic secure application: a controlled set of principals with different access levels (viewing and editing), acting on a long list of protected documents. While the old process might have been implemented on a single mainframe system using operating system-level security, it is not flexible, scalable, or rich enough to handle today's document production cycle. Replacing it with a secured Web server seems like an excellent alternative, offering more accessible clients, better collaboration tools, and an expressive format for linking it all together.

On initial inspection, though, many complications crop up when considering the Web as a drop-in replacement for a secure authoring environment. At the very minimum, there is an open-systems integration challenge of replacing the old monolithic username/password database with an interoperable public key certificate infrastructure. On top of that, extending the trusted computing base to all those staffers' desktop PCs adds the potential risks of leaked documents left behind in users' caches, weak points for eavesdropping viruses, and insecure key management.

The marginal benefit of editing a position paper from a Cyber-Café instead of a hardwired terminal in the office hardly seems worth these risks. The real benefit is that the web of trust surrounding these documents expands outward from authoring to distribution, access, and readers' evaluation. As the Web is used to cross those organizational boundaries, from White House to newspaper to ISP to citizen, it can leverage a common Trust Management infrastructure to identify speakers and to make assertions.

Within the White House, secured Web Distributed Authoring and Versioning (WebDAV) [Slein et al., 1997] servers are already slated to include metainformation management components for labeling documents "Secret," tracking revisions, and managing workflow among team members. These components will then need to tie into existing identity infrastructures for HTTP, such as White House smart cards and digital certificates. With these in place, a few hours before public release, the White House can pass an embargoed copy to the newspapers' distribution network. Unlike the old closed system, in this scenario the trust boundary now includes a commitment from the newspapers not to redistribute the information before the stated time.

The newspapers need to operate their own trusted meshes of cache and mirror sites to bring the data nearer to their users for the big night. Or, consider the newsstand, trusted by both parties to sit in the middle and shuffle bits around an efficient cache tree. And now, a new kind of trust role emerges because of intermediation. We trust intermediation companies such as ZippyPush.Com to work on behalf of their subscribers by getting fresh, safe copies of tasty bits within a reasonable amount of time; and, we trust them to work on behalf of their publishers by getting copies out on time and collecting subscription fees. We do not normally think of "trusted" newsagents, but that is only because it is so expensive to fake a story in the real world; after all, there are only so many trees.

The next step in the chain is access. People need to trust their servers; with a subscription model, there is mutual authentication of many, many people. As a result, there is a significant need for a scalable PKI for identifying principals, as well as e-commerce issues such as payment protocols and proof of privacy (discussed further in section 6.3). Tools like SSL can help in the establishment of computer-to-computer trust.

Ultimately, the publishers have to keep the subscribers happy; this requires that readers believe not only in the integrity of published information, but also in its meaning. For the President and the publishers, this whole process is valuable only if it increases the public's trust in what is and is not the President's official word (and the words of his supporters and opponents, helping to establish trust in Web content in general). In today's world of spoofable systems, anyone can masquerade, and it may take a while to discover such false identities. In the future, especially with the malleability of digital media (such as hackable video, photographs, and even whole Web sites), digital signatures and assertions will become vastly more important.

This all culminates in very practical social consequences: how can we trust what our officials say? Worse, how can we trust any public document? Or, consider the infamous video of the Rodney King beating: when we surf over to CNN.Com and download the DVD stream, how can we calibrate our outrage at the LAPD? And, if we believe that the LAPD beat up a drunk driver, perhaps the Zapruder film is not so farfetched; in this way, trust can topple like a stack of dominoes. Furthermore, these critical questions cannot be addressed by any single one of the classic computer security solutions, with their fixed trust templates. In fact, mere citizens were not even part of the "box." Each citizen has the right to establish trust in his or her own way. Some trust their neighbors (community filtering), some their religious leaders, some get tidbits from family, and so on. Some will trust the King tapes because of the identity of the cameraman (perhaps he was a member of the same fraternity, so we can trust our "brother"), because of his camera (perhaps the steganographic integrity check is okay --- the video has not been tampered with since it was taken --- and we trust the Sony camera hardware), or because of an organization (perhaps the U.S. government accepts it as legal evidence; more chillingly, perhaps a private corporation such as CNN declares its verity). Note that in all cases, trust is established through principals --- via a person, computer, or organization --- as discussed in section 3.

The benefit of the Trust Management philosophy is its provisions for digging beyond a kneejerk "we need encrypted and authenticated server access" reaction to ask why we actually need these things. It might be "so the public can trust what it reads" --- and yet still come to trust information on its own terms. In essence, we are vastly widening the scope of trust, by redrawing the box around "trusted systems." A synergy is omnipresent across the entire value chain, from authoring to publishing to distributing to accessing to reading.

6.2 Content Filtering

Selecting the appropriate content from the Internet represents another kind of trust question: "Do I trust my 9-year-old not to visit Playboy.Com?" Asking that question across the entire universe of network-accessible information, however, requires developing intermediated trust relationships; no closed system can hope to assess the whole exponentially-growing Web on its own.

Unfortunately, many vendors today unsuccessfully endeavor to do just that. Black-list screening software can consult a database of "bad" URLs and rules such as "Ban URLs with xxx in them." Church, state, and citizens' groups have historically supported such schemes and enforced them in many traditional media by targeting the few trusted agents at the top: broadcasters, publishers, and filmmakers. The problem is that the Web's distributed control and rapid growth continually render such lists obsolete. Those factors also make it difficult to maintain white-lists of known "good" sites. For example, online services promise kid-safe Web service by linking only to a small set of resources. With URLs advertised everywhere from buses to fortune cookies these days, users are aware of many more safe sites than any organization can catalog. Thus, we end up with either high false-negative or high false-positive blocking rates. Furthermore, these two schemes can handle only the most clear-cut judgment calls: if .edu is fine, and sex is verboten, what does this imply about sex.edu?
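A small sketch shows how both failure modes arise from naive list-based screening; the rules and URLs are invented:

```python
# Sketch of naive list-based screening and its two failure modes; the
# rules and URLs are invented.

BAD_SUBSTRINGS = ["xxx", "sex"]
WHITE_LIST = {"www.kidsafe.example.org"}

def blocked(url):
    host = url.split("/")[2]          # crude hostname extraction
    if host in WHITE_LIST:
        return False
    return any(s in url for s in BAD_SUBSTRINGS)

print(blocked("http://www.middlesex.edu/admissions"))  # True: false positive
print(blocked("http://www.brand-new-smut.example/"))   # False: false negative
```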

To tackle a problem expanding as rapidly as the Web itself, we must harness the Web itself. After all, for every fact on the Web, there is an equal and opposite opinion --- just codify and distribute them. Labeling schemes such as PICS (discussed in section 5.2) can scale, because labels can be put on objects by the author or by a third party. The key is bootstrapping the meaning of each label. Machine-readable metadata labels themselves can leverage the Web through self-description --- in fact, this ratings cycle is the key to being specific, by offering a measurable, concrete rating that other trust calculators can then reuse.

Those calculators embody the trust yourself principle. Instead of a closed-system black-list or white-list approach, each user's tools can decide to grant access depending on the opinions of several services (as described in section 5.3). We also need more flexibility as to where these filters are placed within the network. Traditional end-to-end security places such filtering exclusively at the periphery because there are only two players: publisher and reader. To properly represent the trust relationships in this application, we must account for entire households, schools, libraries, offices, and governments who have a say in what constitutes acceptable content. Filtering technology must be available within the network, controlled by administrators of such organizations.

Sometimes, though, the trust relationships are reversed: the publishers of intellectual property need to select who can access their resources. The same labeling infrastructure can be built into client software to, say, block the use of fonts that have not been purchased. Rights management and privacy labels could form the basis for a new kind of purely electronic commerce in information goods that is catalyzed by the Web, as described in the next section.

6.3 Electronic Commerce

Electronic commerce, contrary to media reports, is not a new idea [Sirbu, 1997]. Many of the whiz-bang issues being debated today would be quite familiar to Richard Sears at the dawn of mail-order catalog sales over a century ago. Establishing trust in an invisible merchant, committing transactions over an insecure medium, mass-individualized customer service, privacy, and taxation policy are not new risks; they merely exemplify the kind of factors free economies account for in stride.

What is new about e-commerce is its scale, not in terms of business volume, but in the size of businesses. This round of making commerce more efficient is not leading to bigness, but rather to small, fast, disintermediated components. Bleeding-edge fin-de-siècle "virtual corporations" are outsourcing their R&D, marketing, and production, even leasing back their own employees. We believe that the cause for this turnabout is that distributed, secure systems make it easier to whittle away at the "critical mass" of a trusted operation. Advertising, for example, might have been a task kept within the firm's trust umbrella as an integral department in the age of paper. Today, secure email, courier services, and electronic information systems for billing and bidding have made in-house advertising departments obsolete.

Today, electronic commerce systems are inspired by the tenets of classical computer security: closed systems, with implicit rules built into the system. When WalMart wants to integrate its suppliers' manufacturing capacity forecasts with its seasonal sales data, it has to actively manage access to a centralized data warehouse in Bentonville, Arkansas [Caldwell, 1996]. It might be acceptable to offer login IDs for a handful of partners, but the resulting Trust Management mushrooms in complexity. After all, WalMart's goal of automating restocking orders highlights the risk of asking suppliers how much of their product to buy. The result is a laundry list of small, focused electronic commerce collaboration tools. Vertical market acceptance of Electronic Data Interchange reflects a similar growth pattern. EDI purchase orders and invoices were foisted on the 50,000-odd firms of the automotive supplier chain under pressure from the Big Three. The result is "point-to-point" e-commerce, where EDI is used precisely for reducing paperwork.

Moving business relationships onto the Web opens the prospect of more intimate cooperation on a broader scale. Merely by deploying the Web as an information service, businesses can reduce their paperwork for publishing catalogs, soliciting bids, and providing technical support. Furthermore, escalating to "transactional" Web services will deliver even greater benefits. For example, collaborative product development becomes much easier if a team across multiple companies can conveniently toss up a "virtual war room" with secure access for team members --- and not other parts of the same company, a common paradox in today's coop-etitive industries. Whereas EDI today can deliver trusted purchase documents only across a single organizational boundary, tomorrow's distributed e-commerce systems will offer trusted collaboration tools, shared information services, secure payment, and settlement systems, which are expressly designed to knit a single trusted enterprise from separate parts. Consider CommerceNet's Southeastern chapter Real Estate pilot project: it aims to seamlessly integrate the experience of buying a home by drawing on separate buyers' and sellers' agents, mortgage brokers, appraisers, and termite inspectors.

Trust is much more central to the retail consumer experience, because business-to-business trust is often a matter among equals [Shaw, 1997]. The "little guy" has the most to lose in dealing with the big company, which is why big companies are the most heavily vested in brands, guarantees, and solidity.

Consumers' fears are being addressed on several fronts. Electronic payments players are migrating today's familiar tools to the Web, including e-Checks, e-Credit Cards, and e-Debit Cards. The World Wide Web Consortium is working with CommerceNet to build Web-integrated solutions using JEPI. The key to such solutions is, again, reliable metadata: labeling a Web site in exactly the way that today's storefronts are labeled with Mastercard, Visa, and American Express decals.

Another hot-button is consumer privacy, currently being investigated by the W3C, with the Internet Privacy Working Group (IPWG). TRUSTe and BBBonline are trying to license the use of a logo to businesses commercially, and W3C's Privacy Preferences Platform is based on specific, trusted labels of site policy. The latter can be automated in software screens, and it can be made available from third parties; this essentially puts the trust decision in the hands of the user, a clear victory for Trust Management thinking over the traditional closed-system thinking.

Finally, one of the most exciting frontiers for retail electronic commerce, intellectual property distribution, will also drive Trust Management tools and infrastructure. Articles, songs, and movies all offer content for the e-commerce market, and the foundations for "pay-per-view" reside in cryptographic innovation: new watermarking algorithms; hiding license information in the content (steganography); packaging together multiple contents in a lockbox; and integrating content with trusted hardware and secure coprocessors [Cox, 1996]. However, since e-commerce involves fostering an agreement between a buyer and a seller, each of whom is master of his or her own domain, the issues become much clearer if the problem is approached as a Trust Management challenge.

The market for finite-use intellectual property will not be enabled by a magic way of metering each original sale; rather, it will most likely require an ongoing trust relationship in which the buyer agrees not to redistribute the content, or agrees not to use the content more than a specified number of times without paying more. For example, Corel is implementing its entire WordPerfect Suite in Java, with the hope that eventually people will be able to download an application from the Web and use it for a paid amount of time. Enforcement of this policy requires a significant trust relationship between Corel and the users downloading its code.

6.4 Downloadable Code

The moral is obvious. You cannot trust code that you did not totally create yourself.
-- Ken Thompson, Turing Award Lecture [Thompson, 1984]

According to the media hype about the Web, perhaps the only phenomenon more wondrous, dangerous, and novel than "electronic commerce" is "mobile code." To computer security experts, though, the risks of trusting downloadable code are just more back-to-the-future experiences. Once again, the critical difference is the shift from closed to open environments. Within secured systems, the primary threat was malfunctioning or malicious code; as a result, no code even entered a system without both review and an administrator's explicit authorization. The advent of mobile code does not particularly exacerbate execution risks, as many of the security precautions mentioned below are unmodified from their original formulations. What it did ignite, though, was a wave of executable programs hopping across organizational trust boundaries. On behalf of nothing more serious than surfing to some random Web page, your browser might take the opportunity to download, install, and execute objects and scripts from unknown sources.

What is the matter here? This "ready, fire, aim" approach violates all three Trust Management principles in section 2! Truth be told, users were never explicitly asked whether to trust new applets at all. Once invoked, however, applets have wide-open access, because there are no specific limits on their trust. Worse, the initial tools shipped without any particular care to log and audit downloaded code's activities, nor to defend against simple "system bugs" such as self-modifying virtual machines, unprotected namespaces, and loopholes in filesystem access. The risks were evident right off the bat, triggering the wave of press hysteria that continues unabated. Let's take a look at some popular solutions to the problem of trusting code across organizational barriers.

As discussed in section 2.1, Microsoft does not have any runtime limits on its ActiveX components; in fact, the only control a user has is the initial decision to install one. Naturally, as the largest commercial consumer software publisher in the world, Microsoft is interested in offering some security policy to users. To that end, its Authenticode provides identity-centric policy based on VeriSign publisher certificates. Java, on the other hand, is a bytecode-interpreted language with wider scope for "sandboxing," controlling which privileges an applet can access. JavaSoft and Netscape both promote policies based on granting or denying such capabilities. Their systems encourage developers to declare exactly what type of access they require, by analogy to some common models users might be familiar with: "none" (draw, no disk), "video-game" (draw, access just a few files), and "word processor" (full access) are a few examples.
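The profile approach reduces to a simple subset test, sketched below; the profile names follow the text, while the capability vocabulary is invented. The applet declares what it needs, and it loads only if the declaration fits inside the granted profile:

```python
# The profile approach reduces to a subset test; the profile names
# follow the text, while the capability vocabulary is invented.

PROFILES = {
    "none":           {"draw"},                         # draw, no disk
    "video-game":     {"draw", "read-own-files"},       # a few files
    "word-processor": {"draw", "read-own-files",
                       "read-any-file", "write-any-file"},  # full access
}

def run_applet(requested, granted_profile):
    """Load the applet only if everything it declares it needs fits
    inside the profile the user granted."""
    return requested <= PROFILES[granted_profile]

print(run_applet({"draw"}, "none"))                          # True
print(run_applet({"draw", "write-any-file"}, "video-game"))  # False
```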

In practice, no single security policy will suffice for safely using downloaded code, any more than a single policy can capture an entire household's morals for content filtering. Even the earliest secure operating systems, like Multics, had the foresight to combine identity and capability limits for executing code. Managing the trust placed in downloadable components will draw on the same list of Trust Management tools suggested throughout this paper: identity certification of authors and endorsers; machine-readable metadata about the trustworthiness of principals, objects, and actions; and flexible Trust Management engines that can compose the policies of end-users, administrators, and publishers.

Deploying a common base of Trust Management tools seems like a far-off goal, but intense industrial interest will accelerate the development of standards in this area. W3C's Digital Signature Initiative has worked to ensure industry consensus over the last year, because it anticipated this interest. As a result, the Consortium was able to harness the energy surrounding the "Java vs. ActiveX safety debate" [Felten, 1997] to propose and promote reusable, general-purpose tools. The very title of the Digital Signature project is a good indication: W3C frames the problem as "helping users decide what to trust on the Web," and not just downloadable code [Khare, 1997]. The initial target is a digitally signed PICS label, which forms the technological foundation for any of the applications in section 6.

7. Limits of Trust

The problem is not trust... the problem is how he will implement what has been agreed upon.
-- Yasir Arafat on Benjamin Netanyahu's trustworthiness, Newsweek, page 6, June 19, 1997

It would appear that even with the best of intentions, the interleaved elements of Trust Management still weave quite a tangled web. It seems much simpler merely to protect our information services in the name of Web Security, and leave these "trust" decisions in their original (human) hands. In the long run, though, the Web and its descendants will evolve into a mirror of our communities in the real world. Trust relationships are the essence of human community, so automatable Trust Management in its many guises will become an integral part of those systems. By considering the ultimate limits of trust, it also becomes clearer that it is not enough for the Web to adapt to reflect today's social organizations; society will surely have to adapt to the Web as well. We need to work together as developers, webmasters, business people, users, and citizens, to explore and settle this new frontier.

7.1 Limits of Web Security

"Pone seram, cohibe."
Sed quis custodiet ipsos custodes? Cauta est et ab illis incipit uxor.


"Bolt her in, keep her indoors."
But who is to guard the guards themselves? Your wife arranges accordingly and begins with them.
-- Juvenal (c.60-130 AD), Satires

Amid the headlong rush toward new protocols, ciphers, patches, and press releases to which the Web Security industry seems addicted, it is easy to lose sight of the fact that conventional security technology, even if implemented perfectly, does not add up to Trust Management. Narrowly protecting Web transactions by securing them in a closed system will never realize the full potential of the Web to reach out and integrate its users' trusted applications. Certainly, the Web of Trust will be built atop conventional Web Security (a good summary of which appears in [Rubin et al., 1997]), exploiting its services to the hilt: signatures, certificates, secure channels, and so on. Nevertheless, Web Security alone, in the form of crypto-savvy servers and clients, cannot match the criteria set forth in this paper.

Fundamentally, the limitations to Web Security depend entirely on the weaknesses of the individual services themselves. The Web as an information system does not publish political press releases, corrupt youth, sell books, or reprogram computers. It is just a request-response protocol for importing and exporting bags of bits from exotic locations across the network. If you draw a circle around Web clients and Web servers, you actually capture very little of the value people are deriving from the Web, since so much of the Web's power is hidden behind the curtains: CGI-BIN fill-in-the-form handlers, content databases, filesystems, caches, and ever-expanding browsers. "Securing a Web transaction" proves only that a pile of bits has moved from one machine to another without anyone peeking.

There are three levels at which we can protect Web transactions: in the transport layer underlying HTTP, within the HTTP messages themselves, or in the content exchanged atop HTTP [Khare, 1996b]. The transport layer can provide only a secure channel; it cannot be used to reason about the protection of individual "documents" or the protection of different HTTP "actions," since it is oblivious to them. Those decisions are properly part of the application layer, driving the development of security within the HTTP messages. Finally, application developers can circumvent Web security entirely, and build their own end-to-end solutions, by using HTTP as a black-box for exchanging files.

In the transport layer, SSL and its successor TLS can provide channel security only between two processes. A temporary session key is set up for each cryptographic handshake between any client and server. The emphasis, however, is on any: the only way to be sure that the device on the other end speaks for some person or organization is through patches that exchange X.509 hierarchical identity certificates. These protocols alone cannot further establish trust in those principals, because it is an "out-of-band" certification problem. Also, these protocols do not respect the trust topology, as evidenced by their ability to be spoofed with man-in-the-middle attacks in practice [Felten et al., 1997].

At the application layer, where such decisions ought to reside, security features are even weaker. With the lukewarm market acceptance of Secure HTTP (S-HTTP), today's developers have few options for securing Web semantics. Only recent fixes address the trust topology, authenticating to intermediary proxies as well as to the original servers and clients. However, many desirable security mechanisms are still missing. For example, S-HTTP was able to cloak entire transactions, but with its passing, there is no longer a way to hide URLs.

Above the application layer, security is flexible, but its gains are minimal unless the underlying layers are secure. Similar problems occur with other "generic" tools; for example, firewalls and tunnels that form Virtual Private Networks cannot overcome security loopholes in the underlying infrastructure.

Web servers exhibit many potential security weaknesses with respect to the trust taxonomy we have outlined in this paper:

  1. Principles. Web servers often disobey all of the principles in section 2. Servers are often overbroad, and they have difficulty fully establishing trust in any action because they use the Web only as a carrier of information. HTTP servers cannot be specific, because they cannot accurately identify the particular privileges that different groups might have. Also, if you do not know who and how and what extremely well, you cannot trust yourself: Web servers often outsource work to the operating system, which is quite risky in and of itself. Furthermore, such outsourcing makes Web servers overly reliant on other subsystems' security, causing them to be vulnerable at multiple points of entry: the corruptible file system can be clobbered over the FTP, Telnet, and Email channels; viruses can make the operating system repeatedly miserable; and poor extensibility features such as servlets might unintentionally reveal security flaws as well. Adding insult to injury, Web servers have trouble being careful, too. Their logging features are rudimentary (that is, they flood the administrator with information, without any intelligent anomaly detection). Rollback is virtually nonexistent, due to the loose coordination with information sources; in response, WebDAV is attempting to retrofit the server with rollback in one aspect: versioning content.
  2. Principals. Web servers cannot reliably identify any of our three types of principals. Worse, we rely on very weak security when Web servers need to make assertions today: passwords for users are often crackable, IP addresses spoofable, and DNS entries corruptible. Sometimes, we do have the right tool for determining principals, but the implementation is in the wrong layer entirely: SSL client-authorization does not propagate up, and passwords for one-time logins instead have to be reaffirmed for every (stateless) transaction.
  3. Policies. Web servers offer very little flexibility in policy. As discussed in section 4.4, the security policies implementable in today's secure Web servers have difficulty obeying the principles set out in section 2.

Likewise, Web clients manifest many unsafe characteristics:

  1. Principles. Web browsers have difficulty being specific; they work exactly the same for all of the locations in Web space. Users do not get to set policies based on the subject of their conversations, although Microsoft has started providing very primitive "zones" in Internet Explorer 4. Also, Web clients behave the same way for all active content, with no specific limits on the capabilities of components. At the very best, you have a little unbroken key icon, which conveys nothing specific about the company to which you are talking, in contrast to the corporate logos and buildings we rely on today. The net result is that tools offer no means by which to trust yourself. In fact, Web browsers often put us at the mercy of the trust of others: if BigSWPub.Com thinks its 0.99beta is trusted to ship, you have no choice but to trust that the code is intact and ready to run. Users cannot actively establish trust with any of the sites with which they regularly correspond. Web browsers cannot be careful, again because clients are not integrated with hosts, so there is lots of potential for subsystem leaks. And that says nothing of caches possibly sharing information with other "users," bugs in the so-called sandboxed virtual machines, and difficulties with logging activities.
  2. Principals. User machines often have no idea of principals. Regarding humans, for example, Windows 95 has a weak concept of a uniquely-identifiable user, and user IDs were easily cracked in early Windows NT. Identifying computers is difficult too, since the user interface often hides what few location cues there are, leading to spoofer sites [Felten et al., 1997] that complete the illusion by using Javascript to disable the parts of the user interface that indicate with which computer the browser is talking. In fact, the same attack works to subvert users' ability to identify organizations: today, spoofing seems to depend more on realistic counterfeit gif images than anything else. Legitimate DNS battles over trademark status also cause confusion, such as People for the Ethical Treatment of Animals' fight with People Eating Tasty Animals for the Web site peta.org.
  3. Policies. Web clients can barely even claim to have security policies: often, what little protection there is comes compiled in, not user-configurable. As discussed in section 4.4, the results violate all three principles offered in section 2.

Recapitulating the security lessons of the last twenty years is not a very ambitious mark for the Web. We can do better, and we must do better, because computers are not islands any more. Instead, the Web seamlessly interweaves computers with our "real world" activities: production (business), assembly (politics), and friendship (socializing). Web computing, whether we like it or not, is becoming an increasingly important element in the social contract of trust.

7.2 Trust as a Social Contract

Trust is the result of a risk successfully survived.
-- Andy Gibb

Trust is not a decision; it is an ongoing process. Sara Surfer will have to try out her new auto-stock-picker and test its recommendations. Community Boating will have a word or two with you if you capsize a sailboat. Clara Customer can sue the Bank for a penny out of place. In real life, principals build up trust over time, learning from shared experiences; for example, over the time you have read this paper, we the authors have been building trust with you, the reader. These relationships can then be codified as formal and informal contracts on specific issues, with their own measures for success and redress. Finally, communities can emerge from principals who share a trust relationship, ranging from a basketball team to a multinational corporation like MCI. As society coevolves with the Web, Trust Management tools must also automate the learning process, and not just the decisions.

Studying the process of trust-building would seem to be as sticky a tar pit as the philosophy or morality of trust. In fact, though, mathematicians and economists have been studying formal models of trust-building as the field of Game Theory, ever since von Neumann invented it in the 1940s [von Neumann, 1947]. Through simulation and analysis of game trees, we can begin to quantify how trust compares favorably to suspicion, how players discover trust, and how communities that trust each other can emerge.

Consider the classic Prisoner's Dilemma, in which two noncommunicating principals decide whether to reveal to the police that the other one masterminded the crime. If neither principal talks to the police (that is, they trust each other to cooperate without communicating), they both get light jail sentences because the police have no evidence but lots of circumstantial data. If both principals talk to the police (that is, they do not trust each other enough to cooperate without communicating), they both get heavy sentences because the police have lots of evidence of premeditated conspiracy. If one principal talks and the other does not, the one who talks gets out of jail free, while the other one gets an extremely heavy sentence because the freebird is willing to take the stand to convict him. The "dilemma" comes from each party wondering if he can trust the other party without direct communication.

Each player, independently, should choose to defect from the trust relationship, to avoid being sold out by the other in exchange for freedom. Upon reflection, each might even believe that both are better off if they trust each other (thus securing light jail sentences for both), but since the situation is a one-shot trial, double-think and triple-think lead back to a greedy defector's end (thanks to the principle, trust yourself!) and a heavy jail sentence. When game theorists add a twist --- what if the "game" continued day after day indefinitely, so the participants had to play many times? --- we discover that trust emerges as a viable option. However, for trust to take hold, the game must be perceived as ongoing and without foreseeable end: if the cops tell the prisoners they will interrogate them for only a week, the prisoners can look ahead, work backwards, determine that it is in their best interests to squeal, and still always defect.

If the situation is ongoing, however, a whole new pattern emerges: cooperation can become stable (the discovery of trust!). Axelrod added a second twist [Axelrod, 1997]: what if you were playing this "game" against a community of other people, one at a time? He proposed a competition of different behaviors: pitting a wide-eyed always-cooperator, a cynical always-defector, and a host of in-between strategizers against one another in a multiround competition. The clear victory of one strategy proved a final point: an entire community can slowly learn to trust each other over time. The winning strategy was, in fact, tit-for-tat: I will do whatever the other player did to me last round. This strategy outscored every other entrant. So there we have it, the duality between cooperation and retaliation. Apparently learning comes from positive and negative feedback, after all.
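The dynamics are easy to reproduce in a few lines. The following sketch uses the conventional payoff numbers (invented round values; higher is better) and pits tit-for-tat against itself and against a pure defector:

```python
# A minimal iterated Prisoner's Dilemma, enough to watch tit-for-tat
# sustain cooperation. Payoffs are the conventional round numbers
# (higher is better); both strategies are invented test subjects.

C, D = "cooperate", "defect"
PAYOFF = {(C, C): (3, 3), (C, D): (0, 5), (D, C): (5, 0), (D, D): (1, 1)}

def tit_for_tat(opponent_history):
    return opponent_history[-1] if opponent_history else C

def always_defect(opponent_history):
    return D

def play(p1, p2, rounds=10):
    h1, h2 = [], []              # each player's own past moves
    s1 = s2 = 0
    for _ in range(rounds):
        m1, m2 = p1(h2), p2(h1)  # each reacts to the other's history
        a, b = PAYOFF[(m1, m2)]
        s1, s2 = s1 + a, s2 + b
        h1.append(m1)
        h2.append(m2)
    return s1, s2

print(play(tit_for_tat, tit_for_tat))    # (30, 30): stable mutual trust
print(play(tit_for_tat, always_defect))  # (9, 14): stung once, then retaliates
```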

The balance between cooperation and defection is what leads to the abstraction of trust as a contract. Rather than accept the full risk of cooperating while others defect, we rely on effective redress to define the limits of trust (thereby defining trust as a process derived through its contracts). This is not the classical definition of trust: in computer security or Web security, trust is a decision you get by inserting facts and policy and turning the crank. Note that in the real world, transactions are never based on brittle, infinite trust: you are never absolutely guaranteed an airline seat because of overbooking; also, your bank account statement is never final in case errors are discovered. Mechanically, we have two ways to implement these limits.

The first is auditing (that is, logging and analyzing the outcomes of each trust decision). Audits let us detect intruders and identify buggy programs, among other things. Both ActiveX and Java code security use this technique, registering each foreign applet as it loads and tracking its behavior (logging for COM; sandboxing for Java).
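
A minimal sketch of such an audit trail (the class and field names here are our inventions, not any real toolkit's API) records every trust decision in an append-only log, so that violations can at least be detected after the fact:

    # Append-only audit trail for trust decisions -- an illustrative sketch.
    import json
    import time

    class AuditLog:
        def __init__(self, path):
            self.path = path

        def record(self, principal, action, target, granted):
            entry = {"when": time.time(), "who": principal,
                     "did": action, "to": target, "granted": granted}
            with open(self.path, "a") as log:   # append, never rewrite
                log.write(json.dumps(entry) + "\n")

        def denials_for(self, principal):
            # A crude detector: repeated denials may signal a probe or a bug.
            with open(self.path) as log:
                entries = [json.loads(line) for line in log]
            return [e for e in entries
                    if e["who"] == principal and not e["granted"]]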

Executable code systems do not have access to the other power: rollback. An effective redress for a trust violation is to undo its effects as far as possible. For example, in some cases it may be fine to let anyone write to the server, as long as there are backups. If a bank check clears into the wrong account, there is a window for rolling it back (three days in the U.S.). Sometimes rollback is only approximate: fraudulent credit card use costs the holder at most $50 in the U.S., and the bank must absorb any losses beyond that amount.
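
Rollback can be sketched as a compensating transaction inside a fixed redress window; the names and the three-day constant below merely echo the check-clearing example above, and a real ledger would be far more careful:

    # Redress by rollback -- an illustrative sketch.
    import time

    ROLLBACK_WINDOW = 3 * 24 * 3600   # e.g., the 3-day U.S. check window

    class Ledger:
        def __init__(self):
            self.entries = []   # (timestamp, account, amount)

        def transfer(self, src, dst, amount):
            now = time.time()
            self.entries.append((now, src, -amount))
            self.entries.append((now, dst, +amount))
            return now          # the caller keeps this as a rollback handle

        def rollback(self, handle):
            if time.time() - handle > ROLLBACK_WINDOW:
                raise ValueError("redress window has closed")
            # Undo by appending compensating entries, never by erasing
            # history: the audit trail must stay intact.
            for when, account, amount in list(self.entries):
                if when == handle:
                    self.entries.append((time.time(), account, -amount))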

Sometimes the best way to build trust is to have no rollback at all: consider notarized documents. Surety Technologies licenses a clever, simple scheme for entangling your document's notarization signature with those of everyone else in the world who notarizes a document in the same time window [Haber and Stornetta, 1991], yielding a single number at the top of the hash tree, which is published in the New York Times. The only way to forge a timestamp is to get everyone else in on your lie; so even if you do not trust some of the individuals in the tree, you can still trust the tree as a whole. In general, though, if your trust is violated without redress (such as refunding your money), society calls it a crime: libel, theft, even murder...
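
The essence of the scheme can be sketched as a hash chain (our simplification of [Haber and Stornetta, 1991], not Surety's actual product): because each notarization is hashed together with its predecessor, forging any one timestamp requires re-forging every later one.

    # Linked timestamping -- an illustrative sketch.
    import hashlib

    chain = [b"genesis"]   # the first link is arbitrary but public

    def notarize(document: bytes) -> bytes:
        # Entangle this document with every notarization before it.
        link = hashlib.sha256(chain[-1] + document).digest()
        chain.append(link)
        return link        # serves as the document's timestamp certificate

    def widely_witnessed() -> str:
        # Publish this digest in the New York Times; anyone can then verify
        # that their own link is entangled with everyone else's.
        return chain[-1].hex()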

Thus, to model trust fully within the Web, we need to acknowledge that trust is a social contract, a learning process. Our Web of Trust has to work the same way. With respect to filtering objectionable content, we need to learn from our friends, neighbors, and organizations what is and is not acceptable. Machine-readable ratings make this learning possible: we can fetch facts from many people and, because those facts are machine-readable, learn meta-information from them. We want downloadable-code managers that track the ways programs fail or open security holes, and that dynamically remove those programs or patch those holes. We also need to learn in redundant ways; for example, we may need to check a new game against several policies. Is it virus-free? Is it rated above three stars on the PCPundit scale? Is it free of excessive violence and sex? The result of learning may be dynamic, self-modifying policies for taking various actions. This is another good reason for developing a common Trust Management API: PolicyMaker and REFEREE have already demonstrated the power of policies written in many languages (discussed in Section 5.3).
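
The game-checking example might look like the following sketch, in which the policy names and label fields are invented for illustration; a real system would express such policies in a common language in the style of PolicyMaker or REFEREE:

    # Redundant policy checks for one download -- an illustrative sketch.
    def virus_free(labels):
        return labels.get("scanner", {}).get("clean") is True

    def well_reviewed(labels):
        return labels.get("PCPundit", {}).get("stars", 0) > 3

    def family_safe(labels):
        # Missing ratings default to the worst case: fail safe.
        ratings = labels.get("ratings", {})
        return ratings.get("violence", 5) <= 2 and ratings.get("sex", 5) <= 2

    my_policies = [virus_free, well_reviewed, family_safe]

    def may_install(labels):
        # Admit the game only if every subscribed policy says yes.
        return all(policy(labels) for policy in my_policies)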

Representing this rich and subtle model of trust will help us contain the Pandora's box of complexity we have unleashed. In the conventional security world, generations of eunuchs worked to define the security perimeter of an application as narrowly as possible and to make trust decisions mechanical. Now we insist that the Web of Trust support all manner of applications that reach out of the box, intertwining many parties' trust concerns. It is not enough to distribute a document securely. We need to build confidence that the client and server computers are exchanging documents with integrity and privacy; that the author was trusted to speak for the organization; and that the reader can trust the author's words. In embracing the messy, real-world nature of trust, we naturally encounter the limits of Trust Management: we can only automate trust decisions that we can ultimately audit and/or roll back, so we will need people in the loop if we are going to bring our communities onto the Web.

7.3 Trust in the Mirror

A Mirror World is some huge institution's moving, true-to-life mirror image trapped inside a computer --- where you can see and grasp it whole. The thick, dense, busy subworld that encompasses you is also, now, an object in your hands...
-- David Gelernter, Mirror Worlds [Gelernter, 1991]

With each new home page and mailing list and transaction, another community finds its reflection on the World Wide Web. Soon, their trust relationships --- the very essence of community --- will be automated onto the Web, too. As the Web and its descendants evolve into a Mirror World, they will need to adapt to human trust relationships; but just as inevitably, human trust relationships will have to adapt to digital management. Mirror Webs will distort the nature of trust --- and thus of communities --- by creating new kinds of agreements and by shattering old ones.

Consider the changes wrought upon one very large community being reflected onto the Internet: people who use money. A currency represents a community of customers, merchants, citizens, and bankers who choose to trust it --- after all, even the Almighty Dollar is inscribed "In God We Trust." There is also a litany of payment systems atop each currency: checks, credit cards, debit cards, and so on. The race is on to find their electronic equivalents, but not their clones. The root of the difference is that in cyberspace, cryptography has to substitute for atoms: gone are the old certainties of pen-and-ink signatures, fingerprints, and the physical truth that if I spend a dollar, I cannot keep it.

The new risks of the electronic world --- sniffers instead of snipers --- have catalyzed a slew of competing electronic payment systems [Sirbu, 1997]. Each is a dance between the same four players: payer, payee, and their respective banks, but with different steps. The resulting combinations of information flows, trust relationships, and risk allocations rewrite the social contracts establishing banks, credit card associations, and other financial communities. Their foundations reside in the trust required to accept risks, but instead of holding a mirror to today's money, the Internet has become more like a kaleidoscope.

The Mirror Web also magnifies latent flaws in existing trust relationships. Consider the U.S. Social Security Administration's ill-fated attempt to put its records on the Web. Each American worker has a trust relationship with the SSA regarding his pension, sealed by the "secrecy" of his Social Security Number, mother's maiden name, and birth state. For decades, those were the keys to obtaining one's Personal Earnings and Benefit Estimate Statement (PEBES). When the exact same interface was reflected on the Web, nationwide outrage erupted over the perceived loss of privacy, resulting in a hurried shutdown and "reevaluation" [Garfinkel, 1997].

In this case, fast and easy HTTP access raised a potential for large-scale abuse not present in the original postal system. The SSA is stuck with a trust relationship that is not represented by a corresponding secret, so cryptography cannot solve its problem. The irony, though, is that the SSA does share one secret record with each worker: that worker's earnings history --- which is why workers request a PEBES in the first place!

In the end, there will have to be a more secure way of accessing such records --- perhaps a digital identity certificate corresponding to today's Social Security Card. Such precautions may even strengthen how the "traditional" paper system works, demonstrating one kind of distortion from the Mirror Web: because cryptography can offer much stronger proof than traditional means, trust relationships will tend to be cemented with the shared secrets that enable those protocols, such as PINs, shared keys, and credentials. A second distorting consequence of the Mirror Web is that making trust easier to establish, audit, and revoke will increase the number of boundaries and trust relationships in society. If the transaction costs of creating and maintaining trusted communities (such as corporations) fall, the inevitable result will be smaller communities. If economic downsizing is not a distorting enough aspect of the Mirror Web, consider an Information Society in which fringe groups can "drop out" of the mainstream and stay that way through narrowcasting. Even non-fringe groups can have an impact: until China upholds intellectual property rights or India recognizes process patents, no amount of cryptography can force their PCs to do so...

8. Weaving a Web of Trust

What we are concerned with here is the fundamental interconnectedness of all things.
-- Douglas Adams, Dirk Gently's Holistic Detective Agency

In summary, adopting the philosophy of Trust Management does not change much of the existing technology; rather, it changes our attitudes about how to apply that technology. The philosophy comprises:

Principles
When deciding to trust someone to take some action with some object, it is absolutely critical to be specific about the privileges granted; to trust yourself when vouching for the claim; and to be careful before and after taking that step.
Principals
The decision to grant trust is constructed from a chain of assertions, leading to permission. There are three kinds of actors making the assertional links based on their particular identity lifetimes: people make assertions with broad scope, bound to their long-lived names; computers make narrow proofs of correct operation from their limited-scope addresses; and organizations make assertions about people and computers because they have the widest scope of all. Credentials describe each kind of principal and its relationships, such as membership and delegation.
Policies
There are rules about which assertions can be combined to yield permission. Broadly speaking, policies can grant authority based on the identity of the principal asking; the capability at issue; or an object already in hand. In other words, you might be trusted based on who you are, what you can do, or what you have (see the sketch after this list).
Pragmatics
Deploying a Trust Management infrastructure across so many administrative boundaries on the open, distributed Web requires adapting to pragmatic limitations. Since objects can live anywhere on the Web, so can their security labels. Furthermore, such labels should use a common, machine-readable format that recursively uses the Web to document its language. The real benefits of Trust Management come from tying all of these details together within a single Trust Management engine. This will drive a handful of standard protocols, formats, and APIs for representing principals and policies.
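
As promised above, the three bases for granting authority can be rendered as a toy sketch (every name below is hypothetical):

    # Who you are, what you can do, or what you have -- an illustrative sketch.
    valid_tickets = {"t-7f3a"}   # bearer tokens issued out of band

    def identity_policy(request):
        # Who you are: grant by name on an access-control list.
        return request.get("principal") in {"alice", "ops-team"}

    def capability_policy(request):
        # What you can do: grant if the request carries the right capability.
        return "approve-payment" in request.get("capabilities", ())

    def bearer_policy(request):
        # What you have: grant to whoever holds a recognized token.
        return request.get("ticket") in valid_tickets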

We note, however, that Trust Management is no silver bullet. It will be a long struggle to weave a Web of Trust out of the scattered parts invented in our quest for Web Security. There is a lot going for the always-cheaper, always-simpler, head-in-the-sand approach of merely securing Web transactions, instead of attempting to secure Web communities. Everyone has a role to play in bringing this vision to fruition:

Web Developers
The people and organizations ultimately responsible for reducing Web standard formats, protocols, and APIs to practice in software and hardware should be committed to developing Trust Management technologies. They should become engaged in the current standardization debates surrounding public key infrastructure (the SPKI/SDSI working group at the IETF); digital signatures (in the legislatures and courts as well as IETF and W3C); and formats for adding security and trust metadata to the Web (at W3C). Looking further ahead, they should be aware of the latest research in this new approach to computer security, such as the 1996 DIMACS Workshop on Trust Management [Brickell et al., 1996].
Web Users
Users have the power to make developers follow this agenda: this is the power of the purse. Web users should be aware of the laundry list of trust decisions confronting them every day: whether they are talking to the right organization, whether they should run an applet, or whether to allow their children to access a site. They are in the best position to demand that client and server toolmakers help automate these judgments.
Application Designers
The business people, programmers, and regulators responsible for creating and controlling new, secure Web applications should use the concepts identified in this paper to identify and control security risks. It is not merely a cryptographer's problem to uphold the principles of Trust Management, identify principals, construct policies, and integrate them with the Web. Each party to application development should think carefully about whom they are trusting, in what roles, to permit some action, for some time period.
Citizens
The emergence of the Web as a social phenomenon will even affect people who do not use the Web. As informed citizens, we have to consider the impact of automating trust decisions and moving our human bonds into WebSpace. Trust Management tools allow communities of people to define their own worldview --- but should we allow the KKKnet to exclude the truth about the Holocaust? What are the social consequences of fragmenting our trust communities?

If we all work together, automatable Trust Management could indeed weave a World Wide Web of Trust, spun from the filaments of our faith in one another.

Acknowledgments

A broad survey paper springboards from the minds of innumerable correspondents. Weaving a Web of Trust is based on over two years' experience working with the Web security community. Particular plaudits go to colleagues at the World Wide Web Consortium, including Jim Miller, Phill Hallam-Baker, and Philip DesAutels; W3C's Security Editorial Review Board, including Ron Rivest, Butler Lampson, Allan Schiffman, and Jeff Schiller; the AT&T-centered team behind the "Trust Management" concept, including Joan Feigenbaum, Yang-Hua Chu, and Brian LaMacchia; and fellow Web security researchers Carl Ellison and Mary Ellen Zurko. We also thank Ross Anderson and Megan Coughlin for their suggestions to improve this document.

References

  1. Robert Axelrod. The Complexity of Cooperation: Agent-Based Models of Competition and Collaboration, Princeton University Press, 1997. Available at http://pscs.physics.lsa.umich.edu/Software/ComplexCoop.html
  2. David E. Bell and L.J. LaPadula. Secure Computer Systems: Unified Exposition and Multics Interpretation, MTR-2997 Revision 1, MITRE Corporation, Bedford, MA, March 1976.
  3. Matt Blaze, Joan Feigenbaum, and Jack Lacy. Decentralized Trust Management, Proceedings of the 1996 IEEE Symposium on Security and Privacy, IEEE Computer Society Press, Los Alamitos, Pages 164-173, 1996. Available as a DIMACS Technical Report from ftp://dimacs.rutgers.edu/pub/dimacs/TechnicalReports/TechReports/1996/96-17.ps.gz
  4. Matt Blaze, Joan Feigenbaum, and Jack Lacy. The PolicyMaker Approach to Trust Management, DIMACS Workshop on Trust Management in Networks, South Plainfield, NJ, September 1996. (1996b) Available at http://dimacs.rutgers.edu/Workshops/Management/Blaze.html
  5. Matt Blaze, Joan Feigenbaum, Paul Resnick, and Martin Strauss. Managing Trust in an Information-Labeling System, European Transactions on Telecommunications, 1997. Available as AT&T Technical Report 96.15.1, http://www.si.umich.edu/~presnick/papers/bfrs/
  6. Tim Bray and C.M. Sperberg-McQueen. Extensible Markup Language (XML): Part I. Syntax, World Wide Web Consortium Working Draft (Work in Progress), March 1997. Available at http://www.w3.org/pub/WWW/TR/WD-xml-lang.html
  7. Ernie Brickell, Joan Feigenbaum, and David Maher. DIMACS Workshop on Trust Management in Networks, South Plainfield, NJ, September 1996. Available at http://dimacs.rutgers.edu/Workshops/Management/
  8. C. Brookson. GSM Security: A Description of the Reasons for Security and the Techniques, IEEE Colloquium on Security and Cryptography Applications to Radio Systems, Pages 1-4, 1994. Available at http://btlabs1.labs.bt.com/bookshop/papers/4720987.htm
  9. Bruce Caldwell. Wal-Mart Ups The Pace, InformationWeek, Pages 37-51, December 9, 1996.
  10. David Chappell. Understanding ActiveX and OLE, Microsoft Press, 1996.
  11. Yang-Hua Chu, Joan Feigenbaum, Brian LaMacchia, Paul Resnick, and Martin Strauss. REFEREE: Trust Management for Web Applications, Proceedings of the Sixth International World Wide Web Conference, Santa Clara, CA, April 1997. Available at http://www6.nttlabs.com/HyperNews/get/PAPER116.html
  12. Brad Cox. Superdistribution: Objects as Property on the Electronic Frontier, Addison-Wesley, 1996.
  13. Drew Dean, Edward W. Felten, and Dan Wallach. Trust Management In Web Browsers, Present and Future, DIMACS Workshop on Trust Management in Networks, South Plainfield, NJ, September 1996. Available at http://dimacs.rutgers.edu/Workshops/Management/Felten.html
  14. Dorothy E. Denning. A Lattice Model of Secure Information Flow, Communications of the ACM, Volume 19, Number 5, Pages 236-243, May 1976.
  15. Dorothy E. Denning. Cryptography and Data Security, Addison-Wesley, 1982.
  16. Philip DesAutels, Yang-hua Chu, Brian LaMacchia, and Peter Lipp. DSig 1.0 Signature Labels: Using PICS 1.1 Labels for Making Signed Assertions, W3C Working Draft (Work in Progress), November 1997. Available at http://www.w3.org/TR/WD-DSIG-label-971111.html
  17. Tim Dierks and Christopher Allen. The TLS Protocol, Version 1.0, Internet Draft (Work in Progress), May 1997. Available at ftp://ietf.org/internet-drafts/draft-ietf-tls-protocol-03.txt
  18. Department of Defense / NCSC. Trusted Computer System Evaluation Criteria ("The Orange Book"), DoD 5200.28-STD, 1985. Available at http://www.radium.ncsc.mil/tpep/library/rainbow/5200.28-STD.html
  19. The Economist. Tremble, Everyone, Economist Survey on Electronic Commerce, Page 10, May 10, 1997.
  20. Carl Ellison. SPKI Certificates, DIMACS Workshop on Trust Management in Networks, South Plainfield, NJ, September 1996. Available at http://dimacs.rutgers.edu/Workshops/Management/Ellison.html ; see also the SPKI page at http://www.clark.net/pub/cme/html/spki.html
  21. Carl Ellison, Bill Frantz, Ron Rivest, and Brian M. Thomas. Simple Public Key Certificate, Internet Draft (Work in Progress), April 1997. Available at http://www.clark.net/pub/cme/spki.txt
  22. Electronic Privacy Information Center. Surfer Beware: Personal Privacy and the Internet, June 1997. Available at http://www.epic.org/reports/surfer-beware.html
  23. Edward W. Felten. Security Tradeoffs: Java vs. ActiveX, April 1997. Available at http://www.cs.princeton.edu/sip/java-vs-activex.html
  24. Edward W. Felten, Dirk Balfanz, Drew Dean, and Dan S. Wallach. Web Spoofing: An Internet Con Game, Princeton University Technical Report 540-96 (revised), Revised February 1997. Available at http://www.cs.princeton.edu/sip/pub/spoofing.html
  25. Roy Fielding, Jim Gettys, Jeff Mogul, Henrik Frystyk, and Tim Berners-Lee. Hypertext Transfer Protocol -- HTTP/1.1, RFC 2068, January 1997. Available at http://www.w3.org/Protocols/rfc2068/rfc2068
  26. John Franks, Phill Hallam-Baker, J. Hostetler, P. Leach, A. Luotonen, E. Sink, and L. Stewart. An Extension to HTTP : Digest Access Authentication, RFC 2069, January 1997. Available at http://www.w3.org/Protocols/rfc2069/rfc2069
  27. Alan O. Freier, Philip Karlton, and Paul C. Kocher. The Secure Sockets Layer Protocol, Version 3.0, Internet Draft (Work in Progress), November 1996. Available at ftp://ietf.org/internet-drafts/draft-ietf-tls-ssl-version3-00.txt
  28. Simson Garfinkel. PGP: Pretty Good Privacy, O'Reilly and Associates, 1994.
  29. Simson Garfinkel. Few Key Bits of Info Open Social Security Records, USA Today, Page A1, May 12, 1997.
  30. David Gelernter. Mirror Worlds: The Day Software Puts the Universe in a Shoebox... How it Will Happen and What it Will Mean, Oxford University Press, 1991.
  31. Shishir Gundavaram. CGI Programming on the World Wide Web, O'Reilly and Associates, 1996.
  32. Stuart Haber and W. Scott Stornetta. How to Time-Stamp a Digital Document, Journal of Cryptology, Volume 3, Number 2, Pages 99-112, 1991.
  33. T. Howes and M. Smith. The LDAP Application Program Interface, RFC 1823, August 1995. Available at ftp://ietf.org/internet-drafts/draft-howes-ldap-app-00.txt
  34. Rohit Khare. Using PICS Labels for Trust Management, DIMACS Workshop on Trust Management in Networks, South Plainfield, NJ, September 1996. Available at http://dimacs.rutgers.edu/Workshops/Management/Khare.html
  35. Rohit Khare. Security Extensions for the Web, RSA Data Security Conference, 1996. (1996b) Available at http://www.w3.org/pub/WWW/Talks/960119-RSA/
  36. Rohit Khare. Digital Signature Label Architecture, World Wide Web Journal special issue on security, Volume 2, Number 3, pages 49-64, Summer 1997.
  37. Loren M. Kohnfelder. Towards a Practical Public-Key Cryptosystem, B.S. thesis supervised by Len Adleman, May 1978.
  38. Martijn Koster. A Method for Robots Control, Internet Draft (Work in Progress), draft-koster-robots-00.txt, December 1996. Available at http://info.webcrawler.com/mak/projects/robots/norobots-rfc.html
  39. Butler Lampson, Martin Abadi, Michael Burrows, and Edward Wobber. Authentication in Distributed Systems: Theory and Practice, Digital SRC Research Report 83, February 1992. A nice mathematical treatment of trust, available at http://gatekeeper.dec.com/pub/DEC/SRC/research-reports/abstracts/src-rr-083.html
  40. Butler Lampson and Ron Rivest. SDSI -- A Simple Distributed Security Infrastructure, DIMACS Workshop on Trust Management in Networks, South Plainfield, NJ, September 1996. Available at http://dimacs.rutgers.edu/Workshops/Management/Lampson.html ; see also the SDSI page at http://theory.lcs.mit.edu/~cis/sdsi.html
  41. Nancy G. Leveson and C.L. Turner. An Investigation of the Therac-25 Accidents, IEEE Computer, July 1993. Also appears as Appendix A of Safeware: System Safety and Computers by Nancy Leveson, Addison Wesley, 1995.
  42. Microsoft. Microsoft Authenticode Technology, 1997. Available at http://www.microsoft.com/security/tech/misf8_2.htm
  43. Peter G. Neumann. Computer-Related Risks, Addison Wesley, 1995.
  44. NIST/NSA. Federal Criteria for Information Technology Security, Volumes 1 and 2, Version 1.0, December 1992. Federal Information Processing Standard (FIPS) to replace the NCSC's "Orange Book".
  45. Steve Park and Keith Miller. Random Number Generators: Good Ones Are Hard to Find, Communications of the ACM, Volume 31, Number 10, Pages 1192-1201, October 1988.
  46. Paul Resnick and Jim Miller. PICS: Internet Access Controls without Censorship, Communications of the ACM, Volume 39, Pages 87-93, 1996. Available as http://www.w3.org/pub/WWW/PICS/iacwcv2.htm
  47. Avi Rubin, Dan Geer, and Marcus Ranum. Web Security Sourcebook, John Wiley and Sons, 1997. Available at http://www.clark.net/pub/mjr/websec/oview.htm
  48. Bruce Schneier. Applied Cryptography: Protocols, Algorithms, and Source Code in C, Second Edition, John Wiley and Sons, 1996. Available at http://website-1.openmarket.com/techinfo/applied.htm
  49. Robert Bruce Shaw. Trust in the Balance: Building Successful Organizations on Results, Integrity, and Concern, Jossey-Bass Publishers, San Francisco, 242 Pages, 1997.
  50. Marvin A. Sirbu. Credits and Debits on the Internet, IEEE Spectrum, Pages 23-29, February 1997.
  51. J.A. Slein, F. Vitali, E. Jim Whitehead Jr., and D.G. Durand. Requirements for Distributed Authoring and Versioning on the World Wide Web, Internet Draft (Work in Progress), May 1997. Available at ftp://ietf.org/internet-drafts/draft-ietf-webdav-requirements-00.txt
  52. Lincoln D. Stein. The World Wide Web Security FAQ, version 1.3.7, May 1997. Available at http://www-genome.wi.mit.edu/WWW/faqs/www-security-faq.html
  53. Ken Thompson. Reflections on Trusting Trust, Communications of the ACM, Volume 27, Number 8, Pages 761-763, August 1984. Available at http://www.acm.org/classics/sep95/
  54. John von Neumann and Oskar Morgenstern. Theory of Games and Economic Behavior, Second Edition, Princeton University Press, 1947.
  55. Dan S. Wallach, Dirk Balfanz, Drew Dean, and Edward W. Felten. Extensible Security Architectures for Java, Princeton University Technical Report 546-97, April 1997. Available at http://www.cs.princeton.edu/sip/pub/extensible.html

Author Addresses

Rohit Khare, Rohit@4K-Associates.com

Rohit Khare joined the Ph.D. program in computer science at the University of California, Irvine in Fall 1997. Prior to that, he served as a member of the MCI Internet Architecture staff in Boston, MA, and worked on the technical staff of the World Wide Web Consortium at MIT, where he focused on security and electronic commerce issues. He has been involved in developing cryptographic software tools and Web-related standards. Rohit received a B.S. in Engineering and Applied Science and in Economics from the California Institute of Technology in 1995.

Adam Rifkin, Adam@4K-Associates.com

Adam Rifkin received his B.S. and M.S. in Computer Science from the College of William and Mary. He is presently pursuing a Ph.D. in computer science at the California Institute of Technology, where he works with the Caltech Infospheres Project on the composition of distributed active objects. His efforts with infospheres have won best paper awards both at the Fifth IEEE International Symposium on High Performance Distributed Computing in August 1996, and at the Thirtieth Hawaii International Conference on System Sciences in January 1997. He has done Internet consulting and performed research with several organizations, including Canon, Microsoft, Hewlett-Packard, Griffiss Air Force Base, and the NASA-Langley Research Center.

Modification information:

$Id: trust.html,v 1.126 1997/11/30 10:40:20 adam Exp adam $