
Published in ACM SIGSAC Review, Vol. 9, No. 3, Summer 1991.

Security Management
--"Hey! How do you steer this thing?"--

Bill Neugent
McLean, Virginia*

It had to be true. It was in the Washington Post. After 72 unsuccessful attempts to pass the required driving test, the District of Columbia police academy candidate was rejected in her bid to join the force. This clearly was the correct outcome. Yet I could not help but wonder, "What if she had passed on that 72nd try?" It seemed to me possible that the academy somewhat lacked an understanding of what it means to demonstrate basic competence.

[Cartoon: grandfather Ubald -- "So you just push on this pedal?"]

Vulnerability to Hackers

The newspaper story caused me to contemplate the competence of our computer security (COMPUSEC) police force -- the people who manage security for our computers. I was concerned because, in the COMPUSEC field, we do not have a driving test to screen out unacceptable candidates. In fact, system managers take no tests at all, except for those managed in real time by Wily Hackers, Internet worms, and other such proctors. Unfortunately, our COMPUSEC officers have not done so well on these tests.

How fortunate that we have on our side such vigilant stalwarts as Cliff Stoll, who tracked down the Wily Hacker, and Gene Spafford, who helped analyze the Internet worm [1,2]. They are justified in their indignation at the unethical behavior of some hackers (e.g., those who break into UNIX systems and "root" around) and the lack of protection of so many systems. That's why it was especially ironic when a hacker, himself indignant at their disdainful statements on outlaw hackers, decided to embarrass Cliff and Gene by penetrating their systems over the Internet [3]. And so he did, also penetrating a number of other systems. Well, when our counselors of prudence are themselves found guilty of imprudence, what are we to do? Surely we have no choice but to await the arrival of trusted systems.

Trusted Systems

The hope for trusted systems is what makes the National Aeronautics and Space Administration (NASA) Space Physics Applications Network (SPAN) penetrations of 1987 so dismaying. Here was a system -- Digital Equipment Corporation's (DEC's) VMS -- that had been officially evaluated and rated by the National Security Agency (NSA) and found to provide an acceptable level of security. But no sooner had it been rated than some enhancements were made that had the unfortunate side effect of creating a security flaw. Some hackers learned of this when DEC issued the fix, and set about penetrating VMS systems all over the network. This turned out to be easy, since so many system managers hadn't heard of the problem or fix and hadn't installed the change when it came [4].

But why dwell on this unusual case? Why not look instead at trusted systems that don't trip over their own banana peels? Actually, you'd think that effective security would be a strong selling point. That's what Gould thought, too. That's why, at the 1987 UNIX Expo show in New York, Gould offered a free color television to anyone who could compromise the security of its UTX/32S, which also had an NSA rating. Unfortunately, the television was soon given to someone who, being clever, chose to ignore the technological defenses and distract the system manager by the technological equivalent of saying, "Hey, isn't that Dolly Parton over there?" [5]** Of course, from the perspective of the typical trusted system designer, this particular penetration proved nothing, because the penetrator had not played fair, but had gone around the defenses rather than through them.

Really Trusted Systems

Now by dwelling on these cases I don't want to appear too one-sided. The fact is that these trusted systems I've mentioned have had only "C" ratings, which is what NSA considers to be average. Nowadays new systems are becoming available that have "B" ratings, which means that their security is pretty good. In fact, these new systems are so trustworthy we plan to trust them to protect military secrets.

     "What?!" you're thinking. "What?! Whose military secrets? Not ours, surely?"

But don't worry. First of all, we don't have any military secrets. But even if we did, these new systems are going to be really secure, because they're built that way from the ground up. I've talked with people working with draft versions of some of the systems. The systems aren't perfect yet, of course. There are the typical silly errors. For example, on one you could read your most highly classified mail only by logging in as uncleared. That's the old "Got it Backwards" error. On another system any user could log in at the highest security level, regardless of his clearance. But don't worry. When the final versions of these babies arrive, most of these wrinkles will have been ironed out.

The main problem with these new systems is the same problem that we have with cars: they go where you point them. This is a problem in that these new systems, like the ones that preceded them, have complex security management interfaces. The systems thus presume system managers with the demeanor of Mr. Spock (the Vulcan -- not the baby expert). At first glance, the reasons for such complex security management interfaces are difficult to fathom. What drives technologists to dream up solutions that make such faulty assumptions about human abilities, patience, diligence, and effectiveness? Are the technologists naive or just mischievous, or are they living up to their reputation as being people who can do differential calculus but can't find their car keys?

Reasons for Unreasonable Complexity

There are several reasons for these complex interfaces. The main reason is that there is just a certain irreducible amount of work and tedium involved in user registration, permission management, audit data analysis, and all those other things security managers do. Ironically, this is one area where we Americans should have an inherent advantage, in that security management is done through a TV screen. The problem with that, of course, is that the security manager channel is too boring.*** But as a result of this complexity, sometimes seemingly subtle errors in system manager etiquette can have dire consequences. This problem could be lessened by system warning labels, as are used on step ladders, but I suspect that there already is more than enough labeling going on in B systems.

Another reason for complex security management interfaces in the new, trusted systems derives from the nature of trustworthy computers. For the computer to be trustworthy, the security software has to be simple. If you add a bunch of security software to simplify the security management interface, then the system gets too complex to be trustworthy.

Now it's easy to see why readers might be a bit confused about this apparent need for trustworthy systems to have complex security management interfaces. I was confused myself 15 years ago when I first started working with the forerunner of one of these trusted systems. I even had the naivete to blurt out, "But what if the system manager just makes a mistake and labels the data wrong?" With exasperated looks, the experts slowly explained in their best parent-to-child tones:

"There's nothing we can do about that, is there? That's outside our scope. The system manager is a trusted component. If you couldn't trust the system manager, what would be the point of all this, anyway? The system manager has to be trusted."

Being Human

So you can see the genesis of the idea. But since that time, evidence has been growing to support a different idea, the idea that many trusted humans are, well, only human, and thus imperfect. Some particularly reckless researchers have even suggested that computers some day will be more trustworthy and reliable than people.

I've heard it said that everyone hates system management, but that's obviously not true. Not everyone has even tried system management. Regardless, the problem we continue to face with the new breed of trusted systems is that their security management interfaces are more complex than airline cockpits. The main difference is that, unlike cockpit errors, security management errors do not pose the threat of instant death. While that feature could be added, there are other preferable alternatives.

[Cartoon: brother John -- "Hi, I'm the new sysadmin."]

One alternative being investigated by trusted system vendors is to reduce complexity by reducing system capability. This does not promise to be a popular approach, except possibly for those people who own Trabants.

Another approach is to remind potential customers that system management can be greatly simplified just by turning security off. This gives users the best of both worlds, since they are not encumbered by security and yet can rightly say that they are using a highly-rated trusted system.

An approach that offers promise is that used in some highly-trusted systems -- separating security management from other aspects of system management. This enables minimization of the likelihood and impact of errors made in either domain. Or, to use more sophisticated security jargon, if you can't "dumb down" the job, at least make it "bulletproof."

The bottom line from all this is that we need to focus more attention on the system manager and on ways to simplify system management. While this might not improve the effectiveness of system managers, the attention should at least improve their sense of self.

Waiting for Poirot

Achievement of humane, secure system management clearly is a mystery that remains to be solved. Until it is, we should heed two rules:

Consider not only the benefits, but also the costs of improved security. Keep in mind what 40 years of successful security did for the Eastern Bloc.

Consider the trust you place in computers. Remember the example of the hackers who were exchanging notes via email on ways to break into and peruse systems. They were astonished that The Feds had violated their "privacy" by reading their email notes [6]. Personally, I was intrigued to discover that such a wimpy legal concept as privacy could have any relevance whatsoever to people and clubs with names such as Bloodaxe and the Legion of Doom.

We have to realize that, even as we improve on the security of today's systems, limited COMPUSEC effectiveness is all we can ever achieve. And maybe this is not so bad. Maybe, instead of trusting newer computers to take risks we don't dare to take now, we should instead continue to use them as we do today, and just sit back and savor the improvement. Maybe we would all benefit if, recognizing that it is not possible to build perfect defenses against each other, we concentrate instead on trying harder to get along together.


* For five years I wrote from Heidelberg, Germany, where I supported American military forces in the field. That's reality out there. And I must say that after five years of reality, it was great to move back to the Washington area.

** Actually, the technique was to write a program with a Trojan Horse in it and then ask the system manager for help with the program. When the system manager -- trying to be helpful and forgetting to be on his guard -- called up the program to look at it, the program stole his privileges.
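The trick described in this footnote can be sketched as a toy simulation (a hypothetical illustration of the idea, not the actual 1987 contest programs): a user-supplied program runs with whatever privileges its caller holds, so a helpful administrator who runs it can unknowingly hand his own privileges to its author.

```python
# Toy model of the Trojan Horse trick: programs execute with the
# privileges of whoever runs them, not of whoever wrote them.
# All names here are invented for illustration.

class System:
    def __init__(self):
        self.privileges = {"manager": {"admin"}, "contestant": set()}

    def run(self, caller, program):
        # The system executes the program on behalf of the caller,
        # so the program acts with the caller's privilege set.
        program(self, caller)

def innocent_looking_program(system, caller):
    # Visible behavior: something plausible the contestant "needs help with".
    print("helpful output")
    # Hidden payload: if the caller happens to be privileged,
    # quietly grant those privileges to the attacker's account.
    if "admin" in system.privileges[caller]:
        system.privileges["contestant"].add("admin")

s = System()
s.run("contestant", innocent_looking_program)  # run by its author: no effect
s.run("manager", innocent_looking_program)     # run by the helpful manager
print(s.privileges["contestant"])              # → {'admin'}
```

The defense the footnote implies is procedural, not technical: a privileged account should never execute untrusted user code directly.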

*** Another aspect of the problem is that, even when security management is done perfectly, system security remains dependent upon the security consciousness of every single user. This is like a car in which, in addition to the main steering wheel, there also are independent steering wheels on each tire.


1. Stoll, Clifford (1989), The Cuckoo's Egg: Tracking a Spy Through the Maze of Computer Espionage, Doubleday.

2. Spafford, Eugene H. (January 1989), "The Internet Worm Program: An Analysis," ACM Computer Communication Review, Vol. 19, No. 1.

3. Alexander, Michael, Ellis Booker (26 March 1990), "Internet interloper targets hacker critics," COMPUTERWORLD.

4. Marbach, W. D., A. Nagorski, and R. Sandza (28 September 1987), "Hacking Through NASA," NEWSWEEK.

5. Smith, K. (February 1988), "Tales of the Damned," UNIX Review, Vol. 6, No. 2.

6. (Spring 1990), "An Overview," 2600 Magazine.

ACM COPYRIGHT NOTICE. Copyright © 1991 by the Association for Computing Machinery, Inc. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from Publications Dept, ACM Inc., fax +1 (212) 869-0481, or permissions@acm.org. This copy is posted by permission of ACM and may not be redistributed.

