
TAG Cyber Law Journal

January 2020
WHAT DO YOU MEAN WE CAN’T MEASURE CYBERSECURITY?
Having spent her career working on systems security metrics, an expert is stunned to hear that what she does can’t be done.
INTERVIEW: JENNIFER BAYUK / AUTHOR & CONSULTANT
After Jennifer Bayuk read our interview with Paul Rosenzweig in December, she had a sense of déjà vu. Rosenzweig spoke about a project he’s working on that is of great interest to Bayuk. It’s about cybersecurity and metrics. Bayuk happens to be writing a book on the subject. She’s spent most of her career grappling with it, in one form or another. But the part that got her attention was Rosenzweig’s contention that we haven’t developed the ability to measure cybersecurity. “What??” Bayuk says. She was stunned. It was the second time in a year that she’d heard that. If she’s deluded in thinking otherwise, Bayuk says, at least she’s not alone. She’s not the only one who has worked in the field and has written extensively on the subject. In fact, she points out, there’s a community of security metrics researchers who communicate, share research and occasionally get together. Maybe they need a good publicist.
     Bayuk earned a doctorate in systems engineering at Stevens Institute of Technology, teaches graduate classes in cybersecurity risk management, and has extensive experience working in the field. Early in her career she worked in network security at Bell Laboratories and information security at Price Waterhouse. She spent a decade at Bear Stearns, the last six years as chief information security officer (CISO). After the company’s demise, she went on to work at Citi and JPMorgan Chase. These days, most of her time is devoted to writing, teaching and consulting.


CyberInsecurity News: What made you decide to get a doctorate in systems engineering—years after you’d earned a master’s in computer science?
Jennifer Bayuk: I was consulting at the time, and I was invited to Stevens to talk about a new master’s program that they wanted to offer in systems security engineering. I came on as an employee, because that’s the only way they could hire a professor to develop a master’s program. They wanted me to audit all of the advanced systems engineering classes to make sure that the program was consistent. Since I was auditing all of these classes, and I was an employee, I took them for credit. And I started building credits toward a Ph.D. I was also, on Stevens’ behalf, running a research program for the Department of Defense in their systems engineering research center. So I ran the research program, and I was able to do research for my Ph.D. as part of it. And I got professors from other universities who were part of the research program onto my dissertation committee.

CIN: Let me get this straight. You mean Stevens paid you to get a Ph.D.?
JB: [laughs] Yes. But they really were paying me to develop a master’s program and to teach a lot of classes in systems security engineering.

CIN: By then you had been in the business world. I guess you’d learned a few things.
JB: I saw an opportunity. And I should mention something that is relevant to this conversation. My Ph.D. thesis was in systems security metrics.

CIN: Can you give us a brief history of information security metrics? What was it like in the very early days, and what were people measuring?
JB: It started with one computer, and how computer scientists would know that the information was secure, which they defined using three levels of confidentiality: top secret, secret and unclassified. They wrote theorems for how the labels on data corresponded to the labels on people, and the relationships between them that would maintain the security through various operations that the computer performed. They developed formal models for demonstrating that machines were secure—or not. The idea was that no file labeled “top secret” could ever end up in the hands of someone whose label was “unclassified.”
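
As a rough illustration of the hierarchical model Bayuk describes (a toy sketch, not something from the interview), a “no read up” check over clearance levels might look like this in Python:

# Toy sketch of a hierarchical-label check: a reader may only see data
# whose classification is at or below the reader's clearance.
LEVELS = {"unclassified": 0, "secret": 1, "top secret": 2}

def may_read(reader_clearance: str, data_label: str) -> bool:
    """'No read up' rule: the reader's clearance must dominate the data label."""
    return LEVELS[reader_clearance] >= LEVELS[data_label]

assert may_read("top secret", "unclassified")      # allowed
assert not may_read("unclassified", "top secret")  # blocked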

CIN: Were these government computers?
JB: This was a more general computer security model that was meant to be used by anyone. However, because the funding came from the government, and most of the secret information on computers back then came from government-classified documents, they used the government classifications. And this was one of the huge mistakes in early computer security. In business, we don’t classify things hierarchically. It’s not like the CEO gets to see secrets and nobody else does. When it comes to permissions in the corporate world, it’s very much a matrix organization. That’s why the three-tiered model was ultimately found inadequate. For example, the Payment Card Industry Data Security Standard (PCI DSS) labels data for what it is, such as personally identifiable information, not with some abstract label that says secret or top secret. Because the job functions that need data are not labeled in a hierarchical way. They’re labeled: “I’m a customer service person, therefore I need to see the customer data.”
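
A toy sketch of the matrix-style alternative Bayuk describes, with data labeled by what it is and access granted per job function (the role and category names are invented for the example):

# Toy sketch of matrix/role-based access: permissions follow job function,
# not a hierarchy of clearance levels.
ROLE_PERMISSIONS = {
    "customer_service": {"customer_contact_data"},
    "payments_clerk": {"cardholder_data"},
}

def may_access(role: str, data_category: str) -> bool:
    return data_category in ROLE_PERMISSIONS.get(role, set())

assert may_access("customer_service", "customer_contact_data")  # needed for the job
assert not may_access("customer_service", "cardholder_data")    # not needed, so denied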

CIN: What was the next big benchmark?
JB: It was going back to the systems development lifecycle, and having ways to measure whether security was built into the requirements and tested as part of the delivery. And trying to verify that it was built as designed. So it became much more of a systems development approach. Moving from the 1970s into the ’80s, computer security started to get a lot more mature. And by 1985, the National Institute of Standards and Technology [NIST] had published something called The Computer Security Handbook. It was very management-centered. Measuring security came to mean making sure that your file systems were configured using whatever security the operating system had at the time. And making sure that your users had authentication, like passwords that were hard to guess. And these measures gave you some assurance that the people using the data were the people that you had intended should use it.

CIN: And the early and mid-1980s is when it wasn’t just the businesses that had the computers. That’s when personal computers started cropping up.
JB: Right. And they had no security at all. They didn’t even have the concept of an administrator versus a user. And even as Microsoft became more ingrained in businesses, and businesses asked for these features and they were put in, the core of the Microsoft operating system still didn’t have this division between the user’s space and the administrator’s space as late as 1998 or 1999.

CIN: What metrics were important?
JB: When we got into the management space, we created metrics by saying, “Has management considered and looked at and configured all available security features?” So we divided the world into a set of features like application security, operating system security and the application development lifecycle. And we created programs where we would observe whether these steps had been taken, and whether the configuration on the computer reflected the decisions by management. That was an audit-centric approach. By examining the system, we would determine whether it was as secure as possible. And we would come up with things that were not correctly configured, and identify system vulnerabilities, and count systems with misconfigurations and vulnerabilities as a percentage of the systems inventory. We started to get to the point where, from a management perspective, we could measure security, because we knew what was necessary to create a secure environment.
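
A minimal sketch of that audit-centric metric, counting systems with findings as a percentage of the systems inventory (the inventory data is invented for the example):

# Toy sketch: misconfigured or vulnerable systems as a percentage of inventory.
inventory = {
    "web-01": {"misconfigured": False, "vulnerable": True},
    "db-01":  {"misconfigured": True,  "vulnerable": False},
    "app-01": {"misconfigured": False, "vulnerable": False},
}

flagged = [host for host, state in inventory.items()
           if state["misconfigured"] or state["vulnerable"]]
metric = 100 * len(flagged) / len(inventory)
print(f"{metric:.0f}% of inventory has misconfigurations or vulnerabilities")  # 67%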

CIN: That sounds easy. Just a checklist, right? Oops, that one isn’t checked.
JB: [laughs] Exactly. It definitely sounds easier than it is. Because it did sound easy, it was often glossed over in the form of an interview. You could interview management and say, “Did you really do this?” And running the program to determine whether the configuration reflected management’s position was left for later. So the verification that the checklist was correct was the next step.

CIN: If there wasn’t a verification step, then it sounds pro forma. As if they wanted to feel good about security without assuring themselves that there was a reason to feel good.
JB:  Right. And computers became so complicated that it was very difficult to check everything. What happened is that rather than measure the computer itself, people stepped back and said, “Let’s see if we can break in. Because then we’ll know it’s bad. We don’t know it’s really good from examining all the parameters, because there’s always a bug in the security code or something.” The only way you could really tell was by trying to bypass the controls. And so everybody ended up having a hacker on their staff, and eventually that whole penetration testing process became a huge business in itself.

CIN: Much of this is in your article “Measuring Systems Security.” 
JB:  Right. To bring us up-to-date, there’s also the user behavioral analysis that is very popular now. We know that our penetration tests and vulnerability scans are not catching everything, and just looking at the configuration can only tell us what we already know. So we are now monitoring all activity within our systems and looking for unusual behavior, and investigating anomalies that we recognize. Making rules about what constitutes normal behavior so that we can recognize abnormal behavior is where we are now.
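
A toy sketch of the rules-based monitoring Bayuk describes, flagging activity that falls outside a user’s normal pattern (the baseline and events here are invented for the example):

# Toy sketch of behavioral monitoring: define "normal" per user, then flag
# anything outside it for investigation.
baseline_hours = {"alice": range(8, 19)}  # alice normally logs in 08:00-18:59

def is_anomalous(user: str, login_hour: int) -> bool:
    usual = baseline_hours.get(user)
    return usual is None or login_hour not in usual

events = [("alice", 9), ("alice", 3)]
for user, hour in events:
    if is_anomalous(user, hour):
        print(f"investigate: {user} logged in at {hour:02d}:00")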

CIN: Last year you were asked to participate on a defense agency’s security planning advisory team. Can you tell us about the experience?
JB:  I generally can’t comment on specific clients or former employers. But that advisory experience was really interesting, because I’ve been immersed in metrics for most of my professional career. I was always looking for data that could tell me: secure or not secure? Even in my early days of studying systems. When I was recruited by Price Waterhouse to join an audit and consulting group, we did things like penetration testing and auditing. So I was always collecting data and using it to measure. When NIST had its first conference on security metrics, I presented on using audit findings as security metrics. To me, it’s like second nature that security metrics have been around for decades.
     So I’m at this table, and I’m looking at this defense agency’s 10-year plan, and the lead researcher says, “I really think we need some kind of metrics in here, but there’s really nothing good happening in security metrics, and I don’t know if we should make it a research item in our 10-year plan.” And I’m like, “How could you not? There’s a lot happening! There’s so much happening in security metrics. I’ve seen it. I’m in it. I’m doing it!” I was a little more calm than that. [Laughs] I waited until my [internal] rant was done. I said, “If you read some of the standards that include recommendations for metrics, and you look at your own system, then you’re going to find that there are plenty of things that you can and should be measuring.” And the other experts at the table who were there to advise were also, like, “Yes, yes. Please. Include it. Start now, if you haven’t started already!”
     But a few of the advisers were researchers from the agency, and they weren’t experts in cybersecurity. And they didn’t seem to get it any more than the lead researcher did. I can’t explain why. Maybe it’s because there is so much frustration with security, and so many incorrect metrics are created from interviews and checklists, as we previously discussed, that people are a little jaded when it comes to trying to measure security.   

CIN: So what you found is that there seems to be a real disconnect between what some researchers believe, and what experts have actually learned about cybersecurity metrics.
JB: Yes. Now I’m not saying that the world is a safe place online. I’m saying that there are people who know whether one computer is more likely to get attacked than another. Just the way we know that rain might fall—and actually, the Weather Service is getting very good at predicting tornadoes. But even when they blare a tornado siren, saying that a tornado is coming here right now, many people do not find shelter. They have been listening to that siren for 20 years, and it’s only recently that it made sense to run when you hear that siren.

CIN: This experience was really surprising for you.
JB: It stunned me. Because I would think that any institution whose main job is security would have paid more attention to that aspect of management reporting and would have more intuition about it. I believe that agency does have a mature security metrics program. But it is not visible to the people doing their 10-year plan.

CIN: Was one of the reasons you were so surprised because you hadn’t been sitting in a lab or in grad school for years and years? You’d been working in the business world, and you had experience working for businesses that knew a lot about this.
JB: Yes. But to be fair, I’ve worked in the financial industry for most of my career, and we have always been heavily regulated. And we’ve always had requirements to demonstrate that we are secure. So even though you can point to a bunch of break-ins at big banks, and we know that there are loopholes, there is recognition in these banks that it is necessary to secure your systems, and you need a metrics program around it from a management control perspective.

CIN: What you consider to be common knowledge about cybersecurity in the business world, at a certain level, is not necessarily common knowledge to even pretty sophisticated researchers.
JB: I’m not surprised by that anymore, except in communities where I expect it to be common knowledge. There is not a CEO in the financial industry who isn’t aware that they need cybersecurity metrics. But in some other industries, like hospitality, for example, that doesn’t seem to be the case.

CIN: I bet it is at Marriott.
JB: I wasn’t going to say it.

CIN: That kind of learning experience is what gets so many people’s attention. If it’s in your industry.
JB: I think one of the biggest learning experiences lately has been the Capital One hack. I teach at the graduate level still, and I have a student who chose the hack to analyze. The controls that were in place at that bank were quite sophisticated. They weren’t complete slackers. They had a lot in place. The controls that my student uncovered from doing research were pretty good, but they were not enough to defeat the new technology—the staff hadn’t come up to speed yet on how to configure and monitor it. And we’ll always be in a catch-up game in security when features are not built into the systems, and the security people have to scramble to figure out how they work and how to measure them.

CIN: Why are cybersecurity metrics important?
JB: They are important because they are the only method by which technology controls can be monitored from a management perspective. You can’t have control over technology without tone at the top, without processes in place to control computer operations to minimize the risk of an event that will negatively impact the business. If you don’t have any way to map from your risk appetite down to your computer operations, then you can’t monitor your risk. You absolutely need that connection. And you can’t get it without security metrics. If you’re not monitoring the security mechanisms, you can’t trust anything else on your computer.

CIN: When you were a chief information security officer, if you had gone to management and said, “We need to do X, Y and Z,” and management had said, “How much is that going to cost?”—would you have been able to answer that question? And then if they asked, “How is that going to make us safer?”—would you have been able to answer that question?
JB: Absolutely. That’s just management practice in any field, including cybersecurity. You have the same issues with resiliency—people trying to justify having a backup data center. That obviously is very expensive. You end up having alternative plans, like going into a hot site that is a vendor-provided data center, should you need to. If you’re making that plan, and making the decision on what to spend, you would have to lay out the alternatives. Then you would have to demonstrate that the loss estimate or other negative impact expected from the event justifies the expenditure.
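
Bayuk doesn’t specify a method, but one common way to frame that comparison is an annualized loss expectancy weighed against the control’s cost (all figures here are invented for the example):

# Toy sketch: compare expected annual loss against a control's annual cost.
single_loss_expectancy = 2_000_000   # estimated loss per outage, in dollars (assumed)
annual_rate_of_occurrence = 0.1      # expected outages per year (assumed)
control_cost_per_year = 150_000      # e.g., a hot-site contract (assumed)

annualized_loss_expectancy = single_loss_expectancy * annual_rate_of_occurrence
if annualized_loss_expectancy > control_cost_per_year:
    print("expected loss exceeds the control's annual cost; the spend is justified")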

CIN: Is it important that we have cybersecurity metrics that can be agreed upon and used by a large group of people?
JB: It is not as important as having cybersecurity metrics that can be agreed upon and used by everyone in a given organization. Everyone’s technology platforms are different. Measures that work for one organization may be irrelevant to another. Complete standardization in security metrics will only be possible if there is complete standardization in the way technology is used. That said, if you’re talking about a major global bank that could be larger than the federal government, then the answer would be yes: It is important that we have cybersecurity metrics that can be agreed upon and used by a large group of people. But if you are a small startup or even a midsize financial institution or pharmaceutical company, you probably only need to get 40,000 people to agree. Yet, unless everyone takes metrics seriously enough to record them properly, and that comes from tone at the top, they are not going to be of much value.

CIN: What guidance is out there to help companies?
JB: The regulations and the standards that you see, like NIST 800-53, dictate what should be measured. How they are measured will vary by technology platform. Industry-specific regulations often dictate attributes of an organization’s systems’ threat and control environment that should be measured. For example, the health care industry has HIPAA, which provides security rules that you need to monitor.

CIN: There have been so many instances where a simple patch could have prevented disaster. Like Equifax. And every so often, someone asks why patching can’t be regulated. Is there a way to do that?
JB: I’m not going to say that there should be regulation on patching, because the issue is broader than that. But there does need to be accountability for minimizing risk. Just as the CEO is responsible for credit risk, and oversees programs to ensure that more money isn’t loaned than can be paid back, the CEO is accountable for known cybersecurity risk, and should ensure that programs are in place to minimize it. These should include a patching process that has been independently verified by outside auditors. That’s what you would do for any other significant risk. 
     On the more general topic of accountability, I am encouraged by what we saw from the Equifax hack, because it shows that we’re making progress in this area. One of the biggest data breaches in history occurred about 10 years ago: the Heartland Payment Systems case. When that happened, the CEO came to Washington, D.C., and met with every congressperson and influential lobbyist that he could and said, “How come nobody told me about this? How come there are these cybersecurity hacks and there’s nothing being done?” Meanwhile, all of us CISOs who started the FS-ISAC [Financial Services Information Sharing and Analysis Center] in 1998 were saying, “How come you never sent anybody to the FS-ISAC? If you’re really serious about wanting to know about cybersecurity, this has been around for 10 years. You could have joined.” Not just that. I remember in the late 1990s seeing the first headlines when cybersecurity made it into the newspapers. And we were all just awed, because normally companies hushed up all of their computer problems. There were times in the 1990s when you would have the [New York] Stock Exchange go down for two hours, and it would not make the newspapers. People just hushed it up, because they did not want anyone concerned about the health of the financial system. But that changed. So Heartland’s CEO had no business complaining. And I actually had lobbyists call me, saying, “Is he right? Is it plausible that he should not have been aware?” And I said, “Of course he should have been aware.” He kept his job. He smoothed it over with public relations. Ten years later, the CEO of Equifax resigned. I think that’s a great step.
     I’m interested to see what happens in the case that happens 10 years from now. Does the CEO get called to the board and have his retirement package stripped because he did not execute the duties of his office? That’s really where we need to be.