
TAG Cyber Law Journal

March 2020
He also invented virus defenses, but these days he often helps lawyers understand cybersecurity.
In 1983, while he was a grad student at the University of Southern California’s engineering school, he invented the computer virus. In 1984, while working on his doctorate in electrical engineering, he invented what sounds like an oxymoron: a “positive” virus, designed to infect executable files—not to destroy them, but to make them smaller. He has since invented most of the virus defenses now in use. A quick glance at his website will demonstrate that the interests and activities of Dr. Fred Cohen have never stopped growing since those early days. It makes you wonder whether he also infected himself back then with some unstoppable, positive bug.
     These days he spends a good deal of time as a consultant. He works with lawyers on litigation, and with companies to help them avert and resolve problems. In both roles, his expertise in cybersecurity is sometimes his calling card. But his passion for systems and analysis makes him sound more lawyerly than the lawyers he works with. His approach to problem solving may be worth a close look.

TAG Cyber Law Journal: You've been qualified as an expert witness in many courts. What kinds of litigation do you get involved in?
Fred Cohen: First of all, most of the cases never get to court, right? I usually create a report or rebut a report on the other side. I am not a big fan of opinions in reports. I prefer facts and conclusions. So a significant amount of what I do—I wrote a book about it called “Challenges to Digital Forensic Evidence”—is challenging evidence brought by the other side when it's not legitimate. I help attorneys. The areas of law in this field are somewhat obscure and ever-changing, so sometimes I identify legal issues that they might address. But mostly they want to tell me what the law is, and I want to figure out how to apply it appropriately to the situation at hand. People get accused of being not diligent, and so I do reviews because in my consulting practice, I help specify and verify cybersecurity and related matters for enterprises. So when you're spending your time analyzing, specifying and verifying—we have a methodology and standards of practice and so forth—that gives you a leg up to be able to say, “Look, we know what the rest of the industry is doing. We know that if you don't develop duties to protect, you can't effectively make good decisions about protection. But that doesn't mean that the decisions you've made are bad decisions. You may have made good decisions, despite the fact that you didn't figure out what your duties were.” Just walking through the methodology step-by-step and saying, “Here's what they do and here's what they don't do” is very useful. It's also very useful to do before you get in litigation. In fact, a lot of companies doing M&A—when you’re acquiring a company, you really want to know what's going on so you can choose them effectively. From the cybersecurity standpoint, mismatches can create all sorts of havoc.

TCLJ: A lot of laws and regulations use the word “reasonable” when referring to a company’s cybersecurity duties. I've heard lawyers complain about that. Especially about how vague it is. Have you consulted with companies that have raised this issue with you? Have you made recommendations about what would constitute “reasonable” practices in this context?
FC: “Reasonable” is not a hard thing to understand, but it’s hard to crisply define. There's a separation of duties issue. If you think of it in terms of accounting, you wouldn't want the same person saying, “We're going to buy this from them,” and then approving the exchange and writing the check. And then doing the audit. Specify, execute and verify is how we divide it up. Depending on the particulars of the risk situation, and the different aspects of what you're doing, you don't want the same person to specify, verify and execute—or manage the execution. So that separation of duties should apply to what you do in cybersecurity, because the consequences of failure tend to be very substantial for the company.
     The big money is in execution, right? If you're providing firewalls, or intrusion detection, or emergency response services, that's all on the execution side. But specifying—that is, identifying what you should do and will do—and verifying—that is, making sure that you’ve actually done what you said you were going to do—those should be independent of the execution, so that you have separation of duties. That's a fundamental. If you don't have a separation of duties and something goes awry, then somebody might accuse you of not being reasonable and prudent by not having separation of duties.
     But it doesn't apply everywhere. If you have a tiny company with three people, you can't exactly separate the duties. If you own the company, you're allowed to determine who to write a check to and write the checks. Because it's just not that big. In the accounting world, for some of my small businesses, my accountant does independent checks on the books. But it would be unreasonable—that is, excessively expensive and not necessary—to have a whole bunch of people working for me that just do that separation of duties. So there's this balance between the consequences of failure and the costs associated with doing something, and that's where that trade-off comes into play. What makes it reasonable and prudent is that you have reasoned about it, and you have a reason, you've thought about it and made a decision, and you have a basis for your decision.

TCLJ: And now you’ve mentioned another word that seems worth a discussion. What do you mean by “prudent”?
FC: That’s a softer thing to define. If you're in a neighborhood and people are throwing rocks through windows, then you should be doing something—especially if you own a restaurant and you have customers sitting inside near the windows who can get cut by glass. It might be prudent in that circumstance to undertake something to protect your customers. Our process is that we look at the baseline—the “as is” situation—but we don't comment on whether that's reasonable and prudent. We identify a reasonable and prudent future state, because we don't want to induce liability in our clients. When people do this gap analysis, you're just criticizing people, and there's just no benefit to it. You get people fired. Some people have been fired from assessments or reviews like this that we've done. But we can't do anything about that. We just state, “Here's what you said” or “Here's what we saw.” Then we identify a reasonable and prudent future state, which is not the only reasonable and prudent future state. It's just one. We then work with our clients, who may say, “Gee, Fred, the way you said it was this way. But we were thinking about doing it this way.” That's a reasonable alternative. What's your reasoning behind it? So you document the basis for the decision, and now you have “reasonable.” If you haven't thought it through, if you haven't done this sort of a process, then that could be considered negligent just on its own.
     If top management has not considered it, and the consequences were reasonably foreseeable—let me give you an example. We've seen large data centers sitting at the end of an airport runway in a flood zone, and they were in the basement. Well, probably having your only data center—no redundancy—underground in a flood zone is not a good idea. You should be able to figure that out. It's reasonably obvious, and it’s also in the literature. Reasonable and prudent, for a professional in any field, would be at least a working knowledge of the history of the field, and an understanding of what other people have seen and done.

TCLJ: The relationship between lawyers and technologists is not always an easy one. Have you always found it easy to communicate with lawyers during your many years in this business?
FC: I'm a Jewish boy from Pittsburgh. My father was a professor. My mother was a professor. Both of them in nuclear physics. All of my aunts and uncles and cousins were highly educated. So I had to either be a doctor or a lawyer or “shame on you, you're only a business executive!” In school I ended up writing software for a law collection practice that one of my uncles and cousins ran. I got to understand all those methodologies. Legal Software Incorporated was the name of the company we formed, and we made an automated collection package for a larger collection law office. And I made software called “Payback” for the small collection office. So I never had a problem dealing with lawyers.
     There’s a language skill issue, right? When you’re brought up as an engineer and you go to school as an engineer, you learn the engineering language. And you learn the engineering communication style and the critical methodologies that they have. When you go to law school, you learn this adversarial approach to dealing with people. So depending on where you come up, you have that language. I've worked across an enormous variety of different places in my career. And you just sort of grow a library that you can translate in your head as you're talking.

TCLJ: As someone who has consulted a good deal, have you observed tense circumstances and relationships between legal departments and chief information security officers at companies?
FC: Tense circumstances? Certainly. There was a period of time when we got called in largely in cases where people were indeed tense. But increasingly we're proactive, so it doesn't have the level of tension of: “I'm afraid I'm going to lose my job.” Which is mostly what happens inside businesses. People are afraid of losing their jobs, and they're jockeying for position. The legal folks are usually distant from all that. They're not worried about losing their jobs all the time. They're the trusted adviser. And we're similar: We're sort of a trusted adviser. We help grow companies. So that's an inside friendly relationship. But where the tension arises is when people are afraid, right? Anytime you bring in an auditor, that's the feared word. Audit. So whenever we do these things, the first thing we say is, “Not an audit. There's no punishment. You won't get fired. We're not going to make you do anything.” They're not usually worried about the lawyers either, unless they think something's going on with the business.
      A chief information security officer [CISO] that's doing their job well is interacting with a wide variety of people throughout the corporation because it's a very cross-cutting function. So they should be working with HR on HR issues. They should be working with legal counsel on the legal issues, working with the technologists on technical issues, working with executive management on policies and procedures. They should be interacting with all these people on a regular basis. It should be part of their normal professional communication. If they're not placed highly enough in the enterprise, then they're in a situation where they're not allowed to do that. They may be blocked by the chief information officer. The chief information security officer’s average time in the job is about 18 months before they get fired. When they show up, if they’re smart, they’re looking for their next job. It takes about a year to get hired.

TCLJ: Why is that?
FC: There are lots of different reasons. The typical sequence is that you get a new CISO because you have to. The person hired is usually not competent to do the job. Not because the company is trying to hire somebody incompetent, but because it's a supply and demand issue. There's too little expertise and too much demand. So you're going to hire somebody most of the time that’s not good enough to do it. And then the pay rates are going to be ridiculous. If you look at the specifications, you have to have 400 years of total experience across 373 different things, which nobody has. And you're going to get paid approximately what an engineer gets paid right out of school. Lots of people do better than that. The highest salary I'm aware of is like $1.2 million a year. That's in a multinational financial institution. That's the first problem. The second problem is that they come in, and the first thing they should do is a review—what's the current situation? You do a baseline and figure out what should reasonably be done, and you propose that. All of those proposals get rejected. In fact, the study gets rejected. You're not allowed to spend $20,000, $50,000 on finding out what the situation is. So instead, you find it out over time by walking around and being hit with stuff. If you do get to do such a study, you'll identify things. But you don't have access to top management. And you don't know what the duties are, since they aren't provided to you. You sort of have to make stuff up as you go along. I notice that we don't have any anti-virus at the company. We probably should have anti-virus because if we don't, we’re going to get a virus. OK, let's get that. So you suggest that it should be on the budget. But there's no budget. No capabilities, no assistance, no connection to the rest of the company. That's a recipe for disaster.
     The process goes like this: Something goes awry, and the CISO does something stupid like say, “Well, I told you so.” And now they get fired—reasonably—because, if they're still there, then when the lawyers come in for the lawsuit from the other side, and the CISO says, “Well, I told them to do it and they didn't,” then you have true liability. If they're better at this—more experienced, more knowledgeable—still after about 18 months something will go wrong, because they didn't have the budget or the support to do the things they said they wanted to do. And there's not adequate communication with top management. And after it goes wrong, it's easier to fire the CISO and buy a new one than to fix the problem.

TCLJ: Are you ever in a position to help a company figure out how to improve these communications so that they are able to work together more effectively?
FC: Our approach is to use our standards of practice wherever possible. We go through a fairly comprehensive set of issues identifying the current situation. Part of that is understanding what the CISO, or whoever is the lead for cybersecurity, can observe and can affect. If they don't have adequate power and influence, we identify that. If they're not in communication with the other people they need to be in communication with, we identify that. We don't say, “You're not in communication.” We ask, “Who are you in communication with?” Or “What do you have the ability to do?” And they tell us. And then we say, “A reasonably prudent future state would be for you to be able to get the results of all the audits. But not influence the findings, because the auditors are an independent party you shouldn't be messing with.” They should be able to influence policy. They don't get to set policy. But they certainly have to know what the policy is. So we identify what they should be doing, what their power and influence should be. And if, reasonably and prudently, they should have influence they don't have or power they don't have, then we'll identify that. We would typically never say something like, “You should be talking to the lawyers more.” The reports are written in much more structured language.

TCLJ: Do you think it's a good practice at companies that have more than a lawyer or two for the lawyers to be communicating regularly and effectively with the chief information security officer?
FC: In a medium-size business, the communication should be going on, because there aren't that many people at that top level. But in a large enterprise, we typically advise that there's a monthly meeting where the group includes somebody from HR, somebody from Legal, somebody from Ops, the whole list. And they all go over cybersecurity issues. Call it an Executive Council. And the CISO should head that meeting, even though they're not the boss of any of these people. It's a collaborative effort, and there should be issues brought up that cut across the whole enterprise associated with the cyber-related systems and the protection issues. That's just standard advice. As far as frequency, monthly meetings would be for a high consequence level associated with protection failures. For the medium consequence level, you would want to have at least a quarterly meeting of that sort.