
CyberInsecurity News™

March 2019
INTERVIEW PART II: STEWART BAKER / STEPTOE & JOHNSON
A HISTORY OF CYBERSECURITY: A PERSONAL AND PROFESSIONAL JOURNEY
A veteran lawyer recounts the challenges he’s experienced over the past quarter-century. 
This is the second of two installments on one man’s life in cybersecurity. Steptoe & Johnson partner Stewart Baker has had a long career moving back and forth between private practice and public service. For the past quarter-century, much of it has been focused on data and technology. In PART I, Baker recounted how he first found his way to a career that combined law and technology when he took the job of general counsel at the National Security Agency in the early 1990s. There he came to understand how World War II cryptology was cybersecurity’s antecedent. Similarly, the encryption and backdoor access battles during his time at the NSA were echoed in the conflicts and tensions sparked by these same issues today. In PART II, Baker recounts the many challenges he confronted in helping to create the Department of Homeland Security from scratch. He discusses the privacy wars between the U.S. and Europe, the impact that Edward Snowden’s revelations have had on the NSA, and the approach that general counsel should adopt to defend their companies against cyberattacks.

BUILDING AN INSTITUTION
CyberInsecurity News: Would you say that DHS was your most important public service stint?
Stewart Baker: Probably. Or at least the one where I felt that I made a significant difference on my own. All of the other jobs, either I was stepping into a job that had a well-known set of requirements, or I was staffing someone who knew what they wanted to do, and I was helping them achieve their goals. At DHS it was different, because the department was brand-new: certainly Mike Chertoff knew what he wanted to do on certain topics, but he was delighted to have somebody go out and think about the topics he wasn’t focused on. DHS was also a place, at the time, and maybe to some extent still is, where the question of who could do what had not yet congealed. So when you came up with an idea, there weren’t five people to tell you that that was their job, not yours. I sometimes tell people that the best part of my time at DHS was that not once did somebody say to me, “Oh Stewart, that’s not how we do things here.” It was too new. If you were willing to do the work and capable of producing results, everybody was happy to say, “He’s already producing something. We’ll let him go.” Or at least Secretary Chertoff was.
     Getting the department off the ground was an enormous labor. You cannot understand how hard it is to create a new government institution—especially one as big as that. It was back-breaking labor. There were many people who had joined because they were convinced of the mission. But none of them had been doing their job more than a year or two, and many of them were newer than that. So you had none of the employees in DHS that you have in every other organization, people who have been there for 20 years, who know what their job is, who don’t aspire to some other job but who are determined to do their job right. People who have that at the Justice Department or the Defense Department have a whole set of relations with colleagues. They have defined the borders between their jobs and everyone else’s; they have figured out how to do their jobs because their bosses have asked them to perform tasks more than once. At DHS, though, there was nobody who knew those kinds of things. You couldn’t take anything for granted. You had to think, “What else needs to be done? Either I’m going to have to do it myself, or I’m going to ask somebody else to do it who doesn’t realize it’s part of their job.” That wears on you.

DIFFERENT VIEWS OF PRIVACY
CIN: Tell me about the areas that you helped to define that relate directly to the world of privacy and cybersecurity.
SB: The privacy issue that I took on, because I was in charge of international affairs as well as other policy issues, was the resistance that the European Union had raised to our efforts to find out who was coming to the country before they got off the damn plane. Prior to September 11, the customs officials who were meeting overseas passengers had almost no warning about who was coming, and no ability to prioritize for screening the people who were of greatest concern. They got a manifest from the airlines, and it had to arrive before the passengers did. But that was about it. There was no ability to say, “Well, here’s somebody with a very common name. Can we do some research to see if it’s a common name that’s associated with terrorism or a common name that’s not?” We’d lost almost all ability to do that, which meant that border officers had 30 seconds, when they were looking at your passport, to decide whether they were going to look at your luggage. And then border officials could do a secondary screening of maybe 1 percent of the passengers. So 99 percent of the passengers were admitted based on inspection of their passports and a 30-second encounter. Which is how the hijackers got into the country. All but one of them, who was turned back by a very able and brave customs officer.
     To dramatically improve our border screening, we wanted to have access to all of the information that travelers had already given to the airlines: who they are, what their phone number is, who they’re traveling with, what their connecting flights are. That information was available to the airlines—sometimes weeks before the flight. Well, we thought the U.S. government also needed that information in advance. But the European position was: “We won’t let our airlines give you that information, because you’re invading the privacy of the European travelers.” This was a deep culture clash. Our view was: “What the hell, they’re coming into our country. It’s not Germany’s job to decide what kind of privacy standards the United States should use when it encounters Germans on U.S. soil.” The long and bitter and still-continuing debate between the U.S. and Europe over access to that information produced probably three rounds of negotiation that I participated in. It’s discussed at length in [my book] “Skating on Stilts.” There’s a little bit of melodrama associated with the narrative, and twirling its mustache on the railroad tracks is the European Commission—clearly, in my view, the villain of the piece, as it takes action that we at DHS thought was likely to result in more American deaths at the hands of terrorists. And we just weren’t going to take it. So we had confrontations. From 2005 to 2008, I led and mostly won confrontations with the Europeans in which they were arguing for privacy and we were arguing for security. That was certainly a high-stakes conflict.
     The other issue that I worked on at the very end of the Bush administration was cybersecurity. The Department of Homeland Security had been envisioned as being in charge of cybersecurity when it was first created. And all of the resources of the government that were focused on cybersecurity—at least outside of the military and intelligence community—were transferred. But there were very few of them, and the transfers were sometimes in name only. The FBI had the biggest cybersecurity unit. And when it came time to transfer it, they announced that all the agents that had been part of it had suddenly been reassigned and therefore weren’t available. But they had some very good office furniture they could send over.

CIN: The privacy piece you just described, like the encryption, is still with us. Just a different form.
SB: We may soon have a conflict with the European Court of Justice. The ECJ may say that it’s a violation of the fundamental law of the European Union to allow this travel information to be exported to the United States. They couldn’t have picked a worse president to say that to. But this is a very anti-American court. So we could be heading toward a real confrontation. And it’s over the same data that we’ve just been talking about. The court rejected an EU-Canada arrangement for transferring that data to Canada, on grounds that would apply doubly to the United States. So the question is: When the U.S. requirement gets to the European court, is the court going to stick with its Canadian precedent, or is it going to blink?

CIN: When Edward Snowden began releasing government documents, it seemed like a huge event. Can you talk about how you felt about it at the time—personally and professionally?
SB: I will confess that initially I was quite surprised. The first document leaked was by far the most impactful. It was the one that indicated that all call records—not content, but records—inside the United States were given to the NSA in their entirety. And there was no indication that there was a limitation on what NSA did with that data, or that there was a requirement for probable cause. And it was hard to square that with the law as I knew it. So that was quite confusing, and therefore of concern. It turned out that there was a legal basis for the order; it had been legally vetted and, at least after the law was changed in 2008, it had a pretty solid basis in law. But the leak was extraordinarily damaging politically, and it was, in my view, released in a way designed to cause the greatest possible damage to the National Security Agency—not just by Snowden but by The Washington Post and the various reporters. They were as much advocates as journalists, and they were absolutely determined to do as much damage as they could. Maybe they thought they were doing a good thing for the world, but it was not an act of journalism, because they withheld a bunch of information that would have provided better context. For example, they could have shown all the limitations that the government had placed on itself and that the courts were enforcing. That would have given much more nuance to that single piece of paper. But Snowden and the reporters didn’t provide that context, and so people spent a week or two thinking that all this data about Americans was just handed over to the NSA to do with what it wanted. And that false impression did permanent damage to NSA and that program. So I started out surprised, but I was soon also appalled that Snowden and the journalists, in my view, deliberately left a misimpression about the program. I spent a long time unwilling to call Snowden a traitor. He clearly presented himself at the time as a kind of naive libertarian who was simply standing up for the rights of Americans against the overweening intelligence state, and he gave every impression of believing that. As he has continued to hang out with one of the most authoritarian leaders on the globe, only rarely offering modest dissent from Putin’s policies, I’ve come to the conclusion that whatever he was when he started, he’s been deeply compromised by the Russians and maybe the Chinese. And I think it’s an open question whether that compromise, knowing or unknowing, goes back to his initial release of the documents. We aren’t going to know that for a while.

CIN: What about now? Do you still think it was of tremendous importance? A landmark event?
SB: Yes. At the end of the day, the only program that suffered political damage was the one we were just discussing, where metadata was being used to try to find people who might be repeating the pattern we saw with 9/11. That program was modified and put on a different footing. The data is no longer given to the government but instead is left with the companies that now have an obligation to store and process it. At the end of the day, the intelligence impact was pretty modest. The kind of processing and storage that the companies are doing is more complicated, but not dramatically different from what could have been done when it was being stored at NSA. But the fight over that program created a host of congressional critics who remain critics to this day. Also, severe damage was done by some of the false Snowden-related stories claiming that Silicon Valley companies were giving NSA the free run of data that they collected. That impression was also left by reporters who failed to do their jobs, either because they preferred to be privacy advocates or they weren’t very good reporters. They left the strong impression that these companies were all in bed with NSA in a way that left the companies with no control over what NSA did. That, in turn, led the companies, in their own economic interest and because their own culture was already hostile to the government, to undertake massive new security programs and to build fences against their government’s requests for help. And that led ultimately to a fight between the FBI and Apple. To this day there’s a Silicon Valley culture of trying to build security to defeat the United States government, and that’s not healthy. Especially when they’re competing with a Chinese Silicon Valley that is legally and culturally bound to build technology that will advance the interests of the Chinese government.

CIN: I asked you earlier [in PART I] about the perpetrators and victims in the early days of cybersecurity. Compare the players and stakes and risks today. 
SB: The early days of cybersecurity were characterized by a kind of naivete on all sides. The attackers mostly saw themselves as gray hats or white hats. They were improving people’s security and having fun, learning things that they weren’t supposed to learn at the same time. It was a game that smart people played to show that they could do it. They did it for the same reason the Lakota Sioux counted coup against enemies. The very best attack on an enemy was to sneak into his camp and hit him with a little stick to humiliate him and get away without being hurt yourself. That attitude prevailed among the hacking community. And then, sure, if they could get free long-distance phone calls, they’d do that too. But it was more mischief than crime. I think the assumption was, on both sides—this is where the naivete on the defenders’ part comes from—that this was a passing problem. Of course there were bugs in the software. Everybody knows that. And one of the things you have to do after you finish programming is debug it. You have to take out the bugs as you learn about them, and then after you’ve found all the bugs—or at least all the bugs that matter—your program runs fine. And I think the assumption was, that’s how it will be for cybersecurity. We’ll figure out how to protect against attackers because a successful attack is just a bug that someone has exploited. Maybe hackers are doing us a favor by finding the bugs, telling us about them, and then we’ll fix them, and after we’ve fixed them for a few years, there won’t be any more, or there will be fewer and fewer, and we won’t have to worry about them.
     What we’ve seen in the last 10 to 15 years is the gradual dismantling of that assumption. No one believes that we are getting better in terms of cybersecurity. We are spending more. We are more sophisticated. But the attackers are more sophisticated, and we are adopting more and more technologies that expand the attack surface in ways that more or less guarantee another two generations of serious cyber compromise.

CIN: Is there anything you see that suggests there’s hope that this trend might be reversed?
SB: I have sometimes tried to popularize—not with much success—what I call Baker’s Law of Cybersecurity: “Our security sucks, but so does theirs.” There are almost always going to be problems with the security of our data and our systems. But the people who are attacking our systems are not better at protecting their systems than we are at protecting ours. They’re no better at protecting their data. And their data keeps spilling out, whether they want it to or not. They’re leaving breadcrumb trails all over the internet. Which means that we are going to know more and more about the people who are attacking us. It will be very hard to attack from a position of anonymity, and once you’ve broken anonymity, all that remains is to tackle impunity. That is to say, first you do attribution, then you do retribution. We can attribute these attacks today much better than we ever could have 25 years ago. We know who’s doing it. We know what country they’re from. We know what their girlfriends wear on a holiday. And so we have a much better chance of punishing the people who are launching these attacks. We may have to punish an entire nation-state, but when we are sufficiently aroused by particular kinds of cyberattacks, we’ll do that. What that means is, this goes from being a technological problem to being a law enforcement and international problem, in which you’re trying to find ways, now that you know who’s doing it, to persuade them to go into a different line of work. That’s how we police our cities. We still have crime, but we feel that it’s under control because we mostly don’t experience it, because the people who commit crimes so often end up in jail and either learn from that or end up in jail again.

CIN: Do you think it’s important for companies to be able to cooperate effectively with the government on cybersecurity, and vice versa?
SB: Oh yeah. Because the government doesn’t have as many tools in cyberspace as it has in the real world. In the real world, you have 911 lines that bring the police quickly to the place where the attack has occurred. You have police patrolling areas that look likely to have crime in advance of the crime—able to watch what’s going on and say, “That looks like suspicious activity.” None of that is true in cyberspace. The people that are patrolling the networks under attack are the chief information security officers (CISOs) of particular companies and their contractors. The government doesn’t know anything about what’s suspicious on a particular network. And they really aren’t the people who respond to an attack. They don’t have the technical ability to show up quickly and help stop an attack, in most cases, and so the first responders are a set of private cybersecurity companies who immediately go in and start addressing the attack. Those companies have discovered probably at least as many ways to distinguish one attacker from another, and have at least as much ability to attribute an attack, as the U.S. government has. The information that comes from inside the networks of victim companies has to be combined, for attribution and action purposes, with the authorities and the information that can only be gathered by governments outside the network—mainly by breaking into the computers of the attackers, or computers being used by the attackers. There really needs to be intimate cooperation.

CIN: How do you think the effort is going right now?
SB: Just OK. We are stumbling forward. There is still a lot of mistrust. But if you look at attribution efforts, the people who are doing the attribution at private companies were doing attribution for the government before they went to work for the private sector. And they still talk to people in the government; people in the government still trust their judgment. So there is a pretty porous division between the two and a lot of dialogue—some of it informal, some of it formal. That’s all to the good. There are certainly problems in that area. The Justice Department inexplicably insists that, despite the fact that there are probably 10 times as many cybersecurity resources in the private sector as in the public sector, the private sector has to stay inside the company firewall when investigating an attack. No matter how easy it would be for the company to continue to track its attackers when the attackers move to a new infrastructure, the company can’t do it. I think that’s wrong. But it’s a firmly held view inside the Justice Department over several administrations. It may take a while for that to change. It’s not the end of the world, but it means that the division of responsibility is not optimal.

CIN: So they can keep investigating, but they can’t hack back.
SB: I’d quarrel with that term. I think there’s the notion that “hacking back” means shooting weapons in a dark room, hoping you’ll hit the right target. And no one wants that. Opponents constantly argue, “You’re going to attack the wrong person. It’s vigilante justice.” These are all tropes that the government is fond of, which I think are insufficiently nuanced to address some of the very real situations where we need to empower victims of hacking, because the government will never be fast enough, and the private sector has the resources and the warning to move faster.

CIN: Is the government providing enough intelligence information—threats they’re aware of—to companies? I know that’s sometimes a complaint that companies have.
SB: Yeah, I do hear that. And I take that with a grain of salt. One thing I’ve learned from watching the intelligence community for 25 years is that intelligence in the abstract is almost never useful. For the intelligence to get good, you have to have a customer who understands the intelligence and how it’s being collected, and can tell the intelligence officers exactly what he wants and what’s wrong with the intelligence that has been collected. You don’t get good intelligence if you just try to go out and steal the best secrets you can. Because you usually don’t know what secrets really matter. You need a very sophisticated consumer who can say, “OK, I see what you’ve brought me. There are some interesting things here. But it isn’t exactly what I need. Go look for X.” The spies on their own would never think to look for X. They’re just not as deep into the technology or the situation as the consumer. So for an exchange of intelligence to be really useful, you have to make the companies your customer. I don’t think that’s likely to happen. Part of the private sector’s unhappiness with what they’re being given is their assumption that since what they got wasn’t that useful, they must not have gotten the good stuff. That’s not how intelligence works, in my experience. There isn’t “good stuff” just hiding out there. The good stuff—you have to dig it out as a customer. That isn’t likely ever to be something that the intelligence community does for individual companies.

CIN: What can companies be doing better?
SB: I think companies need to ask themselves—and they’re starting to do this—not so much what’s on my checklist of technologies to deploy, but who wants my stuff? And what tactics and tools are they using this week to get it? Those are things that are knowable. Those are things that you can get from intelligence. You can get it from the private sector forensics people as well. So knowing that there’s a North Korean or Chinese campaign to get information on, let’s say, wind power means that, if you’re a wind power company, you suddenly have a whole new set of adversaries, and you’re going to have to spend a whole hell of a lot more money on cybersecurity or lose your technology. You constantly have to do a cost-benefit analysis there. But the key is knowing who’s breaking into your system, and why: Are they motivated by money, or by the desire to steal your technology? Are they criminals or governments? All of those are things that should be part of your cybersecurity planning.
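One way to make Baker’s “who wants my stuff, and what are they using this week?” question concrete is to keep a living profile of each adversary next to the controls that answer their current tactics. The sketch below is purely illustrative, not anything from the interview; the actors, motives, tactics, and controls are invented placeholders.

```python
# Hypothetical threat-profile sketch for "who wants my stuff, and how?"
# Every actor, motive, tactic, and control below is an invented placeholder.

threat_profiles = {
    "state-sponsored group": {
        "motive": "steal wind-turbine control technology",
        "current_tactics": ["spear phishing", "supply-chain implant"],
    },
    "criminal gang": {
        "motive": "ransomware payout",
        "current_tactics": ["exposed RDP", "credential stuffing"],
    },
}

# Controls the company has actually deployed, keyed by the tactic they answer.
deployed_controls = {
    "spear phishing": "phishing-resistant MFA",
    "exposed RDP": "VPN-only remote access",
}

# Flag any tactic in active use that no deployed control answers.
for actor, profile in threat_profiles.items():
    for tactic in profile["current_tactics"]:
        if tactic not in deployed_controls:
            print(f"GAP: no control for '{tactic}' used by {actor}")
```

Rerun against fresh intelligence, the gap list changes as the adversaries’ tactics do, which is the point of Baker’s “this week” caveat.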

CIN: You just said the magic word: “money.” When I talk to CISOs at companies, if we talk long enough and they feel comfortable enough, they will almost always say something about budgets, and how the management of companies so often underestimates how much is needed. And they routinely underfund cybersecurity budgets. What do you think?
SB: I agree. But I would also like to speak up for management and the board. Their most common complaint is: “My CISO comes in and says, ‘Bogeyman, bogeyman, bogeyman! Give me more money!’ He tells me that the North Koreans could do these things to me. Should I be worried about that or not? I feel as though I’m chasing goalposts that will always recede.” So there is a culture clash there. That’s one of the reasons that I like a model that asks, “Who wants my stuff, and what are they doing to get it?” Once you have that answer, you can also ask the question, “What would it cost to make sure they don’t get it? What do I have to do to defeat every one of the tools and tactics they currently are using?” Then, when I know how much that will cost, I can compare the cost of letting them win to the cost of making sure they don’t. So I think you can reduce this to a cost-benefit analysis, if you’re disciplined about identifying who’s likely to attack companies like yours, and if you have up-to-date intelligence on how they’re getting into the systems they attack.
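Baker’s framing reduces to simple arithmetic: compare the expected loss from each adversary with the cost of defeating their current tools and tactics. Here is a minimal worked sketch, with probabilities and dollar figures invented purely for illustration:

```python
# Hedged sketch of the cost-benefit comparison Baker describes.
# The probabilities and dollar figures are invented for illustration.

def defend_decision(p_attack: float, loss_if_hit: float, cost_to_defeat: float) -> str:
    """Compare the cost of letting them win to the cost of making sure they don't."""
    expected_loss = p_attack * loss_if_hit
    if cost_to_defeat < expected_loss:
        return f"defend: ${cost_to_defeat:,.0f} buys down an expected loss of ${expected_loss:,.0f}"
    return f"accept risk: defense at ${cost_to_defeat:,.0f} exceeds expected loss of ${expected_loss:,.0f}"

# A 30% annual chance of losing $50M in technology vs. $4M of defenses:
print(defend_decision(0.30, 50_000_000, 4_000_000))
# A 1% chance of a $2M incident vs. $500K of defenses:
print(defend_decision(0.01, 2_000_000, 500_000))
```

The numbers are the hard part, which is why the model only works with the disciplined, up-to-date intelligence about attackers that Baker calls for.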

CIN: If you were the country’s cybersecurity czar, what would be the first things you would do?
SB: I would say, “If you are in a business that the lives of a large number of Americans depend upon—pipelines, refineries, electrical power grids, water, sewage systems—you’re going to have to meet certain cybersecurity standards. We’re going to continue to use regulatory agencies that you are familiar with, if you are regulated already at the federal level, but we’re going to find a way to make you meet these standards. And when we’re done getting you to meet those standards, the people that you do business with are going to have to meet those standards. We can’t afford to have weak links in our security process.” I hate to say that, because I see all the downsides to regulation. It slows everything down. It raises the cost of everything. It locks in a lack of competition. But there’s no obvious alternative, other than saying, “If you’re hacked, we’re going to expropriate your company, because you don’t deserve to be in business.” That would motivate people—sort of like the death penalty for companies. I don’t think we really want to do that. So, better to come up with a constantly shifting set of best practices, and impose them on people who currently don’t care enough to adopt them. They’re not evil people; they just don’t have enough skin in the game to worry about security.

CIN: What advice would you like to dispense before we conclude this conversation? Anything special you have to say to general counsel?
SB: Yes. It’s from a theme that we’ve been talking about—the need to think about who your adversary is. And it’s one of the hidden advantages in having your general counsel involved in cybersecurity. Engineers are used to protecting against inanimate nature. They’re really comfortable asking whether gravity is going to bring a bridge down. They’re much less comfortable asking, “What if the Russians wanted to bring it down, and they had all the explosives they needed?” They don’t do adversarial thinking as a routine part of their job description. And neither do the risk managers, who are most comfortable asking questions like, “Is this a risk we can self-insure, or do we have to get insurance for it?” In contrast, if you’re a good general counsel, you wake up every day and you say, “I know there are people—maybe competitors, maybe the plaintiffs bar, maybe governments—people out there who want to do my company harm using legal tools. I need to know who they are. I need to know what tools they will use to attack, and I need an early warning as they develop new ones. When the plaintiffs bar develops new lawsuits, I want to watch that. I hope that they try them out on somebody else first. But if they don’t, my job is to figure out what we will do in response.” In other words, the way of thinking about cybersecurity that I’m advocating is a way of thinking about corporate interests that lawyers are already familiar with. It should give general counsel a little bit of confidence that they have a perspective to bring to cybersecurity that isn’t just, “What will the regulators think if we don’t do this?” They can insist on more deeply adversary-based cybersecurity planning.

CIN: Based on your experience as an outside lawyer who advises general counsel, do you think that many GCs have a sense that cybersecurity is a really important part of their jobs, and have a seat at the table at their companies to help tackle it?   
SB: Yes and yes. First, you’d be crazy as a CEO not to give your general counsel a seat at the table, because liability is a significant part of the calculation here. In practically every tabletop exercise we run, people start turning to the general counsel almost routinely. It’s odd how often questions that easily could be answered by somebody else—there’s no necessary connection to the law—end up getting lateraled to the general counsel for one reason or another. I’m a little less confident that general counsel always bring to the task a breadth of strategic thinking that maximizes their value to the company. They will always say, “I can tell you what our regulatory obligations are. I can tell you what our privacy statement says. I can tell you what the liability decisions of the court might be, or what the FTC would say about this.” Those are all things that general counsel should be doing. I feel a little bit like a proselytizer when I say, “No, that isn’t enough; what you need to bring to this, in addition to all that, is the sense that your company is engaged in an eternal battle with institutional opponents, and that your job is to make sure your company is as fully armed for that battle as possible.”