
Cybersecurity Ignorance Is Dangerous

By Tarah Wheeler, an information security researcher and social scientist, and a cybersecurity fellow at the Harvard Kennedy School.

May 3, 2021, 2:57 PM

In one of the biggest tech book launches of 2021, Nicole Perlroth, a cybersecurity reporter at the New York Times, published This Is How They Tell Me The World Ends to cheers from the general public, plaudits from fellow journalists, and a notable wave of criticism from many in the cybersecurity community.



This Is How They Tell Me the World Ends: The Cyberweapons Arms Race, Nicole Perlroth, Bloomsbury, 528 pp., $21, February 2021

Perlroth’s book about the global market in cyberweapons is a riveting read that mixes profound truth on policy with occasional factual errors, and it ultimately achieves its goal of scaring the shit out of anyone who doesn’t know much about the topic. But the book might also be read by people who have to act on cybersecurity policy and are unfortunately trusting Perlroth to explain the technical details accurately.

The book fails on that count, and the risk is that policymakers either won’t implement the sensible policies she recommends, or that they’ll so misunderstand and fear the technology described that they’ll overreact and make ill-informed and potentially dangerous policy choices.


In a string of interviews with known and shadowy figures largely from the U.S. cybersecurity journalism and military community, with some credible information security technologists mixed in, Perlroth’s book describes the global market for what are known as zero-day vulnerabilities—undisclosed software bugs that can be exploited for access.

She situates cyberespionage as the natural successor to classical espionage. Nearly a third of the book is dedicated to the history of the Cold War and Soviet espionage, truly prescient for a book being released right after the SolarWinds listening operation, a U.S. government data breach in which Russian hackers are suspected. Perlroth’s story of Project Gunman, the 1984 counterespionage operation to find how the Soviets had breached U.S. encryption, is riveting.

The technical details here are fascinating, and she draws a clear line to the moment when U.S. and Russian communication tools began to be based on the same technologies at Microsoft, IBM, HP, and more. “It was no longer the case that Americans used one set of typewriter, while our adversaries used another. Thanks to globalization, we now all relied on the same technology,” she writes. “A zero-day exploit in the [National Security Agency]’s arsenal could not be tailored to affect only a Pakistani intelligence official or an al-Qaeda operative. American citizens, businesses, and critical infrastructure would also be vulnerable if that zero-day were to come into the hands of a foreign power, cybercriminal, or rogue hacker.”

Her account of the outsized funding for offensive weaponry being developed at the end of the Cold War brings fully into focus the beginnings of the cyber-arms race: “So fixated was the NSA on its new offensive cyber tools that year that offense trumped defense at the agency by a factor of two. The agency’s breaking-and-entering budget had swelled to $652 million, twice what it budgeted to defend the government’s networks from foreign attack.”

Even more importantly, as those dollars rolled in, “Congress continued to approve vague ‘cybersecurity’ budgets, without much grasp of how dollars funneled into offense or defense or even what cyber conflict necessarily entailed.” It’s disturbing to realize how much Congress budgeted for offensive weapons without understanding that those weapons could not function without punching holes in U.S. defenses, or that the tools of offense and defense in cybersecurity are fundamentally different. Members didn’t seem to understand that they weren’t buying guns, useful in both offense and defense; they were buying the digital equivalent of nuclear weapons, biological agents, and mustard gas.

Perlroth exposes the ethical vacuum among the brokers and purchasers of these weapons when she writes that “nobody apparently stopped to ask whether in their zeal to poke a hole and implant themselves in the world’s digital systems, they were rendering America’s critical infrastructure … vulnerable to foreign attacks.” Perlroth explains that “More hacking—not better defenses—was the Pentagon’s response to the Russian attacks on its own classified networks.” She’s right. Adding more offensive cyber-capability isn’t fixing the problem of crumbling U.S. cyber-infrastructure, which is decaying along with the country’s bridges, dams, and roads.

The book’s analysis of the strange economic incentives of offensive cyberweapons is, however, paired with some unforced errors when describing Stuxnet, surely the most public and well-analyzed cyberattack. The author insists that Stuxnet, the world’s first highly publicized act of cyberwar, took advantage of seven zero-day vulnerabilities. But for more than a decade, the widely accepted number has been four partially known exploits. (Two had been patched by the time the news came out.)

This is odd; the cybersecurity community has been correcting this factual error in her reporting for years. It may seem like a petty distinction, but there’s a reason we insist on the difference between known vulnerabilities (the U.S. repository of catalogued vulnerabilities stands at 153,031 and counting), exploits (most of the time built on known, unpatched flaws; a common open-source database currently lists 43,989 and counting), and zero-day vulnerabilities (completely unknown and comparatively rare; Google’s Project Zero found 16 total in-the-wild zero-day vulnerabilities in the first three months of 2021, a new record for them). That difference is huge in terms of what it implies we and policymakers should do about cybersecurity. Let me explain.

Perlroth often conflates encryption backdoors, zero-days, vulnerabilities, bugs, and exploits, and she calls any backdoor into a system a “zero-day.” While we in the industry sometimes colloquially mix up “vulnerability,” “exploit,” and “bug,” an encryption backdoor inserted as an access method is categorically not a zero-day; it’s just software intended to be broken into from the beginning, instead of unintentionally insecure.

Exploits are pieces of code that weaponize one or more bugs, used alone or (most frequently) strung together to break into a system, and the underlying bugs are not necessarily secret or new. They can be, and most of the time are, a half-dozen boring, ancient software vulnerabilities for which patches have existed for years or even decades. Software developers can prevent zero-day vulnerabilities only by building more secure software. Exploits based on known bugs, by contrast, succeed because people didn’t patch half a decade ago when experts told them to; all anyone can do afterward is mop up the mess. The policy responses to these two extremely different issues are equally disparate.
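The arithmetic of chained exploits can be sketched in a few lines. This is a toy model, not a real exploit: it assumes (hypothetically) that a chain succeeds only if every vulnerability it relies on is still unpatched on the target, which is why patching even one ancient bug breaks the entire chain. The vulnerability names are placeholders, not real identifiers.

```python
# Toy model: an exploit chain is a sequence of vulnerabilities that must
# ALL be unpatched on the target for the attack to succeed end to end.
# Vulnerability names below are hypothetical placeholders.

def chain_succeeds(chain: list[str], patched: set[str]) -> bool:
    """Return True if no link in the exploit chain has been patched."""
    return all(vuln not in patched for vuln in chain)

# A chain built entirely from old, long-since-fixable bugs.
chain = ["old-auth-bypass", "known-priv-esc", "stale-sandbox-escape"]

print(chain_succeeds(chain, patched=set()))                # True: fully unpatched target
print(chain_succeeds(chain, patched={"old-auth-bypass"}))  # False: one patch breaks the chain
```

The point of the sketch is the asymmetry it makes visible: the defender does not need to fix everything, because closing any single link defeats the whole chain.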

Incentivizing the patching of old software vulnerabilities fixes almost all the problems with exploits, and it would fix the vast majority of the problems Perlroth describes in the book, including the malicious software NotPetya and WannaCry, among others. True zero-day vulnerabilities are, by their definition, unknown. The NSO Group’s Pegasus vulnerabilities (which attacked mobile phones) were some of the very few zero-days mentioned in this book that actually were unknown vulnerabilities.

When experts explain these things to policymakers, it matters that we not mix up two very different needs. One is the desperate need for more defense and patching, the logical conclusion of seeing that these major cyberattacks were caused by lax security maintenance, the kind of thing that is boring and unflashy. The other is the need to address reserves of zero-day vulnerabilities held by other countries, which would dictate a need for cyberweapons as a kind of mutually assured destruction. After this book came out, respected hacker and bug bounty pioneer Charlie Miller, to whom Perlroth devoted an entire chapter, said on Twitter about that chapter: “At a high level it is accurate but there are many many details that are wrong.”

Worse, when technical experts, including at the NSA, repeatedly debunked her earlier claims about an important cyberattack on Baltimore, the book’s response was to claim they were fixating on a “technical detail to avoid responsibility.” While the NSA certainly deserves blame for letting loose the tool known as EternalBlue, Perlroth’s claim that Baltimore was attacked with it is incorrect; the city’s systems were brought down by a different exploit.

Perlroth’s idea that, in the 2015 Ukrainian power grid attack, the Russians “randomized their code to evade antivirus software” is garbled, and it overhypes an elementary skill. “Randomizing code” sounds advanced, when in fact getting one’s code past antivirus is often simple, if time-consuming. It is more akin to a high school student swapping a few words around in an essay to evade plagiarism detection software without changing the essay’s meaning. But the sheer banality of the truth is far more frightening than any exciting, exaggerated word choice.

What the Russians reportedly did was alter their code just a tiny bit to hide from the signature or encrypted tool detection that antivirus software uses. Even the more sophisticated techniques are commonly available and taught in globally popular cybersecurity classes, not isolated to the Russians. What’s frightening is not that the Russians can do it but that anyone can.
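A minimal sketch makes that banality concrete. Assume a hypothetical scanner, far cruder than a real antivirus engine (which also uses byte patterns and heuristics), that flags payloads whose hash matches a signature database; changing a single byte of the payload is enough to slip past it. The payload bytes below are inert stand-ins, not actual malicious code.

```python
import hashlib

# Hypothetical signature database: exact SHA-256 hashes of known-bad payloads.
known_bad_hashes: set[str] = set()

def register_signature(payload: bytes) -> None:
    """Record a payload's hash as a 'known malware' signature."""
    known_bad_hashes.add(hashlib.sha256(payload).hexdigest())

def scan(payload: bytes) -> bool:
    """Return True if the payload exactly matches a known signature."""
    return hashlib.sha256(payload).hexdigest() in known_bad_hashes

malware = b"\x90\x90\x31\xc0\x50"  # inert stand-in for a malicious payload
register_signature(malware)

print(scan(malware))            # True: the exact copy is caught
print(scan(malware + b"\x00"))  # False: one appended byte evades the signature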


Perlroth wrote this book for a general audience, aimed it at U.S. policymakers, and wrote it about the information security community—a relatively small, diverse, global group of people—which she admits she does not belong to. No one in a general audience is likely to know about the technical errors in the book.

As you would expect from a member of the cybersecurity community critiquing small inaccuracies, I have a long list of fact-checked errata after reading this book line by line three times, but I don’t need to list them here. Plenty of researchers have already done so on the technical side (our community does like to find and fix things) and published blog posts and reviews, many criticizing whichever portion of the book is closest to their own expertise.

The book is at times profoundly cinematic. And sometimes when Perlroth goes for thrills, she makes morally questionable choices.

There’s an extremely exciting narrative account of the 2010 Chinese cyberattack on Google and several other companies that would later be called Operation Aurora. It stars former information security pro and Google employee Morgan Marquis-Boire in a heroic profile of a hacker saving his company.

There are two problems here. One is that Perlroth plays to the Great Man hypothesis, as if a single heroic hacker saved Google. In reality, tens, hundreds, or maybe even a thousand engineers worked on removing the intrusion. Marquis-Boire was not a savior; he was a smart person working among many other smart people.

The other, larger, problem is that Marquis-Boire is a former information security pro for a very good reason; he has been accused of sexual assault by multiple women on two continents over a period of more than a decade. Perlroth includes only a two-sentence disclaimer later: “In 2017 several women accused him of drugging and raping them. After one accuser leaked an exchange in which he admitted as much, Marquis-Boire vanished without a trace. I never heard from him again.” Later in her endnotes she concedes that he didn’t disappear but simply stopped responding to her.

It’s a puzzling (and for readers who know the accusers, nauseating) choice to knowingly include a heroic profile of someone who has been accused of multiple serious crimes.


In the epilogue, Perlroth provides policy solutions that are almost all dead-on. “Lock down the code” may have no technical meaning, but it’s a clear and understandable directive. Her plea for companies to slow down and fix security vulnerabilities before products go to market, through a combination of regulatory penalties and incentives, is correct. Her call to segment and audit networks is perfect and precise. Telling users that their passwords leaked long ago and that they should turn on multifactor authentication, with a different strong password on each site, is exactly the right advice, and telling companies to patch their vulnerabilities is absolutely spot-on.

Her assertion that developers need to be provided with multifactor authentication and other verification tools is valuable but could be even more forceful; companies should require multifactor authentication for all their products and enable their developers to do so, rather than decreasing security for the purpose of achieving higher and faster sales.

However, the excellent policy prescriptions she makes don’t follow from the details of the story she tells earlier in the book.

Perlroth makes this clear herself when she concludes by citing a talk by then-head of the NSA’s Tailored Access Operations—essentially the nation’s top hacker—Rob Joyce at the Enigma cybersecurity conference: “Despite the attraction of zero-days, Rob Joyce … called zero-days overrated and said unpatched bugs and credential theft is a far more common vector for nation-state attacks.”

Her inclusion of this comment undermines one of the primary arguments of the book: It’s not exciting crashes that cause the car to die; it’s a lack of oil changes and tire rotation. At the end of the day, the main lesson is that human beings are lazy and don’t like to do regular maintenance.

Unfortunately, the humans who control budgets and read this book might take precisely the wrong lesson from being scared to death: that they should spend more on offensive cyberweapons, instead of learning the real lesson that it’s the civilians who need defending and basic, boring, inglorious protection.


It’s unfortunate that many of the technical critiques of Perlroth’s reporting over the years have been written by angry men on the internet. I have no doubt that the nasty sexist undertones of much of the technically accurate criticism leveled against Perlroth over the years would be absent if she were a man. Such online misogyny makes it difficult for any woman to sort through and understand which corrections being shouted at her should be internalized and which parts should involve the block button—an experience with which I am intimately familiar.

But a response that knowingly ignores the accurate technical critique (such as on the number of exploits in Stuxnet) or which exploit affected the city of Baltimore (it was RobbinHood, not EternalBlue) to build the concluding policy recommendations is not the answer. This book may speak important truths and thrill the lay reader, but it’s not unreasonable to expect the cybersecurity reporter at the newspaper of record for the United States to be as obsessive about facts as the people in the community she’s writing about.

We in the information security community are also to blame; we need to ask ourselves why what will likely be the best-selling book on cybersecurity this year was written by someone who doesn’t really understand or listen to the people who work in this community. Our insularity, arrogance, and uncompromising refusal to meet people where they live leave this gap in the market to be filled by sensationalism instead of by a humble acknowledgement that we need people outside this industry to understand what we do. We aren’t telling our own story clearly and simply, so someone else tried to tell it for us.

This is not a CAPTIS article. It was originally published elsewhere.