Changelog Interviews – Episode #432

Big breaches (and how to avoid them)

with Neil Daswani


This week we’re talking about big security breaches with Neil Daswani, renowned security expert, best-selling author, and Co-Director of Stanford University’s Advanced Cybersecurity Program. His book, Big Breaches: Cybersecurity Lessons for Everyone, helped to guide this conversation. We cover the six common root causes (aka vectors) that lead to breaches, which of these causes are exploited most often, and recent breaches such as the Equifax breach (2017), the Capital One breach (2019), and the more recent SolarWinds breach (2020).


Sponsors

Linode – Get $100 in free credit to get started on Linode. Linode is our cloud of choice and the home of Changelog.com. Head to linode.com/changelog OR text CHANGELOG to 474747 to get instant access to that $100 in free credit.

Retool – Retool makes it super simple to build back-office apps in hours, not days. The tool is built by engineers, explicitly for engineers. Learn more and try it for free at retool.com/changelog

Render – Get $100 in free credit to give Render a try! Plus they’re going to assign a world-class engineer to your account to provide guidance and answer any questions. Render is built for modern applications and offers everything you need out-of-the-box — one-click scaling, zero-downtime deploys, built-in SSL, private networking, managed databases, secrets and config management, persistent block storage, and Infrastructure-as-Code. Send an email to changelog@render.com to get your free credits.

Grafana Cloud – Grafana Cloud is our dashboard of choice – Grafana is the open and composable observability and data visualization platform. Visualize metrics, logs, and traces from multiple sources like Prometheus, Loki, Elasticsearch, InfluxDB, Postgres and many more.


Transcript



Play the audio to listen along while you enjoy the transcript. 🎧

So we are here and excited to talk about some big security breaches, cybersecurity breaches, with Neil Daswani. Neil, thanks for coming on the Changelog.

Thanks for having me. Thrilled to be here.

Well, I thought I would steal a couple of facts from your book to set the stage here; a couple of things you say right in the beginning… You say “In a series of breaches, key background data of over 20 million U.S. government employees and a large fraction of the U.S. consumer financial and social media records have been stolen, and in the past 15 years more than 9,000 data breaches have occurred.” This is something that’s going on all the time, isn’t it?

Yeah, that’s right. If we go back to 2015, for instance, the government’s Office of Personnel Management was breached, and that’s the breach in which the 20 million government employee identities were stolen… But that’s just one of many breaches. If you go a little bit earlier in that paragraph, I talk about how America’s been hacked; but the hacking of America has not been a singular event. It’s been through a series of breaches, like the Office of Personnel Management breach that targeted government identities… And like the Equifax breach, in which the consumer financial records of over 140 million Americans were stolen. If we look at some of the abuses and breaches at Facebook, a large volume of social media data about consumers has also been stolen. So you put all these things together, and it really makes up an attempt at hacking the country overall.

Let’s rewind back to 2007. You were working at Google, and you co-wrote this book, “Foundations of Security”, which was focused on web app vulnerabilities… And back then you saw that security on the internet was bad and going to get worse, but then you say you wouldn’t have been able to predict how bad it was going to get over the next 13-14 years. So you’ve cited a few things, but maybe just in plain words, how bad is it? Are we screwed, or what?

So back in 2007, back when I was an engineer at Google, the main concern that my co-author at the time, Christoph Kern, and I had was that software vulnerabilities could be used to conduct cross-site scripting attacks, SQL injection attacks, and plague a whole bunch of online properties.

[04:10] At the time, MySpace had gotten taken down for a few hours, because someone wrote a worm that spread through the social network so fast… It affected millions of profiles, and they had to take the service down in order to deal with it.

Another thing that was happening back at that time was worms. Worms like Code Red, and Nimda, and SQL Slammer, typically written by maybe one author, an amateur, caused a lot of disruption on the internet. So when I joined Google, Christoph (my co-author) was one of the folks that influenced me to join the company. After I joined the company, I had the absolute pleasure of meeting people like Vint Cerf. Vint was one of the two co-inventors of TCP/IP, the set of protocols on which the internet runs… And serendipitously we identified that I was his academic grandstudent, because he was on my Ph.D. advisor’s [unintelligible 00:05:09.25] committee back when my Ph.D. advisor was getting his Ph.D. And Vint was also concerned about how software vulnerabilities could be used to take down online properties, or could result in malware propagation… So he was kind enough to write the foreword for the book, and that was what we were concerned about at the time.

I think what we’ve seen now, fast-forwarding to 2013 and afterwards, given the number of mega-breaches that have taken place, it’s pretty clear that software vulnerabilities and malware are only two of the root causes that have led to these breaches. If we look at other major causes of breaches, things like phishing, unencrypted data, inadvertent employee mistakes, and third-party compromise and abuse have grown to be additional root causes that have resulted in even bigger breaches than the kinds of things we were worried about back in 2007 when I was an engineer at Google.

So how did we get here? Was it just focusing on too little? Because like you said, there are six different causes/vectors, and maybe the infosec community and those in software were trying to solve or route around those particular two things, when actually there was a much bigger surface area that we weren’t securing… I just wonder how, from 2007 to today, we got to this point where there have been so many breaches - and not just minor breaches, but these mega ones… And they all seem to happen for different reasons. How do you think we got here?

So the way that we got here was not sudden; it was a gradual sort of thing. When we look at things like phishing, for instance… Phishing was an issue prior to 2007. The word “phishing” was first coined in a newsgroup called AOHell (America Online Hell)…

Nice…

…in, I believe, the late ‘90s. And phishing was always a concern because the initial protocols that the internet was built on - the email protocol, for instance, SMTP (Simple Mail Transfer Protocol) - would allow anybody on the ARPANET (the predecessor to the internet) to send an email; anybody could send an email to anybody else, claiming to be whoever they wanted to be, because all the organizations, the initial universities and military organizations on the ARPANET, trusted each other. But as the internet got commercialized, phishing started getting used more and more. It was initially used to try to lure people to fake banking sites, for instance, and try to get people to enter their username and password credentials. But what we’ve seen is that phishing has also evolved. Many attacks that take place these days are spear phishing attacks, where the attacker wants to break into an organization; they figure out who the administrative assistant and the CEO are, they figure out how the email addresses are crafted, and they send spear phishing emails to them.
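For developers who want to see that protocol weakness concretely, here’s a minimal sketch of how SMTP simply takes whatever From address the client asserts. It points at a local debugging sink only - the addresses and port are placeholders - and nothing in the protocol itself verifies the sender’s identity, which is exactly the gap phishers exploit.

```python
# A minimal sketch: SMTP accepts the From header on faith.
# Run a local sink first, e.g.: python -m aiosmtpd -n -l localhost:1025
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "security@bank.example"   # asserted, never verified by SMTP itself
msg["To"] = "victim@example.com"
msg["Subject"] = "Please verify your account"
msg.set_content("Click the link below to re-enter your credentials...")

# Deliver to the local debugging server; a real phisher would relay
# through any mail server willing to accept the message.
with smtplib.SMTP("localhost", 1025) as server:
    server.send_message(msg)
```

Modern defenses like SPF, DKIM, and DMARC were bolted on decades later precisely because the original protocol has no sender authentication.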

[08:31] So phishing was always an issue, but what you could do with phishing attacks grew over time, and has led to bigger and bigger breaches. Now, we’ve talked about software vulnerabilities; unencrypted data has also become more of an issue… You know, back in 2003, when California was the first state to pass a data breach notification law, the law was structured such that if somebody’s name and some sensitive identifier about them was inadvertently exposed or stolen, then that needed to be reported as a breach.

So there have been a whole bunch of breaches due to unencrypted data that have been getting reported since 2003, but most of them have been smaller in nature. I would say that if we look at another one of the root causes, third-party compromise and abuse, that really started becoming an issue in 2013. When Target got breached back in 2013, and over 40 million credit card numbers were stolen, the attackers initially broke into a company by the name of Fazio Mechanical Services, which ran the heating and air conditioning for all of the Target retail stores, and for a bunch of other retailers as well. The attackers stole network credentials from Fazio Mechanical Services, and then, because the Target and Fazio networks were tied together in one flat network, the attackers were able to pivot from Fazio’s network into Target’s network.

If we look at just the following year, in 2014 the JP Morgan Chase breach occurred, because they had a third party by the name of Simmco Data Systems that ran a website used to manage their charitable marathon races. The attackers leveraged a vulnerability at Simmco Data Systems to then break into JP Morgan Chase. And JP Morgan Chase was spending $250 million annually on the security of their bank, and the attackers were able to get out with 70 million names and email addresses of JP Morgan Chase customers, which could then be used to target consumers with spear phishing attacks. It was another example where a third party was leveraged in order to conduct the attack. So third parties have become more and more of an issue.

And then finally, there are all kinds of other inadvertent employee mistakes that can be made. If we look at cloud services these days, if you have just one misconfiguration of your Amazon S3 buckets - you misconfigure a bucket with important, sensitive data in it to be public instead of private - well, that counts as a breach. So the root causes that I was concerned about did indeed grow over time, and you can kind of see the progression over the years. But what has been really stable over the past seven years is that the overwhelming majority of breaches have been due to these six root causes.
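That S3 misconfiguration is also one of the easier root causes to check for automatically. Here’s a minimal sketch using boto3 that flags any bucket whose ACL grants access to everyone; it assumes AWS credentials are already configured, and a real audit would also inspect bucket policies and public access block settings.

```python
# A minimal sketch: flag S3 buckets whose ACL grants access to AllUsers.
import boto3

PUBLIC_GRANTEE = "http://acs.amazonaws.com/groups/global/AllUsers"

s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    acl = s3.get_bucket_acl(Bucket=bucket["Name"])
    for grant in acl["Grants"]:
        # Public grants show up as a Group grantee with the AllUsers URI
        if grant["Grantee"].get("URI") == PUBLIC_GRANTEE:
            print(f"WARNING: bucket {bucket['Name']} is publicly accessible")
```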

And they have gotten far more sophisticated… Just thinking about phishing alone, I remember when you would look at a phishing attempt and somebody would post a screengrab or whatever it was, of somebody trying to phish them back in the day… And it was laughable how pathetic the attempt was to fool somebody. It would only work on, let’s just say, the most vulnerable of internet users. And the phishing attempts today are so pointed, so well done.

[12:02] I just got one the other day, which was acting as if it was Google, contacting me about something with my account. It was actually, of course, a spoofed Google email address, but it linked back to a Google form, so it was a Google domain, a Google doc that had a form… And I got there - because I just followed it with curl and stuff, just to see where it was gonna take me. Because I could tell, but I still wasn’t 100% sure. I was like “Could this possibly be Google? I don’t think so, but maybe… I’m gonna follow this trail…” And it took me a while. I had to go three or four curls, following redirects, to find out “Nope, it was definitely just a phishing attempt.”
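For anyone who wants to trace a suspicious link the same way without clicking it in a browser, here’s a minimal sketch using Python’s requests library; the URL is a hypothetical placeholder, not a real phishing link.

```python
# A minimal sketch: follow a suspicious link's redirect chain and print
# each hop, the same investigation described above with curl.
import requests

suspect_url = "https://example.com/suspicious-link"  # hypothetical placeholder

response = requests.get(suspect_url, allow_redirects=True, timeout=10)

# requests records every intermediate redirect response in .history
for hop in response.history:
    print(f"{hop.status_code} -> {hop.headers.get('Location')}")
print(f"Final destination: {response.url}")
```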

The sophistication of the bad actors - maybe because there’s so much more to gain - has really ramped up over the years.

That is exactly right. In fact, when I think about sophisticated phishing attacks - and in my book on big breaches I talk about a lot of big breaches - one that comes to mind is from 2016; probably one of the most interesting phishing attacks was conducted against John Podesta, who was the chairman of Hillary Clinton’s campaign. And what happened at that time is that the Democratic National Committee was under attack by the Russians. So John Podesta got an email - they were using Google Apps - and it was perfectly crafted; it looked exactly like the Google Apps password reset email. Basically, it told him “Hey look, we think somebody is trying to attack you. You might wanna change your password.” So John Podesta - or one of the staff members - got the email, and did the right thing. They didn’t just click on the phishing link in the email, but rather forwarded it to the IT department at the Democratic National Committee and asked “Hey, is this legitimate? Should I reset my password?” The IT department responded and said “Yes, we are under attack. Please reset your password.” Except what John Podesta or the relevant staff member did was not go to the link that the IT staff member told them to go to - google.com/security, or whatever it was. Instead, they went back to the original email they got from the attacker, and clicked on the link there. And the attackers were then able to log into John Podesta’s Google Apps email account and make off with 60,000 emails that they then released.

A pretty interesting phishing attack, and today, as you can imagine, the Democratic National Committee uses two-factor authentication, so that simply stealing the password is not good enough to steal emails in droves.

Yeah. We actually linked out to something recently that said “That’s not how 2-FA works.” They were essentially pointing out that 2-FA was being presented as a security measure against the wrong threat - it really is a security measure, just not in the way they were saying it was. Basically, 2-FA is meant to prevent attackers from masquerading as you, not to prevent fake sites from masquerading as real sites. It’s sort of a backwards thing. 2-FA enables you to be you, rather than somebody else being you, because it requires multiple factors; I’m speaking to a security expert here, of course, but… You know, it disables the ability for someone else to be you, unless they have the multiple factors that say “This is you. This is who you are, because these devices have consensus that they agree they’re you.”

That’s right. And there’s actually many different flavors of two-factor authentication, some better than others. When you log into a website with two-factor authentication, you have to present your username and your password, but you’ll also have a, say, two-factor code sent to your mobile phone, and you have to enter the four/six/eight-digit code, whatever it is. But there’s still many ways for the attackers to beat that.

[16:10] For instance, if an online site is relying on SMS in order to send two-factor codes, one of the things that the attackers can do is what’s called a SIM-swapping attack. What they’ll do is call up your cell phone provider, the wireless carrier, and pose as you. They’ll use publicly-searchable information about you - your name, your address, your phone number, how many pets you have, how many kids you have, whatever they can gather from Facebook - and if they can figure out the verbal passcode that you use for your account with your wireless carrier, they can convince the carrier to switch your phone number to a SIM card and a phone that the attacker owns, instead of your actual phone.

And once the attacker does that, once they have SIM-swapped and taken control of your wireless carrier account, then they can get all the two-factor codes that are sent to you when you try to log into your bank, or whatever.

So there are many ways of doing two-factor… One way is SMS, but it does have that vulnerability that makes you susceptible to SIM-swapping. There are additional ways to do two-factor authentication where you use an app like Google Authenticator, or Duo or whatnot, where you get a six-digit code that’s generated by an app on your phone. Even if attackers can steal your SMSes, they won’t have visibility into that code… It ends up being more secure against that channel of attack.

On the other hand though, there is this concept of “What is a completely non-phishable defense?” To an extent, whenever you go to a website to log in, if you have to enter your username and your password and say a two-factor code, you can imagine that any attacker can start up an impostor site that will ask you for the same three things - the username, the password and the two-factor code… Irrespective of how that two-factor code got to you or was generated.

So one of the challenges with these authenticator apps is that attackers can still set up impostor phishing sites. But there’s an even better two-factor which I could tell you about. Does that make sense?

Oh, yeah.

Yeah, so the apps like Google Authenticator, Authy etc. - they are on a rotating key. That code rotates every N seconds. I’ve not implemented it as a developer, so I’m not sure how the server side syncs up with that code… But setting that aside, if somebody were to phish you and get you to enter all three things, they would have N seconds to then go and sign into your actual email or whatever it is with that code, before it expires. I think that N is like 60 seconds… I’m not sure what it is. But yes, absolutely, please tell us the best way, because I’ll just do that.

Yes, that’s right. And by the way, 60 seconds is a lot of time for an attacker site to receive the two-factor code and then do an automated authentication into your account…

Totally.

…and then transfer money out, or whatever it is.

Plenty of time, yeah.

So 60 seconds is a long time. And by the way, for those folks in the audience who are software developers, there are actually two different standards, two internet RFCs, that are used to design that kind of authentication: what’s called HOTP and TOTP. HOTP uses an HMAC, a hashed message authentication code, to generate the two-factor codes… And then TOTP uses time. So it doesn’t just rely on a seed; there’s a synchronized clock between the authenticator and the website. So there are two ways to do that. But at the end of the day, it is possible to get phished by having [unintelligible 00:20:12.25] that accepts a username, a password and a two-factor code.
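To make that concrete - and to answer the earlier question about how the server side syncs up - both sides derive the code independently from the same shared seed, so nothing needs to travel between them at login time. Here’s a minimal sketch of TOTP (RFC 6238) using only the standard library; the secret is an example value, and real validators typically also accept a window of adjacent time steps to tolerate clock drift.

```python
# A minimal sketch of RFC 6238 TOTP. HOTP (RFC 4226) is identical except
# the moving factor is an event counter instead of the current time step.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    # Moving factor: number of 30-second periods since the Unix epoch
    counter = int(time.time()) // period
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    # Dynamic truncation, per RFC 4226 section 5.3
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # example secret; prints a 6-digit code
```

Because the server holds the same seed and clock, it computes the same six digits and compares - which is also why a phished code remains valid for the rest of its time step.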

[20:18] So the most secure way to do two-factor authentication is to use what’s called a security key. A security key is a piece of tamper-resistant hardware which you have to either plug into your laptop or your mobile phone… Or many mobile phones have secure enclaves and whatnot on them that can be used to generate the appropriate two-factor authentication information… But the idea there is that in order to log in, you provide your username and your password, and what the website checks for is the ability of your security key - whether it be something like a Yubico YubiKey, or whether it be your mobile phone - to generate the two-factor code, but not in a form field that you have to manually enter. And that is a non-phishable form of defense against phishing.
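The property that makes this non-phishable is origin binding: the browser, not the user, tells the key which site is asking, and that origin is covered by the signature. This is a conceptual sketch only - the names and framing are simplified stand-ins for the real WebAuthn/FIDO2 flow - but it shows why an assertion captured on an impostor site fails on the real one.

```python
# Conceptual sketch of origin-bound challenge signing; NOT the actual
# WebAuthn wire format.
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

private_key = ed25519.Ed25519PrivateKey.generate()  # lives inside the token
public_key = private_key.public_key()               # registered with the real site

def sign_assertion(origin_seen_by_browser: str, challenge: bytes) -> bytes:
    # The browser supplies the origin; a phishing page cannot lie about it.
    return private_key.sign(origin_seen_by_browser.encode() + challenge)

challenge = os.urandom(32)

# The victim is on an impostor site, so that origin gets signed.
assertion = sign_assertion("https://accounts-google.example", challenge)

# The real site verifies against its own origin, so the relayed assertion
# is rejected even though the key itself is genuine.
try:
    public_key.verify(assertion, b"https://accounts.google.com" + challenge)
except InvalidSignature:
    print("assertion rejected: origin mismatch")
```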

And if you look at that particular set of security key technology, when Google deployed it - I believe in 2017 or 2018, I can’t remember exactly which - they deployed it for tens of thousands of employees, and when they looked at it the next year, there had been absolutely no successful phishing attacks. And both Google and Salesforce have used security keys to eliminate phishing as a root cause of any potential breach against themselves. And I really hope that more organizations learn from that experience and leverage security key technology to eliminate phishing, instead of having it continue to be a major root cause of breaches.

So Neil, there’s several breaches we could talk about - big ones, small ones, a couple recent… But there’s some ways in. What are the common ways in, and what are some of the most recent breaches? I know Capital One happened recently, Equifax, and SolarWinds is ongoing… But where do you begin to sort of break down the vectors into these breaches in particular?

Sure. So the six major technical root causes of attacks and breaches are phishing, malware, software vulnerabilities, unencrypted data, third-party compromise/abuse, and inadvertent employee mistakes. And if we talk about, for instance, the SolarWinds hack - for those of you who have heard of the SolarWinds hack, which occurred earlier but was announced in December 2020, you may have heard it being compared to a digital Pearl Harbor. But I would say that there are some major differences between the SolarWinds hack and Pearl Harbor.

[24:18] First of all, Pearl Harbor was a complete surprise when it happened. And if we look at the SolarWinds hack, the way that attackers broke into many governments and organizations was by using SolarWinds and their software as a third party.

If we look at third party compromises, there have been third party compromises going back to 2013 and 2014. I happened to mention the Target breach - it was initially caused by a third-party; the JP Morgan Chase breach was initially caused by a third party. Facebook has had a number of hacks and breaches over time in part due to third parties like Cambridge Analytica. So third-party compromise and abuse is nothing new.

In addition, if we look at attacks against the government - if we go back to the Office of Personnel Management breach, in which 20 million government employee records were stolen - the government getting targeted and hacked by foreign adversaries is also nothing new.

And then thirdly, if we look at hacks that have been attributed to the Russian government or organizations working for the Russian government, that’s also nothing new. If we look at the Yahoo breaches that were announced in 2016, we should keep in mind that there were four Russian hackers responsible for that, two of whom were ex-FSB agents. The FSB is the new KGB.

So if we look at those aspects of the hack, there have been components of that taking place for years. And I think if there is anything that is new and novel, it is that the scale of the attack was probably larger than in the past, based on some measures… And I would also say that it was a case in which a third party was leveraged to hack multiple government organizations, whereas in the past there has typically been one third party used to hack some major target, not multiple major targets.

So I’d say that the SolarWinds hack is not a digital Pearl Harbor, because it shouldn’t have come as a complete surprise. I think the other aspect of the SolarWinds hack that’s interesting - beyond it having all the previous components - is the carnage of it, the after-effects, compared to Pearl Harbor… When Pearl Harbor got attacked, all the carnage was immediately observable. With the SolarWinds hack, I think the impact is going to be understood over time. Months or years. Not immediately the day after.

[27:49] Those are just some of my thoughts on the SolarWinds attack. And I think the other thing to keep in mind - I’d say if there’s a third thing to talk about with regard to SolarWinds - is that based on new information that has come out, 30% of the organizations that were impacted were impacted through channels other than SolarWinds. And it just happened to be the case that we are discovering the hack and attack in a particular order. The order in which the [unintelligible 00:28:20.12] conducted the attack may have been wildly different.

So the SolarWinds attack is certainly interesting, but the components of it are not new; the scale has been larger. We’ll learn more over time, and we’ll also learn how much SolarWinds was or was not at the heart of it, once everything gets pieced together.

It must be difficult to go back forensically and uncover the truth. I mean, surely, there’ll be things that we’ll never know for sure; the order of events, how things went down… But I guess in the digital world we can at least timestamp and get that kind of chain of custody stuff a little bit better than they used to. But the work of going back and forensically discovering what all went down, and by whom etc. has to be deep and tedious, and probably rewarding work, if we can dig anything out of that history.

Yeah, the forensics involved in understanding how breaches have occurred, and attributing them to particular attackers, is indeed very interesting, painstaking work. And if I think back to a bunch of the breaches that we discussed in the Big Breaches book, there are certainly some attacks - for instance the attack against Yahoo, and the attacks against OPM - where there wasn’t enough forensic information to piece together how the attackers even got in. It’s suspected that phishing and malware were two of the key vectors, two of the key root causes… But it’s unclear.

Now, there’s other [unintelligible 00:30:05.29] For instance, if we look at the Capital One breach, in which a single former Amazon employee was able to leverage a server-side request forgery vulnerability and a firewall misconfiguration - the investigation there was very speedy, and that happened because Erratic, the handle of the attacker who got in… She left her resume in the same GitLab repository where she archived the hundred million credit applications that she stole out of Capital One’s Amazon S3 buckets.

Really?

And so obviously, with her resume there, investigators were very easily able to follow up and make the attribution. I think whether it’s cyber-criminals or whether it’s [unintelligible 00:31:06.28] adversaries, the authorities are always looking for the breadcrumbs, and they’re always looking for the mistake that the attacker makes, because nobody’s perfect. Any attacker - if you study them long enough, if you study their trail long enough, you’ll find something. Sometimes investigators just need one mistake.

That’s a big mistake.

This whole time we’re having this conversation - I’m a gigantic fan of Mr. Robot and Elliot… So I just think about how Elliot would act in terms of a hack, or a rootkit, or a malware attack, or a 2-FA spoof, or all these different things that he did during the show… And I just think about it like that - he’s the kind of person, in that show at least, who didn’t make mistakes, or not many mistakes… But that is the truth though - you can follow somebody long enough and you see… Because they’ve got limited time; they’ve got limited time to do a breach, or to steal that code, to 2-FA spoof that person, or whatever it might be… And they’re going to slip up, somehow, someway…

[32:09] I’m curious though about that resume - what if it wasn’t legit? Like, it wasn’t really her… Because forensically, as a hacker, you can fake mistakes, too. And you can frame somebody. I’m not saying she was framing anyone, I’m just saying that it seems so obvious.

How do we know…?

Like, “My resume is chillin’ in this GitLab repository.” It just seems too pointed… I don’t know.

It seems like she couldn’t have been that dumb…

I’ll try to add a couple of things. First, as we’re having this discussion, I’m reminded of a story that someone once told me about an interview with an FBI agent. And the FBI agent says “To catch most criminals, we wait for them to make the mistake and then we catch them.” So the interviewer asks “Well, so what about the criminal that doesn’t make a mistake?” and the FBI agent says “Well, we don’t catch them.” So that’s one story that comes to mind.

But going back to this particular Capital One breach and Erratic - I agree that if it was just the resume being in the GitLab repository, you could look at that and think somebody might have tried to frame her… Of course, in this particular case she was tweeting about the attack, publicly, on Twitter, as she was doing the attack… And there was some concern also about the mental stability of the attacker in this case. But it appears that she created enough evidence, and even posted things on Twitter saying “It’s the equivalent of strapping a bomb to my chest”, or something like this.

Hm… What was her MO? What was her motivation? Why was she doing it?

I do not know… I think she was probably technically capable of doing it… You know, she might have been mentally unstable, might have wanted attention, and this seemed like a great way to get it… I don’t know. I wouldn’t even wanna speculate.

Well, a lot of people write masterminds… In fiction at least, they like to explain why they’re doing what they’re doing; especially if the goal is attention, a lot of the time.

The monologue is famous.

I wonder if she came out and gave her monologue.

Exactly.

I didn’t know this was a single attacker, who was also tweeting, and leaving a paper trail on GitLab, so I was just wondering if maybe she published her motivations…

Well, I hadn’t heard them as of writing the Big Breaches book and the chapter on the Capital One breach… Which I thought was pretty technically interesting as well. I don’t believe the monologue has appeared, but who knows; maybe if it does show up, we’ll have to post something on the book’s website pointing to it.

Yeah. A second edition. So Capital One - this was an ex-employee who had… Did she have insider information about this particular vulnerability, or did she just find it by… You said there was a misconfigured firewall, and there was also a server-side vulnerability that she was taking advantage of. Was this a case where she had knowledge of that system, which gave her the advantage, or was she just out there fuzzing it and seeing what she could find?

So I think in this particular case she was an ex-Amazon employee, and she probably had technical skills and knowledge. Based on my research and study of the attack, I don’t believe she had any insider knowledge about Capital One. I think it was revealed that she was probing not only Capital One, but a bunch of other companies as well…

Gotcha.

…and simply knew enough about how cloud systems work. And what she had identified is that Capital One had an Amazon EC2 instance - pretty much a virtual machine - that was running an application with a server-side request forgery vulnerability. Basically, what that means is that she was able to send requests to that EC2 instance, and the EC2 instance would query Amazon’s metadata service, and then relay the responses back to the attacker.

[36:14] So from a pure computer science perspective, the Amazon metadata service is required so that things running on Amazon EC2 instances can even ask things like “Well, what’s my IP address?” It’s running on cloud, it shifts from machine to machine… You need things like your IP address.

And the intention is that only the EC2 instances should be able to query the metadata service, because they can also ask for things like security credentials. And what happened in this case is that because Erratic identified that the EC2 instance had this server-side request forgery vulnerability, she was able to ask questions like “Hey, could you please give me the security credentials for a whole bunch of Amazon buckets that are part of Capital One’s deployment?” The EC2 instance would query the metadata service, and the metadata service would say “Sure, I’d be happy to give you the security credentials.” So it would give the security credentials to Capital One’s legitimate EC2 instance… But the problem is that the instance would then relay that information back to the attacker - any random person on the internet.

And once Erratic was able to get those security credentials, she pretty much cached them in her local AWS command-line client. And once she had those credentials, for any queries that she made to Capital One’s Amazon S3 buckets, there was no way for S3 to tell the difference between the attacker and a legitimate program that was trying to access the hundred million credit applications.

So once she got the credentials, she asked the Amazon S3 service for Capital One for all that data, and it happily handed it back to her [unintelligible 00:38:08.13]
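For developers, the vulnerable pattern Neil describes boils down to a server that fetches a caller-supplied URL and relays the response. Here’s a minimal sketch of that anti-pattern - the endpoint and parameter names are hypothetical - showing how, on EC2, pointing it at the link-local metadata address leaks temporary credentials exactly as in the Capital One case.

```python
# A minimal sketch of a server-side request forgery (SSRF) anti-pattern:
# the server fetches whatever URL the caller supplies and returns it.
from flask import Flask, request
import requests

app = Flask(__name__)

@app.route("/fetch")
def fetch():
    # Vulnerable: no allow-list, so any URL gets fetched server-side,
    # including the link-local metadata address 169.254.169.254.
    url = request.args.get("url", "")
    return requests.get(url, timeout=5).text

if __name__ == "__main__":
    app.run()

# An attacker would request something like:
#   /fetch?url=http://169.254.169.254/latest/meta-data/iam/security-credentials/<role>
# and receive the role's temporary keys, usable with the AWS CLI exactly
# as described above. Mitigations include allow-listing outbound hosts
# and using IMDSv2, which requires a session token the relay won't have.
```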

Fascinating stuff. How about the Equifax breach? Because that was also credit records. I think that was more - it was like 150 million, something like that? I had the pleasure of being a part of that one, as one of the millions who got their stuff leaked…

Yeah, I’m happy to talk about the Equifax breach. If you have read about this particular operation in the media, what one associates and attributes to it is the Apache Struts vulnerability that was used to initially get into Equifax. Basically, in March of the year that the breach occurred there was an Apache Struts vulnerability; it was a [unintelligible 00:38:54.17] vulnerability which allowed any attacker on the internet to request that the server run commands of the attacker’s choice, and it would happily do so - a remote code execution vulnerability - and there was a [unintelligible 00:39:10.01] very quickly. But there are a lot of [unintelligible 00:39:13.16] technical details, and there was a lot more to the breach than just how they got in.

What had happened is that within a fairly short period of the Apache Struts vulnerability being announced, it was quite observable that the vulnerable server at Equifax was getting queries from Chinese-attributed IP addresses, probing as to whether or not it was vulnerable. And the particular probes would do things like inject an HTTP header that had a command to run, instead of typical header information…

[39:56] And these probes would do things like change the current directory to a shared memory device, drop a file into shared memory so that it wouldn’t touch disk and couldn’t get picked up by antivirus scanners… And then change the permissions of that file in shared memory to be executable, and then run the thing.

So those probes occurred at around the same time that Equifax’s vulnerability management team scanned Equifax’s servers to see “Okay, what servers do we have that are vulnerable to this thing?” But the problem was they were using a McAfee vulnerability scanner that had been end-of-lifed and was not being actively maintained… And that scanner was also only scanning the root directories of the servers. It was not scanning subdirectories. And on the particular server that was vulnerable at Equifax, the vulnerability was present in a subdirectory.

So when the vulnerability scanning team at Equifax sent out the notes to say “Please patch our vulnerable Apache Struts servers”, the scans came back negative, saying there were no vulnerabilities. So the team might have thought “Oh, this must all be patched.” In reality, the scanner was returning a false negative.
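That false negative is easy to picture in code. Here’s a minimal sketch of the difference between a scanner that only inspects a server’s top-level directory and one that walks the whole tree; the path layout and version prefix are illustrative, not Equifax’s actual setup.

```python
# A minimal sketch of the scanning gap: a root-only scan misses a
# vulnerable library that lives in a subdirectory.
import os

VULNERABLE_PREFIX = "struts2-core-2.3."  # stand-in for an affected range

def scan_root_only(webroot: str) -> list[str]:
    # Only looks at files directly inside webroot
    return [f for f in os.listdir(webroot) if f.startswith(VULNERABLE_PREFIX)]

def scan_recursive(webroot: str) -> list[str]:
    # Walks every subdirectory under webroot
    hits = []
    for dirpath, _dirnames, filenames in os.walk(webroot):
        hits += [os.path.join(dirpath, f) for f in filenames
                 if f.startswith(VULNERABLE_PREFIX)]
    return hits

# If the jar sits in webroot/app/WEB-INF/lib/, scan_root_only() returns []
# (a false negative, as at Equifax) while scan_recursive() finds it.
```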

So what happened a couple months after that is that additional Chinese-attributed requests hit the still-vulnerable Apache Struts server, and basically established a foothold; they got some files… And that formed the beachhead for their attack. They got into that one machine, they started scanning, and they identified that there were 60 other machines or databases that they could query… But they didn’t have credentials for those databases. So what happened is the attackers found a configuration file that had unencrypted credentials for the databases, and that was one way that they got information out of the databases.

Another thing the attackers did is take advantage of a SQL injection vulnerability. So while everybody knows that the Apache Struts vulnerability was associated with Equifax, what fewer people know is that there was a SQL injection vulnerability in one of the databases, and the attackers used one of the web shells they had planted to exploit that SQL injection vulnerability and steal data out of one of the other databases. So there were many things that had to go wrong; it was not the Apache Struts vulnerability alone that led to the Equifax operation.
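As a brief aside for developers, the SQL injection class of bug Neil mentions comes from splicing untrusted input into a query string. Here’s a minimal sketch using Python’s built-in sqlite3 - the table and payload are illustrative - showing the vulnerable pattern next to the parameterized fix.

```python
# A minimal sketch of SQL injection and its fix with parameterized queries.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, ssn TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', '123-45-6789')")

user_input = "x' OR '1'='1"  # classic injection payload

# Vulnerable: the payload rewrites the WHERE clause and matches every row.
rows = conn.execute(
    f"SELECT * FROM users WHERE name = '{user_input}'").fetchall()
print(len(rows))  # 1 -- every user in the table matched

# Safe: placeholders keep the input as data, never as SQL.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
print(len(rows))  # 0 -- no user is literally named the payload string
```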

It seems pretty easy to point to third-party issues. It seems like that’s where the trust breaks down. You’ve got this trust between the primary party and some sort of third party… And some [unintelligible 00:42:46.12] but it seems like the third-party vulnerability is just the entry point, not the whole problem. It’s part of the problem, but you find some sort of vulnerability there, and then they have the experience and know-how to masquerade, to query databases, find files, or set something into RAM instead of on disk… A lot of inner workings of how security measures are watched, monitored and whatnot… It’s not simply “Oh, I hacked an open source dependency, and boom - I’m in.” It’s much more than that.

Yes, that’s right. So I would say that third-party compromise, third-party abuse, is a very significant point of entry. And as we’ve talked about with Target, JP Morgan Chase, and Equifax, who had third-party components and third-party companies that were leveraged as part of the attack - I’ll mention that it’s not always a third party. If we look at Facebook, for instance, there was a breach that they suffered in 2018, where tens of millions of access tokens were stolen; access tokens which would allow people to log in as various Facebook users… And in that case, there were three vulnerabilities that were used altogether, not all of them third-party.

[44:06] The three vulnerabilities came together like this - the feature that got abused at Facebook was the Facebook View As functionality, which allows you to view your Facebook profile as a member of the public… And the first vulnerability was that the View As feature incorrectly allowed somebody to post a video.

The second vulnerability was one in which the video uploader incorrectly generated an access token that had the permissions of the Facebook mobile app. And the third vulnerability was that the access token was generated not for the user as a viewer, but for the user who was being looked up. So all those three things came together in a much more sophisticated attack, in which three vulnerabilities had to be leveraged together, not all of which were third-party. I believe that was first-party code there. So I’d say that both first-party and third-party vulnerabilities are important and significant when it comes to breaches.

And by the way, let me mention that Facebook did a very nice and thorough investigation of that 2018 breach; it was great to see the transparency that Facebook had when they investigated it. To put a little bit of perspective on this - looking at Facebook in 2016-2017 and before, there were certainly a bunch of abuses of the platform taking place, where attackers were able to use the fact that APIs at Facebook could just be queried very easily. And once Facebook shut all of those other paths down… If I had to guess - and this is just a guess - this particular hacker who used these three vulnerabilities in the Facebook View As profile bug, if they wanted to steal information from Facebook profiles, was forced to do something much more sophisticated. But I really liked the fact that Facebook was very transparent and posted some of the technical details in a blog post. So I’d say that’s the authoritative information. And if I was slightly inaccurate, my apologies, but there’s a great Facebook post on it.

It’s amazing how much more sophisticated a nation state or a highly motivated actor is in 2021 - or even back when this one happened - compared to where we started this conversation, with the Samy MySpace hack of 2005. You know, the era of worms and viruses that were either accidental, or for fun and got out of control, or were maybe malicious in some cases… They’ve sure grown up since then.

This three-vulnerability combo to get in the Facebook thing - that’s an amazingly impressive hack, isn’t it?

That is a much more sophisticated hack. I was impressed with Facebook’s speed at which they were able to diagnose and debug and troubleshoot and identify the root causes behind that.

Thinking about all these breaches over the years - there have certainly been a bunch of breaches where, as in the case of the Samy worm, one cross-site scripting vulnerability could be leveraged to pretty much take down MySpace. Or the one Apache Struts vulnerability that was the initial point of entry in the Equifax breach. We have seen the attacks become more sophisticated, as per the Facebook example that we talked about; but if I think about the Capital One breach - a server-side request forgery vulnerability and a firewall misconfiguration - that was perhaps not as sophisticated, and done by one person.

[48:08] There’s a saying in the security community that attacks only get better. And I’d say the simple attacks and what people can do with one vulnerability - those issues still exist, but now on top of that we have to deal with more sophisticated attackers at the same time.

So Neil, you have painted a bleak picture of Swiss cheese out there, with all these holes, and a world of cyber-criminals doing what they do, and breaching all of our large and small organizations… Where do we go from here? What do we do about it? You have in this book a list of seven habits of what you call highly effective security organizations… So it’s not all just storytelling; you have some prescription here as well. How can we route around or solve the problems that we’re seeing out there?

Yes, thank you for asking. The Big Breaches book is not all about how these breaches have happened; half of the book focuses on the breaches, and the other half focuses on “What do we do about it?” and “How do we get to a better state of the world?” As you mentioned, the second half of the book starts off with a chapter on what the right habits are.

My co-author, Moudy Elbayadi, and I are both fans of Stephen Covey and his “The 7 Habits of Highly Effective People”, which you can use for personal development… And so what we thought we’d do is come out with the seven habits of highly effective security for organizations. Some of our habits are similar to and build on what Stephen Covey talks about. For instance, our first habit is to be proactive, prepared [unintelligible 00:52:02.06] Stephen Covey in his work also focuses on being proactive. So we think that being prepared and being paranoid is the right way to go. [52:13] One of our other habits is to make sure that you build and design security in. Security is a property, a characteristic, similar to quality. You can’t exactly build a product, launch it, and then try to make it a quality product afterwards. Quality is something that’s gotta be inherent and built in. Security is just a type of quality, and needs to be built in from the beginning.

We also believe that in order to achieve security, one of our habits is that you’ve gotta automate. If you rely on your users or developers to manually take the right step every time in order to get things right, it’s gonna be very hard. So we believe in heavy automation; relying heavily on automation in finding vulnerabilities is very important, which I can talk about in just a second.

Another habit that we believe in is to measure security - measure it quantitatively and qualitatively. And then finally, we also have a habit around continuous improvement. In Stephen Covey’s book he talks about sharpening the saw - ensuring that you’re always getting better, always sharpening that blade. In our corresponding book chapter we talk about the importance of embracing continuous improvement, and making things 1% better every day. Over time, that’ll compound like you wouldn’t believe.

So those are some of the habits. But I’d also be happy to talk a little bit about [unintelligible 00:53:42.20] book we give advice for how to go about addressing the root cause of software vulnerabilities.

Let’s start with a couple of these habits and then we’ll go from there. Measure security - for instance, what does that look like for a software team?

For a software team, one thing that you can measure is how many vulnerabilities you’re finding in your code with a scanner - whether it be a static analysis scanner, whether it be a dynamic analysis scanner, whether it be based on penetration testing that you do, or whether it be based on bug bounty programs, where you have external researchers trying to find vulnerabilities…

I think one thing that you can look at is the number of vulnerabilities that you’re finding, say, using those automated means. The way to look at that is that it’s the tip of the iceberg. The scanners that we have are better than they’ve ever been, but they’re still not as sophisticated as, say, a cryptography expert reviewing the guts of your authentication code. And if the scanners are finding vulnerabilities in your code, it means you’ve probably got a lot under the tip of the iceberg to worry about.

Another way to think about it is that the scanner is a flashlight. You shine a flashlight in a room. If you see a cockroach, chances are there are a lot more cockroaches than just what you see with the flashlight. So it’s important, of course, to get to a point where your scanners are not identifying vulnerabilities in your code; but once that’s done, chances are there are still more security bugs in your code, and you’ve gotta then start using white-box pentesters, and/or bug bounty programs and other things where you have sophisticated humans looking at the code to find the additional vulnerabilities.
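As a small illustration of the “measure security” habit, here’s a minimal sketch that aggregates scanner findings into per-severity counts you could track release over release. The JSON shape here is hypothetical; real scanners emit richer formats such as SARIF.

```python
# A minimal sketch: turn raw scanner findings into a trackable metric.
import json
from collections import Counter

# Hypothetical scanner output; real tools emit SARIF or similar formats.
report = json.loads("""
[{"rule": "sql-injection", "severity": "high"},
 {"rule": "xss", "severity": "medium"},
 {"rule": "xss", "severity": "medium"}]
""")

by_severity = Counter(finding["severity"] for finding in report)
print(dict(by_severity))  # {'high': 1, 'medium': 2}
```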

[55:49] One thing about security is that you’re never finished with it, just because you’re never finished with the software. Any successful software company has more software coming down the pipeline every single day… Let’s say we get past the shine-a-light-in-the-room phase, and you keep shining the light on a routine basis and there are no cockroaches there… Then you’re at a phase where you’re saying “Well, we need sophisticated third-party auditors, pentesters that we’re going to hire.” What are best practices around that? These can be expensive things, the software is changing… They can finish their audit and then you introduce a vulnerability the next day that you don’t know about… Are there best practices around measurement? Like “You should have a third-party audit once a year, or every six months”, or “You should have a security team that goes around the rest of the organization and tests things”? What are people doing out there that is working well?

I think that security audits are good activities to do once or twice a year. They’ll test your basic hygiene. That said, if you really wanna have a handle on things, taking a continuous approach is indeed the way to go. Because like you said, you could have an audit, you could do a pentest, and then a new vulnerability can get introduced the next day. So there is a set of new tools available on the market that don’t take the approach of doing security tests after specific parts of the development pipeline… Rather, we’re in a world where we wanna have agile development, we wanna be continuously releasing, we wanna be continuously pushing code… And so the point-in-time test model of security is becoming an old model. A much better model is to take the approach that you wanna have continuous monitoring of the security of your code, and you wanna have observability that provides you with constant security monitoring.

For instance, DeepFactor is an example of an observability tool that will monitor your code for security vulnerabilities pre-production, so that as you’re going through your development, your tests, your staging - if you link in some new library and that library is old or unpatched, it will let you know right away. You don’t have to wait until you get your software pentested to find that out. You can identify it much earlier. And by the way, the cost to fix it is much, much less when you identify it early and right away, rather than waiting for a penetration test.
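Here’s a minimal sketch of that kind of build-time dependency check - compare what’s installed against an advisory list and warn before anything ships. The advisory data here is made up for illustration; real tools like the one Neil mentions pull from live vulnerability databases.

```python
# A minimal sketch: warn about installed packages at or below a
# known-vulnerable version. Advisory data is hypothetical.
from importlib.metadata import distributions

# package name -> last known-vulnerable version (made-up examples)
ADVISORIES = {"requests": "2.19.0", "flask": "0.12.2"}

def parse(version: str) -> tuple[int, ...]:
    return tuple(int(p) for p in version.split(".") if p.isdigit())

for dist in distributions():
    name = dist.metadata["Name"].lower()
    if name in ADVISORIES and parse(dist.version) <= parse(ADVISORIES[name]):
        print(f"WARNING: {name} {dist.version} has known vulnerabilities")
```

Wired into CI, a check like this fails the build the moment an old or unpatched library is linked in, rather than months later during an audit.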

There’s a divide between infosec people and developer people… And I think that’s part of the problem. I understand there’s only so many things that you can focus on as a human, and so there are people who are generally considered infosec - these are your penetration testers, your security researchers, your audit firms, cryptography people… They’re kind of in this one group. And then there are developer people, who are focusing on JavaScript and Node.js, or they’re writing Go code, they’re talking about new features, and APIs, and stuff like that… And there are those who float back and forth. But when you talk about security-first - building it right in, starting with it in a similar way you start with software quality, or application quality - a lot of times the people who are doing that coding just don’t have that expertise. They don’t understand what the best practice is around how to do SQL queries in a way that’s not injectable, or whatever it happens to be; whatever that particular attack surface is. How do we bridge that gap? How do we get these people to be the same people, or at least sitting next to each other, digitally speaking maybe…

[59:51] Because I do see kind of separate communities, and sometimes they even look at each other with the side-eye, which is kind of strange… There are those who float in-between, “I feel like I’ve kind of done a little bit of that, sit in the middle…” But I think if we can get the software developers more equipped with the security knowledge, either at the outset, or ongoing, and maybe get the infosec people more equipped with the ability to write some software - I’m not saying y’all can’t write software, but you know… So we could have one big group. Is that something you think would be advantageous to the software community?

Yeah, so I think you asked a great question, and you started providing an answer in the right direction.

So I think the old, traditional view is that you have the development team and you have the information security team, and they’re separate teams. And in that old model, the information security team can be perceived as the Department of No, which is not what you want, right?

Right…

I think a more modern view, a better view is that the information security team exists to serve enablement of the business, and the philosophical approach should be “Yes, and how.” “Yes, we want to launch. How do we do that in a way that mitigates risk?” I think that the mentality should be “Yes, and how.”

And within an information security team you may have an application security team or a product security team, and I think that team should be staffed with people who used to be engineers and developers themselves. I myself am a software engineer by background; my first job, at Bell Communications Research years ago, was as a software engineer. And in the Foundations of Security book that I co-authored with Christoph Kern years ago, the focus was “Look, I’m somebody who’s developed software for a living, but now I just wanna make it secure.” And I think that’s the right kind of team that you need.

The goal of the application security team should be to enable the developers to be able to write code securely, and give them the tools and frameworks that they need to write code securely, such that they can monitor it, but don’t necessarily need to be an approval gate.

Right.

So if the application security team can bring in observability tools like DeepFactor and get folks to use them, then developers can monitor the security of their own software themselves, and perhaps all that data can be aggregated so that a CSO can look at the full picture and try to understand “Okay, what is the security posture of our codebase? How likely is it to be vulnerable? What kind of additional tools should we invest in to further help the developers?”

Another model that I’ve seen work well, if you’re a larger engineering organization with a relatively small application security team, is to encourage one of the developers in each of the development teams to be the local security champion. They go through some training, maybe they know more about some of these tools, they may be able to identify and exploit SQL injection or cross-site scripting vulnerabilities on their own, and they kind of serve as the local security DNA in that dev team, but coordinate with the more central application security team.

So I think we can set up models like that… It’s a much more progressive, collaborative way to go about achieving more secure software than what exists otherwise, with the more traditional model.

[01:04:02.19] Yeah, I like that. And that’s inside specific organizations, which is really where we mostly operate. I’ve also seen, out in the online spaces and communities, efforts to bridge these gaps, and I appreciate the people who are trying to do that work. We know there was a similar dev and ops gap, where the developers write the software and the operations people put it into production… And then there was DevOps - “Hey, let’s get together, and let’s break down that barrier a little bit.” Now we’re seeing dev-sec-ops, which is a terrible term, but it’s kind of like “developers, security and ops.” Let’s bring everybody together and work together.

I don’t like that particular dev-sec-ops term, because it’s kind of strange, but I appreciate the movement there, and the efforts being put in place to really break down that divide and build better products together.

I think that collaboration is key in order to result in more security, more resilience, more fault-tolerance, and all kinds of other good things that come out of collaboration.

Right.

Well, something you mentioned earlier, Jerod, about this divide between these two camps - or maybe three camps, based on dev-sec-ops - is that collaboration happens when respect and empathy are in place. If as a developer I can empathize with my security counterparts, or as a security person I can empathize with my developer counterparts - if we have respect for one another’s surface area of concerns, so to speak - then collaboration can take place. But when you get the side-eye, as you mentioned - well, that shows a lack of respect and a lack of empathy. What we need to work on is those fundamental human traits, like empathy and respect for one another’s work, to collaborate better.

Yeah. It’s a difficult situation, because on the face of it, one person is writing the code, and the other person is trying to break into that code. Just by what you’re tasked to do, you’re kind of set at odds with each other, aren’t you? Because one person’s exploit is somebody else’s vulnerable software - in the case we’re talking about, software vulnerabilities rather than those other vectors… Though we find out people get in many other ways anyway, so there’s lots to think about.

What about these other vectors, the other ways we can fight things like unencrypted data and third-party compromise? That one seems so difficult, and it’s happening more and more now that we have all these mergers. I just can’t imagine bringing in a third party through a merger or something - now you have these two disparate companies and codebases and infrastructures, and they’re acting as one… I can just see how there are so many problems with that. Even when the third party becomes a subsidiary of the first party - these things are happening all the time in startup companies and enterprises all around the world.

That’s true.

What are some ways that you can combat or guard against third-party compromise?

I’d be happy to comment on that. But before I do, I just wanted to comment on something - we talked about things like respect and empathy being important between teams… I wanted to chime in with one additional characteristic: accountability. If the application security team and the security people are the ones held accountable when there’s a software vulnerability or a compromise, I think that’s the wrong model. I think that developers should be held accountable for the security of their code, just like they’re held accountable for the quality of their code. And if the application security team is there to support them and help them, then even though to an extent it might seem like their fundamental goals are at odds, setting the accountability on the software developers in fact merges them back together… Because then, in order to achieve the secure software they’re accountable for, they’d love to get the help of the application security team.

[01:08:17.00] Right. Because they’re accountable for secure software, not for never putting a security flaw in there. Putting a security flaw in is a by-product of just making software; it’s gonna happen. Bugs are gonna happen, flaws are gonna happen. The accountability isn’t on being a human being who can write code without ever creating a security flaw; it’s accountability for a secure application, and that requires a team, not a flawless individual - an individual with counterparts who can help them through that and create secure software.

Yes, yes. That’s exactly right. Though if you look at very large organizations - banks, for instance, which are overseen by all kinds of regulators - there’s another aspect to this, where the security team usually becomes the one that has to report information that eventually gets to the regulators… So there has to be some monitoring, there has to be some validation, because at the end of the day, everyone’s butts are on the line. But I do think that where you set that accountability, and how the monitoring and validation is viewed and perceived, is important. It’s not that the security team does that because they wanna be a pain in the neck; they wanna keep the company out of regulatory trouble. So there’s a lot of interesting aspects here.

To go on to the second part of the question, with regards to some of the other root causes - we chatted about various kinds of third parties, and we chatted about malware - let me give one example that comes to mind. In the book we have a chapter on the Marriott breach, in which 383 million customer records were stolen, including five million passport numbers.

The reason that occurred was that Marriott acquired Starwood, and the combination of the two was basically gonna make the world’s largest hotel chain. It turned out that Starwood had been compromised by a piece of malware a year before the acquisition talks even started. The malware’s footprint and reach had not been identified - not before the acquisition, not during the acquisition, but only after the acquisition. So you’ve gotta keep in mind that any third-party company you’re thinking about acquiring is gonna become a first party, and if they’re breached, well, you’re breached, too.

In both Marriott’s and Starwood’s case there was a lot of susceptibility to malware… But I mention that because it’s an example of how there are many kinds of third parties. Third parties are not always just suppliers; third parties can be entire companies. With regards to advice for dealing with that - one of the things I talk about in the book chapter where I give guidance to technology and security executives is that when you’re thinking about acquiring a company, one might do a penetration test of the company being acquired. That might get done, and it might tell you how susceptible that organization is to being exploited and breached… But the other thing that I think is really important - and I’ve done this for some of the acquisitions I’ve been involved in - is that if there are enough things you’re worried about, don’t just do a penetration test; do some active, proactive threat-hunting, where you don’t just look for potential vulnerabilities, you look for indicators that the company has actually already been breached, or compromised, or penetrated in some way.

[01:12:15.00] You’re looking for different kinds of evidence. Are there [unintelligible 01:12:17.09] files somewhere in the environment that might already have been aggregated by attackers, with all this tooling data in them? Are there binaries whose hashes might be indicators of attack or indicators of compromise, even if the company is not aware of a breach, or even if their penetration test findings look good? You’ve gotta do that threat-hunting as well.
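(For the curious, here’s a minimal sketch of what one small piece of that threat-hunting might look like - a hash-based sweep for known indicators of compromise. The hash list and scanned path are illustrative placeholders, not anything from Neil’s book; a real hunt would pull IoCs from a threat-intelligence feed and look at far more than file hashes.)

```python
import hashlib
from pathlib import Path

# Illustrative placeholder only - real IoC hashes would come from a
# threat-intelligence feed, not be hardcoded.
KNOWN_BAD_SHA256 = {
    "0" * 64,
}

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large binaries don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def hunt(root: str) -> list[Path]:
    """Walk a directory tree and flag binaries matching known-bad hashes."""
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            if sha256_of(path) in KNOWN_BAD_SHA256:
                hits.append(path)
        except OSError:
            pass  # unreadable file; a real sweep would log and investigate
    return hits

if __name__ == "__main__":
    for hit in hunt("/opt"):  # "/opt" is an arbitrary example root
        print(f"IoC match: {hit}")
```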

Perhaps if Marriott had done such an exercise on Starwood before the acquisition, they might have identified that a breach had already taken place, and that would have impacted the acquisition discussions.

Maybe a good place to close would be this word you mentioned, accountability… You mentioned it from a teammate aspect, but I would imagine there’s also accountability in terms of due diligence - in this case, Marriott being accountable for doing the due diligence to assess Starwood’s threat exposure, and whether they’d actually already been compromised… But what about accountability for companies that just do business and don’t pay attention to security, or don’t do enough, and Jerod’s personal information, or my personal information, or your personal information, Neil, is taken? What’s the accountability level there? Is there jail time involved? Are you familiar with the legal system around security in companies, and vulnerabilities and exploits and things like that? What can you speak to in terms of accountability there?

First of all, I’d be happy to speak to it. I’ll of course caveat what I’ll say here –

You’re not a lawyer…

…I’m not a lawyer. [laughter]

Opinion-only…

Opinion only, yeah.

As a former CSO, I’ve worked together with a lot of attorneys and general counsels at the companies where I’ve been a CSO, and let me mention that I think there have been strides in accountability over the years. If we go back to the Target breach in 2013 - after that breach it was not just the CSO that was fired, it was the CEO. And that was the first breach where that had occurred. So the accountability now goes up to the top.

In one of the book chapters I encourage folks to have their chief security officer report to the CEO, because at the end of the day, if something goes wrong, that CEO is gonna be held accountable. The days when a CEO could just say “Oh, we had a breach; it’s the CSO that should get fired” are over. The CEO can get fired too, if the breach is bad enough.

In fact, if we look at other breaches - in the Equifax breach there was a change in CEO and CSO as well. So the accountability has gone all the way to the top. I think board members need to be asking their CEOs the right questions, and CEOs need to be asking the right questions if they don’t have a CSO but have a CTO. There’d better be somebody accountable for ensuring the security of the products, as well as of the IT organization.

Security is not just an IT problem. Some companies still have the CSO reporting to the CIO, and the important thing to realize there is that security is much broader than IT. So there has been an increase in accountability - CEOs have been fired because of breaches, and that accountability has gone all the way up to the top.

[01:16:20.14] By the way, I don’t know that the accountability has gone all the way to the bottom, and I’m actually not sure that’s the right way to go. In the Equifax breach, for instance, the CEO tried to pin it on somebody who was supposed to patch that Apache Struts server. But I think that’s heading in the wrong direction, because in any reasonably-sized organization you’ve got thousands, tens of thousands, hundreds of thousands of servers. When you go to patch all your servers, inevitably some are gonna be down, some are gonna have crashed, whatever. Not everything’s gonna get patched right the first time.

By the way, you should have automated patching for that number of machines; relying on humans to do the patching is likely to fail… You’ve also gotta understand that when you’re operating at any level of scale, you need automated technical verification that the patch got successfully deployed, and that verification should take place before a ticket about the vulnerability is closed, for instance.
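(As a concrete illustration of that verify-before-close idea, here’s a minimal sketch. It assumes SSH access to a Debian-style host, and `close_ticket`/`reopen_ticket` are hypothetical stand-ins for whatever ticketing API an organization actually uses - none of this comes from the book.)

```python
import subprocess

def installed_version(host: str, package: str) -> str:
    """Ask a remote Debian/Ubuntu host what version of a package is installed."""
    result = subprocess.run(
        ["ssh", host, "dpkg-query", "-W", "-f='${Version}'", package],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

def close_ticket(ticket_id: str, note: str) -> None:
    # Hypothetical stand-in for a real ticketing API (Jira, ServiceNow, ...).
    print(f"closed {ticket_id}: {note}")

def reopen_ticket(ticket_id: str, note: str) -> None:
    print(f"reopened {ticket_id}: {note}")

def verify_patch(host: str, package: str, fixed_version: str, ticket_id: str) -> None:
    """Only close the vulnerability ticket once the fix is confirmed on the host."""
    actual = installed_version(host, package)
    # Exact-match check for simplicity; real code should compare Debian
    # versions properly, e.g. via `dpkg --compare-versions`.
    if actual == fixed_version:
        close_ticket(ticket_id, f"{package} {actual} verified on {host}")
    else:
        reopen_ticket(ticket_id, f"{host} still reports {package} {actual}")
```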

I think that in terms of [unintelligible 01:17:33.25] accountability, I would put the onus on the CIOs and the CSOs to say “Look, you need a scalable, systematic, automated approach to things like patching, with [unintelligible 01:17:46.29] verification”, because expecting humans to get every single detail right is the wrong direction to go.

Definitely a troubling scenario, given the breaches that have happened… It’s an ever-changing world, cybersecurity and cyber-crime. But Neil, thank you so much for writing the book - for everyone. It isn’t just for security researchers or security experts; there’s a bit in there for everybody, and we need people like you out there sharing this kind of message with more people, to increase accountability… So thank you, Neil, for coming on the show and sharing all you have.

Yeah, thank you for having me. My co-author Moudy and I had a great time writing “Big Breaches: Cybersecurity Lessons for Everyone.” My hope is that the book will be a good contribution to the field, and will help bring more people into it. There are enough security books out there written for security people, or for developers, or for other discrete audiences, but I think we also need to bring boards, business executives, and tons of other folks into the fold… So there’s something in the book for each of those audiences, and my hope is that folks read it, use it within their organizations, and follow up and act on some of the advice we provide, so that we can achieve a stronger cybersecurity posture across many organizations.

I agree. Change begins with awareness, and awareness begins with books, so thank you.

Thanks, Neil. This was awesome.

You’re welcome.
