Security Masterminds

Technology and its Impact on Users and Cybersecurity, with special guest, David Willis

June 20, 2022 David Willis Season 1 Episode 7

Episode Summary
Technology is everywhere in society these days, from communication to shopping to commerce. Whether it's email, online purchases, or the blockchain, it all amounts to large amounts of data being collected about people. All of this data, while easy to store, is hard to manage and protect. People exhibit patterns of behavior when using this data, and technology is learning those patterns to identify whether it is really this person based on geography, time, and frequency. Technology can also help people properly secure their data: when they make an error, they can receive a small learning mission to complete that helps them understand the mistake without feeling inadequate or reprimanded.
In this month's podcast, David Willis shares his experiences with technology, human behavior, and micro-learning, drawing on his military background and twenty years of technical expertise.

David Willis, Head of Technology Integrations for the Business Development Team
David is an experienced business, security, and technology leader with over 20 years of experience across the telecommunications, financial services, and software industry verticals.
David currently serves as Head of Technology Integrations for the Business Development Team, focused on addressing tactical and strategic security and IT solution integration needs at scale for Netskope customers. David also leads the building and expansion of new routes to market for Netskope.

Show Links

KnowBe4 Resources

David Willis:

It's not our job to knock you off the internet. It's our job to make sure that when you're about to do something really dangerous, we educate you that it's dangerous and give you alternatives that still enable productivity. I am David Willis, and I'm responsible for making sure our stuff works with your stuff to do good stuff for customers.

Announcer:

Welcome to the Security Masterminds podcast. This podcast brings you the very best in all things cybersecurity, taking an in-depth look at the most pressing issues and trends across the industry.

Erich Kron:

Technology is actively used within organizations and is designed to make users' tasks easier. Within the cybersecurity space, with so many attacks occurring because of human error, what can technology and automation do to augment security and help users protect the organization?

Jelle Wieringa:

David Willis is the Head of Technology Integrations for the Business Development Team at Netskope, addressing strategic security and IT solution integration needs at scale for Netskope customers.

Announcer:

This is episode seven: Technology and its Impact on Users and Cybersecurity, with our special guest, David Willis.

Erich Kron:

Hey, Jelle. I spoke with David Willis, and it was a very interesting conversation. As usual, something that I think fascinates both of us is asking people: how did you get into cybersecurity?

David Willis:

My roots are actually pretty important, because I grew up in Montana in a very isolated place, right on a reservation. I ended up going into the Navy after graduating from Stanford, and then decided I wanted to go into the private sector; it was just a better fit for me. I went to AT&T, which is a great place for people to get a good grounding. And it's funny, talking to other cybersecurity professionals, most of us had a pretty good grounding in networking. That's not just because it was a great place to start; it's because at the time, people did not understand how important cybersecurity was. I started off understanding how to build networks, and I moved my way up the stack, understanding how applications worked over networks. Then I moved to the vendor side, away from AT&T, and started to learn a lot more about cybersecurity. I've done stints at a variety of companies, Palo Alto Networks, Fortinet, and here at Netskope, getting closer and closer to the way people are actually working with the cloud and with applications. So it's been this fun evolution, right? I'm literally following the way the industry has evolved: first networking, then security became more important but was premises-based, and then it moved into the cloud. It's been fun to track not only where the industry is going, but the way people are interacting with the technology.

Erich Kron:

So like many others I've met, he started in the military. I was also in the Navy, although I was enlisted. I think it's interesting how things evolved; he mentioned how going from on-premises to cloud changed things, and how so many of us started with networking. I think that's a pretty common thing, Jelle, don't you? How did you start?

Jelle Wieringa:

I actually started in a similar way: a technical background, which gave me, just as it did him, a solid foundation for cybersecurity. It really helps to understand the technology side of things, to have at least a basic grasp of how IT works in general and how networks work. Back then, cybersecurity was a niche within IT. And David worked for all the big names: he worked for Fortinet, he worked for Palo Alto. Those are big names in our industry that laid the foundation of most of the security we see and understand today.

Erich Kron:

So being exposed to that technology back then was quite helpful. And while it's not required, I really do think that operational background makes us much more well-rounded cybersecurity professionals.

Jelle Wieringa:

So the thing is, cybersecurity revolves around technology to solve a problem. Whether it's building a firewall to stop automated attacks or training people using a technology-driven platform that provides content to the user, technology is the focus of what we do. Having a good understanding of it really helps; it makes everything so much easier. That being said, you don't have to be a deeply technical person. A basic understanding is enough.

Erich Kron:

I agree a hundred percent, and I think mindset plays a big part in that role as well. Speaking of mindsets, I know that a lot of what shaped me was my military career when I was active duty, although I also spent about 10 years with the Army as a contractor, and I found it definitely shaped me as a person. So I asked David: how did the military shape you as a person and in cybersecurity?

David Willis:

I think it definitely gave me a true understanding of just how important it is to be ready for warfare of any kind. The reality is, I talk to people all the time who think there's been this calm, and the calm is of course disrupted by events like the war in Ukraine. I keep having to point out that we've actually been in a state of cold war in cybersecurity for years. So it's really interesting to me when I talk to other professionals who came out of the military and felt it was a natural fit for them to go into cybersecurity. Now, we're not necessarily fighting nation states, or helping our customers fight nation states, but the same principles apply: being ready, being operationally aware, putting ego below mission. All of these things are absolutely important when we have such an important remit and responsibility to help maintain our country, our culture, and the global culture. Everything is running over the internet. So I love the fact that I was able to take what was a very honorable profession, protecting the United States, and now I feel like I'm protecting the world from a lot of very bad actors, so that people can live their lives the way they want, both online and offline. Everything is so interconnected that you protect the whole thing.

Erich Kron:

I like what he said there. In a lot of cases, the military is where people get their first exposure to a mission-based job or role, where the mission is the big thing. I found that carries over to what I do in the private sector too. Here at KnowBe4, I love that we're tackling the human element, which has been overlooked for so long; I honestly feel part of a larger mission. I think that's something people without that experience aren't exposed to, although it's not a requirement by any means.

Jelle Wieringa:

Making the world a safer place is the basis of what we're doing, and what most people I know in cybersecurity love doing. It's an interesting perspective that he brings. I do like his concept of all of us having to be ready and operationally aware. Everyone today is at risk. While not every adversary is a nation state trying to take down another country, or take down you, the fact is that everybody can be a victim. Being aware is very important for everyone; it doesn't matter whether you're in IT or in cybersecurity. Everybody needs to be aware of what's out there in order to keep themselves and their organization safe.

Erich Kron:

On the mention of nation states: while it's true that not everybody's going to be targeted by a nation state, there are so many other groups out there, and ransomware has definitely shown this. It doesn't matter what size organization you are or what industry you're in. If attackers can find out that you're out there, they can throw attacks at you; so much of it is automated these days. You don't have to be on a nation state's list to be up against some pretty capable bad actors.

Jelle Wieringa:

As we see with a lot that's happening over here in Europe at the moment, disinformation is a matter of cybersecurity. It's about being aware of the information you're presented with and understanding whether it's fake or not. So being aware is important.

Erich Kron:

That's an interesting point, and it brings me to another thing. With what's going on in today's world, I wanted to ask him: how important are security and privacy in today's society?

David Willis:

I think, as awareness grows of what privacy really means, it is becoming more important to most people. One thing I find very ironic is that, in some ways, companies are both more and less concerned about it than average individuals. Most people have no idea what kinds of information are present on the open internet about themselves, their company, and their family, and they just proceed along with the idea that, okay, the information is out there, maybe somebody has some of it, and it's not that big of a deal. But as awareness grows of the implications of that data, and as people receive their 10th or 20th breach notification via email, they're starting to get the idea that maybe this is a problem. There are two ways to respond to that problem. One is to literally give up, and the other is: I need to take control. The European GDPR rules, the California privacy act, the New York privacy act: these are all indicators that, as people become more aware of the implications of those data sets out there, which can be fed into machine learning algorithms to predictively analyze us and understand how to manipulate us, these protections are becoming really important. Then there's the corporate privacy aspect as this data gets into the cloud, where once it's out there, you have no control, right? It's the corporate equivalent of posting my Facebook pictures from high school where I'm chugging a bottle of vodka: once it's out there, it's never coming back. It's the exact same concept with data in corporate environments. A partner today might be a competitor tomorrow, so the sensitivity of that information is absolutely driving us to be more aware and more concerned. Because again, the cat's out of the bag, the horse has left the barn, whatever analogy you want to use: if the data gets out and you don't have the ability to delete it remotely, or some kind of government construct that enables that, it's going to be a problem. What I get excited about, as I learn more about blockchain and other more distributed technologies, is the potential to make sure that if it's your data, it is always seen as your data. This is a more democratic approach to protecting data than a technology like IRM, information rights management, where you need someone to go in and touch that data at creation. You still need to touch it, but you don't necessarily need a paid-for technology to do it. And because of the distributed blockchain ledger, as each person touches the data, you can figure out who had it and where it went, and then you can unwind how it got to those places, which is pretty exciting. It's sort of like the UPS service telling you at any given time what happened to your letter; you could do the same thing with data, but it's still very early days for that kind of technology. IRM has been attempting to do something like this for a while, and we've seen that the human element keeps it from really working at scale. So, long-winded way of saying: it's absolutely essential, it's really scary, and we're just starting to get our hands around the scope and scale of the problem and our need to address it.
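
Taken concretely, the tracking David describes is essentially a tamper-evident chain of custody. Here is a minimal sketch of that idea, assuming a simple SHA-256 hash chain in Python rather than any particular blockchain platform; the class and field names are illustrative, not a real product API.

```python
import hashlib
import json
import time

class CustodyChain:
    """Tamper-evident, append-only log of who touched a piece of data.

    A toy stand-in for the distributed-ledger idea: each entry is hashed
    together with the previous entry, so rewriting history invalidates
    every later hash.
    """

    def __init__(self, data_id: str, owner: str):
        self.entries = []
        self._append({"data_id": data_id, "event": "created", "actor": owner})

    def _append(self, record: dict) -> None:
        record["prev_hash"] = self.entries[-1]["hash"] if self.entries else "0" * 64
        record["ts"] = time.time()
        payload = json.dumps(record, sort_keys=True).encode()
        record["hash"] = hashlib.sha256(payload).hexdigest()
        self.entries.append(record)

    def touch(self, actor: str) -> None:
        self._append({"event": "accessed", "actor": actor})

    def verify(self) -> bool:
        """Recompute every hash; False means the history was altered."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True

chain = CustodyChain("doc-123", owner="alice")
chain.touch("bob")
print(chain.verify())                  # True: history is intact
chain.entries[1]["actor"] = "mallory"  # tamper with a past record
print(chain.verify())                  # False: the chain exposes the edit
```

Like the UPS analogy, each entry answers "who had it and when," and the hash chain makes after-the-fact rewriting detectable.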

Erich Kron:

Something he touched on that I think was pretty important was where he said it could potentially make sure that if it is your data, it's always seen as your data. Take me: somebody elsewhere in Florida apparently has my same name and had pictures up that were not complimentary, that didn't reflect who I am. Nothing really bad, but it was something I noticed when I Googled myself years ago. Fortunately, I think I've managed to bury it down to about page 842 by now. How many of us have thought about that? What happens if somebody just starts up an account with your name? So it's an interesting point with the whole blockchain thing, and I think it may have some use in the future.

Jelle Wieringa:

So blockchain is an interesting concept. The biggest problem is that applying blockchain is still a faraway future for most things. Smart contracts and the like are really cool, but if you want to do that on somebody's data, at the personal data level, that's going to be interesting. With data and privacy in general, whatever technology you use to secure it, track it, or take control of it, the biggest problem, and I think that's the one thing David is absolutely right about, is that once it's out there, it's out of your control, because we don't have blockchain for this yet. It's not here yet.

Erich Kron:

Yeah, that's a very good point, and I know it's grown quite a bit too. Obviously we have the human side, and we have this technology tracking us and doing all kinds of stuff around us as humans. So one of the things I asked David, because I always think it's fascinating to hear, is: how do you see the interaction between technology and the human?

David Willis:

If I could wave a magic wand and fix one problem: the problem is Layer 8. It's the human; it's always people. Even if I have automated systems, if the person doesn't want to turn them on, they don't get used. If I have a technology that makes things a little easier but requires some degree of human intervention, there will always be people who are either too lazy or too ignorant to turn it on. We see this time and time again: the more successful systems basically try to eliminate humans from the problem as much as possible. Take data classification and data tagging, which have been around for years. Nobody uses them, because they require people to understand what constitutes a particular dataset. Is this email I'm about to send confidential? What does confidential even mean? And if I have five categories, is this private or is this confidential? In the presence of too many options, people make no choice; they just send stuff out, or they mark it incorrectly. So people have largely given up on the idea that we can trust people to be the operators of these complex technologies, because most people are not security practitioners. They're not IT practitioners; they're not going to know the implications of the requests or choices they're being offered. As a consequence, the technologies are failing, not because the technologies are bad, but because the people are basically the problem. And so I get excited, because I'm seeing the flip side of the coin with machine learning algorithms: if we can train the technology to understand what constitutes sensitive data at scale, without a human having to manually see each dataset, we're going to get technologies that automatically detect, hey, this is sensitive information, this is normal behavior with that information, this is abnormal behavior with that information. And they'll start to step in and basically take, or re-establish, control where we've never really had control in the past.
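
As a concrete illustration of the "train the technology instead of the user" point, here is a tiny sketch of automatic sensitivity classification, assuming scikit-learn. The four training examples are made up; a real system would learn from thousands of labeled documents.

```python
# Toy sketch: flag sensitive text automatically instead of asking users
# to pick the right label from five categories. Training data is
# purely illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "Q3 board deck: unreleased revenue and acquisition targets",
    "Employee SSN and salary records for payroll",
    "Lunch menu for the office cafeteria this week",
    "Public press release announcing our new office",
]
train_labels = ["confidential", "confidential", "public", "public"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

draft_email = "Attached: salary records and revenue projections"
if model.predict([draft_email])[0] == "confidential":
    print("Step in before this leaves the company")  # coach, don't just block
```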

Erich Kron:

That's a very interesting point, and towards the end there he's talking about behavior. I'm a big fan of looking at user behavior analytics and those sorts of things, and I think that's somewhere machine learning and AI really do well: spotting things that are abnormal within a normal set. In other words, people interact with this data normally, and now all of a sudden it's being encrypted very quickly; maybe there's something weird going on here, and we need to stop whatever process is doing that. AI and machine learning are on 24/7. They can be watching things all the time; they don't nod off or zone out and think of other things like humans do if we try to put them in this role. So I think there's definitely some positive stuff there. As for the Layer 8 thing, we know this is an issue, and it's not necessarily the user's fault, but humans are targeted so much because they are the way into systems, and people make mistakes and errors. We do need the technology, but he's right about adoption suffering if it's difficult. It goes back to the BJ Fogg principles; he had a tweet out there that said, "Humans are lazy, social and creatures of habit, design things for this mindset." And it's true. It's not a negative thing; it's saying that we don't want to do extra work to get the same result. That's an important thing to consider when we're coming up with solutions: anything that, like he mentioned, would require someone to touch every data set out there is probably going to fail.
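
A minimal sketch of the kind of detector Erich describes, assuming a simple per-user baseline of files modified per minute. The window size, warm-up length, and threshold are illustrative choices; real UBA products model far richer signals.

```python
# Alert when a user's file-modification rate jumps far above their own
# historical baseline (e.g., ransomware encrypting files quickly).
from collections import deque

class FileActivityMonitor:
    def __init__(self, window: int = 60, sigma: float = 4.0):
        self.history = deque(maxlen=window)  # files touched per minute
        self.sigma = sigma

    def record_minute(self, files_modified: int) -> bool:
        """Return True if this minute looks anomalous vs. the baseline."""
        anomalous = False
        if len(self.history) >= 10:  # need some baseline first
            mean = sum(self.history) / len(self.history)
            var = sum((x - mean) ** 2 for x in self.history) / len(self.history)
            std = max(var ** 0.5, 1.0)  # floor to damp zero-variance noise
            anomalous = files_modified > mean + self.sigma * std
        self.history.append(files_modified)
        return anomalous

monitor = FileActivityMonitor()
for minute, count in enumerate([3, 2, 4, 3, 2, 3, 4, 2, 3, 3, 250]):
    if monitor.record_minute(count):
        print(f"minute {minute}: {count} files/min, suspend process and alert")
```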

Jelle Wieringa:

What I think is that AI is best used as an augmentation of the skills we as humans already have. It's not a substitute, for the simple reason that most AIs don't have ethics programmed into them. That's going to be an issue, especially in cybersecurity, when you look at what value data has to your organization and whether something is confidential or not, because that's very context-driven. So you can use assisted AI and deep learning to figure out for you which types of data are confidential, which are internal, et cetera. But in my experience, it still takes a human to add that special something, that special tweak, and to correct the AI when it's wrong. As long as you use AI as an augmentation of what we do, but allow humans to influence that AI, I think we have a good chance of actually building something cool. But it's not just AI that will save us, because the human will never go out of the equation. It will always be part of it; it needs to be part of it. So it's a combination of getting great AI systems that have outlier detection and can pick strange behavior out of normal behavior, and making sure people get a better awareness of their own actions and behavior, because it's you who presses that button to send that email with information you didn't want to send to that person in the first place. It's common sense, and common sense is not something machines capture very well. If we combine human common sense and human behavior with AI, we've got a great thing going.
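
One way to picture the augmentation loop Jelle describes: the machine proposes a label, the human can override it, and overrides flow back into the training data. A deliberately simplified sketch; the keyword "model" and all data here are stand-in assumptions.

```python
from typing import Optional

texts = ["Public press release", "Payroll export with salaries"]
labels = ["public", "confidential"]

def classify(text: str) -> str:
    # Stand-in for a real model trained on (texts, labels).
    return "confidential" if "salar" in text.lower() else "public"

def review(text: str, analyst_label: Optional[str] = None) -> str:
    predicted = classify(text)
    final = analyst_label or predicted  # the human may override
    if final != predicted:
        # Disagreements are gold: fold corrections back into the
        # training set (a real model would retrain on these).
        texts.append(text)
        labels.append(final)
    return final

print(review("Merger term sheet, do not distribute"))                  # model alone
print(review("Merger term sheet, do not distribute", "confidential"))  # human corrects
```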

Erich Kron:

I like that: augmenting, but not replacing. Now, it is interesting, though; with all of this going on, we're generating data at levels I don't think we've ever seen in the past. And a big part of the driver, in my opinion, is the cloud. The cloud is definitely here, and it allows us to really quickly ramp up resources and storage and put stuff out there that's accessible by millions and millions of people. But the cloud can have its issues. So, the cloud: is it a help or is it a hindrance?

David Willis:

It's the dark side of the dissolution of the perimeter. In the historic model, everything was on-prem and locked behind a barrier. Now, everyone thought that was safe, and it wasn't, but cyber criminals just weren't as sophisticated; you'd have to have a human compromise, where someone would go in and plug in a USB drive or a tap or whatever, and that was just harder. The cloud, by virtue of being easy, which is why we love it, makes it easier for criminals to get to the data. It makes it easier for us to upload and share; I've been sharing information using Evernote or Box for years, because it's just so easy to collaborate. But that also means that if I make a mistake, a manual change that says I'm going to publish this to the world because it's not a problem, anyone can find it. So again, the human element, right? There are all these features we've put into these tools to make them more usable, more adoptable, and that's great, as long as there is some degree of governance and understanding of what we've actually done. The problem is we're ahead of our ski tips in many ways. In many companies, each team is deciding what tools they're going to use, and there's really no governance in a lot of environments. So what we find is that most companies are totally unaware of what this permissive model, which is really important for getting work done, has actually meant for their security posture. The other thing we see a lot is people putting their data wherever it's convenient or wherever they feel comfortable. People don't even read EULAs, right? They just click through. They're like, hey, I'm going to put my data up there because it's convenient, it's useful, I get my job done. And then they've perpetually and indefinitely lost control over that data. Even when we think we're putting it in safe places, the reality is we've given up control. So a breach, which is perceived as a complex, highly targeted cyber attack, could be as simple as someone Googling and finding the document. Frankly, I'm amazed we don't hear about more breaches. It's only going to get worse before it gets better. So again, we're enabling people to do their work, and what's so funny to me about the cloud is we're seeing the exact same thing play out that we saw in premises-based solutions: first they released IDS solutions, intrusion detection, and then you're like, wait a minute, maybe we should block it, so then IPS comes along, right? Because it was far better to block than to detect and remediate. This is why it's so important not to have only a strategy of going back after people have made a mistake and fixing it as quickly as possible, because the reality is the bad guys are automating all their tools. Unless your tools are automated to be as fast as the bad guys, there will always be a window of opportunity. So you can do that, but you need to accompany it with automated mechanisms for blocking, or policy guidelines and guardrails, to keep people from making these decisions before they do it. And that's one of the reasons I came to Netskope. I got really excited about this idea that, again, the human element is the problem.
So if I can put something in front of a person as they're doing these things and say, do you really want to do this? Do you understand what you're doing? Give them some coaching or some just-in-time training. We can maybe stop this problem before we give the bad guys that window.

Erich Kron:

It's interesting. The cloud has definitely impacted what we do a lot, but as somebody who watches for breaches and such, I can tell you that the phrase "Amazon S3 bucket" has become synonymous with data breach. We see it all the time: throw a bunch of data up there, open up some permissions, and now that it's not behind a firewall, it's accessible to the entire human population. So it's no big surprise this stuff keeps popping up, and then we find out somebody made an oopsie because it worked better that way when they just granted all the permissions. We've seen that in on-prem stuff too, but the catastrophe that follows in the cloud is pretty significant.
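
For the S3 example specifically, a first-pass audit is straightforward to script. A minimal sketch, assuming boto3 with configured AWS credentials; a real audit would also check bucket policies and the account-level public access block settings.

```python
# Flag buckets with world-readable ACL grants, the classic "oopsie".
import boto3

ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"

s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    acl = s3.get_bucket_acl(Bucket=name)
    public_grants = [
        g["Permission"]
        for g in acl["Grants"]
        if g["Grantee"].get("URI") == ALL_USERS
    ]
    if public_grants:
        print(f"{name}: PUBLIC via ACL ({', '.join(public_grants)})")
```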

Jelle Wieringa:

The cloud has opened up so many opportunities for organizations, and its ease of use has created a lot of shadow IT. That's something that exists in every company. He mentioned 900 cloud applications in an average organization, and I'm like, wow, that's an awful lot of applications that can go wrong, that you can misconfigure. And that's the whole thing. I talk to customers and security practitioners, and they're like, those silly users, they just pulled in another application, and guess what? They didn't configure any of the security settings; they just opened it up to the world. I can see where those security practitioners are coming from, because in their world, everything needs to be secure. But I do think that with the introduction of cloud, a lot of the responsibility for security has shifted to the end user as well. And it's up to us as IT practitioners to collaborate; that's what the cloud is all about. So collaborate with those end users to make it more secure. Don't blame them for using those applications, because they're simply easy to use; I would too. Just collaborate, be nice, play nice, don't blame them, and make sure you work with them to make it secure. And one thing you need to do nowadays is make sure you understand what those 900 applications actually are, because if you're blind, you can't do anything.
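
The visibility step Jelle ends on can start as simply as tallying destination domains from web proxy logs. A toy sketch; the log format, URLs, and "sanctioned" set are assumptions, and real discovery tools map domains to named apps with risk scores.

```python
from collections import Counter
from urllib.parse import urlparse

proxy_log = [
    "alice https://app.slack.com/api/chat.postMessage",
    "bob https://unknown-filesharing.example/upload",
    "carol https://app.slack.com/api/files.upload",
    "dave https://unknown-filesharing.example/share?public=1",
]

usage = Counter()
for line in proxy_log:
    user, url = line.split(maxsplit=1)
    usage[urlparse(url).hostname] += 1  # tally by destination domain

sanctioned = {"app.slack.com"}
for domain, hits in usage.most_common():
    tag = "sanctioned" if domain in sanctioned else "SHADOW IT?"
    print(f"{domain}: {hits} requests ({tag})")
```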

Erich Kron:

One of the things I always try to say is that, as security practitioners, our role is to come alongside users and help them make better security decisions. It's a change from the "department of no," and I think what you said is very similar to that. We've talked about the people, the end users, a lot. So I asked: how do you see the value proposition for cybersecurity between the technology and the people elements?

David Willis:

As I like to say, any cybersecurity professional can be "successful" by failing the firewall into a full block. The reality is you have to work with the way people work, and you have to minimize the friction; otherwise they will find a way around. What I get really excited about is figuring out ways to create low-friction thresholds that start to incrementally reduce the possibility of a breach or inadvertent sharing. There are several ways to do that. The first is to take advantage of all these really interesting risk indicators that a lot of our platforms are starting to be able to generate. Years ago you would have a SIEM with billions of data points that meant nothing, but because each of our platforms has gotten smarter, and machine learning and AI have become more readily available, we're able to surface this into some very singular, very valuable risk telemetry. By looking at that, I can start to make some changes, maybe ones the user isn't even aware of, in our systems. Say Bill seems to be off; maybe he got compromised, maybe his credentials got compromised, but something strange is happening. Instead of just unplugging the internet, I'm going to start doing things like preventing Bill from inadvertently sharing really sensitive docs outside the company. And when Bill tries to do that, I'm not going to present him with a block, or a silent block; I'm literally going to present Bill with a coaching page that says, maybe you don't understand our DLP policies very well, so I'm going to redirect you and enroll you in training, because I'm going to assume you're a good guy. So we start coaching the user about appropriate use, incrementally reducing their ability to make changes while their risk score is high, and enrolling them in training on what to do. And I'm not talking about incredibly painful training; I'm talking about five minutes of, "Hey, this is what's sensitive, and you shouldn't send this kind of stuff out, or to this app, unless you get approval," because without that, people will keep doing what they normally do. If we haven't educated them, we haven't trained them. As long as the system is smart enough to keep tracking these user telemetry values and responding to incrementally reduce risk, we learn. Maybe Bill takes the training and fails it, or doesn't take it, or doesn't care about it; now we know a little more about what's going on in Bill's world, which is that Bill doesn't care about security, and we continue to incrementally change the security and IT operational stack to adapt to Bill's behavior. So there's this learning back and forth. It's not the absolute model we had 10 years ago, which was simply unplug the internet, and usually that only happened once a security analyst got involved and said there was a defined problem. So there are two advantages. We're lowering the friction for our users as well as our IT security teams, because we're leveraging systems that are learning about behavior and making incremental changes, and we're including the human element by coaching, guiding, and ultimately iterating on their behavior. If we can do those things, we bring the human element back front and center to the security problem, because it is the human element most of the time.
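
A minimal sketch of the adaptive loop David outlines: a per-user risk score, built from event telemetry, drives graduated responses (coach, restrict, escalate) instead of a binary block. The weights, thresholds, and actions are illustrative assumptions, not Netskope's actual policy model.

```python
RISK_WEIGHTS = {
    "dlp_violation": 25,
    "failed_training": 15,
    "impossible_travel_login": 40,
}

def respond(user: str, risk: int) -> str:
    """Map a risk score to an incrementally stronger response."""
    if risk < 25:
        return f"{user}: allow"
    if risk < 50:
        return f"{user}: allow, show coaching page + 5-minute micro-training"
    if risk < 75:
        return f"{user}: block external sharing of sensitive docs"
    return f"{user}: restrict session, escalate to security analyst"

score = 0
for event in ["dlp_violation", "failed_training", "impossible_travel_login"]:
    score += RISK_WEIGHTS[event]
    print(f"after {event} (score={score}): {respond('Bill', score)}")
```

The point of the graduated tiers is exactly David's: the user keeps working, the coaching happens in context, and only sustained risky behavior escalates to a hard control.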

Erich Kron:

One of the really good things about this, something I personally keyed in on, is where he said that if we can lower the friction for our users as well as our security teams, we can really make a difference. I love that idea. We need to do it for both sides, because a lot of times, if we lower the friction for our security and IT folks, it creates friction for the end users, and vice versa. If we can come up with solutions that reduce friction for both sides, and stay cognizant of that when we're evaluating ways to do things, I think we can make a big difference.

Jelle Wieringa:

The biggest problem is that IT and end users often don't understand each other well enough to collaborate in a good way. So if you can make something frictionless, meaning they can work together on fixing something, on shaping the behavior of the entire organization, on shaping that culture and being more secure, that really is a good thing. I think what David is talking about is actually a really cool next step in what we do in teaching people: give them that micro-learning, nudge them toward the right behavior. It's what we've been telling them all along is the way to do this. If you can do that based on the actual behavior of a person, there will be less friction, because somebody just did that thing; now you confront them with whether that was good or bad, and you give them some training: okay, maybe next time do this differently. I think that will work much better, because you don't frustrate people.

Erich Kron:

Yeah, it makes me think of our own types of training: by making it relevant to people, even those micro-nudges can make a big difference. So much of this is about reducing risk most effectively at the least cost. And speaking of risk, I thought this was interesting. There's something called zero trust, which I started hearing about years ago as this fantastic, wonderful thing that might eventually turn into something, and frankly, I'm surprised to see it has actually turned into working products that seem to perform pretty well. I don't know that it's a hundred percent fully baked yet, but we're definitely making progress. So I asked him: how do you view risk, and how does it impact a concept like zero trust?

David Willis:

It's funny; risk is something I'm very passionate about, and I see it as a Rorschach test, right? There are a few terms in the industry where everyone has a different idea about what they mean, like zero trust. I love the term zero trust because it means nothing; it means something different to every single person. I think risk as a term was really misused because of the way it was initially pitched in different products, because risk is so contextual. Risk is only a function of the value of the information or product you could potentially lose and the likelihood that it gets lost. A lot of times we don't look at things that way; we don't assess both likelihood and value except for the most valuable things. So the death by many duck bites falls outside the risk picture, because it's too hard to quantify. I love trying to find ways to programmatically surface that and add context to it. If you can add context, and you don't need people to keep calculating it manually, and you can take advantage of systems that weren't around 10 years ago to help calculate the risk, it makes our lives as cybersecurity professionals a lot better. If I'd had a machine learning template years ago that could have ingested all the information about an investment I was going to make, it would have said, hey, don't do it, don't do it. Similarly, I get really excited when I see all these minor tools being used to continuously protect me from bad decisions, tools our industry can take advantage of, similar to the things I think the consumer side has really figured out. I get excited about micro-learning, and about using that micro-learning to train our engines to get smarter. That's where I get really excited about what's coming for the cybersecurity industry, just like the finserv industry frankly had to figure out, given how much cybercrime is going on right now.
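
David's definition can be made into arithmetic: risk is roughly asset value times likelihood of loss, summed across assets, and the "duck bites" point is that many small exposures can outweigh one crown jewel. A worked toy example, with entirely made-up figures:

```python
# (name, value_usd, annual_likelihood_of_loss)
assets = [
    ("customer database", 5_000_000, 0.02),
    ("one shared spreadsheet", 2_000, 0.30),
]
assets += [(f"misc doc {i}", 2_000, 0.30) for i in range(400)]  # the duck bites

for name, value, p in assets[:2]:
    print(f"{name}: expected annual loss = ${value * p:,.0f}")

crown_jewel = 5_000_000 * 0.02                      # $100,000
duck_bites = sum(v * p for _, v, p in assets[1:])   # 401 small leaks
print(f"crown jewel ${crown_jewel:,.0f} vs. many small leaks ${duck_bites:,.0f}")
```

Here the 401 small exposures total roughly $240,600 in expected annual loss, more than double the single high-value asset, which is exactly why they deserve a place in the risk picture even though each one looks trivial on its own.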

Erich Kron:

I love this phrase; I don't think I've heard it before. Maybe I'm just sheltered, but death by many duck bites: I've heard different versions, paper cuts and such, but never death by many duck bites. It paints an odd picture in my head, and you can easily see what he's going after. Sometimes it definitely feels that way. The other thing he said that really stood out to me was about tools being used to continuously protect us from bad decisions. That comes back to what we've already said about the human side: we make bad decisions sometimes, and having technology there to say, hey, wait a minute, this is probably not a good idea, are you sure you want to do that? That's a very powerful thing. There have been a couple of times I really wish something had popped up and asked, do you really mean to reply to all?

Jelle Wieringa:

I think we need to bake "death by many duck bites" into our software as a warning message. But if we can shape behavior in a non-intrusive way, in a way that can actually be fun for users, that's contextual and relevant, that's a good way to do it. Zero trust, to me, is a very big concept with very big implications, and if something is a very big concept with very big implications, it usually takes a very big movement to get it done. I think it's better to focus on the individuals in the organization and see if you can make them more aware, train them better, and shape that behavior, because in the end, you don't want to disallow things; you want to allow as much as you can, because that gives the business the freedom to do what it's there for: go faster, accelerate. That's what they want to do. So you don't want to block them. You want to enable them!

Erich Kron:

Yep, and that comes back to being business enablers; I think that's fantastic. Now, one of the other questions we ask, and the answers are pretty fascinating depending on the guest we have on the show, is: what do you think is the biggest threat in the next decade?

David Willis:

I believe it still remains identity, at least until we have a true biometric method for identifying users consistently. There's been technology around holographic concepts: uniquely identifying a person and a device, and then what I call the tuple between those two. Tracking that user and that combination, and knowing when either one of them is off, is going to be a huge challenge for us, because if you look at IoT, there are so many more devices getting connected to the internet, and so many ways people connect to it. The complexity is just going to continue to grow exponentially. So trying to get our hands around who is doing what is going to continue to be a huge problem, and when you add on how many times people's identities get compromised, we're going to have a huge issue. Multifactor authentication has been around for years, and I still have banks that don't use it. We have technologies like 1Password, and I can't get my wife to use 1Password, right? I'm like, I'm a cybersecurity professional and you're still using a spreadsheet; you're killing me. Knowing the human element, people don't like dealing with multiple passwords. Our entire approach to passwords is totally horked, right? We create passwords that are hard for people to remember and easy for machines to break, which is a problem. Then you couple that with the lack of understanding of device identity, and I think we're on the cusp of some huge issues that need to be solved. We can start to solve the data side, because data is of course very changeable, and DLP technologies have evolved to a point where I think we get better control over the data. We have system-level controls that are pretty good. But what we're lacking is a truly robust way of saying: this user, at this point in time, is on a device that is trusted end to end, and they can do what they need to do, with that trust model following them through each system, each network, each application, because we're all in it together. That's my personal take. Now, I might have totally missed the bus; maybe it's actually the black hole at the center of the earth we need to worry about, I'm not sure. But I certainly see in my day-to-day life that we still haven't mastered the most fundamental questions: should I trust the user, should I trust the device they're on, and should I trust that combination?
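
A small sketch of the user/device "tuple" idea: trust is evaluated on the combination, not on the user or the device alone. The checks, enrollment data, and responses below are illustrative assumptions, not a real zero-trust product API.

```python
from dataclasses import dataclass

@dataclass
class LoginContext:
    user: str
    device_id: str
    mfa_passed: bool
    device_compliant: bool  # e.g., disk encrypted, OS patched

ENROLLED = {("alice", "laptop-42"), ("bob", "phone-7")}  # known pairings

def trust_decision(ctx: LoginContext) -> str:
    if not ctx.mfa_passed:
        return "deny: user identity not proven"
    if not ctx.device_compliant:
        return "limited: untrusted device, block sensitive apps"
    if (ctx.user, ctx.device_id) not in ENROLLED:
        return "step-up: known user on unknown device, re-verify"
    return "allow: trusted user + trusted device tuple"

print(trust_decision(LoginContext("alice", "laptop-42", True, True)))
print(trust_decision(LoginContext("alice", "kiosk-99", True, True)))
```

Note that the second login is denied full access even though the user authenticated: the tuple, not the user, is what earns trust.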

Erich Kron:

I'm not sure about you, but the black hole at the center of the earth was not on my threat landscape.

Jelle Wieringa:

Well, it is now; somebody should have told me sooner. I think he's absolutely right. It all comes down to the user: understanding where they are, what they do, and whether it's actually that user. I do think it's a pretty big problem to solve. Being in security is all about people, process, and technology, and we need to focus on all three. In that triad, we need to fill in all the blanks in technology, we need to make sure the user understands what he's doing and looks at his own behavior, and we need processes that provide both quality and security to everything. Once we get that done, we have a pretty clear view of who's doing what and when. Most organizations look at it from a technical perspective only, and that's simply not enough.

Erich Kron:

I have to agree with the identity piece being a big threat, and this is nothing new. Obviously, the internet was never designed with identity in mind, and let's face it, our email address is not our identity, nor is a login, one way or another. Even back with AOL, when you logged in and it said "you've got mail," that wasn't really your identity; it was whatever you put out there. That's always been a big hole in the internet, and it's why cyber criminals can operate so freely. It's why it allows spoofing; it's why it allows all of these things we see every day in social engineering attacks. And it's going to continue to be an issue, because of this fundamental issue with the internet: it was not designed with identity in mind, it was designed with redundancy in mind. On a similar note, I wanted to know what he thinks about how our industry can actually improve.

David Willis:

Well, right off the cuff I would say "vote for Pedro," but right after vote for Pedro: to me, we are at a very dangerous time. We have amazing tools, but people are afraid to use them. What's fascinating to me is that SOAR technologies, which automate our response to bad actors, have frankly been sitting and collecting dust on the shelf while the bad guys are automating everything. So there's this weird disconnect: the tools are actually starting to emerge, like I said, risk telemetry, API changes to platforms, incremental, frictionless changes to reduce the attack surface, and yet I don't see people using them. I don't know if it's because there are too many options, or because people are afraid that if they make a change and lock the CEO out, they're going to lose their job. But at some point, reality has to intrude, and sooner rather than later: we have to be as fast, as agile, as automated as the bad guys. So my one ask of everyone is to take a deep breath and figure out what, beyond event enrichment, you can do to automate your platform's response to risk. We as vendors are trying to give this information to you; please leverage it, because there's a lot that can be done that will not result in the CEO getting locked out, but will make your lives easier and make the bad guys go one house down the road to the next company that hasn't done it. They'll go after them, and they're not going to go after you. And if we do that enough, the cost of crime will grow to the point where they'll find some other way of making our lives miserable, but it won't be something that touches our personal lives.
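
To make "beyond event enrichment" concrete, here is a toy SOAR-style playbook that doesn't just annotate an alert but takes a reversible containment action. The function names, risk scores, and actions are assumptions; real platforms wire these steps to EDR, identity provider, and firewall APIs.

```python
def enrich(alert: dict) -> dict:
    """Enrichment step: annotate the alert with a risk score."""
    alert["user_risk"] = 80 if alert["indicator"] == "credential_stuffing" else 10
    return alert

def contain(alert: dict) -> list:
    """The step most teams skip: act, but keep the blast radius small."""
    actions = []
    if alert["user_risk"] >= 70:
        actions.append(f"force password reset for {alert['user']}")
        actions.append(f"revoke active sessions for {alert['user']}")
        actions.append("notify analyst with one-click undo")  # reversible
    return actions

alert = {"user": "ceo", "indicator": "credential_stuffing"}
for action in contain(enrich(alert)):
    print(action)
```

Reversible, narrowly scoped actions like session revocation are exactly the kind of automation that closes the attacker's window without risking the "locked-out CEO" scenario David mentions.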

Erich Kron:

Okay, this has been a fascinating discussion with David Willis about the human factor, how it works with technology, and how these things play out together. I've had a lot of fun with this discussion today. Thank you for joining us here at Security Masterminds, Jelle.

Jelle Wieringa:

Since we spoke so much about AI, I'll end with a very appropriate one: Hasta la vista, baby.

Announcer:

You've been listening to the Security Masterminds podcast, sponsored by KnowBe4. For more information, please visit KnowBe4.com. This podcast is produced by James McQuiggan and Javvad Malik, with music by Brian Sanyshyn. We invite you to share this podcast with your friends and colleagues, and of course you can subscribe to the podcast on your favorite podcasting platform. Come back next month as we bring you another security mastermind sharing their expertise and knowledge with you from the world of cybersecurity.

Introductions
How I got into cybersecurity
How the military shaped me as a person and in cybersecurity
Humans & Technology
Is the cloud a help or a hindrance?
Cybersecurity Value Proposition between tech and users?
Risk & Your Perspective
Biggest cyber threat in the next decade?
How can the industry improve?
Wrap-up / Outros