Security Masterminds

Demystifying AI's impact on Cybersecurity with special guest Clint Bodungen

October 31, 2023 Security Masterminds Season 2 Episode 22

Discover the untold dangers of AI in cybersecurity as expert Clint Bodungen uncovers the dark side of generative AI. Is our reliance on technology putting us at risk? Find out in this eye-opening discussion that will leave you questioning the future of cybersecurity.

"Technology, as much as we need it, enables complacency. The technology enables that complacency, and we've seen the consequences. We need a proper cybersecurity culture that aligns with our natural desire to do the right thing and help others. "
- Clint Bodungen

  • Discover how AI is revolutionizing cybersecurity and gain insights into its impact on threat detection and response.
  • Explore the relationship between organizational culture and cybersecurity practices, uncovering strategies to foster a security-conscious environment.
  • Unlock the potential of AI in cybersecurity and uncover innovative ways to enhance your organization's defense against cyber threats.

Connect with Clint Bodungen

  • LinkedIn:
  • Twitter:
  • Email:
  • Organization:
  • CyberSuperHuman - AI Courses -

Connect with us


KnowBe4 Resources:

Show Notes created with Capsho -
Sound Editing - James McQuiggan
This episode was edited by Matthew Bliss of MB Podcasts. If you'd like to ask Matt what he can do for your podcast, visit and schedule a consultation today! 

A proper culture makes people proud to have a sense of ownership over what it is they're doing. And that is the culture that people need. I'm Clint Bodungen. I am the co-founder and CEO of ThreatGEN. Welcome to the Security Masterminds podcast. This podcast brings you the very best in all things cybersecurity, taking an in-depth look at the most pressing issues and trends across the industry. Generative artificial intelligence models like ChatGPT are having a significant impact on security. There are compelling arguments that the benefits of generative AI outweigh the risks if the technology is used properly. With proper precautions and cultural norms, organizations can harness the advantages of generative AI for security while mitigating risks. Clint Bodungen is the CEO and co-founder of ThreatGEN and an author of several cybersecurity books, including an upcoming book on artificial intelligence with a focus on generative AI. This is episode 22, Demystifying AI's Impact on Cybersecurity with special guest Clint Bodungen. Hey everyone, and welcome to this episode of Security Masterminds. In this episode we're going to talk about training and education around cyber ranges, and we're going to talk a lot about AI. I think this is a fascinating episode, and I hope you enjoy it. Very often we have some pretty interesting stories related to this. In this case, we wanted to know from Clint: what was your cybersecurity origin story, how did you get into this? And well, this is what he had to say. I actually started programming when I was eleven years old on my Tandy 1200. And I was getting a computing magazine, but there would be the code in there for writing these text-based games, and I would tell my mom to type it out because I couldn't type. I started learning to code from there. She had a friend that knew how to code and started teaching me how to code. So I was really popular in high school, obviously, because of this. 
And so when I was in high school, because I could code, I taught the computer science class in high school. So I was basically a TA, and hacking wasn't really a thing back then, in the early 90s. So fast forward. I actually went to college for theater. I was going for theater special effects. I went to the Colorado Institute of Art to start with, transferring to Denver University. It did start sort of a fortuitous path for me, because industrial design technology was a lot more than art and theater special effects. You're really learning how to come up with, it's like architecture for industrial products, right? You're learning how to design. You work with the engineers to do all the exploded diagrams and stuff like that. Way back then we were actually doing things for, like, programmable logic controllers and things like that. So it was really interesting how that ended up kind of coming to fruition later in life. But it was an art degree, and I ended up moving back to Houston and ended up switching my degree to graphic design. So fast forward to the Air Force. I was intelligence operations. Then I went into air transportation, but not the FedEx guys. It was more like FedEx guys with guns. I was in a tactical unit, and so it was fun. We got to do a lot of training with the 101st Airborne. My OIC, my lieutenant, was like, well, you know what? You're good at computers. So they stuck me in the LAN office, and that's when I really reinvigorated my passion for computers and started learning about it. I was reading all the books and learning, and I remember I got a TDY, temporary duty, to Diego Garcia. It's paradise with bombers. And it was actually really fun. But there's nothing to do there but snorkel and ride the turtles. So I spent countless hours going through, like, CBT networking courses and all that, and just really developed a passion for it there. When I came back, they had this new program. 
It was part of the OpSec program, called the Computer System Security Officer, the CSSO program. And you didn't have to be an officer. I was enlisted, and they made me the OpSec manager and the CSSO for my squadron. And lo and behold, that's when I got into this thing. It wasn't cybersecurity back then, it was computer system security. Then I believe it became infosec, or information security. And so the buzzword bingo evolved over the years, but that's really what it was. And then where I really got into the Red Team side of it was my supervisor in the Air Force and I, we were really big into this thing called AOL punting. I don't know if you were familiar with that, back in the AOL days, before you weren't allowed to use AOL, or this was before I knew you weren't allowed to use AOL. And then once I got out of the Air Force, I worked for an ISP, where the guy that was my supervisor was a guy that hacked. He got in trouble for hacking their system. But instead of, and this happened a lot back then, instead of prosecuting him, they offered him a job, and he became their head security guy. And so I worked underneath him. He kind of taught me the ropes on the Red Team side of things, before there was a Red Team side of things. I guess it was just hacking back then. It was all amalgamated together back then. Right after that, the rest is kind of history. I had some cybersecurity titles, I had some cybersecurity positions, and because I had enough aptitude and passion, I was able to grow my cybersecurity career. That was more of a traditional one than we see sometimes, Jelle, like some of these other folks have come from way out there. But going the military route, he said something in there about writing code from a magazine. And I remember those days. 
My first computer was an Atari 1200XL, and I had the BASIC cartridge, and I used to get these books and these magazines, and they would have programs in the back that you would copy line for line, and it would do silly things like make a cursor move around on your screen. Do you remember those days, Jelle? Well, unlike you, I'm a little younger. So in my case, it was a different type of computer I started on. And another thing that he said, which I really recognize in a lot of the organizations that we talk to, is small organizations evolve, right? They become bigger, et cetera, and they start out with an IT department, and there's a guy in IT who has an affinity with security as well. So, hey, dude, you're now our security guy. Go build it out. And it grows, and it grows, and it grows, and he falls into something bigger. And at a certain point, they have their own real, like, their cybersecurity department, their SecOps department, whatever you want to call it. That's how most people get started nowadays, especially the ones that are passionate and have aptitude like Clint does. It is a great way to roll into the industry. Nobody chooses cybersecurity. It just kind of happens to you. And that's the beauty of our industry. Whenever we talk to somebody who's, like, a CEO or founder of something, or that has that entrepreneurial spirit, there's something special about these folks. I'll admit I'm not an entrepreneurial spirit. That's just not me. But usually what you find is the successful ones have found something that's missing, and they're able to fill that gap. So I wanted to know, what was it that was missing in our industry that led him to start ThreatGEN? Well, this is what he said. Well, really, I was filling two gaps. I feel like my entire career, I was filling kind of the SCADA, ICS, process control, IACS gap, depending on where you come from and what you want to call it. But I've been doing industrial cybersecurity since 2003. That was a gap in and of itself, right? 
So my entire career since 2003, and it's still a gap, has been focused and shaped there, filling that gap. But I guess that wasn't good enough for me. I guess I find all the little tiny crevices of gaps, because I want to make sure that I'm filling everything, I guess. But in 2013, I was working for a company called Cimation, which at one point after I left got acquired by Accenture. I was sitting there. We were coming up with the training program at Cimation. I was the product manager, the cybersecurity researcher at first. Then I got moved over to product management because we were developing some products, and our product was training. And we were all sitting around, and it's kind of an all-star crew, because there was, like, Brian, Michael, Dylan Barrisford, Eric Forner. I mean, these were, like, the original guys at Cimation doing all the research, the Black Hat presentations, hacking PLCs. So those guys were really fun to work with. And the gap there was really the training aspect of it. There was a training gap, like the people, right? How do you train people on this? Because, first of all, I mean, HMI screens are somewhat visual sometimes, but they're still kind of boring. Reading code is boring to a lot of people. And HMI is human machine interface. So how do you train people on this effectively? There's a lot of people that would have you go to a class, and then you spend a week there playing with the PLC, and you learn some parlor tricks and you learn some cool things. The instruction is quality, the instructors are quality. But my problem with a lot of training is that you go there for five days, you learn a bunch of stuff, and then you come back, and then, now what? I don't have access to this stuff. You may forget it or whatever. And so I just feel like a lot of training doesn't fulfill the purpose it's supposed to. You learn a little bit, but you forget a lot, kind of thing. 
And so we wanted to build something that was more immersive, more engaging, something where you could actually learn process control security, ICS cybersecurity, without posing safety risks, without damaging production systems and things like that. So that's kind of what we did first, and then the word cyber range didn't exist, right? There weren't cyber ranges yet. So we were trying to create all this from scratch, trying to figure out how to do it. So I guess the impetus of what became ThreatGEN was we were sitting around playing Grand Theft Auto, and this was me and another friend playing Grand Theft Auto. And we're shooting up this power plant, or we're in a substation and they're shooting this up. And I was like, we can do that. And my buddy looks over and he's like, you want to go shoot up what? You want to go shoot up a power plant? And I was like, no, video game physics. Video game physics are very realistic. And if you could put a power plant here with somewhat realistic physics, that's it. That's our answer. We can create industrial control systems, process control systems, in video game space, use a gaming engine, and then go from there. And so that's what we did. I became a video game developer, and after about six months of working on this and fumbling through, like, Pygame and all these different video game type of engines, Aaron, one of my ThreatGEN co-founders, we were working together at the time, he says, you know, my brother is getting his PhD in video game design at Central Florida. I'm like, and you waited till now to tell me this? Once I knew that, that accelerated everything. And so, yeah, in order to help solve the OT cybersecurity training gap, and the limitations of being able to get quality training that is immersive, engaging, and then you can take it with you, you can keep it, right? So now, doing everything in a gaming engine, everything could be virtual, everything. 
You can deliver it over the web, and you can make it more accessible, and then you can do it safely. We ended up developing, so we developed virtual PLCs and processes that can literally speak. We literally had real PLCs talking to our video game back and forth, and real-life PLCs were operating virtual process control environments, and vice versa. So we created this virtual environment. So that's really kind of how we were solving that problem. And then it evolved into a further gamified version, because there was another problem that existed, which was that most of your cybersecurity force doesn't actually need to know how to hack these things. CTFs are fun, but most of your cybersecurity force doesn't need to be a hacker, they don't need to hack things. A large portion of cybersecurity being left out are leadership, management, strategic, the big picture. Everybody wants the sexy stuff, everybody wants the hacking, but at the end of the day, there's a gap in leadership training, management, and all that. So we created a more strategic version that's like Civilization or Risk, if you've ever come across that before. And that's what it was. So we created this more strategic version, using a gaming engine, using the natural, competitive, adversarial aspects of gaming and cybersecurity together, to create something that is more accessible, to create something that's more strategic, to create something that doesn't require deep technical acumen, to be able to learn what the Red Team does and things like that. So we kind of did that, and that just kind of moved into, like, oh, well, let's use this to make tabletop exercises better too, because those are boring. So that's kind of how everything evolved, and the solutions we were solving there. So when I was in the Army, every year we had a head-to-head competition that used a cyber range. 
And I found that the people that participated in those cyber range exercises came out learning quite a bit, because they were really, really dealing with true activities and things like that, which a lot of times you don't get in just a class or a course. So I love the idea of cyber ranges. What do you think? Yeah, there's value in me telling you how it should work, but there's even greater value in you actually doing it. And that's what a cyber range allows you to do, right? The hands-on part comes in really handy. It builds experience and it builds muscle memory. And I think that muscle memory part is, in cybersecurity, very important, because it kind of automates what we do, saves time. And in the end, that's what we need to do. Be quick about it, right? Because the attacker is not letting up, so we need to be quick in our defense. And the gamification part that they're doing at ThreatGEN, that is really cool, because gamification triggers the mind to remember things. It makes it more fun, so to speak. People like doing fun things. It's like in school, you wanted to be in the fun classes, you definitely didn't want to be in the boring ones. But gamification works really well. But for me, what's critical here, the thing I really like, is the fact that they're including leadership skills, because we all too often see, when there's a ransomware attack, that there's downtime in an organization, and it's chaos at the back end. It's chaos because people don't know what to do. It's chaos because the techies might know what to do, but the rest of the organization just goes, like, there's silence there. So having the leadership teams included in this training, giving them guidelines, giving them basically that same muscle memory as the tech guys have on what they should be doing in case of an incident, that will shorten downtime immensely. And we all know, especially in ICS, downtime costs money. 
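Clint's description of real PLCs driving virtual processes raises the question of what that integration looks like on the wire. The episode doesn't name a protocol, but virtual-PLC setups like the one he describes commonly speak Modbus TCP, one of the simplest and most widespread industrial protocols. As a rough illustration only (the function, register address, and count below are arbitrary example values, not anything from ThreatGEN's actual implementation), here is a minimal sketch of building a Modbus TCP "Read Holding Registers" request frame:

```python
import struct

def modbus_read_holding_registers(transaction_id: int, unit_id: int,
                                  start_addr: int, count: int) -> bytes:
    """Build a Modbus TCP 'Read Holding Registers' (function 0x03) request.

    Frame layout (all fields big-endian):
      MBAP header: transaction id (2 bytes), protocol id (2 bytes, always 0),
                   length (2 bytes, count of bytes that follow it), unit id (1 byte)
      PDU:         function code (1 byte), start address (2 bytes), quantity (2 bytes)
    """
    pdu = struct.pack(">BHH", 0x03, start_addr, count)
    length = 1 + len(pdu)  # unit id + PDU
    mbap = struct.pack(">HHHB", transaction_id, 0x0000, length, unit_id)
    return mbap + pdu

# Example: ask unit 1 for 10 registers starting at address 0
frame = modbus_read_holding_registers(transaction_id=1, unit_id=1,
                                      start_addr=0, count=10)
```

A virtual process would send a frame like this over a TCP socket (conventionally port 502) to a real PLC and parse the returned register values into simulation state, which is essentially the bridging Clint describes between the game engine and physical hardware.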
So given how involved he is in the training and education side of things, I always feel like people maybe underestimate the value of training. So I wanted to ask him, what does he think people miss when it comes to the value of training? So Bruce Lee said, you fight like you train, so train like you fight. I mean, lectures don't prepare you for battle. Training prepares you for battle. And so you have to do things hands-on. I don't care what kind of learner you say you are. Well, I'm a visual learner. I'm an auditory learner. Everybody's a hands-on learner. That's the only way that you're truly going to prepare somebody for the heat of the battle. And look, combat freeze exists in every industry, in every aspect of life, right? If you're not prepared to handle something under stress, you get combat freeze. And you have to inoculate yourself with stress. Stress inoculation is a thing, right? And that's what intensity training is, and live-fire exercises. It's so that you act automatically under stress. And whenever you have an incident, and I've seen it happen in real life with cyber incidents, people freak out when there's an incident, and they scatter like ants. If you train properly, if you understand how to handle the situation, if you're used to that adrenaline going, if you're used to chaos, then you actually perform well under pressure. You calm down. You've heard this in so many movies: remember your training. Well, if you're properly trained, you won't have to remember your training. Stress will actually trigger the training. And so you have to do hands-on. You have to do things. And by the way, annual training is checking the box. You have to have frequency. So, yeah, I would agree with so many of the things he said in that. I was military, I was U.S. Navy. And they taught us things like how to repair a broken pipe, and we practiced on these things, and sometimes we practiced fire drills on the ship. 
You'd just be walking along and they'd call a fire drill, and you had to pretend to grab a fire hose and work on this. And it felt a little silly at the time, but it's one of those things you don't want to have to figure out when something real happens. If there's really a fire, you don't want to have to go, okay, let me look at the manual and let me read how I'm supposed to deal with this. Right? It's repetition. It's stuff happening over and over again, and the hands-on exercises. So, yeah, I agree a lot with what he just said. Yeah. Practice makes perfect, right? Absolutely. There's a lot of power in semantics. There's a lot of power in words. So calling it practice from now on instead of an exercise, I like that. Yeah. Most people understand that. So it's practical. It speaks to everybody. So I'm all for it, let's change it into practice. Let's practice, people. So I'm always curious about this with people that have a lot of experience and have seen a lot of things. What are some of the common mistakes that organizations make when it comes to cybersecurity? And this is what he had to say about it. People rely too much on the technology and don't focus on people and culture. And I think people let complacency set in too easily. And that's not just individuals personally, that's organization-wide. I firmly believe that when it comes to the trade-off between convenience and security, convenience will always win out. And that's the reason why security fails a lot of the time, because people want their convenience more than they care about security. But if you make security a culture, then people do care. When it comes to cybersecurity culture, right, culture is so ingrained in society, in religions, in politics. Once you become part of a culture, I mean, it's literally, it's tribe-like. And that culture, whether people realize it or not, culture instills a sense of self-discipline, in that you put pressure on yourself to remain part of that culture. 
You put pressure on yourself to adhere to written or unwritten rules of that culture. It doesn't have to be a shaming culture. I don't think that works nearly as well as when you motivate people to want to have a sense of belonging and a sense of helping other people. Right? If the sense of culture makes you, inspires you, to want to do good, to want to be an example, I think that's the proper culture, right? A culture of shaming people doesn't work. I think a lot of corporations try to do this and they think that that is team building, and it's not. A proper cybersecurity culture should instill a sense of pride. I'm proud to be part of this culture. I'm proud to be part of this cybersecurity warrior, cybersecurity champion thing. A proper culture makes people proud to have a sense of ownership over what it is they're doing. Technology, as much as we need it, enables complacency. People get reliant upon technology. The technology enables that complacency, and we've become reliant upon it, and so we need it. But I think you counter that. It's a counterculture. You have to not just create cybersecurity culture. You have to create a proper cybersecurity culture that is not based on shaming. It's based off of a sense of pride for belonging and the desire to want to do good. I think that aligns a lot with kind of what I was saying. You have a natural tendency, a natural desire, to do good and want to do good. What do you do when people aren't looking, when people aren't around? Right? I think that's the measure of true character. I think that, simply by saying that, culture isn't something that you belong to, culture is not something that you try to do or try to adhere to. Good culture is just something that happens, and then everybody behaves a certain way, because that's just the way everybody knows it should be done, for good, because it's the proper thing to do. And I think that's good culture. So I like that he talks about culture. 
I think culture is a very important thing, but there's a lot of discussion that needs to be had about it. The way I look at culture, so many organizations are now talking about culture. It's become a top-of-mind topic, for most security people at least. And that's a good thing, because we can only drive the ideas and the value of security culture by having that dialogue. And I think the one thing we're all in agreement with is that it's about the ABCs. It's about awareness, behavior, and culture, all together. And by talking about culture, we develop it further. And it is one of those topics that is fairly new. So we need to have that dialogue. We need to evolve it, we need to talk about it, and we need to make sure that it becomes a part of cybersecurity in a way that we all understand. And that's why I really like that Clint focuses on this as well. The more people that help out, the more people that talk about it, the better it is for the industry at large. Yeah, we absolutely need to have these discussions and kind of some understanding of this. And I think in many ways, we need to clarify and maybe define the topics a little bit better than we have. One of the big talks we have these days revolves around AI. So what I wanted to know from him is, how is he seeing AI impact his business? And this is what he had to say. AI is interesting because there are so many different facets to it. And people, a lot of the time, it's kind of like, you keep using that word. I don't think that word means what you think it means. It's one of those things where it's been around for a long time. People think it's new. It's not new, but there are new applications, and there's certainly a technological evolution and explosion happening right now. But I've been researching and using machine learning since 2013, going back to when I said I became a video game developer. Well, my first foray into AI was video game, quote unquote, AI. Right? 
And so AI in video games means something different than it means in industry. In industry, it's machine learning. You can have an aspect of machine learning in video game AI, but game AI literally just means opponent behavior. But that sparked interest, and I started learning about the different aspects of machine learning, because I saw potential in being able to use video game AI, the way it works and the principles that it uses. And I'm a big risk nerd, a cyber risk nerd, risk management, risk analysis nerd. I started seeing potential in the way you can use machine learning in cybersecurity. Now, Stuart McClure was the founder of Cylance, which is now a BlackBerry company. Stuart was one of the first people to start using machine learning in cybersecurity with Cylance. Now, a lot of people claim that they do it. So that's kind of where we were. And so it's been around for a long time. People are using machine learning to help SOC analysts. They're using machine learning to do risk analysis using, like, the FAIR ontology, because you can't do Monte Carlo simulations in FAIR on your own with Excel spreadsheets very well in a huge organization with thousands of nodes, right? Machine learning helps crunch those numbers. And it's been doing that for quite some time. But more recently, what people are excited about is the public release of generative AI technology and tools, specifically, right now, large language models. You've got large language models and you have diffusion models, whether you're talking about art or language. And it's really exciting, because what large language models actually allow you to do, it's a shortcut as far as the consumer is concerned and businesses are concerned. Large language models are a shortcut that allow you to much more quickly get to analysis and natural language processing and responses that are more Jarvis-like rather than machine-language-like. And so it makes it more accessible. 
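The Monte Carlo simulation Clint mentions in the context of the FAIR ontology has a simple core: sample how often loss events occur and how much each one costs, then look at the distribution of annual totals. The sketch below is an illustrative toy, not FAIR-calibrated tooling; the Poisson frequency and triangular loss-magnitude distributions, and all the parameter values, are made-up examples chosen only to show the shape of the technique.

```python
import math
import random

def poisson(rng: random.Random, lam: float) -> int:
    """Sample a Poisson-distributed event count (Knuth's algorithm)."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def simulate_annual_loss(freq_mean: float, loss_low: float, loss_mode: float,
                         loss_high: float, trials: int = 10_000,
                         seed: int = 42) -> list[float]:
    """Monte Carlo over simulated years: event count ~ Poisson(freq_mean),
    per-event loss ~ triangular(low, mode, high). Returns one total per year."""
    rng = random.Random(seed)
    totals = []
    for _ in range(trials):
        events = poisson(rng, freq_mean)
        totals.append(sum(rng.triangular(loss_low, loss_high, loss_mode)
                          for _ in range(events)))
    return totals

# Example: ~2 loss events/year, each costing $10k-$400k (mode $50k)
losses = sorted(simulate_annual_loss(2.0, 10_000, 50_000, 400_000))
median_loss = losses[len(losses) // 2]
p95_loss = losses[int(len(losses) * 0.95)]  # a "bad year" estimate
```

The percentile readout (median vs. 95th percentile annual loss) is the kind of number this approach produces; scaling it across thousands of nodes with correlated inputs is where the spreadsheet breaks down and, as Clint notes, machine-assisted tooling takes over.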
It means more people can do data analysis. It means more people can do more things faster. Right? So I see there's a positive side and a negative side to the explosion and the acceleration of this AI technology. We are starting to see that on the positive side. In terms of my business, I can do that data analysis quicker, I can write a baseline report quicker, so I don't have to do the initial typing up of all the mundane stuff and the boring stuff. So it's like having a junior analyst, a junior programmer, a junior this. So it's like having someone that does the baseline boring work, and then I come in and I adjust it, I edit it, I fix it, and things like that. And so that, to me, for a small business that can't afford to have a bunch of people, for a small business, it speeds things up, it multiplies my ability and my skills. That's the positive side, right? It makes some skills more accessible to people. So the negative side of that particular conversation is that, well, people are afraid that it's going to take jobs, and you're going to eliminate jobs, and all of that. Well, for all the jobs that are being eliminated, there are new jobs being created. I know, I know, people are like, no, that's an easy thing to say. But if used properly, AI makes new skills more accessible, more quickly. You could actually have more people filling more basic, more entry-level jobs, because you can use AI to help you train, you can learn faster, you can augment your skills. I'm not saying cheat, but I'm saying use it to augment yourself, right? So there is a trade-off. There's a positive and a negative on the jobs front with what AI allows you to do. There's also the negative side in terms of cybersecurity, right? So now we have things like WormGPT. There are a bunch of different things that allow people to write malicious code more easily. The big thing right now is it makes phishing emails more realistic looking. And so you have that aspect. 
But with any technology that allows us to advance forward, you're also going to have people that are going to use it for bad, right? So that's with anything, right? So we can use it for good or for bad, but that comes with progression and advancement in society, humanity, and technology. You can't sit there and look at things and say, this AI is going to be so bad for us, it's going to make it easier for hackers. Well, it makes it easier for defenders too. In fact, it exponentially makes it easier for the good guys. And so I think that, in general, this whole foray into AI and the accelerated advancement of the technology is, overall, going to make things better for cybersecurity, for analysts, for people. I think the net good is better than the net bad that's coming. So I think it's interesting, some of the things he said there. One thing that stands out to me is how small businesses can be impacted by it and how it can help them. And I kind of see it. Honestly, there's a lot of hype out there around AI. And I do think it's very useful, but I kind of think it's useful on the same level that things like internet search engines became useful for smaller businesses, smaller organizations. Right? In smaller organizations, maybe they don't have a full-on research department or someone that can go spend a day or so trying to chase something down through traditional means. I think that it's very helpful from that standpoint. What do you think, Jelle? I always like questions like these, and I think that I generally look positively at the evolution of AI. I think, as with many things that are unknown, people fear it way too much. Sure, there are malicious ways to apply AI, and we've already seen attacks in the world that are driven by AI, cyber attacks. But I also see a lot of good. Business models are changing in organizations. Organizations find new ways to develop revenue streams. I like Clint's comment on the information part. 
People get more information more quickly, they get new skills more quickly through using AI. If you use it as a learning tool, it's brilliant. As with anything, time will tell. And that's the thing with AI. How are you seeing AI impact your business? Well, in many different ways. Best prepare for it. That's very interesting. There's another thing about AI I've been seeing a lot of lately, and something to consider, and that has to do with the problems that we've seen with it in some ways, where it hallucinates or generates untrue information. And I wanted his input on that. So I asked him, what about these AI hallucinations? Are these an isolated issue or a bigger problem overall? And this is what he said. Number one, to start with, I think right now they're isolated events, but I think it can be a real problem, especially as AI voice replication and video becomes more of a thing, right? Because I can very easily clone anybody's voice. If I have enough sampling of your voice, I can feed it into ElevenLabs and replicate your voice. And so when you have things like news and elections, and things where we live in a society where, if it's in the news, it's already true. We also live in a society where, right now, more than half the people in this world don't even know the capabilities of these voice replication tools. They may have heard of ChatGPT, but they don't know the extent of it. So if you do deepfakes and things like that, the majority of people are going to think it's real. There's a big difference now between writing a mudslinging story and listening to this guy's voice. It is a problem, but at the same time, it's not new. Slander has always been a thing, right? Smear campaigns have always been a thing. So they're just new tools. It's just new ways to do bad things. And it goes back to what I said. Yes, it's a problem. Is it easier to do these bad things? Yeah. It's not there yet, but you're also going to be able 
to use the same AI that generates fakes to look for fakes. Again, the technology isn't there yet, and it still has a difficult time detecting fake language. It's not 100%. But as we move forward, as the technology evolves, we will get there. I think it goes back to a culture thing: never believe what you see or read or hear right off the bat. Always double-check. Just like we tell people with phishing emails: check the source, check the person who sent it to you, check the context. Well, now we have to tell people: if you hear something that sounds unbelievable, it probably is. If somebody is slandering somebody, double-check them, cross-reference, and fact-check yourself. So I don't think it's any more of a problem than it has been. It's just one more thing we have to tell people to be aware of. It goes into the same conversation as phishing emails, but now it's a different type. That's a pretty interesting take on that, and the fact that we have to check everything anyway is such a true statement these days. You can't just take what it gives you. When you go to a large language model and ask it to give you a bunch of stuff, you can't just take it at face value and move on. Unfortunately, many people, especially through the beauty and wonder of social media, are doing exactly that. So I think the lesson here is that we need to take the best practices we should be applying in other places and apply them to AI. Jelle, what do you think? Well, take one of those best practices from security awareness: trust but verify. I really do like that one when it comes to this. The media in general, and social media, have been framing information for a long, long time. That's very common to do. Sometimes that's with the best of intentions, and sometimes the intention is not so good and they provide misinformation or disinformation. That has been around for ages.
So the whole use of AI when it comes to media and social media hasn't changed much; trust but verify is still very applicable. The problem, though, is that the verify part has become increasingly difficult when it comes to things like deepfakes, because you can make them sound so authentic. You can make it sound as if an important person you know has said something, because you recognize his voice, and your brain simply goes: one and one is two. I hear his voice, so it must be that person, it must be real. And your brain basically shuts off and you don't apply any critical thinking anymore. That is a big problem with AI: it makes everything hyper-realistic, so believable that it shuts our brains off. So trust but verify, and combine that with critical thinking. That's a base skill we need to apply in this digital age we live in. Continuing on this discussion about AI, I know I've heard some things, but I wanted to know what he thought: what's a myth he's already seeing with AI that really needs to be debunked? This is what he said. There's two. I have such a huge soapbox in two areas when it comes to AI. Number one is the misconception of what constitutes copyright infringement and plagiarism. First of all, there's a general rule of thumb that I follow with plagiarism, and that is the difference between referencing and plagiarism: referencing uses a little bit from a lot of places; plagiarism uses a lot from a single place. Now, what's interesting here, and I'll explain in a minute, but I'm going to say this: humans are more likely to plagiarize than ChatGPT or any other large language model. And the reason why is because the way these models work is not different from the way human brains work when it comes to figuring out what to say, what to write.
It's an autoregressive model, which means it looks at its data set and at the context of the current conversation in order to decide what token should come next. And it bases that on the contextual history of everything in its data set to find the most likely thing that should come next. Believe it or not, your brain works in a similar way. Everything I'm saying right now is based on an algorithm: my brain, given the context of this conversation, is looking backwards into my entire history of conversations and language to figure out what I'm going to say next. That is the way language works, building sentences. I am regressively looking back into my history to figure out what I'm going to say next, and that is the way large language models work. The difference is that when I do this to generate content, a large language model is basing its history, its context, its generative capabilities on everything it knows, and it does it all at once. The human brain cannot do that. The human brain can't look at the entire history of all my conversations and everything I know at once to grab from different sources. Yes, you are doing that; yes, you are processing. But it is more likely that you're going to pull an entire section of context than the generative AI will. The generative AI is going to pull scattered references from a lot of different places, because it's looking at the likelihood of what's coming next based on the entire context. It can look at a lot of different pieces all at once, whereas the human brain doesn't look at all those pieces at once; we look for larger chunks of contextual reference. Therefore, we are more likely to reference an entire chunk of something than generative AI, than large language models, are.
And because of that fact, because generative AI and large language models work from that full context, basically looking at everything, that is why I feel it's not plagiarism. Long story short: if it's publicly on the internet and accessible, not behind a paywall, yes, it is copyrighted; it has to be copyrighted. But the AI is not reproducing that entire thing and copying it. It's referencing it, and it's referencing it in such small pieces that you can't even really give it a source or a citation. Like I said, it's less likely to plagiarize or infringe copyright than humans are. And data sets that use that information don't need your permission to reference it if it's publicly accessible on the internet. So that's my two cents. That's the misconception I think needs to be corrected, because I think people are hindering progress. To make this technology better, to continue to advance it, people have to be willing to offer their information, to let their information grow the data set. That's the only way this is going to get better. And when people get into the other thing, like actors and writers striking because of the potential impact of this technology on their careers: well, learn how to use it. AI is not going to take your job. Somebody who knows how to use AI better than you is going to take your job. So learn how to leverage the technology instead of trying to stifle it, because it's here and it's not going away. And then the other misconception that I think needs to be corrected, which I'm just going to mention because there's no real way to debate it: AI is not going to wipe out humanity. I don't think AI presents an existential risk in the way people think it does. That's right,
I'm making a plea right now: all hail to our AI overlords. If AI is going to wipe out humanity, it's going to be in an Idiocracy type of way, where we become so reliant upon it that we devolve. That can happen. But I don't think Terminator is going to happen. I would like to envision a future where AI is more Star Trek-like as opposed to Terminator-like. It's a very interesting discussion, some of the things he's talking about there. There have been discussions about things like derivative works and what is considered plagiarism or what is considered infringement. Some of the things being done with AI, I understand, leave some discussions to be had. What do you think, Jelle? I think that the rapid introduction of AI to the general public has forced organizations to take a hard look at their revenue models and has forced them to mature really quickly. And when they can't do that, they lash out. In some cases they're right, and in some cases they aren't. I usually use the example of a painting called The Next Rembrandt. The Next Rembrandt is nothing more than all of the paintings that Rembrandt ever created, put into a computer as training data, and then the computer is asked: create a new painting based on all the techniques you see in these paintings, and whip up a new Rembrandt. It's not painted by Rembrandt; it's done by a computer. It's actually beautiful. And if you look at it, you recognize a Rembrandt, but it's not a Rembrandt. The question is, who should get the money if that painting were ever sold? Would that be Rembrandt, because of his influence and his inspiration? Would it be his family? Would it be the person who owned the computer that created it? Would it be the person who programmed it? I don't know. All I care about is that we should be having those discussions, and a lot of times we aren't.
They're not being had; it's just people fighting, putting their opinions out there and saying: I think I own that money, I need that money, it's mine. Hang on, it's not as black and white as that. And the second thing, and I do agree with Clint here: if it is publicly available, if it's there for everybody's eyes to see already, it's kind of already out there. That choice has been made for you. And I think that by leveraging that publicly available data, we can mature and evolve AI to the point where it becomes even more useful for humankind. I'm all for that part. So, Clint has been an author of several books already, and I wanted to find out from him, and maybe get a little sneak peek: is he working on any books now? This is what he said. I am. I'm working on ChatGPT for Cybersecurity, which covers more than just ChatGPT, but I feel like the title is limited context. I'm really enjoying writing that book. They say it should be out in February; I think it's going to be out in November. It really depends on when the final manuscript gets done. I have to keep going back and doing regressive edits. But there's so much more that I'm learning about this technology, so much meat left on the bone, that I've got my courses that I'm teaching on AI for cybersecurity where I can really expand upon the ideas and not be stuck with just what's in a book. But I do have another secret project. I've already been approached by someone, I can't say who or where, to expand upon these ideas. So when ChatGPT for Cybersecurity launches, I will have another book, one that is much more comprehensive across a broad spectrum of AI technologies, that I'll be working on after that. Awesome, that sounds like a good thing. I do think it's kind of funny, his point about how it keeps changing and he has to do regressive edits. I can only imagine the madness that comes from that.
So now we move on to one of my favorite questions that we ask guests: what was your biggest failure, and what did you learn from that experience? This is what he said. The biggest mistakes I have made were not taking advantage of an opportunity. I always leap before I look. I just go for it. I'm ready, fire, aim. But on the other hand, there have been times where I've had really big opportunities and I didn't seize them, because I had either impostor syndrome or I was afraid of being rejected, or whatever. There have been times in my career, and I know I've done this several times earlier in my professional career, where I would get stressed out or uptight or upset because I had to do something I'd never done before, and that would bother me. I think people need to not be afraid to do something they've never done before. I know that makes people stressed, it makes people uncomfortable. But don't worry about what would happen if you fail. Instead, imagine what would happen if you succeed. That's my coffee mug quote: don't worry about what would happen if you fail; imagine what can happen if you succeed. Don't be afraid of rejection. A no doesn't hurt, right? A no is simply a no. A no or a rejection might give you temporary remorse, might make you feel bad for a little while. But not taking advantage of an opportunity and missing out on something great lasts forever. You never know how your life can be changed by taking advantage of an opportunity. And I think, again, it goes back to the net good. If you're not afraid to capitalize on opportunities when they're presented, the net positive is almost always better than the net negative. And that's the biggest thing: take advantage of opportunities. You know, there's that saying, fail fast. It's not necessarily bad to fail; you just want to do it quickly and move on to the next thing.
Clint is a true entrepreneur in that he understands that fear is a disabler. It's the one reason why people who have great ideas, who could solve the world's issues, haven't started a business to do so: they're fearful of something. Follow his advice. Fear is definitely something; if you feel fear, sure, think about it. But it shouldn't be the reason you don't do something. Sometimes you just need to ignore fear, and sometimes that's okay. So this was a really cool discussion related to all things education and AI, I feel. A lot of cool stuff came to the top here: things about entrepreneurship, and very positive mindsets in this industry, too. Jelle, what was something you really took away from this? I think that Clint is somebody who really understands cybersecurity, but he also has a lot of experience in AI, which is fairly new. If he can combine those two, like he's doing now, he can do really cool things. And I think that we in cybersecurity, the industry at large, need to embrace AI, and Clint being at the forefront of that is something I really admire. Absolutely. And I look forward to his new book on AI coming out. I think that's going to be a fascinating read. In the meantime, I really did enjoy this conversation and want to thank Clint for giving us his insights and sharing this information. But we did leave one part out. Say goodbye, Jelle. Goodbye, Jelle. You've been listening to the Security Masterminds podcast, sponsored by KnowBe4. For more information, please visit. This podcast is produced by James McQuiggan and Javvad Malik, with music by Brian Sannishon. We invite you to share this podcast with your friends and colleagues. And of course, you can subscribe to the podcast on your favorite podcasting platform. Come back next month as we bring you another Security Mastermind, sharing their expertise and knowledge with you from the world of cybersecurity.