Security Masterminds

Special Episode - Loren Kohnfelder

December 14, 2023 Loren Kohnfelder Season 2

Feeling the frustration of constantly battling memory-related vulnerabilities in your code? What if I told you there's an unexpected twist in the story that could change everything? Join me as we explore the captivating journey of transitioning to memory-safe languages in programming, and uncover the game-changing solution that awaits. But that's a story for another time...

Our special guest is Loren Kohnfelder, joined by Roger Grimes.

Loren Kohnfelder, a distinguished figure in the realm of cybersecurity, is widely regarded as a trailblazer in the development of PKI (Public Key Infrastructure). His significant contributions to the RSA algorithm and its application in real-world scenarios have solidified his position as a thought leader in digital security. With extensive expertise in encryption and network systems, Loren offers a wealth of knowledge for developers seeking to navigate the transition to memory-safe languages. His pioneering work serves as a cornerstone in understanding the complexities of cybersecurity and the pivotal role of memory-safe languages in fortifying software against vulnerabilities. Loren's profound insights and experiences make him an exceptional guest, providing a comprehensive understanding of the evolution of digital security and its relevance to memory-safe languages.

I think if there are specific pieces of code that are well contained and you can rewrite those in a memory safe language, that's a fine thing to do. But, for example, if you've got a library that's in the middle of a bunch of memory unsafe language code, and you write that into memory safe code, you're going to have bridge code connecting across that boundary, because you obviously can't just slip from memory safe land into memory unsafe land, where you're now taking on risk without managing those borders. 
- Loren Kohnfelder

In this episode, you will be able to:

  • Uncover the secrets of PKI with Loren Kohnfelder.
  • Learn the benefits of transitioning to memory-safe languages.
  • Overcome the challenges of rewriting large codebases.
  • Explore the feasibility of adopting memory-safe languages in programming.

Connect with us

Website: securitymasterminds.buzzsprout.com


James:

Hello there. I'm James McQuiggan, producer of Security Masterminds, and I want to thank you for joining me today for this special episode with Loren Kohnfelder, who's talking with KnowBe4's Data-Driven Defense Evangelist Roger Grimes about the latest document that came out from CISA relating to memory-safe languages, or MSLs. Loren presents a unique perspective: Roger was excited about using MSLs and was promoting them, but Loren sees it differently, and the two of them had a very engaging and informative discussion that we wanted to share with you. So enjoy this special episode, a treat just in time for the holidays.

Roger Grimes:

Loren, I've been friends with you for a while, and I've enjoyed our discussions. You and I debate so many things, it's hard to remember that you're considered the father of PKI; that in your early days in college back at MIT, you're the guy that kind of came up with the idea of PKI, or something like it. I was wondering if you could explain in your own words the story, and how the world came to perceive you as the father of PKI.

Loren Kohnfelder:

Yes, a lot to say there. Thanks for asking. And let me just say, I've really enjoyed our conversations and getting to know you over the years. I think it's a great example of bouncing ideas back and forth, right? We learn a lot from each other, and that's the way I think we make progress: being a little bit wrong, and then having a friendly, accurate, detailed critic push you in the right direction, because no one gets it a hundred percent off the bat. So, to go to MIT, I'm afraid I have to start by talking about how it was a long time ago. This was 1978. Okay, so the ARPANET was there, but I'd used the ARPANET only a tiny bit before coming to MIT around '76, and I'm not sure I used it much at all at MIT, although I'm sure people there were working on it. But it was just one thing. Mostly I used the Multics system on an internal network, and that was in a building where two of the RSA guys happened to be: Ron Rivest, and next door to him, Len Adleman. And somehow I figured out that they'd written this paper some months earlier, because I was going to that computer to use the system for my work. And again, it was a local network tied to a big mainframe somewhere. Remember, the World Wide Web is probably over ten years out. So when you go back and read how we were thinking about using RSA at the time: how did I even find out about it? Did I go to the library and get the latest ACM publications, or did somebody mention it, or what? But it was a whole new world, and I got the paper (I must've gone to the library or gotten a reprint) and read it. And this is going to be a little detour into the reblocking problem, but stay with me, I'm headed in the right direction. In the original version of the paper that they'd submitted to the ACM, they had what's called the reblocking problem, which refers to the fact that RSA is basically a modulus, and then you have an exponent.
And the thing is, when you're doing a digital signature and an encryption in both directions, your n and my n are going to be different. Which means whichever of us has the smaller n, we can't always encode all of the messages that might come out of the bigger-n modulus, right? Because my message might be your n plus 50 or something. And so what they said is, you have to detect this case and then split into two messages, with some kind of little indicator saying, okay, I'm splitting the message, here's part one, here's part two. Messy, messy. So I read it and I said, just reverse the order of your computations, right? And it was so obvious, I thought three brilliant mathematicians couldn't possibly have missed this. So I got up the nerve and I walked into Ron Rivest's office and I said, instead of doing the raise-it-to-this and then raise-it-to-that, go the other way, so the operation with the smaller modulus comes first. No problem.
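The reordering Loren describes can be sketched with toy numbers. Everything below uses deliberately tiny, insecure RSA parameters invented for this illustration (they are not from the episode): Alice's modulus is 3233 and Bob's is the smaller 2773, so a naive sign-then-encrypt can produce a signature too large to survive reduction by Bob's modulus, while doing the smaller-modulus operation first always round-trips.

```rust
// Toy RSA sign-then-encrypt, showing the reblocking problem and
// the reordering fix. modexp computes b^e mod m by square-and-multiply.
fn modexp(mut b: u64, mut e: u64, m: u64) -> u64 {
    let mut r = 1u64;
    b %= m;
    while e > 0 {
        if e & 1 == 1 { r = r * b % m; }
        b = b * b % m;
        e >>= 1;
    }
    r
}

fn main() {
    // Alice: p=61, q=53 -> n=3233, e=17, d=2753
    // Bob:   p=47, q=59 -> n=2773, e=17, d=157
    let (na, ea, da) = (3233u64, 17u64, 2753u64);
    let (nb, eb, db) = (2773u64, 17u64, 157u64);

    // Find a message whose signature is too big for Bob's modulus.
    let m = (2..nb).find(|&m| modexp(m, da, na) >= nb).unwrap();

    // Naive order: sign under Alice's key, then encrypt under Bob's.
    let s = modexp(m, da, na);      // s >= nb, so it cannot survive mod nb
    let c = modexp(s, eb, nb);
    let s2 = modexp(c, db, nb);     // Bob recovers s mod nb, not s itself
    assert_ne!(modexp(s2, ea, na), m); // signature verification fails

    // Reordered: the smaller-modulus operation (encrypt mod nb) goes first.
    let c = modexp(m, eb, nb);      // c < nb < na, so it always fits
    let t = modexp(c, da, na);      // sign the ciphertext
    let c2 = modexp(t, ea, na);     // verification recovers c exactly
    assert_eq!(modexp(c2, db, nb), m);
    println!("reordering recovers m = {m}");
}
```

The naive path loses information whenever the signature lands at or above Bob's modulus; reversing the order guarantees every intermediate value fits the modulus it is about to be reduced by, which is the one-line fix described in the conversation.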

Roger Grimes:

So you actually had an impact on the RSA algorithm?

Loren Kohnfelder:

On that aspect of it, right? It's basically when you use it a little bit more in an application and you have two compositions of it, because you would often sign something and encrypt it, or encrypt it and then sign it. So you need those two functions in that application, for example. And as I recall, Ron said, obviously you're right, and he called over Len, and we had a little chat, and he said the paper's already been submitted. So why don't you write a letter to the ACM, and they will publish it as an add-on to say, by the way, here's a better solution from this guy. And recently I actually contacted the ACM, because it's so old it's not online, and they were kind enough to send me an image of the old letter that got published in the ACM. So I do have that linked from my little website for the book that I wrote a few years ago.

Roger Grimes:

And for our attendees, ACM is Applied Computer Machinery. Is that it?

Loren Kohnfelder:

Now that I'm quoting them, I hope I'm right. I think it's the Association for Computing Machinery.

Roger Grimes:

That's it. Yeah. So that's been the longstanding publication for serious computer professionals, especially computer security-wise, right?

Loren Kohnfelder:

Yes, and it was more software-leaning than, say, the IEEE. Without any judgment, just, they were going more towards the software. And again, at MIT, software was wedged into the electrical engineering department at that time. There they do everything by numbers: 6-1 was pure electrical engineering, 6-3 was software. But we had to do breadboards and learn about resistors and stuff like that. So I got that letter in, and in the process of doing that, I had them of course proofread the letter, make sure it was appropriate and accurate, and I think Ron put a cover letter on it that said, just print this. And it went into the journal of the ACM, which I think is a slightly different publication than the one the RSA paper was in, but they referenced it. So that was fun, to help them on that. And Len said, why don't you write a thesis and take up the subject? He said, for example, we're thinking about how people are actually going to use RSA in the real world, and there's a whole bunch of stuff to be done. And again, remember: no web, very early internet, then called ARPANET. So I said, okay, that sounds interesting. And Len was my advisor; Ron, we had lots of discussions, and he helped me as well. I never met Adi Shamir, but I did get to see him do a talk around that time. So it was just a perfect in, to get connected to these guys who were really on the bleeding edge. And I'll mention also, at this time the National Security Agency was still very sensitive about this stuff, right? We've learned a little bit more about why and all of that. But to focus on this, I know that the RSA guys were dealing with the NSA asking about it; they were probably worried about the publication. And again, I don't know the details of that, but it was not just academically cutting edge. It was a little bit political and a tricky topic, but I enjoyed it.
So in the process of looking at how are people actually going to use this, the default thinking that they had at the time was the phone book was the model. So you can imagine a phone book that would have RSA keys and names.

Roger Grimes:

And you're talking paper.

Loren Kohnfelder:

Paper phone book. Thank you for that correction, Roger. Yes, like the yellow pages. And the idea would be that you bought it from an authority, right? Since not that many people had printing presses, I don't think anybody was thinking about whether it had to have an imprint or any other sort of authentication. And I will say, at the time RSA computations were so complicated compared to our compute power that they were doing things like a custom circuit for the RSA computation. So they would have a big breadboard, the old size, I think maybe a 24-inch type thing, and that thing was covered with ICs, bleeding-edge technology, and they were doing maybe 200-digit RSA or something like that in decent time. And we knew that three or four hundred digits was, at the time, an acceptable minimum, right? So it's over a thousand bits; it's in the ballpark. We were thinking in powers of ten, but obviously it's a binary computation, not analog, and analog computers also existed once. So given that and the primitive state of the network, I wrote my thesis. And in my view, I'm not going to leap out and say this was PKI, ready for you, because I think we were just so far ahead of all that stuff that had to come: there were networked file systems, we had TCP/IP at the time, but we didn't have the web or any of those protocols. And I forgive myself for not anticipating that, yes, Moore's law would make all of this possible, so that eventually, when I'm an old man, like today, I would have it in my phone in my pocket. We had our hands full. So this was more like a special application. Maybe a bank would use it, or obviously the defense department or state department for the highest, most important communications, right? Again, because nobody could imagine everybody in the world using it all the time, like we do now.
So in that context, again, we had this phone book, and of course people did understand you could write that as a big file that everybody referenced. And I think other people were thinking about how you could use a signature of something like a root, so that even if the file was tampered with in transmission, you could check the signature. So in a sense we were postulating that, and then I said: once you have the signature on it, you don't need the book. When I communicate with you, I can send you the signed statement from the certifying authority: here's the signature, here's me, and here's my public key. I can just send that to you. And I think that's the seed of public key infrastructure, right? But I didn't imagine separate certificates for email and code signing, and I didn't imagine things like intermediate certs that would also certify. And it's a name-and-key binding; I didn't go into at all how we know that's your real name. I think I just talked about the name being unique, so the idea is they could be just like handles. And again, if James presents Roger's signed key to me, James won't have the private key, so he's not going to get very far with that. So in that sense, it was a fairly robust system just to have unique names.
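The certificate idea Loren describes, a signed name-and-key binding that replaces the phone-book lookup, can be sketched in a few lines. This is a toy illustration with invented parameters and a made-up stand-in digest function, not the historical design: the authority signs the binding once, and anyone holding the authority's public key can verify it offline.

```rust
// A certificate as a signed (name, public-key) binding, verified
// without any directory lookup. All parameters are toy values.
fn modexp(mut b: u64, mut e: u64, m: u64) -> u64 {
    let mut r = 1u64;
    b %= m;
    while e > 0 {
        if e & 1 == 1 { r = r * b % m; }
        b = b * b % m;
        e >>= 1;
    }
    r
}

// Toy certifying-authority RSA key (p=61, q=53).
const CA_N: u64 = 3233;
const CA_E: u64 = 17;
const CA_D: u64 = 2753; // in reality only the authority holds this

// Hypothetical stand-in for a cryptographic hash of the binding.
fn toy_digest(name: &str, pubkey: u64) -> u64 {
    let mut h = pubkey % CA_N;
    for b in name.bytes() {
        h = (h * 131 + b as u64) % CA_N;
    }
    h
}

struct Cert { name: &'static str, pubkey: u64, sig: u64 }

// The authority signs the digest of the name-and-key binding.
fn issue(name: &'static str, pubkey: u64) -> Cert {
    Cert { name, pubkey, sig: modexp(toy_digest(name, pubkey), CA_D, CA_N) }
}

// Anyone with the authority's public key can check the binding offline.
fn verify(c: &Cert) -> bool {
    modexp(c.sig, CA_E, CA_N) == toy_digest(c.name, c.pubkey)
}

fn main() {
    let cert = issue("roger", 2773);
    assert!(verify(&cert));
    // Swapping in a different name breaks the signature,
    // which is why unique names alone already give a robust system.
    let forged = Cert { name: "james", ..cert };
    assert!(!verify(&forged));
    println!("binding verified; forgery detected");
}
```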

Roger Grimes:

And what year is your thesis?

Loren Kohnfelder:

'78. So it would have been like May, uh, it was the semester system.

Roger Grimes:

Yeah, and so 81, 82 is really the introduction of the IBM PC. And before that, you had a few personal computers; the average memory was maybe 2 to 4K, and we didn't have hard drives yet. Big hard drives for personal computers didn't come in for another couple of years. And I think you've talked about this: a lot of your computing experience to that point in time was hardware-based. You were actually messing with circuits and wires and

Loren Kohnfelder:

Like, one of my projects in the electrical engineering department was breadboarding a little teeny four-bit CPU, an adder and a program counter, and doing some toy programs on there. But I will say I found it extremely valuable to have started my work in computing with a little bit of hardware, and at a time, even after graduation, when you could look at the printed circuit board, see the chip, look up the Texas Instruments number, and see the gates on the inside. You could follow the traces by eye, because they were not that small like they are now, and boards were not very multi-layered, maybe three or four layers, most of it ground planes. You really knew what was happening all the way down. And of course there were no caches, no speculative execution, all that stuff where we're in a bit of a swamp sometimes now, right, when that doesn't work out so well for us. In this modern laptop I have in front of me, I have no idea what's happening at the heart of it. There might be a few people at Intel who do, but then they don't understand the software stack on top.

Roger Grimes:

So you publish your thesis, I'm assuming you shared with the RSA guys probably earlier than publication of your thesis, like they knew what you were going to publish?

Loren Kohnfelder:

Yeah, because Len was my thesis advisor. So he was reviewing rough drafts and Ron was next door. So I'm sure I offered it to him and got some feedback from him as well.

Roger Grimes:

So I was wondering: you publish your thesis, and does it immediately explode, with people saying, we get it, this is it? Or is it, I publish my thesis, I get a job, and all of a sudden, years later, I find out I'm the father of PKI?

Loren Kohnfelder:

Very much the latter. I will mention I did get an award from MIT as, I don't remember, the best thesis in my department or something like that, and we were invited to a nice reception, and maybe they gave me a hundred dollars or something, which I'm very grateful for. But yeah, no idea. And again, this was just something we thought only very elite high-tech institutions would be using for the foreseeable future. I have to say I had no connection to X.501, sorry, 509, all of the current encodings and ASN.1 and all that stuff. Nobody called me about that, although who knows where I was at that point in time, and that's fine. And it's grown into a much more complex and robust and powerful thing as the world's computers have become connected. I will say it's interesting, because I was just binding names to keys; in a sense, after that, we had different levels of certification, there's extended validation and things like that. But now a lot of it, thanks to the Let's Encrypt project, a lot of it is just domain validation, where in addition to giving your domain name, you have that feedback: I'm doing a matching code that they challenge you with, so they know it's really the domain holder that's getting that. And so we're back to it being based on names, and that other stuff is pretty complicated; what exactly can they assure you beyond that, because they do make mistakes every once in a while. Yeah, it's been amazing. And now, you said PKI. Like I said, I think the kernel of PKI is how I would think about it. But I was totally in the right place at the right time. Very fortunate. They were very generous with their time. MIT was an awesome place to be at that time, and I'm sure it still is, with amazing students all over the place and wonderful professors. Couldn't be more fortunate, and it's great. You know, we briefly talked about patenting, and as I recall, I was not interested in that.
I thought if this is useful, people should use it. I tell people when you do your online banking that's my technology in there, uh, keeping you safe.

Roger Grimes:

And because you did not patent it, you ended up having to work the rest of your life. I know you went to work for many companies, including Microsoft and Google. What were you doing when you worked for them? Were you doing authentication stuff or more generalized programming?

Loren Kohnfelder:

No, I was just a programmer. This was just one more step in the road. If you want to go into that, I'll say that I also was extremely fortunate: from the age of 12, I started using computers, which meant mainframe IBMs in the special rooms with the raised floor. We had a family friend who rented computer time and had a side business doing billing and things like that for local customers. He would find people who had mainframes, pay 50 or 60 dollars an hour to go in there, and he let me tag along. And I was just starstruck. It was like, wow, I didn't know this existed; this is the thing for me. So I was keypunching away and ordering manuals from IBM and learning basic assembly and Fortran and COBOL, RPG, you name it. So I got started really early; by the time I was in college, I'd been doing it for about 10 years, and I was very comfortable with the programming. And it was great to have this project, which was bigger, an architectural challenge to look at: here's your RSA, what are you going to do with it? Super fortunate in all those ways. So yeah, I worked for what was called a super-minicomputer company, called Elxsi. I went to Japan; I wanted to live there, so I worked for a couple of companies in Japan, just doing regular programming. And then when I came back, I joined Microsoft and joined the Internet Explorer team, which was in the version three days at that point.

Roger Grimes:

Wow. Wow.

Loren Kohnfelder:

So I started out working with the ISPs, like Prodigy and AOL and CompuServe, because they wanted to use the browser, integrate it, customize it. Again, how different the world was: we had the web, but then we had all the walled gardens and stuff like that, right? And then somebody started working on security, and they left the company, and I was asked to step into that role. So I inherited what was called security zones in IE, which was not my baby, but I tried to do what I could with it. And then from there I went into the .NET project, and I was the security program manager for that, which had code access security, a generalization of the Java stack model for permissions. And yeah, from there I took a few years out of the industry, did a little consulting, and then Google enticed me. I was so curious, and they have so many good people on the security team there; it was really fun. I was still living in the Seattle area, so I was able to be the security team implant in the Northwest. I would work with people in Mountain View, but then also be local and work on projects in the Kirkland office, and the Seattle office when they opened that one later. So I got to do security reviews of all these great things, like Google Maps and a bunch of other projects that they had centered there. And again, a great experience, and then I was able to retire, and here I am.

Roger Grimes:

I can't think of a better person for our next topic, because you've got programming experience across multiple languages and critical infrastructure. You've worked with different interpretive languages as well, Java and other things. And recently CISA, the Cybersecurity and Infrastructure Security Agency, released a paper on roadmaps to get organizations and vendors to start developing in what they call memory-safe languages. The idea is that a lot of the traditional languages, C, C++ and things like that, have this inherent, maybe, I don't know, this inherent error or misfortune: they don't require that programmers definitively declare the type of field they're using when they're capturing data, and that lack of definitive memory typing leads to a large class of bugs. I don't know if all of them are buffer overflows, but I think a lot of them are. And CISA released this paper saying, hey, we want vendors to go from languages that are not memory-type safe to languages like Rust, .NET, and other things that are. So I read that paper, I published it on my LinkedIn and Mastodon feeds, and I said, this is a great thing. One of the things CISA said was that 70 percent of the programming errors, vulnerabilities that are abused by attackers, are because of this lack of memory-type safeness. And I thought, this is great. I love that CISA is trying something. They've said, hey, here's a roadmap; they understand that it's hard. And you came back and said, Roger, it's essentially not as great a solution, or more challenging. And I want to push back and say, I'm glad that CISA is doing something. They've recognized a big problem, they've defined it with data saying it's 70 percent of the problem, and they have a solution, which is, hey, everybody, Microsoft, Google, all the other vendors, need to take their existing code bases
and move them to a memory-type-safe language. And I know you have an issue with that. So I want to see: can you convince me that the roadmap of getting people to move away from what causes 70 percent of the problem, toward what might get rid of 70 percent of the problem, is a bad thing? I'm cheering CISA: hey, this is a tough problem, you've met it head-on, and you have a recommendation, which is that people need to move to other, safer languages. What say you?

Loren Kohnfelder:

Sure. First of all, let me give a few caveats, and I'm not going to go to the paper and read and quote things, so please forgive me if I'm simplifying details here. Also, I'm not a Rust programmer, I'm not a programming language expert, et cetera, et cetera. As you said, I have had a lot of experience; I wrote C from a long time ago.

Roger Grimes:

You even wrote one of the definitive books on how to do secure coding, Designing Secure Software: A Guide for Developers. So you have at least a little bit of insight.

Loren Kohnfelder:

Yeah. But there are experts who know the ins and outs of Rust, so I'm just going to be speaking from my experience and from basic principles of things that I have some knowledge of. So first of all, I'm pretty sure the CISA term is memory-safe languages; they call them MSLs. And type safety, I think, is also part of that. What I think they're focusing on is basically: is this a language where you can do a buffer overrun? And I think type safety is slightly different. I'll just throw this out because we always have terms, and the technology is complicated, but for example, you can have a memory-safe typecasting error, which I believe may or may not be possible in what you'd call a memory-safe language. Another way of putting it: if you're managing your memory yourself and you can access any address in your process, that's memory unsafe. And when you're constrained, as Java and Python and others do, to only touch buffers you've allocated, and when all the references to a buffer are gone it's magically garbage collected, or through some other scheme, like the ownership scheme in Rust, you can't reach outside the boundaries of what you've properly allocated. You're going to be okay. So I absolutely think it's great if we can start writing new programs in memory-safe languages, with very few exceptions. I think if there are specific pieces of code that are well contained, and you can rewrite those in a memory-safe language, that's a fine thing to do. But, for example, if you've got a library that's in the middle of a bunch of memory-unsafe language code, and you rewrite that into memory-safe code, you're going to have bridge code connecting across that boundary, because you obviously can't just slip from memory-safe land into memory-unsafe land, where you're now taking on risk, without managing those borders.
So I'm saying, when you get into a lot of bridge code, it gets very tricky. For example, if you have a huge, million-line piece of code, are you going to rewrite that whole thing all at once, so 100 percent of it gets converted to a memory-safe language? Or are you going to pick and choose and then maybe iterate? But if you're iterating, then with all those moving boundaries, you're going to have to have bridge code.

Roger Grimes:

And bridge code, define bridge code. Bridge code is the code you would write to interface the new stuff with the old legacy code.

Loren Kohnfelder:

The old stuff, because when you're in a memory-safe language and you call into unsafe code, you need to have preconditions and all kinds of protections around that call to make sure it doesn't violate memory safety inside there.

Roger Grimes:

Okay.

Loren Kohnfelder:

Okay. So you need to know what the conditions are at the boundary, enforce them, and then you also have to know that within the unsafe code you're going to be okay, right? That with those conditions you've checked and enforced, it's not going to go and stomp on your program or leak some data or something like that. So it's a non-trivial thing whenever you do those boundaries. And if you do an iterative conversion, you can imagine it like a country expanding into the next county, and the next county, and the next county: you've got all these bridges, and then as it all fills in, you don't need those bridges anymore, right? So it's quite an undertaking. And again, a huge program doing it all at once: how many versions behind are you going to be by the time you get done, and then you're going to have to merge all of that.
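As a concrete sketch of such a boundary, here is a hypothetical safe wrapper in Rust around an unsafe raw-pointer copy; in a real port, the unsafe side would typically be an FFI call into C, and everything here is invented for illustration. The wrapper's whole job is what Loren describes: check and enforce the preconditions before crossing into memory-unsafe territory.

```rust
// Bridge code: a safe wrapper that enforces the boundary conditions
// before calling into a memory-unsafe operation.
fn checked_copy(dst: &mut [u8], src: &[u8]) -> Result<(), &'static str> {
    // The bridge's job: validate the precondition first.
    if src.len() > dst.len() {
        return Err("source does not fit in destination");
    }
    // SAFETY: both pointers are valid for src.len() bytes, and the
    // regions cannot overlap because dst is a unique &mut borrow.
    unsafe {
        std::ptr::copy_nonoverlapping(src.as_ptr(), dst.as_mut_ptr(), src.len());
    }
    Ok(())
}

fn main() {
    let mut buf = [0u8; 4];
    assert!(checked_copy(&mut buf, b"hey").is_ok());
    assert_eq!(&buf[..3], b"hey");
    // Without the length check this call would be a buffer overrun;
    // the bridge turns it into a recoverable error instead.
    assert!(checked_copy(&mut buf, b"too long").is_err());
    println!("boundary enforced");
}
```

Every such wrapper is extra code to write, review, and trust, which is why a large iterative conversion multiplies this work across every moving boundary.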

Roger Grimes:

Yeah, and I guess that's one of the bigger concerns. Let's say for Microsoft and Google: they're potentially looking at converting, when most of their operating systems and browsers have been written in C++.

Loren Kohnfelder:

Yes, a lot of it is C

Roger Grimes:

Which is not memory safe. And if they try to get to memory-safe languages, they're talking about converting, I'm just guessing, many tens of millions of lines of code, if not hundreds of millions of lines of code, somewhere in there.

Loren Kohnfelder:

Yes. I believe Chrome is in the tens of millions of lines. And also remember, as soon as you convert any of your public interfaces to a memory-safe language, you've broken all of the people using memory-unsafe languages that have been calling your APIs, unless, again, you provide some alternate API. You know what I mean? So you've got the memory-safe version and the memory-unsafe version.

Roger Grimes:

Yeah, so let's say that Microsoft and Google, the way I envision it, if I look at the roadmap that CISA talked about: if you have a project, you make a roadmap of, hey, we're going to convert this. Let's say Microsoft is going to convert Windows. I can foresee in my head: okay, you create this big team, I don't know, 100 guys, 200 guys, I don't know the scale, and the goal is to convert 5 or 10 percent of Windows a year, and at the end of 10 years, you've got it converted.

Loren Kohnfelder:

Oh, except for the changes those other teams have made in the interim, yeah. And like I said, are you going to have dual APIs? Memory-safe APIs and memory-unsafe APIs, for compatibility?

Roger Grimes:

You know, and I guess the new guys, the people that are making new features, can't code in the new stuff unless you create bridge code, right? So they're either going to continue to develop in C++, or you're going to have to create this, I assume, really huge amount of bridge code.

Loren Kohnfelder:

Yeah. And by the way, some people call those wrappers. And Rust, I believe, has a foreign function interface, FFI. I just say bridge; that's my term. But it might do something like: you get a pointer, and then it will check, is the memory allocated? Are you using the right size, if there's a length associated with the pointer? But exactly how you do those is very tricky. And it's very important to point out that bugs can be introduced in that bridge code, either by not properly securing things when you go across into the dangerous zone. I think of it like you're in a factory: you stay behind the ropes and you're safe, but if you unhook the rope and walk in and get close to the machine or the smelter or whatever, you're in the unsafe world there. So are you protecting correctly, so that the unsafe code won't cause trouble? And then secondly, did you mess it up? Maybe at the boundary some sort of subtle bug has been introduced, and something that was supposed to work before now doesn't work. You say, okay, it has to be 1,000 or less, and an application says, I used to work for 10,000, right? Why can't I do that anymore? In other words, the conversion, like any code change, always risks introducing new bugs, and some of those could be vulnerabilities.
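That "it used to work for 10,000" failure can be sketched hypothetically: the bridge below is perfectly memory-safe, but it guesses the legacy precondition wrong and silently breaks existing callers, a functional regression introduced purely by the conversion. All names and limits here are invented for illustration.

```rust
// Stand-in for legacy (memory-unsafe) code whose actual contract
// accepted inputs up to 10_000.
fn legacy_process(v: u32) -> u32 {
    v / 2
}

// The bridge author guessed the precondition as 1_000 instead of
// 10_000, so the wrapper rejects inputs the legacy code handled fine.
fn bridge_process(v: u32) -> Result<u32, &'static str> {
    if v > 1_000 {
        return Err("out of range");
    }
    Ok(legacy_process(v))
}

fn main() {
    assert_eq!(bridge_process(500), Ok(250));
    // A call that worked for years now fails: no memory-safety bug,
    // just a regression introduced at the boundary.
    assert!(bridge_process(10_000).is_err());
    println!("regression at the boundary");
}
```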

Roger Grimes:

Okay, I gotta tell you, that's huge. That's a huge point I hadn't thought about: if you're developing all this new code, the wrapper code or whatever, it is absolutely going to introduce new bugs. We've yet to be able to develop bug-free code, so you absolutely cannot expect all the new code to be bug-free.

Loren Kohnfelder:

Yes, and with any language, when you're approaching a major release of software, you've got testing going on, you're looking at the bug database, you're triaging those, right? What are the ones we think we want to fix? What are the ones we're going to let go? Et cetera. And the reason all shipped software ships with bugs: one is, if you tried to fix them all, it might be endless. And two is, at some point you see there's a very minor bug; you think it's not going to impact people that much, or there's a workaround, and you think, I don't want to mess with that code, because the person who wrote it is not available and we don't know exactly how it works, or what happens if we mess with it. And at a big operation like Microsoft, even back when I was there, years and years ago, a full test pass on Windows is multi-week, and they have labs full of every kind of computer and peripheral. There's a very well-known cycle, with these Patch Tuesdays or whatever, where you try to be a hero and fix a little bug at the last minute, and you find out, oh, this brand of computer with this printer and this display, the image doesn't come through on the printer, or the display gets messed up, or something. We're always taking that risk. It's like if you work on your house: you're trying to repair everything, and you might weaken something because you've been sawing away at it and drilling holes all over the place, right? You're always taking that risk. To go back to the high level, we can drill into more of this, and there are things like performance and so on; they're manageable, and usually you can write performant memory-safe code. But it's something to think about. I guess there are two points I wanted to introduce into all of these factors. One is, I support the CISA effort. I like memory-safe languages, but I think it would have been very valuable, if I could just make one suggestion to them,
To say, here are some cases where memory safe languages are definitely the way to go. Here's some where it's a judgment call and here are the tradeoffs. You might get this, but you might, like some of the things like introducing errors, et cetera, we talked about. Not to mention the effort and the lost opportunity of having all these developers do something new or actually fix bugs that already exist. And the other categories these categories of memory unsafe language, we can trust. These are probably better not tackled. I think that would have been useful just to help people. And again, to be clear, they do have a prioritization section and they do call out things like new code, self contained code are the first things you want to do. And, in that sense, we're totally on board, but I really wanted them to go to the other side and say, here, you've got memory unsafe language, but still it's probably going to be fine. And for example I don't recall the 70%, the detail in the fine print, but and I'm always a little skeptical of that because I don't think. Anybody in the world knows about all of the vulnerabilities that get found. I think sometimes people fix them and they don't want to talk about it and things like that, right? Or custom enterprise software. You're not going to tell the world you had this vulnerability because nobody else has that software. But, the point is Google Chrome I've actually in looking at this, they've told us we're interested in third party libraries in Rust around Chrome. But the way they do memory allocation, they don't think it's appropriate for Rust and without Rust, going from C to a memory safe language without a huge performance hit, there's not much of a story, right?

Roger Grimes:

Yeah, that's a big point. Teams aren't going to develop in Rust if there's a performance hit. So do you think there is a performance hit going from a traditional memory-unsafe language to a memory-safe language? Do you think there'd be some sort of performance hit?

Loren Kohnfelder:

This is where it gets complicated. Sometimes I think it can be very minimal, right? But take a garbage-collected system, and Java and Python are the big popular examples. When you have a garbage collector, that thing can hit you at unexpected times, right? Everything is moving along, and at some point you've exhausted your memory pool. The collector knows what needs to get released, but it has to do some work.

Roger Grimes:

Yeah, garbage collection, by the way: some languages have this process that tries to reclaim memory that's no longer in use. They go through and look for chunks of memory that are not being used or referenced anymore, and they try to clean it back up, and they call that process garbage collection.
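The process Roger describes can be watched directly in CPython, which combines reference counting with a cycle collector (a minimal sketch; the exact count returned is an implementation detail):

```python
import gc

class Node:
    def __init__(self):
        self.ref = None

# Build a reference cycle: each object keeps the other alive, so plain
# reference counting alone can never free them.
a, b = Node(), Node()
a.ref, b.ref = b, a
del a, b  # unreachable now, but the cycle keeps refcounts nonzero

# The cyclic garbage collector walks the tracked objects, finds the
# cycle, reclaims it, and reports how many unreachable objects it found.
reclaimed = gc.collect()
print(reclaimed >= 2)  # True: both Nodes (plus their dicts) were collected
```

That gc.collect() call is the "hit" Loren mentions: normally the collector runs at a moment of its own choosing, which is where the unpredictable pauses come from.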

Loren Kohnfelder:

Right. And there are some more sophisticated efforts to do it fairly incrementally, but the point is, every once in a while you're going to have to take a hit, where all of that deferred releasing of memory has to happen. And of course, there's always the overhead of tracking where all the pointers are: when there are no more pointers to a buffer, then it can free it. And it's got to always be index checking: when you index into the buffer, is it within bounds? Whereas if you write the code just right and you don't screw up, you can just take your pointer, do math on the pointer, and move around inside the buffer just fine. But of course, that's extremely tricky.
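That index checking is exactly what a memory-safe runtime does for you. In Python, for example, an off-by-one access raises an exception instead of silently touching adjacent memory (a minimal illustration):

```python
buf = [0] * 8  # an 8-element buffer, valid indexes 0..7

# Every access is bounds-checked at runtime, so the classic off-by-one
# write is caught instead of corrupting whatever sits past the buffer.
try:
    buf[8] = 1  # one past the end
except IndexError:
    print("out-of-bounds write caught")
```

In C, the equivalent pointer write would compile and run, and the corruption would show up, if at all, somewhere far from the bug.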

Roger Grimes:

Yeah, performance-wise, I can see that. Let me say, if there's a performance hit of, let's say, 2%, and I'm just making that up, I don't know if Google and Microsoft would go there. I mean, they're in a fierce competitive landscape where they can't afford to slow down. They can't afford to inject additional slowness.

Loren Kohnfelder:

Yes. It's a huge code base. I don't know much about it, but as I said, they're not looking at rewriting Chrome, and the way they do allocation, they have a sophisticated system, but it's not perfect, as we know from the regular memory errors. I don't think they're looking at doing that anytime soon. It's also important, when you talk about the bugs in Chrome, that Chrome may be the most exposed piece of software on the planet. Think of all the people using Chrome, and it talks to the Internet directly. And of course, it's got to be a good browser and protect the web apps that are built on top of it. Microsoft Windows and Chrome are probably the two most exposed, so of course there are going to be tons of bugs found, and people are pounding on them constantly. It's 50 or 60 million lines of code, but it's a very special piece of code compared to the billions of lines of enterprise code, right? Which I think is mostly what this is really talking about here. And again, like I said, there's a lot of stuff that maybe is best left alone, and I think some more work on helping people understand what's off the table for now would help. One more idea along those lines: I don't think they mentioned it in the paper, I could be wrong, but I was unable to find a pilot study of somebody who tried doing this. Like, here's my 50,000-line business application, it was in C++, we rewrote it into Rust, or we rewrote it into Python, whatever. Here's how it went: we thought these things were easy, these things were hard.

Roger Grimes:

That's actually a huge point. They're making a pretty big global recommendation. If they've got some pilots, it would be nice to hear about them, just about the things you talked about: how long did it take? What was the performance hit in their particular application? What additional bugs? What compatibility issues? API issues? That's a brilliant point.

Loren Kohnfelder:

Or new feature development: do you do it in the memory-safe version before it's ready for prime time, or do you do it in the old one, and then you've got to translate and merge again? Yeah, I think a pilot study, because as we all know, in theory this is a wonderful idea, right? You can't go wrong.

Roger Grimes:

Yeah, maybe there's a version 2 of the paper. Version 2 will bring out some pilot programs and tell us how they went.

Loren Kohnfelder:

And again, as always, these are my impressions from a quick read. I'm not an expert in the field, but I would enjoy being in a dialogue with people who think maybe everything should be rewritten into memory-safe languages. If someone wants to talk to me about that, I'm open to it; I may be missing a lot of opportunities.

Roger Grimes:

Well, yeah, you mentioned the opportunity cost. You said opportunity costs, and that's true, because if, in my vision, Microsoft takes 200 programmers and tells them, go here, start developing in Rust, you're right, that's an opportunity cost. Those guys aren't making new features. They aren't making a faster browser. They're not making APIs. And you're asking vendors: hey, you're making all these opportunity-cost tradeoffs for potentially 70 percent fewer bugs. Is it worth it for you? Is it worth it for the customer?

Loren Kohnfelder:

Yeah, I don't know. And again, the 70%, I don't know exactly where that number comes from. For example, I'm not sure that just because 70 percent of the bugs we see now are of this class, eliminating that class would leave us with only 30 percent as many vulnerabilities in the world and 30 percent as many exploits. Typically, attackers just move to the next easiest target.

Roger Grimes:

Yeah, yeah.

Loren Kohnfelder:

Again, I think the chances of Chrome being rewritten are basically zero, right? And it would be, I would think, billions of dollars if somebody wanted to take that on themselves. And when you talk about allocating the programmers, again, from my past experience in the industry, I think it's a pretty safe generalization to say most programmers prefer writing new code to rewriting existing code, or fixing bugs in old code. No, seriously.

Roger Grimes:

No, that is a very safe assumption.

Loren Kohnfelder:

So, you know, I can see this big room of programmers: please raise your hand if you'd like to rewrite this part of Windows in a type-safe language. I bet not a lot of hands are going to go up, right? I want to do a new feature, right? I want to do an AI thing or whatever, right?

Roger Grimes:

The entire team would probably feel punished.

Loren Kohnfelder:

Yeah, I won't name names, but I know on the Internet Explorer team the number of people who would go back and fix the old version, because when we were doing IE3, we had IE2, right? Nobody even knows what that is at this point. There was like one guy who would do bug fixes in that, and he did some IE3 development too, because he was like, I'm not going to do this full time, it's infinite. And I guess another thing is, imagine you've got a C++ programmer with 20 years of experience, say 10 of it in Chrome. They can write really good C++ code for Chrome. Are you really going to take them and say, okay, take I don't know how many months, learn how to write Rust, and now we want you to start doing conversion, we don't want you writing any new C++ code?

Roger Grimes:

How happy is he gonna be? How bug-free?

Loren Kohnfelder:

And again, this thing of, is it fun or not? Because the other thing about a lot of programmers is, like in The Right Stuff, when all the jet pilots would say, I wouldn't have crashed that experimental jet like that guy, they say, I can do this in C++, a hundred percent. They may not see the value of this, and in some cases they're going to be right. Another example of this is Python. It's got CPython underneath it. That's a very mature, very well-looked-at piece of code. I'm not too worried about that piece of code, right? And I don't even know how you'd get rid of that. There is Jython, which is in Java, but then you've got a JVM written in C, I think, underneath the Java, underneath the Python. At the bottom, going back to the old computer days, you've got assembly language, and we can raise up to the level of C or C++; there's a certain layer of that you need down there. And I think we've done pretty well: I'm not aware of any memory management problems in CPython that have been a problem anytime recently. So there are these cases, and that would be my top-level thing: let's identify, do this now, do this later. And again, let's get some early experience; here's the low-hanging fruit. That would have been the style of my approach. And I totally understand, when you're putting out this initiative, you want to be gung ho. You want to say, everybody should be doing this, it's the best thing in the world. And,
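As an aside, which implementation sits underneath your interpreter (CPython, Jython, PyPy, and so on) is easy to check from the standard library:

```python
import platform

# On the reference interpreter this reports 'CPython', the C code base
# Loren is describing; under Jython it would report 'Jython', with a
# JVM (itself built on C/C++) underneath.
print(platform.python_implementation())
```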

Roger Grimes:

Okay, I gotta

Loren Kohnfelder:

My heart is with those guys in that sense.

Roger Grimes:

I came into this talk thinking that you were wrong, and you were just being a curmudgeon about a new idea, but you've said at least three things, any one of which matters. Number one, performance: if there's a performance issue, even minor, I think it kills the initiative. I don't think you're going to be able to talk a lot of people into going slower, so that's something that has to be addressed. Number two, compatibility: when you're rewriting some big chunk of code, now you've got to retest how it works with all the printers, and with Windows or Apple. That's huge! Number three, it's going to introduce its own class of errors. That's something I hadn't thought about at all. I thought, oh, they're going to go fix all these bugs, but of course we're going to introduce a bunch of new bugs.

Loren Kohnfelder:

Yeah. Because when you have different memory management, the lifetime of buffers is different, code paths shift around, your interfaces have to change. It's not a subtle thing.

Roger Grimes:

Yeah, yeah. So I think just those three things make it difficult, and then, oh, throw in the opportunity cost. I'm sorry, that's huge. That's number four, the fourth huge thing. I think any one of those four alone would make it difficult for a vendor to go, this is what we're going to do. Any one of those is a killer of the idea, a killer of the roadmap.

Loren Kohnfelder:

Yeah. And let me just say, every software project is different, right? They're really unique. There will be some cases where the performance hit is small, or maybe even negligible, or negative, right? I don't know. But every one is so unique that it's very hard to make a generalization. So again, I'm thinking of one set of cases, CISA is thinking of another, I don't know. But when you talk about all the software in the world (because I didn't see any narrow scoping of this; it was software makers, I think, or publishers, as they put it), you've got a really broad class of code, and these are factors that won't always all appear, but in various instances they will definitely pop up and be problematic. At a minimum, you need to see them coming and anticipate them if you're going to spend your efforts wisely doing this kind of thing.

Roger Grimes:

Yeah. And I think your recommendation of a pilot program or two to

Loren Kohnfelder:

Yeah.

Roger Grimes:

That to me seems to be a very reasonable ask, to go, hey, great idea, can you give me an example of people who have actually pulled it off?

Loren Kohnfelder:

Yeah. And again, maybe some of the AI stuff can do the heavy lifting here. I don't know, but I do think people are going to have to do some work to make sure everything is right. And again, there's all this: the wrappers, the bridge code. Are you going to break your non-memory-safe-language people who are calling your APIs, or what are you going to do? Yeah, that's my opinion on that. And again, good discussion, Roger.

VoiceOver:

You've been listening to the Security Masterminds podcast, sponsored by KnowBe4. For more information, please visit KnowBe4.com. This podcast is produced by James McQuiggan and Javad Malik, with music by Brian Sanishan. We invite you to share this podcast with your friends and colleagues. And of course, you can subscribe to the podcast on your favorite podcasting platform. Come back next month as we bring you another security mastermind sharing their expertise and knowledge with you from the world of cybersecurity.

Introduction
Loren Kohnfelder Origin Story
Memory Safe Languages Discussion