Linux kernel security uproar: What some people missed

Commentary: It’s not really very interesting that University of Minnesota researchers introduced bugs into the Linux kernel. What matters is what would have happened next.

Recently, the Linux kernel community was aflame over efforts by researchers at the University of Minnesota to intentionally torpedo Linux security by submitting faulty patches. While the University’s Department of Computer Science apologized, the damage was done, and Linux kernel maintainer Greg Kroah-Hartman banned the University from contributing to the kernel.

However you feel about what these researchers did (Chris Gaun, for example, argued, “A researcher showed how vulnerabilities can EASILY make it through [the] approval process”), this isn’t really a story about Linux security, or open source security generally. It has always been possible to get bad code into good open source projects. Open source software isn’t inherently secure. Rather, it’s the open source process that is secure, and while that process kicks in during development, it’s arguably most potent after vulnerabilities are discovered.

Tell me something I don’t know

Organizations of all sizes have depended upon Linux for performance and security for decades; in fact, those same organizations depend upon a wide array of open source software more generally. A recent Synopsys report suggests that the average software application depends on more than 500 open source components. We’ve never depended more on open source, and we tend to justify at least some of that dependence on the idea that open source is secure.

This doesn’t mean that open source generally, or the Linux kernel specifically, is somehow impervious to security flaws. In fact, as Linux kernel developer Laura Abbott has written, flaws are standard operating procedure:

The problem with the approach the authors [University of Minnesota researchers] took is that it doesn’t actually show anything particularly new. The kernel community has been well aware of this gap for a while. Nobody needs to actually intentionally put bugs in the kernel, we’re perfectly capable of doing it as part of our normal work flow. I, personally, have introduced bugs like the ones the researchers introduced, not because I want to bring the kernel down from the inside but because I am not infallible.

To get these particular flaws to combine to create a significant security problem, she went on, would be a multiyear effort, with a lot that could go wrong (or, rather, right) along the way:

Actually turning this into an attack would probably involve getting multiple coordinating patches accepted and then waiting for them to show up in distributions. That’s potentially a multi-year time frame depending on the distribution in question. This also assumes that the bug(s) won’t be found and fixed in the mean time….[T]here’s no guarantee that code you submit is going to stay in the form you want. You’d really have to be in it for the long haul to make an attack like this work. I’m certain there are actors out there who would be able to pull this off but the best fix here is to increase testing and bug fixing, something Greg [Kroah-Hartman] has been requesting for a long time.

OK, OK. But let’s assume someone did pull it off. What then? Well, that’s when open source security truly shows its mettle.

It’s a process

I’ve written about this before, but it’s important to remember that security is always about process, not the software itself. No developer, no matter how talented, has ever written bug-free software. Bugs, to Abbott’s point above, are a constant because human imperfection is a constant. Yes, we can try to test away as many bugs as possible, but some will remain, whether intentionally deposited in a project or unintentionally created. True security, then, kicks in once the software is released: faults are either discovered before they become serious issues, or they’re reported and acted upon after release.

Or, as System Initiative CEO and Chef cofounder Adam Jacob has posited, “The question is, how quickly can you react to the disruption in your supply chain?”

Way back in 2007, Mitchell Ashley articulated how this might work in practice:

[In open source] security issues are most often the first to be reported. If security problems aren’t fixed pronto, the open source project will be labeled as lame by users, who will move on to the next option. Also, the openness of vulnerability disclosure means software authors are incented to fix security problems fast. If they don’t respond quickly, they risk others forking the project and taking over from authors who won’t keep up with the market of open source users.

Later, I expressed similar thoughts, arguing that “Open source software isn’t inherently more (or less) secure, rather it offers an inherently better process for securing code. Bugs in open source code, when uncovered, are quickly fixed through an open process.” As such, the fact that University of Minnesota researchers were able to inject flaws into the Linux kernel isn’t the real story. Nor is it that the kernel community caught the bad actors before the code shipped in production, though that is a real benefit of open source development practices.

No, the real story is that even if those flaws had remained and later became an issue, the process for fixing them would be swift. There would be no waiting on some company to determine the optimal time to inform the world about the issues. Rather, fixes would be available almost immediately. That’s the process by which open source becomes, and remains, secure.

Disclosure: I work for AWS, but the views expressed herein are mine.
