By Wolf Richter, a San Francisco-based executive, entrepreneur, start-up specialist, and author with extensive international work experience. Cross-posted from Testosterone Pit.
IBM announced today that it would throw another billion at Linux, the open-source operating system, to run its Power System servers. The first time it had thrown a billion at Linux was in 2001, when Linux was a crazy, untested, even ludicrous proposition for the corporate world. So the moolah back then didn’t go to Linux itself, which was free, but to related technologies across hardware, software, and services, including things like sales and advertising – and into IBM’s partnership with Red Hat, which was developing its enterprise operating system, Red Hat Enterprise Linux.
“It helped start a flurry of innovation that has never slowed,” said Jim Zemlin, executive director of the Linux Foundation. IBM claims that the investment would “help clients capitalize on big data and cloud computing with modern systems built to handle the new wave of applications coming to the data center in the post-PC era.” Some of the moolah will be plowed into the Power Systems Linux Center in Montpellier, France, which opened today. IBM’s first Power Systems Linux Center opened in Beijing in May.
IBM may be trying to make hay of the ongoing revelations that have shown that the NSA and other intelligence organizations in the US and elsewhere have roped in American tech companies of all stripes with huge contracts to perfect a seamless spy network. These contracts even include physical aspects of surveillance, such as license plate scanners and cameras, which are everywhere [read… Surveillance Society: If You Drive, You Get Tracked].
Then another boon for IBM. Experts at the German Federal Office for Information Security (BSI) determined that Windows 8 is dangerous for data security. It allows Microsoft to control the computer remotely through a “special surveillance chip,” the wonderfully named Trusted Platform Module (TPM), and a backdoor in the software – with keys likely accessible to the NSA and possibly other third parties, such as the Chinese. Risks: “Loss of control over the operating system and the hardware” [read… LEAKED: German Government Warns Key Entities Not To Use Windows 8 – Links The NSA].
Governments and companies overseas paid rapt attention. They’re big customers of our American tech heroes – and they’re having second thoughts, and some are cancelling orders. Tech companies are feeling the heat. A debacle IBM apparently decided not to let go to waste.
IBM, which has long known about the purposeful security issues of Windows 8 machines, has banished them from desks where certain sensitive work is done. These employees, classed as “Privileged,” are required to run Red Hat Linux as the operating system on their laptops or desktops. And if they must use applications that run only on Windows, they have to get special permission. Then they have to run Windows 7 – not Windows 8 – as a virtual guest on top of the Linux operating system. IBM’s stated reasons: stability, security, protection from viruses, and reduced risk of remote takeover of the computer.
It would be an enormous competitive advantage for an IBM salesperson to walk into a government or corporate IT department and sell Big Data servers that don’t run on Windows, but on Linux. With the Windows 8 debacle now in public view, IBM salespeople don’t even have to mention it. In the hope of stemming the pernicious revenue decline their employer has been suffering from, they can politely and professionally hype the security benefits of IBM’s systems and mention in passing the comforting fact that some of it would be developed in the Power Systems Linux Centers in Montpellier and Beijing.
Alas, Linux too is tarnished. The backdoors are there, though the code can be inspected, unlike Windows code. And then there is Security-Enhanced Linux (SELinux), which was integrated into the Linux kernel in 2003. It provides a mechanism for supporting “access control” (a backdoor) and “security policies.” Who developed SELinux? Um, the NSA – which helpfully discloses some details on its own website (emphasis mine):
The results of several previous research projects in this area have yielded a strong, flexible mandatory access control architecture called Flask. A reference implementation of this architecture was first integrated into a security-enhanced Linux® prototype system in order to demonstrate the value of flexible mandatory access controls and how such controls could be added to an operating system. The architecture has been subsequently mainstreamed into Linux and ported to several other systems, including the Solaris™ operating system, the FreeBSD® operating system, and the Darwin kernel, spawning a wide range of related work.
Among a slew of American companies who contributed to the NSA’s “mainstreaming” efforts: Red Hat.
And IBM? Like just about all of our American tech heroes, it looks at the NSA and other agencies in the Intelligence Community as “the Customer” with deep pockets, ever increasing budgets, and a thirst for technology and data. Which brings us back to Windows 8 and TPM. A decade ago, a group was established to develop and promote Trusted Computing that governs how operating systems and the “special surveillance chip” TPM work together. And it too has been cooperating with the NSA. The founding members of this Trusted Computing Group, as it’s called facetiously: AMD, Cisco, Hewlett-Packard, Intel, Microsoft, and Wave Systems. Oh, I almost forgot … and IBM.
And so it might not escape, despite its protestations and slick sales presentations, the suspicion by foreign companies and governments alike that its Linux servers too have been compromised – like the cloud products of other American tech companies. And now, they’re going to pay a steep price for their cooperation with the NSA. Read… NSA Pricked The “Cloud” Bubble For US Tech Companies
The fun issue to wonder about is SELinux, which was started by the NSA and is now widely used and accepted for backend stuff. The nakedcapitalism servers may very well use it.
Ubuntu uses apparmor.
Perhaps this will put the “distro wars” back in action.
Linux is a big place; along with the toolchain (gcc/glibc) and other applications, I would guess the NSA relies more on high-level bugs and misconfiguration than on Linux “backdoors”. SELinux itself is a mofo to configure (one of the reasons AppArmor has found widespread usage is its ease of use, which does help security).
Um what? SELinux access control isn’t a “backdoor”; it allows you to define more fine-grained access policies inside the system (i.e. more restricted). It’s like far more powerful file permissions that can confine programs to smaller cages. So for example, normally your browser can read almost all of your system files and write almost all your home dir files; with SELinux its access can be restricted to reading only those system files it absolutely needs to function, and writing to its own configuration/state files and to the Download/ directory. So if someone then hacks your browser through a compromised web page, his options are more limited. And it’s built upon the default security mechanism, so in itself it can’t allow access to something that couldn’t be accessed without it.
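To make the “smaller cages” idea concrete, here is a minimal sketch of what SELinux type-enforcement rules look like. All the domain and type names (browser_t, downloads_t, etc.) are hypothetical, chosen just for illustration; a real policy module also needs type declarations and interface boilerplate.

```
# Illustrative SELinux type-enforcement rules (all names hypothetical).
# A browser confined to the domain browser_t may read shared libraries
# and write into a dedicated downloads type -- and nothing else.
allow browser_t lib_t:file { read open getattr };
allow browser_t downloads_t:file { create write append };
# Note there is NO rule granting browser_t access to user_home_t,
# so SELinux denies that access even if the ordinary Unix permission
# bits would have allowed it.
```

The key design point: SELinux is default-deny within the policy, and it only ever subtracts access on top of the normal Unix checks; a missing rule means denial, and no rule can grant what the underlying permissions forbid.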
Anyway, SELinux is probably the last place where you will find an NSA backdoor in Linux. The point that the code came from a spying agency was not lost on people back then, so the design and the code were scrutinized quite a bit. I’m pretty sure the NSA won’t toss around broken code publicly and under its own name. With Linux’s open development process, nothing is easier than creating an anonymous Gmail address and submitting code without really revealing who you are.
And I don’t think they need to bother with this. Linux is a complex piece of code, and there are enough inadvertently introduced bugs that can lead to exploits without adding new ones.
Richter completely mischaracterizes what SELinux is. It’s meant to restrict access to programs or users who should have it. If there’s a backdoor there, it’s a well-hidden one. Since Richter hasn’t pointed out how that backdoor works, it certainly looks to me like that’s not what he means – IOW, he means that the role-based security is the backdoor.
Let me rephrase: It’s meant to restrict access to only those programs or users who should have it.
The whole problem with this article is that the statement “It provides a mechanism for supporting ‘access control’ (a backdoor) and ‘security policies’” seems to be based on a simple misunderstanding of the terms used.
I am not saying Linux doesn’t have backdoors. And you probably wouldn’t find them by reading the source code (or the name of the feature!) – the backdoors would likely look like tricky program bugs or magically chosen constants. Possibly hidden, independent of the OS, as hardware issues within the CPU, memory controller, or GPU. But who would ever be able to armchair-diagnose those?
Theodore Ts’o, the Linux kernel developer who wrote /dev/random, made some interesting comments about Intel’s random number generator.
Later in the thread, he comments:
Well, interesting. C++ aside, compilers like gcc could also, in cooperation with certain structured code, create backdoors. Data run as code… the 80286 instruction set itself may also have been designed with PRISM in mind, or later the 80386… up to whatever 80686 now, assembly language with MMX, SIMD, 3D… well, I’m really paranoid, and this is the Matrix :-)
Weren’t you a Gentoo Linux dev? Did you ever run into anything like this?
When things get highly specialized it gets harder and harder to tell if someone is stupid or malicious.
SELinux is easy enough to disable: https://duckduckgo.com/?q=disable+selinux
First thing I do after every fresh install.
There may be other backdoors in Linux, but at least lots of smart people are looking at the code.
IF there’s a backdoor/malware in the SELinux policy features, installing the feature (assuming you’re compiling the SELinux code along with the OS) and then disabling it is likely to defeat the purpose. It’d be better to modify the makefile.
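On the “modify the makefile” point: the kernel build system actually provides sanctioned knobs for leaving SELinux out entirely, which is cleaner than patching makefiles. A sketch (exact option availability varies by kernel version; the boot parameter historically also required CONFIG_SECURITY_SELINUX_BOOTPARAM):

```
# Build-time: compile a kernel with no SELinux code at all (kernel .config)
CONFIG_SECURITY_SELINUX=n

# Boot-time: keep the code but disable it via the kernel command line
selinux=0

# Run-time: switch to permissive mode (policy loaded, violations logged
# but not enforced)
setenforce 0
```

Only the first option removes the code from the kernel image; the other two leave it compiled in but inactive, which matters if your threat model is the code itself rather than its policy decisions.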
Where is the code to back up this assertion?
The backdoors are there, though the code can be inspected, unlike Windows code.
Isn’t this a bit hyperbolic:
It provides a mechanism for supporting “access control” (a backdoor) and “security policies.”
Although the NSA was involved (their interest in this stuff is a no-brainer), SELinux-type features have long been the subject of cutting-edge computer science research. They don’t fit perfectly with this corrupt-government-plus-evil-corporations narrative. Perhaps go back 20 years and look at the Flask project. It is an interesting mix of government, research, and corporations (as with the history of the Internet). Maybe you should ask Robert Watson to comment on this? He’s pretty approachable. Personally I use TrustedBSD features to look for suspicious activity on my systems, and they aren’t that cutting-edge anymore. Maybe Wolf should focus his code review on something current like Capsicum: http://www.cl.cam.ac.uk/research/security/capsicum/
There aren’t any obvious back doors in Linux; it’s been audited by paranoid people.
There are quite likely to be no back doors in the core code of Linux. (Back doors in random device drivers or in various network subsystems? Maybe.)
The article is spreading FUD about Linux, bluntly. If you know what you’re doing, you can audit Linux code — and some pretty paranoid anti-NSA types have done so. Can’t do that with Microsoft code.
The corruption of Linux. A story as old as humanity:
“While they promise them liberty, they themselves are the servants of corruption: for of whom a man is overcome, of the same is he brought in bondage.”
Savers of Humanity should be sure to put Poison Pills in their creations.
Linux is a story heard over and over: a supposedly liberating idea which is then put into the service of Evil. Hey, that sounds like the INTERNET!
Way back in 1984 Ken Thompson, famed author of the original Unix operating system, won the Turing Award (top computing award) for his demonstration of self-replicating binary code for the login command that would evade detection, allowing back door entry to the system.
In his acceptance speech, he gave this oft-repeated advice:
Correction: Ken won the award in 1983.
The title/subtitle of his speech is equally apropos:
REFLECTIONS ON TRUSTING TRUST
To what extent should one trust a statement that a program is free of Trojan horses? Perhaps it is more important to trust the people who wrote the software.
Mhh, thanks for the reminder! I don’t know how anybody could have trusted compilers after that. Luckily memories are short and humans imperfect.
There’s a way around this: https://www.schneier.com/blog/archives/2006/01/countering_trus.html
I tried to post this comment earlier, but the site seemed to be having some trouble…
I’m surprised to find myself in a position where it sounds like I’m defending TPM & the NSA, but I’m afraid this article is chock-full of hyperbolic misinformation.
Yes, TPM is a problem. I can see why the Germans don’t trust it and yes, it means we (consumers) “lose control” of our computers. The issue is what “lose control” actually means. It doesn’t mean that Microsoft can suddenly remotely run whatever software they want on your computer. TPM is a cryptographic technology designed to restrict what software can run on a computer; the justification is that this will help protect users from malicious software. TPM doesn’t enable someone else to run software on your computer, but it does create the potential for the manufacturer (et al) to block you from running software. This is the sense in which you “lose control” of the computer.
Another big problem with TPM is that there isn’t a mechanism for the user (ie, us) to replace the cryptographic key that’s the basis for the whole chain of trust. This means that we simply have to trust that the manufacturer (et al) didn’t store a copy of the key before we got the computer. The problem here is that TPM is designed to be the starting point for all sorts of higher-level security stuff (eg, disk encryption, password storage, etc). If you use TPM to actually secure those, you should be sure others haven’t had access to the TPM key. Since you can’t set the key and the gov’t may have stored it during manufacture, the security model breaks. TPM can also monitor what’s happening on the computer, which is another potential source of concern.
As for SE Linux, there seems to be a misunderstanding of what “access control” means — it’s not (inherently) a backdoor. Access control doesn’t mean someone else has or controls access to your computer system. It’s about controlling which users have access to read/write files, run programs, etc. Linux is designed as a multi-user operating system, so each user should only be able to access and change their own files, and only the admin should be able to change system files and install programs. SE Linux incorporated a stricter, more finely-grained system for controlling access in order to comply with DoD specifications. That’s the “access control” the NSA helped plug into linux.
Of course, there may be backdoors in code contributed to linux by the NSA (or by anyone else). The fact that linux is open-source means that anyone can review the code and check for such vulnerabilities. As a linux user myself, I’m far less concerned about the NSA compromising the linux kernel than I am about the plethora of other ways they are (and can be) monitoring our communications and interactions.
Linux isn’t a panacea, but at least the code is available. If Germany wants to feel more secure about their computing platform, they can have their intelligence guys review every line of code in whatever version of linux they want to deploy.
Sorry for the long comment. I find the NSA/GCHQ surveillance program dreadful (and worrying), but I think it’s important to keep the facts straight. Bruce Schneier is a well-known security expert and a good source on the subject; you can read more on his blog.
Excellent explanation. And to repeat, as you said, Linux is open-source. I’d just add that this places the final burden on the user to inspect the source code of the kernel (and all subsequent code prior to compilation). You’re talking roughly 15 million lines of code for a linux kernel. Quite a daunting task. In the end, as Thompson said, it falls on trusting the authors of the code and subsequent users. Even with trusted code, there are other sources of security concerns, e.g. hardware, like the CPU.
So, if it were me, I’d be thinking: if I were them, with the resources they have available, what would be the easiest means of mass surveillance? Probably the means that Snowden has revealed… monitoring traffic, gathering metadata, data-mining algorithms, gathering communications (including content) of known suspects and their contacts, etc. Instead of breaking crypto, I’d get the content prior to (or after) encryption, or, as alleged, obtain keys. KISS. I’d resort to “less than legal” tactics, and if not thwarted, I’d feel progressively more entitled to over-reach and to less ethical, more secretive practices, even flagrantly illegal ones, in the name of efficacy and shortcuts through bureaucratic red tape. After all, the intelligence community was on a mission from God. Nobody else need know the details.
Even easier yet, as I advised my students back in the early 2000s, when laws were still being drafted governing internet communications and the government was already issuing subpoenas to network providers: never consider anything put on the Internet as being private. If you’re going to participate in illicit activity, it’s not the place to communicate about it. There is NO means of communication that can’t be intercepted by our government.
Whether or not Linux is truly spy-proof is almost irrelevant, since only the most sophisticated users will be able to tell. And, if the comments here are any illustration, these sophisticated users will argue among themselves.
Quite apart from technical issues, there is the issue of trust. Thanks to the NSA, US industry has lost it, and why anyone would pay to be spied on is beyond me. I am stunned that we only get a few observations like “They’re big customers of our American tech heroes – and they’re having second thoughts, and some are cancelling orders.”
This is huge.
The one remaining light of the US’s once legendary innovation culture has been smashed to bits. A tragic combination of spineless Democrats seeking primarily to avoid blame, with Hillary and Barack leading the pack, and the wicked opportunists of the klepto-fascist police state, has allowed the apparently incremental undermining of the nation’s most prestigious, cutting-edge industry.
Was this the “creative destruction” the Schumpeterians were trumpeting so triumphantly as manufacturing collapsed?
For now, the NSA and the whole US government apparatus are giving off a global bad smell, but as the reality sinks in, the effects will be unmistakable. Has 40 years of neoliberal bad government finally knocked the US off the last pedestal, leaving the country number one only in income inequality, poverty, illiteracy, and the other social ills that were inconceivable a mere generation ago?
And, if the comments here are any illustration, these sophisticated users will argue among themselves.
Actually no, most of the technical comments in this thread agree with each other; SELinux isn’t a back door, rather, it’s concerned with access permissions to the file system. If I have any concerns with these comments, it’s with the degree of consistency of agreement – not the reverse, and…
1) File access isn’t a back door to one’s data??? HUH???
2) The ostensible purpose of a module doesn’t mean it can’t be used to hide other functionality which would fit perfectly the rather “narrow” definition of back door (remote control, I assume) that these commenters use to pooh-pooh the possibility that there is one in SELinux.
3) The fact that one can read the source doesn’t mean that SELinux is any more or any less secure from being a back door than any other part or version or release of Linux.
However, the last point – Linux/SELinux is open source – is the one that might possibly lend the most credibility to an argument that Linux generally, and SELinux in particular, is somewhat safe from Trojan horses (by which I mean anything present that shouldn’t be, relating to the security of your machine/data against unauthorized use or observation – including, no, particularly including, file access).
I would be curious to know if Linux distributions come with any pre-compiled libraries or code modules of any sort? Is every line of source available? Does it require a particular compiler? Is the source available for that/those compilers? Who writes the automated software to look for security issues mentioned in this thread and could they have any possible interest in lots and lots of NSA money or be subject to NSA threats of one sort or another? Outside of automated software that can read the source code, it would be extremely difficult and time consuming for anyone that isn’t a highly specialized software engineer to find such back doors. But it is true that if they existed, one would expect somebody to find them and at least try and make others aware of it. Would the NSA come barging in at that very point and force you to swear to secrecy, or else?
There are other available versions of Linux than Red Hat, no? Would any of them be “safer” by any criteria, such as smaller equals less likelihood of collusion with the NSA?
Probably the only truly safe version of Linux would be one written by boot-leg hooch distillers in the Appalachian mountains, say of some remote part of Kentucky, and somehow I suspect those are rare.
To summarize, the most compelling reason one can use to argue that Linux or SELinux does not have a back door, either to access your files or to commandeer your OS and monitor, is that it is open source. That leaves the real question: would we indeed be informed of it if someone or some group from a company or a university or whatever DID come across such back doors hidden, as it were, in plain sight?
Of course then you have the hardware and as has been pointed out elsewhere, there is plenty of opportunity there for all sorts of NSA mischief that would be just about impossible to find for anyone but a specialized group with deep pockets.
SELinux is part of the kernel, and therefore could potentially be a good place to put a back door. The code has been looked at suspiciously because of its source and has probably been more carefully vetted than anything else in the kernel. However it is very hard to detect some bugs – check out the Obfuscated C contest sometime.
The nominal purpose of SELinux is to allow tighter access control internally on that computer. That’s a helpful security feature and not likely to cause problems.
> I would be curious to know if Linux distributions come with any pre-compiled libraries or code modules of any sort?
Most have precompiled packages that are downloaded from an online repository. Some distros download the source and compile from that, e.g. Gentoo.
> Is every line of source available?
For open source, yes. Some other commonly used software is provided as binaries, such as Flash and Java.
> Does it require a particular compiler?
Mostly gcc is used in Linux. It depends on the source code. Some stuff is tough to compile.
> Is the source available for that/those compilers?
Most open source software uses gcc and other GNU compilers, so yes.
> Who writes the automated software to look for security issues mentioned in this thread and could they have any possible interest in lots and lots of NSA money or be subject to NSA threats of one sort or another?
I don’t work with this myself but I believe there are many commercial code checkers available.
> There are other available versions of Linux than Red Hat, no?
There are lots of distributions available, and they share most packages – generally a separate group maintains the software, and specific distributions package it up in their way for their users.
> Would any of them be “safer” by any criteria such as smaller equals less likelyhood of collusion with NSA?
Some are more determined to be free than others. I’m about to switch back to Debian from Ubuntu because Ubuntu is trying to commercialize.
> Probably the only truly safe version of Linux …
Nothing is safe unless it’s turned off and unplugged, and it wouldn’t surprise me to find an exception.
> would we indeed be informed of it if someone or some group from a company or a university or whatever DID come accross such back doors hidden, as it were, in plain sight?
A reputable group would disclose the problem, and a disreputable group would sell their knowledge to Vupen or another bunch that sells zero-day exploits. Vupen has an exploit list that the NSA subscribes to for $$$$.
It’s not impossible, but highly unlikely.
The way Unix security works is to split access permissions into three levels: “User”, “Group”, and “Others”. Each level can be granted Read, Write, and/or eXecute permission on the file (referred to as rwx). So, for example, the User (owner of the file) may be able to do all three (rwx), the Group (others who can access but are not necessarily the owner) may be able to read and execute (r-x), and Others may only read (r--). You could, for example, create a file on a web server that’s writable by the owner and anyone in the Web group, and only readable by others.
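The three-level model above is easy to see from a shell. A minimal sketch (the filename is arbitrary; `stat -c` assumes GNU coreutils, as on a typical Linux box):

```shell
# Create a file and grant the permissions described above:
# owner read/write, group read-only, others nothing.
touch report.txt
chmod 640 report.txt

# Symbolic view: the mode column reads -rw-r-----
ls -l report.txt

# Numeric (octal) view of the same bits: prints 640
stat -c '%a' report.txt
```

Each octal digit is just the rwx bits for one level: 6 = rw- for the owner, 4 = r-- for the group, 0 = --- for everyone else.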
SELinux addresses two further cases not covered by this model. First, you may want to restrict access for specific programs, even when run by an authorized user (say, something you downloaded and don’t trust). Second, you may want to restrict a user trying to access something in a specific context (say, the user logged in on the physical console is allowed to execute, but the same user logged in remotely is not). In SELinux, this takes the form of an extra layer of permissions on top of the existing Unix permissions. Typically, a set of permissions is pre-shipped by the distribution.
All of this is entirely local to the system. There’s no interaction with anything from outside the actual system where it runs. It has no open external ports or any means of communication with the outside world. There are two possible ways of creating a back door with this setup.
1) You could write it in such a way that EvilNSABackdoor.exe (well, not exe – Unix executables don’t use extensions) is always granted permission to do what it wants. The trouble with this is that you’d have to get EvilNSABackdoor onto the system somehow in the first place, then somehow have it run on the system without the user’s intervention. Even if you did, on a server, or a machine with a sufficiently paranoid admin running a firewall, nmap and/or a packet sniffer/packet logger, any activity by EvilNSABackdoor would be pretty obvious. He might not be able to figure out exactly what’s being sent, or where, but the fact that something that shouldn’t be happening is indeed happening would be blindingly obvious.
2) You could try to hide the backdoor behind special circumstances – a buffer overrun or some special sequence of events – but that’s how benign security problems also look, so it’s very possible that your carefully crafted backdoor is just bug-fixed out of the way by some lumbering do-gooder free-software hacker with an RMS t-shirt. And you can’t even silently reintroduce it, because it’s now a known exploit, and people will be looking out for it in any new patches to the kernel.
So, fairly difficult, next to impossible to actually execute this plan of action.
1. There are some things – the binary nvidia drivers, for example, that are closed, but you can pretty easily build a distro with only free software components. Heck, I did it a few times myself.
2. Generally Linux uses GCC as the compiler. There’s no question on this account – all of GCC is and will always be free and GPLd. There are also several independently compiled binaries of gcc, and its own bootstrap process compiles gcc on a freshly compiled gcc, so there’s enough entropy in there for techniques like Diverse Double Compiling.
Hundreds, even thousands, of people on the LKML, basically. There’s no regression test suite per se (the task of writing one would be mind-boggling), but there are always test suites running on the kernel. Besides, most changes (including SELinux) came in as patches, and several people review them, while they’re still not monstrously huge, before they go into the kernel. Subverting so many people is possible, but highly unlikely.
Depends on the backdoor in question, and anyway, techniques such as DDC can be used to see if there actually is one.
What I’d expect, especially if they went with the buffer overrun route, is that somebody would just come in behind them and patch it back to sanity. Probably completely unaware that anything had happened.
Honestly, I don’t see how they’d have the chance – the primary means of communication would be the LKML, which is completely open; they’d have a hard time blackmailing hundreds – even thousands – of devs to keep their mouths shut once it’d been announced on the list. Since that’s the first place a kernel dev would go to say that something had been found, there’s very little they could do before that point either.
In fact, the majority of Linux installations are probably not RedHat. Anyway, this is at the kernel level we’re talking about. I think that would be a bad place for them to attack, especially when there’s so much horrible code, unaudited code and closed-source code around on the userspace (ie, non-kernel programs) to fiddle around with. Ironically, SELinux makes it that much more difficult to fiddle there, too.
You’d be surprised…
That would be my bet, yes. For desktop Linux systems, Mint and Ubuntu are the most popular as measured by downloads. Even if you count all CentOS and Scientific Linux installations as RedHat, there would still be more Ubuntu/Mint installations.
Interestingly, Ubuntu and Mint are based on Debian Linux, which uses a different package manager, and has made many other design choices that are different from RedHat and its kin. Neither uses SELinux, either, instead having chosen AppArmor as the access control software.
Servers are probably another story, but these days those installs are hard to track.
This is another reason that Richter is badly misinformed. Even if one accepts that SELinux is some sort of backdoor (which it isn’t – if anything, it’s the opposite of a backdoor), there are still plenty of Linux distributions that don’t use it.
Thanks @J and @Shash, those are both exceptionally clear comprehensive replies. @lb, below, also makes some unusually good points about minor – but sufficient – modifications for subsequent exploitation, that can plausibly be denied as code errors, as well as his points about conditioning user expectations and acceptance of such things as automatic updates, and other points, etc., etc.
Sometimes unanimity of agreement is groupthink, but it’s quite often an indication that something written or said is just plain wrong. That’s the case here.
Unix and Linux have file system controls, but in recent years they have proved inadequate for a number of reasons. SELinux is another level of access control. If you want to do the NSA and other would-be hackers a favor, turn it off (or turn off AppArmor, if that’s what your Linux distro has for access control). It will make their jobs easier, not harder.
SELinux is like putting a dead bolt lock on your door with its own key. If you still use the lock on your doorknob, they still have to break your door (or door frame) to break in, even if they have a copy of your dead bolt key.
See above. Also, it’s important to remember that in Unix and Linux, all system resources are abstracted as files, so access control limits access to those things, as well.
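A quick illustration of the “everything is a file” point, as a minimal Python sketch (assuming a Linux/Unix system where /dev/null exists):

```python
import os
import stat
import tempfile

# On Unix/Linux, devices are exposed through the file interface, so the same
# permission and access-control machinery covers them too.
# /dev/null is a character device, not a regular file:
assert stat.S_ISCHR(os.stat("/dev/null").st_mode)

# An ordinary file, for contrast:
with tempfile.NamedTemporaryFile() as f:
    assert stat.S_ISREG(os.stat(f.name).st_mode)
```

Because devices, sockets, and pipes all go through the file abstraction, an access-control layer that mediates file access ends up mediating access to most system resources.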
Others have dealt with this pretty well, but I’ll just reiterate that this is an extra layer of access control. Even if the NSA has managed to sneak something past all the folks who have audited that code, what they are getting would be, in essence, what you could give them just by turning off SELinux.
Yes, that is largely true. As a central point of security policy, though, it would be a fairly lucrative target. Plus, if SELinux (or AppArmor) is properly implemented on a Linux system, exploits from other sources are a whole lot less likely to be effective. Aside from understandable concern about this security-related code having been contributed by a spy agency, that’s a reason SELinux received so much auditing attention.
[BTW, I should disclose that I was in the defense industry for many years. I might be accused of having a bias here. In reality, the only work I’ve done related to SELinux is administering or designing systems that used it.]
SELinux is like putting a dead bolt lock on your door with its own key. If you still use the lock on your doorknob, they still have to break your door (or door frame) to break in, even if they have a copy of your dead bolt key. – Cujo359
I find your argument compelling except in one respect. Namely, you seem to imply that the nature of SELinux, that is, greater granularity of file-access permissions (a better dead bolt, in addition to the normal lock), is what makes it so unlikely a candidate for modification or back doors. First, as @lb says below, such modifications can be very small and very subtle and still be effective as part of a larger strategy; indeed, far more effective than a dramatic, all-out back door. Second, if I were going to attack the security of a system, I would consider these enhanced file-access mechanisms a particularly good candidate for doing so in subtle ways, since 1) people might not yet be familiar with them, and 2) being part of the kernel, they would be called on consistently to modify bits associated with files. It’s the consistency that I would be interested in.
It’s a little as though you were saying, “Hey, this is a super lock on top of a lock, and therefore it can ONLY be used to protect the door.” Um, no: that might be its ostensible purpose, but not necessarily its actual one. Or it might do a pretty good job of adding security to the normal lock while at the same time consistently switching some little, unnoticeable thing that could later be used in conjunction with other parts of the locking mechanism to weaken it.
The part of your (and others’) argument I find more persuasive however, is that being part of the kernel, this code is particularly subject to review and audit.
Also, the fact that one can turn the “SE” features off doesn’t necessarily mean it isn’t useful. Au contraire: insofar as that makes it less suspect, it becomes all the more useful as part of a larger strategy. And remember, no one is going to put their whole “back-door” or Trojan-horse strategy into one single area.
Code unrelated to software’s ostensible function is usually pretty easy to spot. If it isn’t, it’s likely so subtle that it won’t last past the next compile, much less the next revision.
As people have mentioned, there are far more plausible ways to get unauthorized access to a system.
That may well be true and the post only mentions SELinux in one paragraph. The billion IBM is spending is presumably on Linux generally, not simply the features in SELinux.
Code unrelated to software’s ostensible function is usually pretty easy to spot
Not necessarily, and particularly not if making the code look innocuous is one’s intention and one’s specialty. We are not talking about methods like MyObject::SpyOnUser(const char *UserName), or necessarily about functions at all. Just a bit flipped here, a constant declared there, depending on conditions that may, for all intents and purposes, look entirely related to the ostensible purpose of the module.
The stuff about SELinux being a backdoor to Linux is total nonsense.
That code is open, has been out there for a decade, and has been accepted into the mainline Linux kernel. You don’t get code accepted into Linux without it being scrutinised in depth. Just go read some of Linus’s rants when people who should know better attempt to submit sub-par code for inclusion.
It’s far, far easier to believe that the NSA has gained access (legally or not) to the private root keys needed to generate signed code that the TPM and Windows 8 requires, than to believe that it has contributed backdoors to Linux that have resisted scrutiny for over a decade despite the code being open and vetted by many groups.
PS Thinking that ‘access control’ means ‘backdoor’ shows that the author has no idea what ‘access control’ means in this context.
As a Linux and Windows systems admin, SELinux is the least of my worries in the security realm of my day-to-day concerns. And considering that Mr. Richter is an investment adviser (and an average one, based on his multiple and regular postings over at ZeroHedge), he would never be my “go-to” guy when it comes to technical issues regarding Linux and security.
As many have pointed out above, SELinux is a permissions-based access-control system that denies access by default rather than granting it. The code is open source, so it may easily be reviewed by experts, and it may also be turned off easily (or set to logging mode only).
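The deny-by-default idea can be sketched in a few lines of Python; the labels and the single rule here are invented for illustration and bear no resemblance to real SELinux policy syntax:

```python
# Toy model of label-based, default-deny access control. Anything not
# explicitly permitted by the policy is refused.
POLICY = {
    ("httpd_t", "httpd_config_t", "read"),  # web server may read its config
}

def allowed(subject_label: str, object_label: str, action: str) -> bool:
    # No rule means no access -- default deny.
    return (subject_label, object_label, action) in POLICY

assert allowed("httpd_t", "httpd_config_t", "read")
assert not allowed("httpd_t", "shadow_t", "read")  # never granted, so denied
```

This is why calling it a “backdoor” gets things backwards: removing the layer grants more access, not less.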
My concern would be Red Hat’s embrace of TPM (since the 5.2 distribution), although that too is optional as long as the platform BIOS allows boot-up into “legacy” (non-TPM) mode. Add to that the fact that the local user does not have the certificate and, if he does, probably cannot change it.
The first time it had thrown a billion at Linux was in 2001, when Linux was a crazy, untested, even ludicrous proposition for the corporate world
Really? Coz I remember that Linux was running ~25% of servers by 1999…
Good point; I’d forgotten about that, and it’s another reason why Mr. Richter is out of his depth in this article. I also specifically remember that the majority of HP’s print servers were Linux systems, and that HP had a large number of Linux Samba systems online to stabilize their “Network Neighborhood” infrastructure by the late ’90s.
As an IT company, IBM was actually slightly late into the game, although they did pour in a lot of support starting in the early 2000s.
Most of the IT guys I spoke with at the time (some being very sophisticated old code heads who had specialized in writing OS modules for DEC, such as DEC’s flavor of Unix) were deeply skeptical of Linux as an OS for commercial use. This remained true at least up until 2004. Granted, they might not have chosen those same words, “when Linux was a crazy, untested, even ludicrous proposition for the corporate world”.
The old guys might have been resistant…but Google was built on Linux around that time. Doesn’t change the fact that 25% of servers ran it in ’99…
I meant “old guys” not in the sense of sailors arguing about the validity of ghost stories, but rather highly skilled and experienced professionals, very familiar with Linux, who would not make such assertions out of mere “resistance”.
25% of all servers, perhaps – that can be argued – but which servers??? All the ones for small start ups (such as Google at that time)?
25% of all servers, perhaps – that can be argued – but which servers???
Webservers running Apache. Here’s a CNet article from 2002:
(Note: This article is also rather apropos, since it mentions that U.S.-imposed cryptography export restrictions were still affecting the market at the time.)
I can easily envision old school, closed source OS programmers (VMS? lol) being skeptical of a Unix clone written by kids on the internet. After all, their paradigm was basically the opposite. And open source development on that scale truly was a new thing in the 90s.
The fact the people you spoke to didn’t realize Linux had already taken over such a large chunk of servers simply shows they were not in touch with what was going on in web development that decade.
IBM was throwing serious cash behind Linux back in the late 1990s on the mainframes, to run hundreds of Linux instances as guests under VM. This was the first attempt by IBM to market the S/390 series as server-consolidation boxes.
Sadly, the lessons of Multics have been lost.
We could all learn a lot from Wall Street and realize the only way one might hope to hide truth is by concealing it behind a bodyguard of lies. Per controlling unauthorized access whose intent is more malicious than spying, it’s good to see Linux still gaining on MSFT.
The NSA… the NSA… I’ll bet they have no less a mole problem than the FBI or CIA. If Wall Street generally has not been worrying about this, I’ll bet the adage “sunlight is the best disinfectant” is really being put to the test with all the darkness the Street has been experiencing these past couple weeks.
Richter means well and his suspicion is appropriate, but this article will have the autistic jerkoffs of the NSA laughing into their neckbeards. Now, if NC really wants to piss those panty-sniffers off, how about enabling TLSv1.2?
If you know what you’re doing, your browser supports encryption that’s not bad at all. Check yourself out: https://cc.dcsec.uni-hannover.de/
But most sites don’t use the latest versions of TLS. If we configure our browsers to use it, most of the Internet can’t even talk to us. NC can lead by example here.
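For what it’s worth, pinning a client to TLS 1.2 or newer takes only a few lines; here is a sketch using Python’s standard ssl module (the TLSVersion knob exists from Python 3.7 on):

```python
import ssl

# Build a default client context, then refuse anything older than TLS 1.2.
# Servers that only speak older protocol versions will then fail the
# handshake -- exactly the compatibility trade-off described above.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

assert ctx.minimum_version == ssl.TLSVersion.TLSv1_2
```

A site that enables current TLS versions server-side lets strict clients like this one connect without loosening their settings.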
This article is proof that one should stick to writing about what one knows. This guy doesn’t have a clue. If the NSA has any backdoors in Linux, they’re almost certainly not in SELinux; code to allow that would stick out like the proverbial sore thumb. Now, there might be code that would allow access to things you shouldn’t reach once you’re on the computer, but even that’s pretty doubtful: the SELinux code has been vetted by experts, they haven’t found anything, and there haven’t been published exploits based on it, either.
I suspect that the NSA is far more interested in what you send across the Internet than what’s on your disk drive, anyway. Where they’ve been caught, it’s generally been in efforts to undermine encryption protocols, not for being able to hack into individual machines.
The REAL problem is the ability of CPU makers (Intel, AMD, ARM) to insert microcode.
For the record, the NSA itself considers any computer connected to a network to be insecure.
In fact, the only computer that is truly secure (according to an old NSA white paper) is unplugged from the wall and in a locked room.
I tried to post this 7 hours ago and will try again now….
At least linux and other spawn of unix can be built and maintained without software backdoors…and I expect versions of such to emerge and prosper in the future OS space.
Whether Intel and AMD have or will develop hardware doors into all OS has yet to be determined but is assuredly being fostered by TPTB.
Having been part of the techie world for a few decades, I have hope that the world of techiedom will reject Big Brother or out it at every turn… i.e., if I were an astronaut, I would have a serious talk with NASA techies, asking them about the “hardness” of their boxes.
The world operates on trust, which has been inexorably broken, and IBM can’t buy it back for a billion or two… How can you build a Matrix when everyone knows what Neo knows? The plutocrats and their puppets have no clothes!
Your comment confuses me. You say you hope the world of techiedom will reject Big Brother or out it at every turn[…] but trust has been inexorably broken? Which is it?
At least linux and other spawn of unix can be built and maintained without software backdoors…and I expect versions of such to emerge and prosper in the future OS space.
Any system, even Android, can be built without back doors, but how can you tell? And as to versions of Unix/Linux, how can you tell which is which?
I’ve yet to find a back door in Ubuntu.
You know what that means?
It means I haven’t found it yet.
I’m a little surprised at the comments to this post. The consistency of agreement seems odd. I would have expected more energy spent discussing how Trojan horses might be introduced into open source (or why that is absolutely impossible) than such a consistent “makes no sense” rejection of the idea that SELinux, as written by IBM-owned Red Hat, embeds nefarious code.
To me, given the recent revelations regarding the NSA, the idea that they would not request a back door into SELinux from IBM or that IBM would not fall all over itself to comply, is what would be truly remarkable. The only question I would have, is how does one hide it in open source and I imagine for those who actually know the subject, there MUST be ways. Otherwise, I very much doubt IBM would bother.
Also, the post does not focus solely on SELinux. It mentions it in one paragraph with regard to “access,” by which I assume he means file access, and access granularity to the file system definitely fits within some definitions of back door.
Again, what would strike me as unbelievable would be if IBM, partnered with Red Hat, did NOT put back doors into their version(s) of Linux.
I’m a hardware/software systems guy who has also worked in security R&D, so here’s my two cents.
Laypeople tend to lack awareness of a few things regarding system insecurity:
1) subtly weakening protections is probably more valuable to someone sneaky (and has plausible deniability) than a sensationalizable backdoor insertion. There does not need to be a static, ever-present backdoor, for damage to be done. Someone just needs to leave a window open a tiny crack.
2) systems of software and hardware have a _lot_ of places to hide. You could make a system subtly compromisable by flipping one bit in one place reliably… if you know which bit and under which conditions.
3) The practice of auditing code, while valuable, only goes so far, because a lot of ‘code’ is not generally available/auditable by independent parties (technical obstacles, and legal leaning via laws like the Digital Millennium Copyright Act, can prevent examination and publication in this regard). This is far more true when you consider the layers the auditable software must trust: the CPU running microcode (something only the CPU vendor and a very select few partners, probably including three-letter agencies, ever get to see), all devices such as network interfaces on that particular machine (hardware bugs are, ahem, not unheard of) AND their firmware (again, visible largely to vendors and seldom audited independently/by the public).
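To make point 1 concrete, here is a hypothetical bounds check in Python. The correct version rejects out-of-range reads; in a memory-unsafe language, a one-character weakening of the comparison would turn the same pattern into an exploitable overread while still looking like an honest mistake:

```python
def read_field(buf: bytes, offset: int, length: int) -> bytes:
    # Correct check: the start AND the end of the requested range must lie
    # inside the buffer. A subtle weakening -- say, testing only
    # "offset <= len(buf)" -- would still look like a bounds check while
    # quietly permitting reads past the end in a language without Python's
    # slice clamping. That is the "window open a tiny crack".
    if offset < 0 or length < 0 or offset + length > len(buf):
        raise ValueError("out of bounds")
    return buf[offset:offset + length]

assert read_field(b"abcdef", 2, 3) == b"cde"

# The correct version rejects a read that would run past the buffer:
try:
    read_field(b"abcdef", 4, 3)
    raise AssertionError("should have raised")
except ValueError:
    pass
```

The function name and check are invented for illustration; the point is only how small and deniable such a change can be.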
What I’d worry about is the NSA’s influence architecturally: inducing vulnerable points in these systems which are invisible, subtle, and which could easily be chalked up to honest errors along the way. Beyond that, assuming this practice of influencing industry is part of a long game, I’d expect architecture for transit once a system is entered. For example, there’s one particular pain-in-the-ass mode of modern x86 processors which hardware vendors (including device vendors) can cause to be entered transparently, in which arbitrary code of their choosing runs with access to all of your computer’s memory before exiting; and it’s been around a LONG time. Your x86 computer does this all the time without your knowing it, and nearly nobody has looked at the code involved.
Lastly, the culture fostered by automatic updates, including those of unaudited firmware, makes people compliant and unaware that a single such push could compromise their systems at a particularly low level. I can’t help but assume the social conditioning of security comes at least partially at the behest of people who would benefit from consumers being lax in this regard, though it’s always chalked up to convenience and consumer ignorance.
All of this said, let’s not let Richter’s misstatement and hyperbole here detract from this very serious area of concern.
Talking about VGA driver code run in real mode? I can think of several other attack vectors for x86 chips off the top of my head.
The reason people are dismissing Richter’s claims is that SELinux is one of the *least* likely attack vectors.
He is talking about System Management Mode: http://en.wikipedia.org/wiki/System_Management_Mode . The funny thing is, it completely bypasses the operating system and runs closed firmware supplied by your hardware manufacturer.
SELinux really is way down the list of places where to look for backdoors in your computer.
there is no such thing as security, except being two steps ahead…
As soon as you think you’re “safe and secure,” you’re screwed in a cannibalistic market battlefield.
Rumors about the integrity of free software, i.e. licensed with GPL, BSD, MIT licenses, have been around for some time. The targets for backdoors would be OpenSSH, IPSEC, wireless NIC firmware, ethernet NIC firmware, keyloggers…
Here’s an unconfirmed disclosure from 2010 about IPSEC development being influenced by Law Enforcement:
I’m just going to leave this here.
Ok people, compiling a Linux kernel once or twice doesn’t make one an expert on computer security.
First, open source projects get source contributions from all over the place.
Second, these source contributions rarely get a decent code review; to believe otherwise is delusional.
Third, it’s incredibly easy to obfuscate bugs that create a security weakness. These weaknesses could take years for really good hackers to find, and often, finding the bug will be an accident.
There are unknown interactions between proprietary device drivers and the hardware they control. Heck, even open source drivers can contain any number of obfuscated weaknesses.
Fourth, encryption is actually very fragile if the wrong settings are applied. Things like proper padding, and when it’s appropriate to use ECB vs. CBC, make a huge difference in the level of protection the encryption provides. Additionally, most encryption implementations use bit-shift operations to perform multiplication, and this type of code can be very hard to read.
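The ECB-vs-CBC point is easy to demonstrate with a toy sketch. The “block cipher” below is just a keyed hash standing in for a real cipher; nothing here is secure, it only shows how mode choice leaks or hides structure:

```python
import hashlib

BLOCK = 8  # toy 8-byte blocks

def toy_block_encrypt(key: bytes, block: bytes) -> bytes:
    # Stand-in for a real block cipher: deterministic and key-dependent.
    return hashlib.sha256(key + block).digest()[:BLOCK]

def ecb_encrypt(key: bytes, data: bytes) -> bytes:
    # ECB encrypts each block independently, so identical plaintext blocks
    # yield identical ciphertext blocks -- structure leaks through.
    return b"".join(toy_block_encrypt(key, data[i:i + BLOCK])
                    for i in range(0, len(data), BLOCK))

def cbc_encrypt(key: bytes, iv: bytes, data: bytes) -> bytes:
    # CBC XORs each plaintext block with the previous ciphertext block
    # before encrypting, so repeated plaintext no longer lines up.
    out, prev = [], iv
    for i in range(0, len(data), BLOCK):
        mixed = bytes(a ^ b for a, b in zip(data[i:i + BLOCK], prev))
        prev = toy_block_encrypt(key, mixed)
        out.append(prev)
    return b"".join(out)

pt = b"ATTACK!!" * 3  # three identical 8-byte blocks
ecb = ecb_encrypt(b"k", pt)
cbc = cbc_encrypt(b"k", b"\x00" * BLOCK, pt)

assert ecb[0:8] == ecb[8:16] == ecb[16:24]  # ECB leaks the repetition
assert cbc[0:8] != cbc[8:16]                # CBC hides it
```

Same key, same plaintext; only the mode differs, and one of them hands an eavesdropper the repetition pattern for free.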
So, in other words, it wouldn’t surprise me at all to learn that there are effective exploits unknown to the programming community contained in these operating systems.