Meltdown and Spectre FAQ: Crapification at Scale

By Lambert Strether of Corrente.

Yesterday, Yves posted a “primer on Meltdown and Spectre”, which included several explanations of the two bugs from different viewpoints; if you feel you don’t have a handle on them, please review it. Today, I want to give an overview of the two bugs in the form of a FAQ, and then open a discussion of the larger business and political economy issues raised, in the form of a MetaFAQ. First, I should make one point: Meltdown is a bug; Spectre is a class of bugs (or, if you prefer, a strategy). ThreatPost explains:

“Meltdown is a well-defined vulnerability where a user-mode program can access privileged kernel-mode memory. This makes patching Meltdown much easier than Spectre by ensuring kernel memory is unmapped from user-mode, which is what we see in the form of kernel page-table isolation (KPTI),” said Jeff Tang, senior security researcher at Cylance.

Ben Carr, VP of strategy at Cyberbit, said there is not a single patch that can be applied for Spectre and mitigation efforts will require ongoing efforts. He said Spectre attacks do not rely on a specific feature of a single processor’s memory management and protection system, making future attacks part of a generalized strategy to undermine a CPU.

“In the case of Spectre, it is a class of attack not a specific vulnerability… Exploits are based on the side effects of speculative execution, specifically branch prediction. This type of exploit will be tailored and continue to morph and change making patching extremely difficult,” Carr said.

Researchers say Spectre also represents a larger challenge to the industry because it requires a greater degree of coordination among stakeholders to mitigate.

This distinction is important to make because press coverage, in lumping the two together, will have the tendency to make people think that both are fixed when only Meltdown is fixed, when in fact Spectre will require years of remediation.
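Since both bugs ultimately leak data through the cache side effects of speculative execution, a toy simulation may help fix the idea. This is a sketch in Python, not an exploit: the “cache” is just a set, the timing measurement is faked as a membership test, and every name in it is my own invention.

```python
# A toy simulation of the cache side channel both bugs rely on.
# Nothing here touches real hardware; the "cache" is a Python set,
# and all names (victim_speculative_access, etc.) are illustrative.

SECRET = 42          # a byte the "victim" holds; the attacker must not read it directly
CACHE_LINES = 256    # one probe line per possible byte value

cache = set()        # which probe lines are "hot" (cached)

def victim_speculative_access():
    """Model the victim: a secret-dependent load runs speculatively.
    The architectural result is discarded, but the cache line it
    touched stays hot -- that is the entire side channel."""
    cache.add(SECRET)         # stands in for: probe_array[SECRET * line_size]

def attacker_probe():
    """Model the attacker: check every probe line for 'warmth'.
    In real life this means timing a read of each line; a fast read
    (a cache hit) reveals which line the victim touched, and
    therefore the secret byte value."""
    for guess in range(CACHE_LINES):
        if guess in cache:    # real attacks: read latency below a threshold
            return guess
    return None

victim_speculative_access()
recovered = attacker_probe()
print(recovered)   # prints 42: the secret, recovered without ever reading it
```

In a real attack, the probe step times actual memory reads (the researchers used the Flush+Reload technique) rather than consulting a set; the principle, recovering a secret from which cache line got warmed, is the same.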

The Meltdown and Spectre FAQ

1. Is there a really idiotic headline that shows how problematic press coverage is?

Yes. Here it is: “CES 2018: Intel to make flawed chips safe in a week”, from the BBC. It’s idiotic because at best what Intel will have done is release patches that, when downloaded and installed by system owners, patch the problem. And patching systems isn’t always easy.[1] Bruce Schneier explains:

[S]ome of the patches require updating the computer’s firmware. This is much harder to walk consumers through, and is more likely to permanently brick the device if something goes wrong. It also requires more coordination. In November, Intel released a firmware update to fix a vulnerability in its Management Engine (ME): another flaw in its microprocessors. But it couldn’t get that update directly to users; it had to work with the individual hardware companies, and some of them just weren’t capable of getting the update to their customers.

We’re already seeing this. Some patches require users to disable the computer’s password, which means organizations can’t automate the patch. Some anti-virus software blocks the patch, or — worse — crashes the computer. This results in a three-step process: patch your anti-virus software, patch your operating system, and then patch the computer’s firmware.

We’ll see more examples of this below. This assumes, of course, that the bug fixes are not buggy; a risky assumption. Wired:

“You can’t bring down a power grid just to try out a patch,” says Agarwal. “Industrial systems, hospital machines, airline control systems—they will have to wait. They can’t just patch and hope that things will work out.”

2. How bad are the Meltdown and Spectre bugs?

They are very bad. From the Guardian:

Meltdown is “probably one of the worst CPU bugs ever found”, said Daniel Gruss, one of the researchers at Graz University of Technology who discovered the flaw.

They will stay bad for a long time. The Register:

The critical Meltdown and Spectre vulnerabilities recently found in Intel and other CPUs represent a significant security risk. Because the flaws are in the underlying system architecture, they will be exceptionally long-lived.

3. How many chips are affected?

Billions and trillions. MIT Technology Review:

How many chips are affected? The number is something of a moving target. But from the information released so far by tech companies and estimates from chip industry analysts, it looks as if at least three billion chips in computers, tablets, and phones now in use are vulnerable to attack by Spectre, which is the more widespread of the two flaws.

(Here is a list of CPUs affected.) And that’s before we get to the Internet of Things:

CPUs made by AMD, ARM, Intel, and probably others, are affected by these vulnerabilities: specifically, ARM CPUs are used in a lot of IoT devices, and those are devices that everybody has, but they forget they have them once they are operating, and this leaves a giant gap for cybercriminals to exploit. According to ARM, they are already “securing” a Trillion (1,000,000,000,000) devices. Granted, not all ARM CPUs are affected, but if even 0.1% of them are, it still means a Billion (1,000,000,000) affected devices.

(Yes, an insecure IoT matters[2].)

4. Am I at risk?

Only if you browse the Internet or store data in the cloud. Kidding! Those are the highest risks:

[For both Meltdown and Spectre,] an attacker actually needs to run some code on the target machine to exploit these vulnerabilities. This makes vulnerabilities highest risk for the following:

Anything that runs untrusted code on your machine (a browser typically),

Anything running in virtualization or clouds.

So, for a typical company, on your Domain Controller (for example), the risk is actually very, very low: since you are not running untrusted code there (hopefully), an attacker should not be able to exploit these vulnerabilities in the first place.

(Meltdown and Spectre can attack your system through the browser because exploits can be coded in JavaScript.) I like that “hopefully” rather a lot. And the notion of “trusted.”[3]

My personal advice is the same advice some investors give: Don’t do anything that means you won’t sleep at night. For me, that would mean not installing any initially released patch, for the same reason I never upgrade to a *.0 release; only the *.1 release will have the bugs worked out! But your business or the firm for which you work may demand different priorities; see the example of the power grid, above. And do be extra, extra careful to watch for phishing email.

5. Is there a fix?

Yes, and it looks like there are going to be many, many such fixes for quite some time; see the discussion of patches at #1. Since the bug fix situation is so dynamic, I won’t go into detail; here is a good roundup of consumer-grade fixes. Here is a Linux roundup, and the site of the stable branch maintainer. Here are news stories on Apple’s MacOS and iOS and Apple’s browser, Safari. Windows is, of course, a breed apart, and has been having its problems. Microsoft’s patching process has been complicated by requirements for anti-virus software (status list), and has been temporarily halted for AMD devices because the patch bricks them. It’s not clear whether IBM mainframes are affected or not.[4] And if you own bitcoin, consider a hardware wallet.

6. Do the Meltdown and Spectre fixes cause a performance hit?

Yes. The Register collected a good deal of anecdotal data, and concluded:

These figures are in keeping with the estimates first reported by The Register, a performance hit of roughly five to 30 per cent, with the caveat that any such results are highly variable and depend on a number of factors such as the workload in question and the technology involved.
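The reason the hit varies with workload is that the fixes tax every trip into the kernel; syscall-heavy jobs pay the most. A very rough sketch of how such overhead gets measured follows — all names here are my own invention, real benchmarks use purpose-built tools, and the absolute numbers vary by machine and kernel, so none are asserted:

```python
# A minimal sketch of measuring per-syscall overhead, the cost that
# KPTI-style fixes inflate. This only illustrates the technique;
# the printed figures are machine- and kernel-dependent.
import os
import time

def time_calls(fn, n=100_000):
    """Return the average cost of calling fn(), in nanoseconds."""
    start = time.perf_counter_ns()
    for _ in range(n):
        fn()
    return (time.perf_counter_ns() - start) / n

syscall_cost = time_calls(os.getpid)        # enters the kernel on each call
userspace_cost = time_calls(lambda: None)   # stays entirely in user space

print(f"getpid: {syscall_cost:.0f} ns/call")
print(f"no-op:  {userspace_cost:.0f} ns/call")
# On a patched kernel, the getpid figure grows by the page-table
# switch cost the kernel developers describe ("a few hundred cycles").
```

Run before and after enabling the mitigation, the gap between the two runs of the syscall figure is the tax; a workload that rarely crosses into the kernel barely notices it.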

However, if you’re a gamer, you should not be affected, unless you’re gaming in the cloud, I suppose:

For most end users, they’ll never notice a difference. “The client type desktop applications, gaming included, execute almost entirely inside of the user space,” Alcorn said. “So they’re not really doing a lot of calls to the kernel. They don’t issue a lot of system calls. The performance impact is negligible.”

(Cloud vendors say they have no performance hits; but they would say that. I would like to hear that from customers, since cloud vendors bill by the second and the hour. Even though cloud vendors have enormous resources to brute force a solution, somehow I don’t think they’ll want to eat any costs.)

7. Will there be more bugs like Meltdown and Spectre?

Yes, of course. Bruce Schneier:

Spectre and Meltdown are pretty catastrophic vulnerabilities, but they only affect the confidentiality of data. Now that they — and the research into the Intel ME vulnerability — have shown researchers where to look, more is coming — and what they’ll find will be worse than either Spectre or Meltdown. There will be vulnerabilities that will allow attackers to manipulate or delete data across processes, potentially fatal in the computers controlling our cars or implanted medical devices. These will be similarly impossible to fix, and the only strategy will be to throw our devices away and buy new ones.

8. Is there an XKCD comic?

Yes, of course there is.

The MetaFAQ

The following questions move out of the technical space, and into the business and political economy space:

9. Could Anyone Have Arbitraged Meltdown and Spectre?

Yes. There are several actors who employed or could have employed what might be called predictive execution based on prior knowledge of the bugs later named Meltdown and Spectre.

A) Intel CEO Brian Krzanich. Business Insider:

Intel CEO Brian Krzanich sold off a large portion of his stake in the company months after Google had informed the chipmaker of a significant security vulnerability in its flagship PC processors — but before the problem was publicly known. Intel’s CEO saw a $24 million windfall November 29 through a combination of selling shares he owned outright and exercising stock options. The stock sale raised eyebrows when it was disclosed, primarily because it left Krzanich with just 250,000 shares of Intel stock — the minimum the company requires him to hold under his employment agreement.

B) Analysts who watched Linux closely. It was clear in retrospect that something was up.[5]

Commenter duffoloniou linked to this fine example of measured, linux-style language, on the KAISER path for isolating user space from kernel space, back in November:

Even so, there will be a performance penalty to pay when KAISER is in use:

KAISER will affect performance for anything that does system calls or interrupts: everything. Just the new instructions (CR3 manipulation) add a few hundred cycles to a syscall or interrupt. Most workloads that we have run show single-digit regressions. 5% is a good round number for what is typical. The worst we have seen is a roughly 30% regression on a loopback networking test that did a ton of syscalls and context switches.

Not that long ago, a security-related patch with that kind of performance penalty would not have even been considered for mainline inclusion. Times have changed, though, and most developers have realized that a hardened kernel is no longer optional. Even so, there will be options to enable or disable KAISER, perhaps even at run time, for those who are unwilling to take the performance hit.

All told, KAISER has the look of a patch set that has been put onto the fast track. It emerged nearly fully formed and has immediately seen a lot of attention from a number of core kernel developers. Linus Torvalds is clearly in support of the idea, though he naturally has pointed out a number of things that, in his opinion, could be improved. Nobody has talked publicly about time frames for merging this code, but 4.15 might not be entirely out of the question.

Now, do I know that there were any analysts who doped out that they might want to short Intel, whose stock did indeed take a hit when Spectre and Meltdown became public? No. Could there have been? Yes. Should there have been? Indeed, yes (and see #7, supra).

C) No Such Agency

Washington Post:

Current and former U.S. officials also said the NSA did not know about or use Meltdown or Spectre to enable electronic surveillance on targets overseas. The agency often uses computer flaws to break into targeted machines, but it also has a mandate to warn companies about particularly dangerous or widespread flaws so that they can be fixed.

Rob Joyce, White House cybersecurity coordinator, said, “NSA did not know about the flaw, has not exploited it and certainly the U.S. government would never put a major company like Intel in a position of risk like this to try to hold open a vulnerability.”

“Would never” is a fine example of what I call the Beltway Subjunctive, because whether or not the NSA would have, they have. Tech Dirt:

While it is conceivable the NSA did not know about the flaw (leading to it being unable to exploit it), it’s laughable to assert the NSA wouldn’t “put a major company in a position of risk” by withholding details on an exploit. We only have the entire history of the NSA’s use of exploits/vulnerabilities and its hesitant compliance with the Vulnerability Equities Process to serve as a counterargument.

The NSA has left major companies in vulnerable positions, often for years — something exposed in the very recent past when an employee/contractor left the NSA in a vulnerable position by leaving TAO tools out in the open. The Shadow Brokers have been flogging NSA exploits for months and recent worldwide malware/ransomware attacks are tied to exploits the agency never informed major players like Microsoft about until the code was already out in the open.

These recently-discovered exploits may be the ones that got away — ones the NSA never uncovered and never used. But this statement portrays the NSA as an honest broker, which it isn’t. If the NSA had access to these exploits, it most certainly would have used them before informing affected companies. That’s just how this works.

10. What Are The Costs of the Meltdown and Spectre Bugs?

A few billion dollars. The Next Platform does some arithmetic:

First, let’s assume that the average performance hit is somewhere around 10 percent for a server based on microbenchmarks, and that the heavily virtualized environment in most enterprise datacenters washes out against the lower impact expected for enterprise workloads. Call it something on the order of $60 billion a year in worldwide system sales. So the impact is $6 billion a year in the value of the computing that is being lost, at the grossest, highest denominator level. For modern machines, this is like giving up two, four, or maybe even six cores out of the machine, if the performance hit pans out as we expect on existing machines across a wide variety of workloads. Add this up over the three or four generations of servers sitting out there in the 40 million or so servers in the world, and maybe the hit is more to the tune of $25 billion without taking into account the depreciated value of the installed base. Even if you do, it is still probably north of $10 billion in damages.

That’s not a lot of money compared to the Gross World Product. But it would bring a big payday for a law firm, even Big Law. The Guardian:

Three separate class-action lawsuits have been filed by plaintiffs in California, Oregon and Indiana seeking compensation, with more expected. All three cite the security vulnerability and Intel’s delay in public disclosure from when it was first notified by researchers of the flaws in June. Intel said in a statement it “can confirm it is aware of the class actions but as these proceedings are ongoing, it would be inappropriate to comment”.

The plaintiffs also cite the alleged computer slowdown that will be caused by the fixes needed to address the security concerns, which Intel disputes is a major factor.

11. Are There Winners?

Hard to tell at this point, but if everybody has to buy a new machine (unlikely) then, perversely, Intel might be a winner, because all those machines will need new chips. Speculating freely, I’d guess that cloud vendors would be winners. From a Google FAQ that reads like a marketing pitch:

Spectre and Meltdown are new and troubling vulnerabilities, but it’s important to remember that there are many different types of threats that Google (and other cloud providers) protect against every single day. Google’s cloud infrastructure doesn’t rely on any single technology to make it secure. Our stack builds security through progressive layers that deliver defense in depth. From the physical premises to the purpose-built servers, networking equipment, and custom security chips to the low-level software stack running on every machine, our entire hardware infrastructure is Google-controlled, -secured, -built and -hardened.

In other words, a monoculture inside a walled garden. But I can see management finding such a pitch very attractive.

12. Is Code a Lemon Market?

Yes, any market that sells code is a lemon market. From George Akerlof’s famous paper:

The Lemons model can be used to make some comments on the costs of dishonesty. Consider a market in which goods are sold honestly or dishonestly; quality may be represented, or it may be misrepresented. The purchaser’s problem, of course, is to identify quality. The presence of people in the market who are willing to offer inferior goods tends to drive the market out of existence -as in the case of our automobile “lemons.” It is this possibility that represents the major costs of dishonesty -for dishonest dealings tend to drive honest dealings out of the market. There may be potential buyers of good quality products and there may be potential sellers of such products in the appropriate price range; however, the presence of people who wish to pawn bad wares as good wares tends to drive out the legitimate business. The cost of dishonesty, therefore, lies not only in the amount by which the purchaser is cheated; the cost also must include the loss incurred from driving legitimate business out of existence.

For me, the essence of the personal computer is that it’s personal; and the same goes for my tablet, would go for my cell phone, if I had one, and would go for my machine in the Cloud, if I had one. Quoting from Google’s writeup on Spectre and Meltdown at Project Zero:

We have discovered that CPU data cache timing can be abused to efficiently leak information out of mis-speculated execution, leading to (at worst) arbitrary virtual memory read vulnerabilities across local security boundaries in various contexts.

To me, that means my personal data is no longer personal. That means that the processor in my PC (or tablet, or cell, or virtual machine) is not fit for purpose (even though somebody with more knowledge about it than I have sold it to me as being so). This is an inferior good sold to me by a dishonest (in this case, artificial) person under conditions of information asymmetry. It is a lemon. It’s just as much a lemon as a used car with a cracked engine block (J.B. Weld or no).


And so we have crapification at scale: The largest lemon market in the history of the world. As far as thoughts on policy, I confess that at present I have none, though I hope to work through these issues in future posts. It’s easy to say Intel is a ginormous monopoly and a monoculture; but fab plants don’t come cheap. It’s easy to say we should open-source our chip designs, but is IP really the main cost driver here? It’s easy to say software engineering shouldn’t be an oxymoron, but how is that to be accomplished? It’s easy to say that we should give up our devices — we got along perfectly well without them until about half-way through the neoliberal era — but what about our “standard of living”? Frankly, with billions — or is it trillions? — of insecure processors and devices out in the wild, the great bulk of them lemons and none likely to be recalled, it’s hard to see what to do. Other than put our noses to the digital grindstone and patch, patch, patch.


[1] Here’s material on patches in the financial industry, from Dark Reading:

Take the FS-ISAC, the financial services industry organization that shares threat intelligence among banks and other financial institutions, which said it’s well aware of the possible performance and productivity hits and costs, as well as testing, for the processor patches.

“There will need to be consideration and balance between fixing the potential security threat versus the performance and other possible impact to systems,” the FS-ISAC said in a statement last week. Cloud-based and shared, virtualized platforms, are likely to be more at risk than dedicated servers and endpoints.

William Nelson, president and CEO of FS-ISAC, says while Meltdown and Spectre “are a big deal,” the good news is that it’s a vulnerability discovery and has no known exploits in the wild as yet, which gives financial institutions some breathing room to assess and analyze their risk and any performance tradeoffs with patching.

I think alert reader Clive can translate this much better than I can; but I’d certainly like to know more detail about that mysterious “balance” of which the FS-ISAC speaks.

[2] More from We Live Security:

Now I can hear already someone say “What kind of sensitive data can be stolen from my Wi-Fi-controlled light? Or my refrigerator? Or from my digital photo frame? Or from my Smart TV?” The answer is simple: lots. Think about your Wi-Fi password (which would make it possible for anyone to get onto your local network), your photos (luckily you only put the decent photos on the digital photo frame in your living room, right? Or did you configure it to connect automatically to Instagram or DropBox to fetch your newly-taken pictures?), your credentials to Netflix? Your… Eh… There is a lot of information people nowadays store on IoT devices.

[3] See this important, classic paper from Ken Thompson: “You can’t trust code that you did not totally create yourself. (Especially code from companies that employ people like me.)”

[4] A post at Planet Mainframe that said the following has been retracted:

For mainframe IT shops, there’s some good news, but bad news as well. The good news is that the folks at IBM long ago put protections in place for things like out-of-order executions and other security risks. Doubly good news is that with mainframe hardware memory encryption, you’re in pretty good shape either way. The bad news is that your consoles may be vulnerable, especially if they’re x86-based, and they connect to your mainframe systems; so you need to pay special attention there.

Updates welcome!

[5] And indeed a kind reader sent me a heads-up, which IIRC I published in Water Cooler, but for which I am too lazy to find the link just now.

This entry was posted in Free markets and their discontents, Guest Post, Market inefficiencies, Risk and risk management, Surveillance state, Technology and innovation on by .

About Lambert Strether

Readers, I have had a correspondent characterize my views as realistic cynical. Let me briefly explain them. I believe in universal programs that provide concrete material benefits, especially to the working class. Medicare for All is the prime example, but tuition-free college and a Post Office Bank also fall under this heading. So do a Jobs Guarantee and a Debt Jubilee. Clearly, neither liberal Democrats nor conservative Republicans can deliver on such programs, because the two are different flavors of neoliberalism (“Because markets”). I don’t much care about the “ism” that delivers the benefits, although whichever one does have to put common humanity first, as opposed to markets. Could be a second FDR saving capitalism, democratic socialism leashing and collaring it, or communism razing it. I don’t much care, as long as the benefits are delivered. To me, the key issue — and this is why Medicare for All is always first with me — is the tens of thousands of excess “deaths from despair,” as described by the Case-Deaton study, and other recent studies. That enormous body count makes Medicare for All, at the very least, a moral and strategic imperative. And that level of suffering and organic damage makes the concerns of identity politics — even the worthy fight to help the refugees Bush, Obama, and Clinton’s wars created — bright shiny objects by comparison. Hence my frustration with the news flow — currently in my view the swirling intersection of two, separate Shock Doctrine campaigns, one by the Administration, and the other by out-of-power liberals and their allies in the State and in the press — a news flow that constantly forces me to focus on matters that I regard as of secondary importance to the excess deaths. What kind of political economy is it that halts or even reverses the increases in life expectancy that civilized societies have achieved? 
I am also very hopeful that the continuing destruction of both party establishments will open the space for voices supporting programs similar to those I have listed; let’s call such voices “the left.” Volatility creates opportunity, especially if the Democrat establishment, which puts markets first and opposes all such programs, isn’t allowed to get back into the saddle. Eyes on the prize! I love the tactical level, and secretly love even the horse race, since I’ve been blogging about it daily for fourteen years, but everything I write has this perspective at the back of it.


  1. Louis Fyne

    Meltdown/Spectre = another reason to disable/toggle javascript when you are online.

    Firefox and Chrome addons make javascript toggling easy, can’t say about Edge or Opera.

    (eg Firefox + the NoScript or Quickjava addon)

    1. ChiGal in Carolina

      For some reason the email (AT&T via Yahoo) on my Sony laptop (Windows 7 using only Microsoft security and Chrome browser) recently demanded I turn on Java to function. Same for my mom who uses Gmail (which I only use on my Android phone) and has a Lenovo with Windows 10 and also uses Chrome.

      I sign out of my email when not actively using it, but how the heck can I avoid having Java on when connected to the internet if email is what is requiring me to use it? Aren’t I connected to the internet when using email?

      Thanks for any clarification you can provide!

      1. Louis Fyne

        generally mail websites need javascript to work. “NoScript” has a whitelist where it will automatically allow javascript from legit./brand name sites.

        should mention that disabling javascript generally is overkill for surfing security—and primarily for blocking autoplay video adverts and pop-ups.

        easiest recommendation is to download Firefox and download the “noscript security suite” addon. If you like it, you like it. If you don’t, you don’t

        1. ChiGal in Carolina

          Thx! To clarify, when you say
          Meltdown/Spectre = another reason to disable/toggle javascript when you are online
          you are not suggesting disabling completely as that is Overkill.

      2. Wisdom Seeker

        Re: “how the heck can I avoid having Java on…”

        Get better email?

        At least there’s a competitive market in email services.

    1. Lambert Strether Post author

      Yes. The simultaneous discovery of the problem by independent researchers (even if the architecture had been lying in plain sight) is really interesting. So far as I can tell, it really is coincidental and happens a lot in intellectual history and history of technology, so I didn’t put it under the arbitrage section.

      1. Oregoncharles

        The “coincidence” is usually because some prior requirement has been met, or in the case of science, because data have piled up, often failures in the old scheme. The joint discovery of evolution is the classic example.

        But I wouldn’t have the slightest idea how that works in the world of IT.

        1. Mikkel

          That’s exactly what happened. When I first read about these issues I was confused because it had nothing to do with extracting the information. The actual flaw around the processing is easy to understand but the reading of data isn’t.

          I had to read the paper to learn that the big issue is that the data can be read by a very complex process called a cache side-channel attack. This attack is very ingenious and was only demonstrated as viable relatively recently.

          Once that key vulnerability was confirmed then it was just a matter of time before people figured out how to manipulate the information that went into the cache and explains the joint discovery.

      2. Hana M

        Quite. And I wonder whether the flaws would have been discovered/uncovered as soon had Linux not been open source.

        By the middle of the year, the Graz researchers had developed a software security patch they called KAISER that was designed to fix the KASLR break. It was made for Linux, the world’s most popular open-source operating system. Linux controls servers — making it important for corporate computing — and also supports the Android operating system used by the majority of mobile devices. Being open source, all suggested Linux updates must be shared publicly, and KAISER was well received by the developer community. The researchers did not know it then, but their patch would turn out to help prevent Meltdown attacks.

        The Graz team’s attitude quickly changed, though, as summer turned to fall. They noticed a spike in programming activity on their KAISER patch from researchers at Google, Amazon and Microsoft. These giants were pitching updates and trying to persuade the Linux community to accept them — without being open about their reasons sometimes.

  2. cm

    You’re falling for Intel’s spin. Spectre (Intel-only) is far more serious than Meltdown (Intel/AMD/PowerPC/etc). Spectre can be implemented via javascript, so browsing becomes an even riskier activity.

    To date, most articles fall for Intel’s spin. Intel is trying to bring AMD down by grossly overstating the risk of Meltdown. One also wonders about Microsoft’s fundamental inability to test their patches with AMD machines. We know Microsoft has colluded with Intel in the past.

    Here is the definitive technical paper from Google:

    1. Lambert Strether Post author

      Do consider reading the post. In no way do I suggest that Meltdown is more serious than Spectre, unless you think “years of remediation” somehow translates to “less serious.” I also explain JavaScript. And the link to your Google paper is also linked to in the post. Do better than value-free ad homs, please.

    2. Samuel Bierwagen

      Meltdown is the one that’s Intel-only. Spectre is the bug that affects speculative execution, not out-of-order execution, and is the one that affects Intel, AMD and some ARM chips. (PowerPC chips haven’t been used in consumer machines since 2006. Spectre does affect POWER chips, which are almost exclusively used in IBM mainframes.)

      Fun fact: The Raspberry Pi 1 and Pi Zero aren’t affected by either bug, because the ARM11 chip series didn’t have any superscalar features at all:

      1. Marlin

        Yes, but Meltdown is the more serious bug. Meltdown allows an attacker to get out of user process memory into the kernel. Spectre so far seems to “only” be able to read memory assigned by the kernel to the process.

        1. Lambert Strether Post author

          We quibble on “serious.”

          I agree that the immediate effects of Meltdown are worse. But since Spectre is a class of bugs (or a strategy for exploiting bugs), we will be dealing with Spectre for years to come (and with a lot of bad actors working out how to exploit it). That makes Spectre the more serious flaw in my mind, in the same sense that lung cancer is worse than, say, malaria (assuming that quinine is available to the infected population). I think the chronic and persistent is worse than the acute and curable.

  3. rd

    I disagree that this is crapification. This is a very complex design that turns out to have a fundamental bug at the heart of it. This is more like earthquake design, where bigger and more complex things are being built before it is fully known how every aspect of every component will behave. The fact that it took years of many people working and programming with these chips before somebody figured out there was an issue indicates how unobvious it is. The good news is that we did not have the computer equivalent of the October 1987 crash, the 2008 financial crisis, or the 1971 San Fernando earthquake in order to identify it as an issue.

    Unfortunately, this is how engineering design progresses. The good news to me was that there was a small technical group inside the affected community of large companies that was diligently working on making fixes before it became publicly known instead of just selling or shorting the stock (hello, Intel CEO Krzanich). I haven’t seen any evidence that somebody on the inside knew about it and tried to just cover it up or ignore it (e.g. VW emissions, GM ignition switches, subprime mortgages).

    1. Lambert Strether Post author

      Perhaps you could come up with another term, then, for “one of the worst bugs ever found,” one that affects billions of devices? It’s not like the possibility was unknown, so I think your earthquake analogy is false. From the Bloomberg timeline:

      Researchers began writing about the potential for security weaknesses at the heart of central processing units, or CPUs, at least as early as 2005. Yuval Yarom, at the University of Adelaide in Australia, credited with helping discover Spectre last week, penned some of this early work.

      (Granted, “at the heart of central processing units” is vague). I would bet money there are earlier examples, too.

      I think the better example would be Fukushima, with Intel standing in for TEPCO.

      From a discussion with Richard Smith:

      LAMBERT: And if (again as Thompson suggests [see Note 3]) the only trustworthy code is code you write yourself, can we even be said to be doing “engineering”? Can you really do engineering with a rubber yardstick?

      RICHARD SMITH: These qs have clear answers (no, both times).

      I can’t see it ever changing, to be honest. There doesn’t seem to be any prospect of an agreed and enforceable abstraction level, beyond ‘the transistor’. Sometimes I wonder if the demise of Moore’s law would shift attention from speed to trustworthiness, but I can’t really see why it would inevitably do any such thing.

      1. vlade

        TBH, on the “trustworthy code” – it goes down to the HW level, I’d say. That said, the real bugger is that we’re now more or less “required” to be constantly attached to the internet. The worst a virus could have done 20 years ago was to destroy your HD. Not that it was a small thing – but people whose livelihood depended on that HD not being destroyed could take some steps.

        With the internet and an always-on connection, your life can be destroyed and you’d not even know it for quite some time. But again, this is a fundamental problem with the internet – it has been architected with zero privacy and security (except against being bombed out) in mind.

      2. mikkel

        I think the better example would be Fukushima, with Intel standing in for TEPCO.

        I disagree. The pioneers of nuclear power knew that the Fukushima design was fundamentally unstable. Even the inventor of the light water boiling reactor said explicitly that it should never be scaled up to utility size.

        There was a conscious decision to ignore a fundamental principle of physics and attempt to contain it with a series of extreme engineering measures whose performance was impossible to prove.

        These flaws are more akin to discovering a new scientific principle that invalidates the original design criteria, which in part is why it’s so massive. These differences have tons of implications relating to governance, industry oversight, organizational design, etc.

        1. Lambert Strether Post author

          > These flaws are more akin to discovering a new scientific principle

          That seems like a strange model for how corporations behave; I agree “akin,” but clearly Intel’s program of research isn’t “academic”, for want of a better word. Disinterested science is not going on — which is one reason the bugs took twenty years to be implemented! (I’m not willing to say “discovered” until I better understand the design trade-offs that were made at the time.) However, it does seem, if you look at the history of technology from the beginning of neoliberalism onward, that when “privacy” (property rights in data?) was weighed in the balance, it was always found wanting.

        2. flora

          Thanks. This is nothing like the early Intel Pentium FDIV floating-point bug, imo. A floating-point calculation error in a CPU! That Intel tried to poo-poo as nothing. Now that was crapification in the usual sense of the word.

          In this instance the CPUs work as designed, and while the potential ‘leakage’ has been known for a long time, the possibility of an exploit seemed too remote to worry about… until a recent Proof of Concept (PoC). Once the PoC was shown, all the chip vendors started work on mitigations, quietly, so as not to alert the bad-guy hackers before mitigations were in place. The Register sort of jumped the gun on this one, I think. Now the orderly rollout has become a bit of a scramble.


      3. Oguk

        I agree with the main point, that whatever it is, it’s not crapification – as Mikkel says below “Crapification should refer to lack of quality for the purpose of forcing consumers to buy more or having higher profits.”.

        I think this falls in the area of uncharted territory, like GMOs – unknown unknowns. The Precautionary Principle would require withholding the release of such organisms into the wild, but as discussed, the horse has left the gate. Perhaps all software-enabled devices should come with something equivalent to the Surgeon General’s warning?

    2. Mikkel

      I totally agree. Crapification should refer to lack of quality for the purpose of forcing consumers to buy more or having higher profits.

      In this case the flaws have been around for decades and were only just noticed because of their complexity. I believe that there shouldn’t be any class action suit either, assuming that proper protocols were followed once they were made aware of it.

      The lesson here is actually about the nature of complex systems and increased vulnerability that society is embedding in its infrastructure.

      It’s dangerous to lump this in with other failures that have different sources of culpability. Banks were abusing math to reap profits off subprime; GE knew that the Fukushima reactor was fundamentally unstable before it was built but gambled it could be controlled; DuPont covered up the effects from Teflon production; planned obsolescence is deliberate, etc.

      Society needs much more engagement and enforcement around complexity and the first step is to accurately type it.

      1. Tom Bradford

        “GE knew that the Fukushima reactor was fundamentally unstable before it was built but gambled it could be controlled;”

        Slightly unfair. Fukushima failed because it was hit by a devastating, low-probability outside event. Had that not happened, we don’t know whether GE’s ‘gamble’ would have paid off, as it has at other light water reactors.

        Yes, you can argue that building it where tsunamis were slightly more likely than far inland was less than wise, and that it took inadequate precautions against the possibility, but that’s easy to do with hindsight and doesn’t relate to the technology itself.

        1. The Rev Kev

          Not the full story. Japanese geologists found evidence of past tsunamis that, had they recurred, would have slammed Fukushima, and they informed the power company of their findings, which were duly ignored. That company knew the risks but ignored them anyway and took no measures to build a fail-safe at that site in case something happened. Neoliberalism Rules!

          1. Tom Bradford

            Agreed. My point is only that Fukushima didn’t occur because of a fundamental flaw built into the nuclear technology itself. Hence it can’t be equated with the Intel situation. Indeed, the Intel situation is only a ‘flaw’ because a few bright losers chose to stick their fingers up at the rest of us by exploiting a weakness.

        2. mikkel

          Watch “A is for Atom” by Adam Curtis, where he talks to multiple pioneers who were working on the system. They are very clear that there was no evidence the safety systems would work in the case of coolant failure.

          You’re right that the event was low probability, but given enough time and locations, the overall probability that something would happen leading to loss of coolant is rather high.

          In the documentary, Alvin Weinberg — the inventor of the technology — states that looking back, he saw that it should have been a collective social decision about whether the risk was worth it. He died before Fukushima, so we don’t know what he would have said as it happened.

          This isn’t a trivial point. There are other nuclear designs that are different on a fundamental level and would not have a meltdown even with coolant removed. Perhaps if they had become the standard then it would have changed the nature of our whole power system and the fate of climate change.

      2. Lambert Strether Post author

        > I totally agree. Crapification should refer to lack of quality for the purpose of forcing consumers to buy more or having higher profits.

        Your view is, then, that people aren’t forced to buy from oligopolies? Or that what they are forced to buy isn’t constrained by the ability of those oligopolies to structure the market? And your view is that Intel is engineering with a view toward lower profits? (How do their “stakeholders” feel about that?)

        All these seem odd.

        1. Skip Intro

          I think crapification also implies a pre-crapified state. I don’t see how these vulnerabilities can be attributed to cutting corners, or the other typical sources of crapified versions of something. I think this is much more a matter of hubris, as we get very far out over our skis with the complexity of our infrastructure.

          p.s. I suspect the post has dropped a penultimate ‘e’ from the name of IT security god Bruce Schneier.

    3. OhBilly

      Thank you, I was going to post something very similar. This is not crapification. This is sort of an inevitability due to the demands of the market. Intel and others are in sort of a catch-22 here since they are making a LOT of money selling insecure devices, but if they had tried to do a perfect testing job before release, they would never have made it past the 8085 and competitors who were willing to sell insecure stuff would have driven them out of existence.

      These chips have billions of transistors in them, arranged as all sorts of logical elements forming extremely complex subsystems. Given the incredibly rapid pace of product development, it is sort of inevitable that bad stuff like this exists. So on the one hand, chip makers are selling a product that is not tested for 100% of all possible exploits (despite the testing that is done being extensive), but on the other hand the market demands advances to be rapid enough that no chip maker could stay in business if they sat around running every possible permutation of logic attacks.

      Then there is the entire debate of whether or not it is even possible to actually test for all attacks before releasing a chip. The software side is evolving even faster than the silicon side, and many of the exploits are only discovered through a combination of OS functionality and hardware architecture. Similar complaints are constantly levied against Microsoft Windows. Here and there the bugs are due to gross negligence, but many of them are incredibly esoteric exploits that come from avenues that nobody had ever considered (but since there are millions of hackers not on Microsoft’s payroll, any exploit will eventually be found). Again, a company is selling an extremely complex product that is not “fully” tested, while at the same time the company would probably be in the dust bin of history if they tried to sell a perfect product. This is usually about the time that Mac and Linux users jump in and claim that their choice of OS can do no wrong, which is hilariously silly.

      1. Jamie

        You make a compelling case for cutting Intel some slack. But it brings into focus that there are really two senses of ‘crapification’. One is the sense in which a company makes a deliberate decision to offer substandard wares without concern for the welfare or user experience of the consumer. In our current mass market some companies are quite willing to piss off consumers because there are always more suckers who haven’t yet purchased the crap, and repeat customers and treating consumers as customers are no longer a thing… or because there is no transparency about where the goods come from, so no bad rep attaches to the company… or because the exercise of monopoly power makes the company immune from customer complaint… or…

        But there is another sense in which ‘crapification’ means exactly what you are describing… i.e., an inherent trend in competitive markets to reduce the quality of goods over time in order to survive in the marketplace. Under this second sense of the term, one cannot fault an individual company that is honestly trying to make a decent product for succumbing to the pressures of the market. It is not a deliberate choice to offer substandard goods. It is more a lowering of standards in general that is inherent in the system and that is not the fault of any one company or its officers. I submit that it is ‘crapification’ still. Just not in the same sense as, say, planned obsolescence, and cannot be blamed on any particular company or discovered in any trail of corporate memos.

    4. Paul Jurczak

      I also strongly disagree with the term “crapification”. I’m not a fan of Intel’s near monopoly, but we are dealing here with the consequences of immense design complexity, not corporate malfeasance.

      Speculative execution is a feature of all high-performance CPUs today, regardless of their manufacturer. Until a bunch of security researchers spent considerable man-years finding an exploit, there was no clear indication of a problem. If you dig deep into the technical descriptions, you will appreciate the high sophistication of these exploits.

      There is no such thing as a secure CPU outside of marketing speak. Tools and methods to design and validate 100% secure computer hardware do not exist and will not exist for at least a while. This is the painful reality.
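      For readers wondering why every high-performance CPU leans so hard on branch prediction and speculation, here is a small, hedged C demo (the timings printed are machine-dependent, so take the magnitude, not the numbers): the identical loop typically runs several times faster once its branch becomes predictable.

      ```c
      /* Illustrative only: same work, same answer; sorting the data makes the
         branch predictable, which on most hardware makes the loop much faster.
         Compile without heavy optimization to see the effect clearly. */
      #include <stdio.h>
      #include <stdlib.h>
      #include <time.h>

      enum { N = 1000000 };

      /* Sums the elements >= 128; the `if` is the branch the predictor must guess. */
      static long sum_if_big(const int *data, int n) {
          long sum = 0;
          for (int i = 0; i < n; i++)
              if (data[i] >= 128)
                  sum += data[i];
          return sum;
      }

      static int cmp_int(const void *a, const void *b) {
          return *(const int *)a - *(const int *)b;
      }

      int main(void) {
          static int data[N];
          srand(42);
          for (int i = 0; i < N; i++) data[i] = rand() % 256;

          clock_t t0 = clock();
          long unsorted_sum = sum_if_big(data, N);   /* branch outcome is random   */
          clock_t t1 = clock();
          qsort(data, N, sizeof data[0], cmp_int);
          clock_t t2 = clock();
          long sorted_sum = sum_if_big(data, N);     /* branch outcome predictable */
          clock_t t3 = clock();

          /* Same answer both times; only the microarchitectural behaviour differs. */
          printf("unsorted: %ld (%.0f us), sorted: %ld (%.0f us)\n",
                 unsorted_sum, (double)(t1 - t0) * 1e6 / CLOCKS_PER_SEC,
                 sorted_sum,   (double)(t3 - t2) * 1e6 / CLOCKS_PER_SEC);
          return sorted_sum == unsorted_sum ? 0 : 1;
      }
      ```

      That performance gap, multiplied across billions of chips, is the “billions of dollars in additional hardware” the industry avoided by speculating, and the reason nobody wanted to give it up.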

  4. Jason Boxman

    The lack of consequences for executives at companies that lose personal data to theft predates this epic vulnerability. If the Equifax incident didn’t cause enough outrage to produce laws that hold executives accountable for failing to keep citizens’ data secure, I’ll be impressed if much of anything else does. Maybe if embarrassing stuff about large numbers of Establishment members comes to light, that might trigger some movement on this.

    I’ve always felt that any data on my computer may be subject to theft, given that I’m always connected to the Internet. The only surefire solution is to keep a device disconnected; I don’t think you can get a laptop or other device without wireless anymore, so that’s perhaps a hard thing to do today.

    Thanks for the detailed coverage of this.

    1. Amfortas the Hippie

      given the origins of the web/internet, etc. I’ve always suspected claims that we can be safe.
      during the Bush Darkness, it was confirmed.
      Since that time, my foundational assumption regarding the internet machine is that everything i do is bagged and tagged.
      after the first revelations of massive surveillance, I was as freaked out as anybody…but then i got over it, and really don’t give a hoot if somebody(nsa(waves) or scriptkiddie) learns what porn i like or what blog i’m reading today.
      Of course, we don’t use this tech like most folks do…occasional e-commerce, with a dedicated debit card that usually has like a dollar in the account(put it in when i need to take it out), and a bank manager aunt who gives heads ups; all our imperial entanglements(like ssi or whatnot) is by phone and carrier snail(i never allow our ss#’s to be typed into one of these machines, and have zero credit score any way); and i’d rather live by beeswax candles and a cold cellar than have any of my light bulbs or appliances capable of listening in to me, or(shudder) accessing a network.
      Ed Gibbons said, “our desires and possessions are the strongest fetters of despotism”. That was true 2000 years ago, and it’s just as true now. Convenience and hyperefficiency led us here.
      Principiis obsta; caveat ruinam.

      to my luddite mind, this looks like the equivalent of a super virus/botnet/trojan…just another avenue for hacking by various and sundry bad actors. the hijacking of my mom’s machine by microsoft(win10) proved that none of these boxes of miracles are really “ours”.
      so I back up the stuff i care about in an air gapped machine and various sticks, and approach all things internet just like i approach going into a seedy part of Houston at night to get to the cool blues club: don’t bring anything you couldn’t part with, and be as mindful of your surroundings as possible.
      I wish a sincere “good luck” to all who are more embedded than i.

    2. Octopii

      It’s not malfeasance, so why the witch hunt? No, it’s not good to discover a fundamental CPU architecture problem so far down the line. However, it has nothing to do with information security policy, or Equifax, or corporate executives, or The Establishment. It’s not crapification. It’s a result of a long history of successfully pushing a limited instruction set architecture far beyond its original intent, for various significant reasons. There’s nobody to string up and punish.

    3. Johnny Pistola

      “Frankly, the billions — or is it trillions — of insecure processors and devices out in the wild, the great bulk of them lemons and none likely to be recalled, it’s hard to see what to do. Other than put our noses to the digital grindstone and patch, patch, patch.”

      How about severe criminal prosecution for anyone (including gov’t agencies) caught engaging in hacking? Relying solely on technical remedies is like providing the public with bullet proof vests while treating murder as a misdemeanor.

  5. duffolonious

    Re: fixes – if you read Greg Kroah-Hartman’s blog post – the Spectre fix answer is “no” for Linux.

    And here is the handy tool I’ve been using (this example is with CentOS 7.4’s latest kernel, with the Meltdown fix):

    $ git clone
    $ cd spectre-meltdown-checker
    $ sudo ./
    Spectre and Meltdown mitigation detection tool v0.16

    Checking vulnerabilities against Linux 3.10.0-693.11.6.el7.x86_64 #1 SMP Thu Jan 4 01:06:37 UTC 2018 x86_64

    CVE-2017-5753 [bounds check bypass] aka ‘Spectre Variant 1’
    * Kernel compiled with LFENCE opcode inserted at the proper places: YES (112 opcodes found, which is >= 70)

    CVE-2017-5715 [branch target injection] aka ‘Spectre Variant 2’
    * Mitigation 1
    * Hardware (CPU microcode) support for mitigation: NO
    * Kernel support for IBRS: YES
    * IBRS enabled for Kernel space: NO
    * IBRS enabled for User space: NO
    * Mitigation 2
    * Kernel compiled with retpoline option: NO
    * Kernel compiled with a retpoline-aware compiler: NO
    > STATUS: VULNERABLE (IBRS hardware + kernel support OR kernel with retpoline are needed to mitigate the vulnerability)

    CVE-2017-5754 [rogue data cache load] aka ‘Meltdown’ aka ‘Variant 3’
    * Kernel supports Page Table Isolation (PTI): YES
    * PTI enabled and active: YES
    > STATUS: NOT VULNERABLE (PTI mitigates the vulnerability)

    The fixes for Spectre sound interesting, but there hasn’t been enough testing to put them on production systems.
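    For what it’s worth, newer kernels also expose mitigation status directly in sysfs, under /sys/devices/system/cpu/vulnerabilities (an interface added, if I recall correctly, around kernel 4.15, so treat that version as an assumption; the checker script works on older kernels too). A minimal C sketch that reads it:

    ```c
    /* Hedged sketch: prints the kernel's own view of Meltdown/Spectre
       mitigation status from sysfs. On kernels without the interface
       it degrades gracefully instead of failing. */
    #include <stdio.h>

    /* Reads /sys/devices/system/cpu/vulnerabilities/<name> into buf.
       Returns 1 on success, 0 if the entry is absent. */
    static int read_vuln(const char *name, char *buf, size_t size) {
        char path[256];
        snprintf(path, sizeof path,
                 "/sys/devices/system/cpu/vulnerabilities/%s", name);
        FILE *f = fopen(path, "r");
        if (!f) return 0;
        int ok = fgets(buf, (int)size, f) != NULL;
        fclose(f);
        return ok;
    }

    int main(void) {
        const char *names[] = { "meltdown", "spectre_v1", "spectre_v2" };
        char line[256];
        for (int i = 0; i < 3; i++) {
            if (read_vuln(names[i], line, sizeof line))
                printf("%s: %s", names[i], line);   /* line keeps its newline */
            else
                printf("%s: status interface not available\n", names[i]);
        }
        return 0;
    }
    ```

    The equivalent one-liner is just `grep -r . /sys/devices/system/cpu/vulnerabilities/` — same information, straight from the kernel rather than inferred by a script.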

    1. Synoia

      Spectre and Meltdown mitigation detection tool v0.16

      Call me when it’s at v2.0 or higher.

      It’s a good effort. However, anything below v1.0 is generally a prototype.

      V1 is what you get in the time allotted. It neither works nor is complete.
      V2 fixes V1 so that it works. It’s not complete, and is probably unusable.
      V3 is what one would have liked for V1; it both works and is usable. It also arrives 18 months to 2 years after V1.0.

      Writing software is like having a baby. Both lack maturity at birth, but babies are cute and loved.

    1. Lambert Strether Post author

      Can you provide your assessment of the link? (In general, though, see “Reflections on Trusting Trust,” at note 3. Every time I see the word “compile,” I think of that article.)

      1. Maurcie Hebert

        Intel’s Management Engine and AMD’s Platform Security Processor are riddled with remotely exploitable bugs such as buffer overflows and illegitimate privilege escalation. These “trustworthy computing” features were specified ostensibly to allow remote command and control of fleets of computers behind a corporate firewall, and even as an anti-theft mechanism for laptops. Intel’s ME, at least, contains obfuscated code, some of it hard-wired in ROM, to make the mechanism practically impossible to reverse engineer and security audit.
        AMD’s PSP also had vulnerabilities announced recently. I am not sure whether they are quite as severe as Intel ME’s, but like Spectre the vulnerabilities are at an early stage of discovery, so it is difficult to say this is as bad as it will get for AMD. These security holes are much more easily exploitable than Meltdown or Spectre, and like those are baked into systems in a way that makes them very difficult to fully remediate.

        Here are some top-level bullets for Intel ME:

        – Every recent Intel based system is running a security backdoor. There is an undocumented workaround provided to the security community known as “NSA HAP” mode.

        – The backdoor is very difficult (one Slashdotter said “next to impossible”) to decode or reverse engineer.

        – The backdoor is active even when the system is not powered on

        – Onboard ethernet and wifi are part of the backdoor

        – The backdoor uses encrypted communication internally

        – Recent backdoors run [signed] Java applets

        – It appears SIGINT agencies have the certs needed to exploit these backdoors

        – The “Vault7” release forced Intel to report the vulnerability of a web authentication backdoor in the backdoor. Exploitable versions authenticate with a zero-length password

        My reading of these (I admit that I have not read *all* of the underlying CVEs) is that the far worse trustworthy-computing exploits have long been known by SIGINT.

        1. Self Affine

          With respect to “NSA HAP” and the Intel ME, just google “nsa hap intel backdoor” and you will find a solution (it’s claimed) to disable the ME.

          Here is a quote from the site:

          “The researchers discovered an undocumented field called “reserve-hap” and that HAP could be set to “1” for true. Apparently, the NSA wanted to ensure the agency could close off any possible security risk by disabling Intel ME. The researchers wrote, “We believe that this mechanism is designed to meet a typical requirement of government agencies, which want to reduce the possibility of side-channel leaks.”

          “Oh! What a tangled web we weave, when first we practice to deceive” [Sir Walter Scott]

          We are definitely in the old Mad Magazine Spy vs. Spy world. Not for the average user I would expect.

  6. XXYY

    It’s hard to say whether people are reacting appropriately or overreacting.

    On the one hand, security problems in computerized systems (and mechanical and every other type of system) are nothing new. We have been finding, and where appropriate fixing, them for as long as anyone can remember. The two problems discussed here are at least somewhat hard to exploit by their technical nature and because you need carefully tailored malicious code running on the target system. So from a certain perspective, these vulnerabilities in and of themselves are not worse than things we have already seen. E.g., the Equifax breach seems vastly more damaging in terms of the actual release of personal information, but that seems to have faded from the news after a few weeks.

    Perhaps the interesting/scary thing about these vulns is that (a) they are baked into the device silicon rather than being SW problems that can be solved by patching, and so are comparatively resistant to being fixed, and (b) they are extremely widespread since they involve widely used architectural features of modern CPUs. But hardware-based vulnerabilities are not new under the sun, either.

    BTW, I think we should try to avoid calling the designers of the vulnerable systems sloppy, lazy, stupid, uncaring, or whatever (not referring to anyone here, just the general tone in the press). As we are seeing, speculative branch execution is an important optimization that seems to have added a 5 to 30 percent speedup and saved the world billions of dollars in additional hardware. I have no doubt it was implemented in good faith by brilliant people. It’s easy to come along later and say that the obscure, arcane abuse of complex features by bad actors should have been foreseeable. I doubt most of the people in the media and elsewhere making this charge could themselves have foreseen it, or even understand the details of the technical picture at all for that matter.

    Security is extremely hard to get right, especially as our systems become more and more complex. This is not to say we shouldn’t try!

    1. Wisdom Seeker

      Security is extremely hard to get right, especially as our systems become more and more complex. This is not to say we shouldn’t try!

      You came so close, and then missed… the point should be NOT to make the systems more and more complex. It might NOT be “better” to have 5 or 10 or even 30% faster execution on a single processor, if the added complexity makes it possible to hack the hardware.

      I would also argue that “Internet of Things” is NOT better, either. Not without genuine data security, which we don’t have, and may never have given the IT-industry mindset and incentive structures.

    2. Tom Bradford

      I have some empathy with XXYY’s stance – to expect the designers of hideously complex technology to foresee and protect against every possible exploit is unfair. Do we castigate the designers of the locks on our front doors or motor cars if some reasonably competent thief with the right tools is able to break in? Yes, you can have a thief-proof lock if you’re prepared to pay for it in cash and inconvenience. Otherwise it’s a trade-off, cash and convenience against risk, and if you’re sensible you recognise the risk and do what you can reasonably do to mitigate it.

      “That means that the processor in my PC (or tablet, or cell, or virtual machine) is not fit for purpose (even though somebody with more knowledge about it that I have sold it to me as being so). This an inferior good sold to me by a dishonest (in this case, artificial) person under conditions of information asymmetry.”

      I think Lambert’s above comment is also slightly unfair. These vulnerabilities are so esoteric that very few, if any, sales staff, their managers, or even employees of the manufacturers could be expected to be aware of them. Moreover, as most of us are aware that there are always scum seeking to exploit the technology at all levels, it behoves us to take what precautions we can, just as we would recognise it as foolish to leave our life’s savings in a jar on the kitchen table.

  7. lyle

    This situation is more a reflection that chips are designed by human beings (at least at a high level), and humans often overlook things and also don’t know the long-term effects of changes they make. It is in no sense crapification but rather just humans making human mistakes. There are similar mistakes in every field of human endeavor: thinking of civil engineering, there are various bridges that fall down either during construction/reconstruction or just at random. Walkways fall in hotels, houses are built on landslides, etc. (None of these things are intended by the designers or builders; they are the result of overlooking some known or unknown factor.)
    Of course, the larger feature is that as we make things more and more complex, the more likely things will come back to bite us.

    As Henry Petroski states in the title of one of his books, “To Engineer is Human” (a book about engineering failures, which these bugs qualify as).

    1. visitor

      I tend more towards an explanation as stated by XXYY above.

      There is a dictum that “a system cannot be retrofitted with security, it must be designed with it right from the start”.

      The fact is that, for the past 50 years, computer engineering gave priority to performance, with other aspects like power consumption, miniaturization or backwards compatibility coming next, and security coming afterwards or offloaded to the operating system and application software. Hence the importance of techniques such as speculative execution and out-of-order execution, which, without proper thought for security aspects (which are quite involved), result in those exotic attacks.

      Of course, there have been attempts at co-designing hardware, firmware and OS to ensure security. Historically, a major one occurred in the late 1960s with Multics and its associated hardware; the experience showed that it was a complex endeavour, the performance penalty was severe, and the resulting systems costly. Just how severe that penalty can be is demonstrated by the mitigation of Meltdown alone implying a 5% to 30% slowdown. Try to address security at all levels of the hardware, and the performance impact may well be one or more orders of magnitude greater.

      In the following decades, performance and price were given much more importance over security — and still are, as we have seen not just with Spectre and Meltdown, but also with UEFI, the Intel on-chip Management Engine, and assorted tricks that favour convenience (for Intel or PC manufacturers) over security. Does that count as crapification? Perhaps. But a design policy emphasizing, say, power consumption over security would also have had a detrimental effect on security. They want it fast, cheap and secure; pick two. So far, the combination “cheap and fast” was selected. Hence Spectre and Meltdown.

      1. Self Affine

        I agree – let’s not forget that Intel (and everyone else) lives in a hyper competitive consumer and production marketplace. Cheap and fast will always trump security in today’s computational/networked/data processing world.

        And it’s not only computer chips; the whole cloud paradigm depends on a dopamine-driven user feedback loop, which can only be achieved by sacrificing transactional integrity and coherence at the edges. So what – that’s where we are at in terms of personal convenience.

        All this gnashing of teeth seems misplaced – it’s just another reflection of the world we have constructed, and no amount of patches or hardware re-designs will fix that.

        1. Self Affine

          I almost forgot.

          I kind of disagree that this is crapification in the sense usually used here.

          There really is no known precedent for this kind of infrastructure problem in terms of scale, complexity and impact particularly because (due to the internet and modern communications technology) it transcends any and all social boundaries.

          In a weird way it reminds me of climate instability and change. Collateral damage everywhere.

          1. Lambert Strether Post author

            That’s why the headline reads “crapification at scale.” The scale is indeed new, and a scale only enabled by a chip monoculture, a ginormous monopoly, and a constellation of factors that have made privacy (private digital property?) impossible.

            But when you’ve got a billion chips (a trillion, if you count the IoT) that all got shipped with the “worst CPU bug” ever, it’s hard to think of a better word than “crapification,” although I’ll certainly entertain alternatives.

            1. XXYY

              The term “crapification” is a great addition to the lexicon. If I had to define it, it would be something like “the deliberate reduction of quality or value for the customer for the purpose of increasing profits.” Examples might be making airline seats narrower, charging customers a fee for what used to be an amenity, outsourcing a key task to people who don’t know how to do it, and so on. It’s all around us and it pervades modern life.

              However, not every seeming reduction in quality or value is crapification. Making what was a metal part out of plastic is frequently done to save weight or improve manufacturability, though it may also reduce cost, and it may also make it seem “cheap” to the user. Good engineering is very definitely about reducing costs, in balance with maintaining and improving other characteristics of the object. Cars, which used to be almost entirely metal, now have a huge plastic component, which saves weight, reduces corrosion, improves functionality, and eliminates a lot of painting operations. This to my mind is *not* crapification, even though it’s “making things out of plastic.”

              So I don’t think there’s a huge bright line between good faith attempts to improve things and crapification. Often it’s in the eye of the beholder. Often, people will disagree. Often, of course, the crapification is pretty blatant.

          2. Carla

            “In a weird way it reminds me of climate instability and change. Collateral damage everywhere.”

            Great analogy.

      2. Lambert Strether Post author

        I mostly agree with your comment, but I note the lack of agency in “priority was given”, “without proper thought”, “the experience showed”, “were given much more importance”, “they want it fast, cheap and secure”, and “was selected.” Who is the “they,” here?

        We’re dealing with interlocking oligopolies with enormous technical, financial, and political power. Perhaps we should give consideration to the idea that their power, and how it is exercised and to what purpose, should play a role in our analysis of public policy. We seem to be taking this lemon market as a given, something natural, when in fact it is structured by these same oligopolies, and to their advantage.

        1. visitor

          Who is the “they,” here?


          For manufacturers, time-to-market and cost advantages are their weapons in a competitive market. Taking time to ensure that a system is secure and coming late with an expensive and comparatively underperforming offering may be alright for a niche player at most.

          For customers (both individual and corporate), features, performance and cost are paramount. Few are interested in buying a hardened system if it means paying more, and getting less performance and fewer features.

          For insurance companies, it does not actually matter whether systems are provably safe — what counts is the tradeoff between the damages they have to compensate, and the risk of occurrence. In the present situation, tangible damages would be cumbersome to identify and prove, and the risk does not appear to be prohibitive.

          For governments, it would be a question of setting up legal requirements that only “fit-for-purpose” products be marketed, and, as a preliminary, setting up a competent organization to evaluate, determine and specify the norms to follow, check for violations, and, in the first place, weed out improper submissions in governmental calls for tenders.

          So all in all, the entire socio-economic system is responsible for those choices where you suspect a troubling lack of agency. Intel did not impose such decisions as the EU rules that “cheapest submission wins”, or trade agreements prohibiting safety regulations deemed as unacceptable market barriers. Insurance companies never refused to insure firms even when they acquired systems that were fundamentally unproven — they accepted the risk if their customers followed “standard security procedures” (such as installing antivirus and the like). The widespread lack of practical concern for privacy exhibited by the population has been documented several times — including in NC.

          As for “the experience showed” — look up those few projects that attempted to build security at the hardware level for general-purpose computing, such as Multics or Intel iAPX432. IBM had some projects too. These were projects pushed by big players. They were not terribly successful.

          It is possible to go another route. In railway signalling and aircraft embedded control software, there are (partial) regulations that force manufacturers to provide reliability guarantees. There are specification methods, programming languages, operating systems and verification tools that serve to prove (formally, mathematically) the properties of the systems being developed on special-purpose hardware with well-controlled characteristics. It requires developers trained in such methods. I was a bit involved in that area long ago — and it is far from a widespread know-how. That is also why I am convinced AI-based self-driving cars will never be rigorously certified under similar stringent conditions for general usage.

          1. XXYY

            Really nice analysis. Thanks for taking the time. It’s a good point that certain technical areas (aerospace, medical, etc.) really do make strenuous efforts to do reliability and (sometimes) security extremely well, and often succeed, though in general my impression is that this has all begun to unravel a bit over the last couple of decades.

            As for “the experience showed” — look up those few projects that attempted to build security at the hardware level for general-purpose computing, such as Multics or Intel iAPX432. IBM had some projects too. These were projects pushed by big players. They were not terribly successful.

            One possible counterexample is/was Windows NT, a start-from-scratch project by Microsoft to redesign the Windows O/S to implement a real security model (though not at the hardware level, AFAIK), primarily so they could bid for government projects. (Interestingly, the project included a team historian who ended up writing a book about the whole thing.) NT was the basis for all subsequent Windows O/S work, so this attempt to improve security *would* be considered terribly successful in my book.

            1. visitor

              Since Spectre and Meltdown are security issues originating at the hardware level, I only mentioned projects that attempted to design security right from the hardware up.

              As for the unravelling of quality in safety-critical systems, my impression is that older products were smaller, more self-contained, whereas newer ones have become so large that, while core functions continue to be verified exhaustively (with more modern, more efficient tools), they interact with a growing number of modules not deemed “critical”, and that are therefore less thoroughly validated. Thus, you might have a motor controller with all its sensors and actuators 100% specified and verified, but if the GUI serving to operate it did not undergo the same extensive validation, nasty surprises may occur. At least this is how I apprehend the state of the art from some distance.

    2. Lambert Strether Post author

      > There are similar mistakes in every field of human endeavor

      Really. Can you supply me a case where every single one of a billion manufactured products was defective, not fit for purpose?

      If there are a billion Gothic cathedrals, and every single one fell down, would you still be saying “Sh*t happens,” “engineers are just human,” “nothing to see here,” and “move along, move along, there’s no story here”?

      I’m making a systemic indictment; that’s what framing the issue as a “lemon market” does. I’m not saying that individual programmers are morally culpable (though in some cases they clearly are; I mean, somebody coded the functionality in the Uber app that defrauded drivers, for example). But when I see some of the defensive handwaving on this thread, I start thinking I need to broaden my indictment, and start thinking seriously about the cultural (as well as institutional and financial) factors that enabled this debacle.

      1. XXYY

        Can you supply me a case where every single one of a billion manufactured products was defective, not fit for purpose?

        One very famous case was the original pop top aluminum can. Older readers may recall that the ability to open a can with bare hands rather than needing a can opener tool was heralded as a major breakthrough, and the design was manufactured for years in much higher volumes than CPU chips (probably hundreds of billions of units per year).

        However, in the original design, the key used to open the can was detached after use, and the discarded key became a huge environmental and medical problem. Billions were dropped on the ground and cut bare feet, birds would swallow them, people would drop them in the can after opening and swallow them when drinking, and so on. Definitely not fit for purpose!

        Eventually, the design was changed to what we see now, where the key remains attached to the can after opening. This successful design has been in manufacture for decades on all types of self opening aluminum cans.

        1. Clive

          The ring-pull was really a user-error issue. Certainly in the European market, the cans came with instructions (and a little picture on the side of the can) telling users to put the ring-pull in the empty can before discarding the can in an appropriate place.

          Of course, far too many irresponsible can users simply threw the ring-pull where they were sitting (or standing) while consuming the beverage.

          The redesign for the non-separable ring-pull was as a direct result of users being given an option to ignore the manufacturers’ instructions.

          Here, the chip vendors were specifically encouraging software developers to code in a way which relied on what is now clearly a defect in the product design. Kind of like telling the opener of a ring-pull can with a discardable ring-pull to go right ahead and drop it near a wildlife sanctuary.

      2. Bob Swern

        For the past seven or eight years, I’ve occasionally worked with a handful of programmers that have/had extensive intelligence community programming experience (NSA/gen’l intel community/CSEC), and every single one of them has repeated the same phrase: “Everything is hackable.” We’re talking: virtually all firewalls, servers and everything on down the hardware/software food chain. End of story. (I tell my clients this all the time.)

        That being said, the public (at least those that are paying attention) is finally getting a clue.

        P.S.: Lambert, you’re really doing an outstanding job covering this story. Thanks for your efforts.

      3. knowbuddhau

        I’ll see your systemic indictment, and raise you a level of analysis: where does the neoliberal system come from?

        I completely agree with the charge of crapification. A lot of commenters have alluded to the level I’m getting at. Self Affine happens to put it most succinctly (sincerely no disrespect), despite going on to dispute the charge.

        Let’s play Spot the World View (hint hint)

        I agree – let’s not forget that Intel (and everyone else) lives in a hyper competitive consumer and production marketplace. Cheap and fast will always trump security in today’s computational/networked/data processing world.

        And it’s not only computer chips; the whole cloud paradigm depends on a dopamine driven user feedback loop, which can only be achieved by sacrificing transactional integrity and coherence at the edges. So what – that’s where we are at in terms of personal convenience.

        All this gnashing of teeth seems misplaced – it’s just another reflection of the world we have constructed, and no amount of patches or hardware re-designs will fix that.

        Are we remembering that “everyone lives…in a marketplace,” or projecting it?

        That we conceive of the world in those terms doesn’t make it so. I don’t care how often it’s repeated. I’d like to see the evidence, please. Looks more like an organism/environment field to me.

        My question is, where has this neoliberal world come from? If it’s just a natural order, then SA et al. are right: no, it’s not crapification, that’s just the way the hypercomplex cookie naturally crumbles.

        But it isn’t natural, in the sense of a necessary outcome of immutable forces. This neoliberal world of pain is the embodiment of the world view embedded in the assumptions so ably described by all and sundry, not just Self Affine.

        Is there anything about the natural world that compels us to believe it to be the marketplace as SA describes it, other than the social order we’ve built based on those assumptions?

        It’s tautological to believe the world to be a marketplace (perhaps even God’s own perpetual motion, justice-dispensing, holy war cash machine itself, if you’re a true believer), then go and make the world in that image, and then try to turn around and claim markets are natural.

        So I’m sorry, all and sundry who point to neoliberalism’s misbegotten crap world as evidence against crapification in this case. It’s crap all the way down.

        I’m not satisfied with a technical approach, though. This hypercompetitive world of ours is one of choice. It comes from the assumptions and beliefs of the faithful. Even those of us who reject it end up reifying it just to stay fed and sheltered.

        Conceiving of the world as a marketplace isn’t natural. Believing in the necessity of quickly producing crap before someone else does and you go broke and die isn’t the only way of being human in the world.

        It doesn’t even accurately portray the known universe. Why reduce the world to a marketplace? Where have I heard that idea before, that the cosmos is the construct, and thus private property, of a cosmic tyrant-engineer? Neoliberal rhetoric is ostensibly secular, but scratch the surface and it’s that Old Tyme Religion of sin-and-damnation.

        Neoliberalism’s bullshit mythology brings into being a crap world in which crapification is the order of the day, seemingly naturally. But there’s nothing necessarily natural about it.

        Thankfully, as you yourself, Lambert, often say, we can walk and chew gum at the same time. At the same time that we’re detailing neoliberalism’s operational failure cascade, we can also take on the beliefs, the world view, the assumptions, that bring that world into being in the first place.

        It’s necessary, that is, to build up as we take down. It’ll take a cosmogenetic narrative of our own, one that embodies our own world view, to bring into being the world we’d all rather be living in.

  8. Wade Riddick

    This is what you get for legally redefining software from a product to a service. It completely changed how the industry approached liability just at the time computers were getting widely networked and handing off code to each other – exactly the time you would want to start stressing public safety.

    Until software services are treated like a well-regulated public utility, this will keep happening. Of course, the entire legal system is designed to represent corporate interest over citizens so this has the effect of transferring risk from those who generate it – and have the most expertise in mitigating it – onto unsuspecting, inexpert citizens. This amplifies the damage. Lemons, indeed.

    It also gets to the fundamental heart of the contradiction in this “libertarian” utopia. We have created a digital economy with no real, enforceable private property rights because that would require a functioning government enforcing our rights and we don’t want that.

      1. knowbuddhau

        I’m so old I remember when Bush the Elder derided the importance of “the vision thing.”

        Bush the Younger, otoh, brought back “the crazies.” And he had Rove as an advisor. GWB called him “Turd Blossom.” Fun fact: Rove “calls himself ‘Grendel,’ ‘Moby Dick,’ and ‘Lord Voldemort,'” (emphasis original) according to attorney Scott Horton. I think it’s because he takes the power of myth, to bring worlds into being, or destroy them, seriously.

        For those who came in late, a bit of background.

        It was in a moment of irritation during the 1988 campaign that the Republican presidential candidate, Vice President George Bush, first derided “the vision thing,” as he called it, thus employing an ungainly piece of Bush-speak to describe a leader’s ability to set forth inspiring national goals. Mr. Bush, who may have been one of the most self-effacing presidents in recent American history, went on to become a one-term incrementalist with little taste for big schemes.

        Sixteen years later, the second President Bush has inherited his father’s syntax but not his cautious goals from a less traumatic time. As last week proved again, this president has embraced not only “the vision thing” but the idea of a very big presidency: big ideas, big costs, big gambles. More than many presidents, historians say, Mr. Bush seems to understand how to use the powers of the office and to see the political benefits in risk. He may leave the details to others, but when backed into a corner, he doubles his bets.

        Of course, this is also magic-show time in Washington, when White House advisers work feverishly backstage to roll out what they hope will be dazzling ideas to lead in to the State of the Union address on Jan. 20, the day after the Iowa caucuses. In a re-election year, Mr. Bush’s plan is to steal the show. So the warm-up acts started last week.

        On Wednesday, Mr. Bush proposed a sweeping overhaul of the nation’s immigration laws that could confer legal status on millions of illegal immigrants in the United States. On Thursday, the White House officially leaked the broad outlines of a presidential speech this week in which Mr. Bush will propose establishing a base on the moon and sending humans to Mars.

        And that “magic show” in turn reminds me of the devices featured in Machines of the Gods, in which cutting-edge tech is employed to “shock and awe” people into joining the flock, or at least dropping a few coins in the plate.

        Same as it ever was.

  9. Optic7

    I’ve been obsessively reading every Slashdot thread about these vulnerabilities and have a few thoughts. Keep in mind that more information is still slowly coming in (for instance, Microsoft only today finally published some guidance on performance impacts), so some of this stuff may not be entirely complete or accurate.

    Note: all links below are to specific comments on Slashdot stories:

    Intel is looking terrible here. Meltdown seems to be a more serious flaw (more easily exploitable), and it almost exclusively affects Intel processors (the only exception appears to be a related exploit similar to Meltdown on 3 ARM CPU models):

    With that in mind, I’ve read a couple of other posts that provide some interesting background information:

    1. Intel apparently patented the performance-enhancing “feature” that allowed Meltdown to happen – this turns out to be the reason why it mostly only affects Intel (direct link to a comment):

    2. Neoliberal crapification has apparently been ongoing at Intel (the story happens after Meltdown design decisions were made, but it still illustrates the kinds of forces at work here):

    3. More information about how much was known about these risks before these specific exploits surfaced (answer: it’s been known for a long time, but appears to have been deemed not feasible before it became feasible):

    1. Lambert Strether Post author

      This is very useful and interesting. Thanks for the research.

      This “We just did what the market wanted!” stuff. Honestly… One of the reasons to become a monopoly is so you don’t have to worry about that…

  10. D

    What Wade Riddick said above.

    Though, I would add the BIG FOUR Accounting Firms™ – which all trace their ancestry to London Inc. of Charles Dickens’ noted infamy – in with those Corporate Legal Services™ who operate to the detriment of life itself; along with deleting the word WE versus: those exactly culpable who own and run those entities sucking the oxygen from any meaningful life.

  11. ewmayer

    So I tried to have a look at MSFT’s announcement yesterday that, as a result of hardware-brickage reports, their patch for AMD processors has been delayed.

    As I said I *tried* viewing the page, but it required JS to be enabled – ok, temporarily enabled JS via NoScript – only to find that it *also* needs cookies to be enabled, so I gave up. Why on earth would a simple patch-delayed blurb-page need to store cookies on every viewer’s device? Fvckin data-vacuuming twits.

    1. Amfortas the Hippie

      noscript is one of my favorite do-dads. (altho i liked it better before it was glommed into firefox)
      I’m continually amazed at the sheer number of things that certain sites try to run on my machine.
      the gates empire is one of the worst.

  12. Daryl

    > (However, if you’re a gamer, you should not be affected, unless you’re gaming in the cloud, I suppose.)

    From what I’ve read, it seems like it will heavily depend on what kind of mitigation tactics are used by compiler/language runtime developers. Anything that generates native code based on arbitrary input (e.g. your web browser) will have to do something. And while games and desktop applications don’t necessarily execute random things from the internet, they often rely on language runtimes which will have to do something to mitigate this.

    What I’ve been most unhappy with about the coverage of this is its uncritical repetition of Intel’s PR stuff.

  13. synoia

    The hardware solution for speculative execution is simple and achievable on virtual memory machines.

    There is real memory and virtual memory, and the difference is that a virtual page is at one address and mapped to a different real memory page.

    For real memory changed due to speculative execution, unmap it from virtual memory, and do not map it back to virtual until the speculation is complete, that is, until the branch is no longer speculative.

    I don’t know enough about Intel processor architecture to think this through. I might know enough about IBM z mainframe architecture to make it work, but much of that is way back in my memory.

    There is also a mapping between main memory and CPU cache memory, which has a similar, but different, mechanism.
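    synoia’s scheme – buffer speculatively modified state and only map it in once the branch resolves – can be sketched in software. Below is a minimal Python model of that idea; all names are invented for illustration, and it models only architecturally visible memory, not the cache:

```python
class SpeculativeMemory:
    """Toy model: speculative stores go into a side buffer and only
    become architecturally visible when the speculation is confirmed."""

    def __init__(self):
        self.memory = {}        # committed (visible) state
        self.spec_buffer = {}   # stores made under speculation
        self.speculating = False

    def begin_speculation(self):
        self.speculating = True
        self.spec_buffer = {}

    def store(self, addr, value):
        if self.speculating:
            self.spec_buffer[addr] = value   # buffered, not yet visible
        else:
            self.memory[addr] = value

    def load(self, addr):
        # The speculating core sees its own pending stores first.
        if self.speculating and addr in self.spec_buffer:
            return self.spec_buffer[addr]
        return self.memory.get(addr, 0)

    def resolve(self, correct):
        """Branch resolved: commit the buffer if the guess was right,
        otherwise discard it, leaving no visible side effects."""
        if correct:
            self.memory.update(self.spec_buffer)
        self.spec_buffer = {}
        self.speculating = False


mem = SpeculativeMemory()
mem.store(0x10, 42)           # normal store: visible immediately

mem.begin_speculation()
mem.store(0x20, 99)           # speculative store: buffered
mem.resolve(correct=False)    # mis-speculation: discarded
print(mem.load(0x20))         # -> 0: no trace of the wrong path

mem.begin_speculation()
mem.store(0x20, 7)
mem.resolve(correct=True)     # correct guess: committed
print(mem.load(0x20))         # -> 7
```

    Note that real CPUs already squash speculative register and memory results on a mis-predicted branch; Meltdown and Spectre leak through micro-architectural state (chiefly the cache) that is not rolled back. The proposal above amounts to extending the same commit-only-on-resolution discipline to that layer, including the memory-to-cache mapping just mentioned.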

  14. Optic7

    One more try without any links. If you want to read the comments that I had previously linked, just search for the story with this title on Slashdot and read the comments moderated at 5 points: “OpenBSD’s De Raadt Pans ‘Incredibly Bad’ Disclosure of Intel CPU Bug”


    1. ewmayer

      Thanks – I’ve been hitting Skynet issues with recent posts containing links, too – just spent 20 minutes nursemaiding a long-ish post in today’s, um, Links past the watchdogs. First I tried cutting the number of links from 2 to 1, still no joy, so I finally switched to giving the source name and article title to make it easy for folks to dig out the links themselves.

    2. Lambert Strether Post author

      Here is the Slashdot link.

      Quoting from it:

      In the interview de Raadt also faults Intel for moving too fast in an attempt to beat their competition. “There are papers about the risky side-effects of speculative loads — people knew… Intel engineers attended the same conferences as other company engineers, and read the same papers about performance enhancing strategies — so it is hard to believe they ignored the risky aspects. I bet they were instructed to ignore the risk.”

      “People knew.” As I’ve been saying. So, even if you are, as it were, a strong-form crapificationist, where ill intent is required, and not just systemic confluences with emergent results, the intent is there. More:

      He points out this will make it more difficult to develop kernel software, since “Suddenly the trickiest parts of a kernel need to do backflips to cope with problems deep in the micro-architecture.” And he also complains that Intel “has been exceedingly clever to mix Meltdown (speculative loads) with a separate issue (Spectre). This is pulling the wool over the public’s eyes…”

      Readers will note that distinguishing the two was the very first thing I did in this post [lambert blushes modestly].
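      De Raadt’s “risky side-effects of speculative loads” can be made concrete with a toy model. The Python sketch below (names invented, no real timing involved) models the cache as a set of line indices: the transiently executed load is squashed architecturally, but the probe-array line it touched stays cached, and probing which line is “fast” recovers the secret, Flush+Reload style:

```python
def recover_secret(secret_byte):
    """Toy Flush+Reload model. The architectural result of the transient
    load is squashed, but its cache footprint is not, and that
    footprint encodes the secret."""
    cache = set()                 # micro-architectural state: cached lines

    # Attacker: flush the 256-line probe array.
    cache.clear()

    # Transient (speculative) execution: a load indexed by the secret
    # pulls exactly one probe line into the cache, then gets squashed.
    cache.add(secret_byte)        # models touching probe[secret_byte * 64]

    # Attacker: "time" each probe line; the cached one stands out.
    hot = [line for line in range(256) if line in cache]
    return hot[0]

print(recover_secret(0x41))       # -> 65: the secret leaks via the cache
```

      This is of course drastically simplified; a real exploit must train the branch predictor and measure actual access latencies. But it shows why the side effect of a speculative load, not the load’s result, is the problem.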

  15. Fastball

    So, by the Volkswagen example, should we not require Intel to replace all chips affected by these vulnerabilities FOR FREE, and pay for their installation, as less vulnerable chips are developed?

  16. none

    Meh, IMO this is still overblown. My comment from last night’s thread:

    Elaborating a little more: there are two kinds of computer security issues (ok they overlap a little) that we have to think about, using “guys” gender-neutrally:

    1) Bad guys use their own computers to attack the good guys’ (e.g. your) computers over the internet, a so-called “remote” attack.

    2) Bad guys and good guys are both using the same computer at the same time, so the bad guys’ programs can attack the good guys’ programs without the interposed network connection. This is a “local” attack and it’s harder to defend against than a remote attack.

    In the 1960s-70s-80s, local attacks were important because computers were usually expensive, thus shared by multiple people, some of them bad. Remote attacks were less important because there wasn’t a pervasive internet like there is now.

    In the 1990s-2000s, local attacks became less important because computers got cheap enough that everyone could have their own instead of sharing one with bad guys. So local attacks became harder to carry out. Remote attacks meanwhile became more important because of the expanding Internet.

    Meltdown and Spectre are both local attacks, so what has gone wrong, why do we have this problem–why are we letting bad guys run code on our computers in the first place? This should be 100% stoppable, but we messed up in two relevant ways (plus a third that’s a separate but also horrible issue):

    1) On the client side (“client” roughly meaning a networked computer belonging to a regular person, e.g. your laptop or smartphone), partly from the relentless pressure of advertising companies like Google, we again took to letting bad guys run code on our computers, relying on software safeguards to stop them from completely taking over. An awful lot of client security problems could be solved by one weird trick: fucking ban Javascript from all web browsers everywhere forever. Javascript is one way for bad guys to attack our computers with Meltdown, though the browser vendors have patches in development. By all means, take the patches. They’ll slow down Javascript in your browser but who cares--as far as I’m concerned slowing down JS is a good thing.

    2) On the server side, a hell of a lot of security sensitive businesses (banks, medical, etc.) have been seduced into using virtualized computers (virtual machines or VM’s, aka cloud computing), which again means lots of people, some bad, are sharing the same server. VM’s are not inherently a bad thing, since they’re an economical way to run a small, cheap-ass web site. I do that myself, but I have no sensitive financial data on them worth attacking. Using VM’s for financial or medical data is like, to misquote someone, shopping for a tombstone.

    Anyway, don’t panic. If you run a not-too-sensitive server on a VM, let the VM host install mitigations–they’re all aware of it. If you’re running a client (e.g. web browser), take the patch, but either way you’re in more danger from running shitty apps than you are from Meltdown. If you’re running a sensitive service on a VM, switch to a dedicated or single-tenant VM host. That will cost you a little more but it’s common sense anyway. (Single tenant means there are still multiple VM’s on the physical machine, but all of them are yours).

    So I’m more worried about the existing spyware, malware, and people trusting stuff they shouldn’t (*cough* Amazon Alexa) than I am about Meltdown or Spectre.
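    For what it’s worth, the browser patches alluded to above reportedly work largely by degrading the attacker’s stopwatch: coarsening high-resolution timers (and disabling SharedArrayBuffer) so that a cache hit can no longer be told apart from a miss. A toy Python illustration, with invented timing numbers:

```python
import random

def timed_access(is_hit, coarsen_ns=None):
    """Simulated access time: a cache hit takes ~10 ns, a miss ~100 ns
    (illustrative numbers, plus a little jitter)."""
    t = (10 if is_hit else 100) + random.uniform(-2, 2)
    if coarsen_ns:
        t = (t // coarsen_ns) * coarsen_ns   # mitigation: coarse timer grid
    return t

# Fine-grained timer: hit vs. miss is trivially distinguishable.
fine = {timed_access(h) > 50 for h in (True, False)}
print(fine == {True, False})      # True: the attacker can classify accesses

# Timer coarsened to 1 microsecond: every access reads as 0 ns, signal gone.
coarse = {timed_access(h, coarsen_ns=1000) for h in (True, False)}
print(coarse)                     # {0.0}
```

    That is also why the patches slow Javascript down: the same timing facilities that enable the attack are what benchmark-happy web apps rely on.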

    1. Lambert Strether Post author

      > Don’t panic

      Who’s panicking? I wrote:

      My personal advice is the same advice some investors give: Don’t do anything that means you won’t sleep at night. For me, that would mean not patching any initially released patch, for the same reason I never upgrade to a *.0 release; only the *.1 release will have the bugs worked out! But your business or the firm for which you work may demand different priorities; see the example of the power grid, above. And do be extra, extra careful to watch for phishing email.

      I think your comment has two layers of abstraction confused. There is the consumer layer, for which panic is not advisable, as I write above. And there is the architectural layer, which is a function of the (lemon) market as structured by Intel’s monopoly power. Here, we have a billion — or a trillion, but who’s counting? — products sold with the CPU “flaws” that make them unfit for purpose. There’s no reason to panic about that either, because panic is rarely useful, but I think a certain amount of quiet and contained boggling of the mind is not only permitted but mandatory at the scale of the debacle. Alas, this level of introspection seems not to be available to many members of the more technical cohorts of the programming community. Perhaps that will change.

  17. xil

    my first thought after reading the Ken Thompson quote, “You can’t trust code that you did not totally create yourself.”, was that he sounds like someone who has never actually coded.

    or maybe I’m misinterpreting his meaning of trust to be safe

    1. Alan Edwards

      it’s less that you’re misinterpreting and more that you’re betraying a complete lack of knowledge. My suggestion is that you google “Ken Thompson” and then decide if you should rethink what you think he sounds like.

      1. Lambert Strether Post author

        Alan, you put the matter far more politely than I would. The following explains Thompson’s article in footnote three:

        Ken Thompson’s 1983 Turing Award lecture to the ACM admitted the existence of a back door in early Unix versions that may have qualified as the most fiendishly clever security hack of all time. In this scheme, the C compiler contained code that would recognize when the login command was being recompiled and insert some code recognizing a password chosen by Thompson, giving him entry to the system whether or not an account had been created for him.

        Normally such a back door could be removed by removing it from the source code for the compiler and recompiling the compiler. But to recompile the compiler, you have to use the compiler — so Thompson also arranged that the compiler would recognize when it was compiling a version of itself, and insert into the recompiled compiler the code to insert into the recompiled login the code to allow Thompson entry — and, of course, the code to recognize itself and do the whole thing again the next time around! And having done this once, he was then able to recompile the compiler from the original sources; the hack perpetuated itself invisibly, leaving the back door in place and active but with no trace in the sources.

        The Turing lecture that reported this truly moby hack was later published as “Reflections on Trusting Trust”, Communications of the ACM 27, 8 (August 1984), pp. 761–763. Ken Thompson has since confirmed that this hack was implemented and that the Trojan Horse code did appear in the login binary of a Unix Support group machine. Ken says the crocked compiler was never distributed. Your editor has heard two separate reports that suggest that the crocked login did make it out of Bell Labs, notably to BBN, and that it enabled at least one late-night login across the network by someone using the login name “kt”.
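        For readers who want the mechanics, here is a toy sketch of the scheme described in the quote above. Everything in it is illustrative: Thompson’s real hack lived inside the C compiler and its code was never published, so the “compiler” here is just a function from source text to an executable (another function), with the two targets recognized by simple substring tests.

        ```python
        # Toy model of the "Trusting Trust" hack. A "binary" is a Python
        # function; a "compiler" turns source text into such a function.

        BACKDOOR_PASSWORD = "kt"  # echoing the "kt" login mentioned in the quote

        def clean_compile(source: str):
            """An honest compiler: the binary does exactly what the source says."""
            if "login" in source:
                def login(user, password, accounts):
                    return accounts.get(user) == password
                return login
            def compiler(src):
                return clean_compile(src)
            return compiler

        def trojaned_compile(source: str):
            """The crocked compiler: same source in, different binary out."""
            if "login" in source:
                # Insertion #1: while compiling login, plant the back door.
                def login(user, password, accounts):
                    if password == BACKDOOR_PASSWORD:
                        return True  # entry whether or not an account exists
                    return accounts.get(user) == password
                return login
            if "compiler" in source:
                # Insertion #2: while recompiling the compiler -- even from
                # pristine sources -- emit a compiler that still carries both
                # insertions. This is what makes the hack self-perpetuating.
                def compiler(src):
                    return trojaned_compile(src)
                return compiler
            return clean_compile(source)

        # Recompile the compiler from clean sources, using the crocked compiler:
        rebuilt_compiler = trojaned_compile("compiler: translate source to binary")
        # The rebuilt compiler's source is clean, yet it still produces a
        # backdoored login:
        login = rebuilt_compiler("login: check a user against the account table")
        print(login("anyone", "kt", {}))                     # True: the back door
        print(login("alice", "wrong", {"alice": "secret"}))  # False: normal check
        ```

        The point of the sketch is the last few lines: scrubbing the trojan out of the compiler’s *source* and recompiling changes nothing, because the recompilation is performed by the compromised binary. Hence Thompson’s “you can’t trust code that you did not totally create yourself.”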

        Here’s Wikipedia’s bio of Ken Thompson. Among other things, he did the first two or three versions of Unix. So I think he knew a thing or two about coding. And trust.

        As a sidebar, many of the great programmers of the 1970s, like Thompson, or Donald Knuth, wrote lucidly and beautifully about complex technical issues. (Fred Brooks is another.) This seems to have been lost.

  18. Anarcissie

    Doesn’t this (Spectre-Meltdown, or S&M) mean the end of the world?

    My thinking is this: Most money, in the larger sense — all that is not specie or immediately tradable for it — depends on faith: faith that the money is actually worth the claims made for it, and faith that these claims can be successfully exercised in a reasonable way. In the last few decades, money has moved largely to computer systems and the connections between them. This is especially true of recently devised instruments of great fabulosity which I need not name.

    The way S&M are being handled, with secrecy, confusion, and lack of assurance and closure, will cause a great many people to lose faith either in the money itself or in the vehicles through which it is moved and used — because it is based on seriously flawed technology. It is as if the government had printed its paper money with disappearing ink.

    As faith is lost in money, two contradictory movements will occur: pure-faith money, like ledger entries somewhere somehow, will become ghostly and vanish; and ‘paper’ money will multiply as authorities try to replace the ghosted money with paper. Radical deflation and inflation will occur simultaneously. The now-untrustworthy monetary system will not be able to handle the traffic or the serious political and social problems. Real estate, equities, and other rich folks’ wealth will evaporate. The ‘real economy’ (bread and circuses) will become gravely impeded. The system will most likely seize up as it started to in 2006-2008.

    ‘Change’ will be called for. It may not be very nice change.

  19. Lyle

    It seems to me that the much simpler to exploit vulnerabilities of various Internet of Things devices are far more threatening than these. After all, if someone can set your thermostat to 30 in the winter or 120 in the summer, it could be worse. Also, many of the vulnerabilities in IoT devices are well known but not patched, ranging from issues that were first found in the early 1980s on Vaxes, such as a defined system manager password and an install procedure that did not force it to change (today it is the root account, but the principle is the same).

    So, since there are a multitude of risks, one has to decide which ones to mitigate first.

    1. Lambert Strether Post author

      > since there are a multitude of risks one has to decide which ones to mitigate first

      Of course. And the entire computer industry, for good or ill, is telling us to mitigate these two (and that we will be continuing to mitigate one for years to come). So what’s your point?

      1. lyle

        Of course, the computer industry also expects this to increase PC sales as newer chips with fixes come out. PC sales have been flat at best, so they might view this as a way to goose sales. IMHO the IoT issues are far worse, in that they are far easier to exploit.

  20. JBird

    All this chatting over who, what, and why is responsible is just that. Chatter. Most people cannot follow this nonsense, and what’s more, they probably don’t care. They just want things to work, just work, reliably, safely, consistently, like the car, a gun, the stove, the police, or whatever. I certainly do. None of this dysfunctional crapification of darn near everything because of reasons.

    This systemic, ongoing slowing and speeding up of everything is driving me nuts. The earlier post this week on how modern neoliberal society is causing mental illness has this Spectre/Meltdown meltdown as support. The System demands perfection; without it, you are likely doomed to poverty, certainly more suffering, and yet there are breakdowns in government, business, technology, climate, and so on. So the neoliberal drone is supposed to be perfect in a system that demands perfection, yet that very system cannot even guarantee that the desktop, or the phone, or my internet free coffee maker, will reliably, consistently, work for any length of time. I guess it is good that this Luddite still takes all his class notes longhand using a fountain pen.

    Maybe the worshipers of the Neoliberal Flying Spaghetti Monster and the Acolytes of the Free Market Capitalism are not insane, or deluded, they are just seeking refuge in something that they hope works.

    1. Lambert Strether Post author

      > So the neoliberal drone is supposed to be perfect in a system that demands perfection, yet that very system cannot even guarantee that the desktop, or the phone, or my internet free coffee maker, will reliably, consistently, work for any length of time.

      Yves linked to this story, “Is Your Child Lying to You? That’s Good,” last week. Here’s the conclusion:

      You can also simply pay kids to be honest. In research involving 5- and 6-year-olds, Professor Lee and his colleagues attached a financial incentive to telling the truth about a misdeed. Lying earned children $2, while confessing won them anywhere from nothing to $8. The research question was: How much does the truth cost? When honesty paid nothing, four out of five children lied. Curiously, that number barely budged when the payout was raised to $2.

      But when honesty was compensated at 1.5 times the value of lying — $3 rather than $2 — the scales tipped in favor of the truth. Honesty can be bought, in other words, but at a premium. The absolute dollar amount is irrelevant, Professor Lee has found. What matters is the relative value — the honesty-to-dishonesty exchange rate, so to speak.

      “Their decision to lie is very tactical,” Professor Lee said. “Children are thinking in terms of the ratio.” Smart kids, indeed.

      There’s that word, “smart.” Think of the constant breakage and crapification of neoliberalism as a selection process, where people can display adaptive behavior, or not (and those who do not are, well, discarded). Somehow I don’t think the survivors will be very pleasant people to be around. (Elon Musk’s Mars colony will no doubt be 100% neoliberal survivors, so it would be interesting to speculate on how long it will survive, and what it will look like.)

      1. JBird

        >>>Thinking of the constant breakage and crapification of neoliberalism as a selection process, where people can display adaptive behavior, or not (and those who do not are, well, discard).<<<

        I thought Social Darwinism was a reviled idea. At least no one, yet, has suggested its evil sibling eugenics.
