Yves here. I hope tech and security-savvy readers will treat this post as a critical thinking exercise and voice corrections, quibbles, and points of agreement in comments. This topic is generally over my pay grade, so I am not ideally positioned to do so. I’m not keen on giving presumed hacking groups cute names to indicate their supposed backers, or on insinuating that hackers from particular nations are state actors. During Russiagate, experts called out gimmickry like claiming certain hackers were Russian on the basis of Russian tools that were in fact widely used, and the inept and gratuitous use of Russian in what looked to be manipulated or even fabricated published details.
I also note the absence of mention of gangs as a cyberthreat, when my impression is that they are becoming more effective.
Nevertheless, it’s telling that DOGE managed to degrade the operations of CISA, which is part of the intelligence apparatus, but has left the armed forces largely alone. Note that the title elevates Iran to the status of lead cybersecurity menace.
By Lynn Parramore, Senior Research Analyst at the Institute for New Economic Thinking. Originally published at the Institute for New Economic Thinking website
As international tensions increase, cyberwarfare and ransomware attacks loom—and America’s digital defenses face a perfect storm of foreign attacks, criminal behavior, and self-inflicted damage.
Few understand the stakes better than Dr. David Mussington, former head of Infrastructure Security at CISA, who’s spent decades crafting strategies at RAND, the Pentagon, and DHS. In this eye-opening conversation with the Institute for New Economic Thinking, Mussington warns that while countries like Iran, China, and Russia grow more aggressive, the Trump administration has gutted the very agency designed to protect America’s most critical systems—cutting CISA’s budget by nearly half, eliminating nearly a third of its staff, and driving out decades of cybersecurity expertise.
With experienced defenders gone, vital federal structures disbanded, and state and private actors left to pick up the slack, Mussington raises a chilling question: are we really ready for what’s coming — or are we falling dangerously behind? He discusses our vulnerabilities as consumers, citizens, and as a country.
Lynn Parramore: Can you say a bit about your background in cybersecurity?
David Mussington: I have over two decades of experience in different aspects of the cybersecurity risk challenge. My career began at the RAND Corporation, where I participated in and led research teams evaluating U.S. critical infrastructure security. This included exercises, assessments, and policy analysis. Later on I worked at the Department of Defense – writing cyber strategies and advising the Office of the Secretary on USCYBERCOM’s standup, and on supply chain and other challenges posed by U.S. adversaries in the cyber domain.
Most recently, I led the Infrastructure Security Division for CISA (Cybersecurity and Infrastructure Security Agency – a part of the Department of Homeland Security) for four years during the Biden/Harris administration, overseeing physical security, Internet of Things (IoT), operational tech, and critical infrastructure risk assessments and remediation programs. That role gave me a front-row seat to threat levels, vulnerabilities, private sector responses, and global collaboration—especially around nation-state threats, our top concern.
LP: Let’s talk about CISA. How has the agency changed under the current administration, especially with the recent cuts and firings?
DM: First, I think it’s fair to say the changes have been dramatic—but it’s still early to know where they’ll lead. CISA doesn’t have a permanent director yet—Sean Plankey’s been nominated, but not confirmed. We do have a national cyber director, Sean Cairncross, and some key roles at DHS and the Sector Risk Management Agencies are finally taking shape. But the team hasn’t fully come together yet. It takes time when we change administrations to reestablish new priorities.
My biggest concern is losing experienced people. Fewer staff means a heavier burden on those who remain, weaker national critical infrastructure security, and lessened resilience. These public servants are dedicated, but constant attacks on their patriotism are unfair and hurt morale. I deeply respect the CISA staff and others facing sudden changes after decades of service. Leaving a mission they believed in—defending the country—is hard.
We must support those still serving—they’re fighting for all of us. I loved this work, and moments like representing the U.S. at Australia’s National War Memorial reminded me why it matters. That mission and honor must never be forgotten.
LP: One of CISA’s past priorities has been securing our elections. But under the current administration, there’s talk of that mission shifting. What can you tell us about the changing focus?
DM: You mentioned the election mission. It’s clear from recent decisions that CISA’s role in that area has been deliberately targeted for reduction—if not outright removal. That’s a decision made by this administration. Obviously, the administration I was in didn’t agree with that. We were focused on a specific cluster of activities – mostly foreign malign threats. Russia was a key concern, among others, given their efforts to manipulate American opinion using tactics we discussed in our last conversation.
On the other hand, nation-state threats still exist. This administration’s rhetoric remains focused on them — particularly China’s campaigns like Salt Typhoon, Volt Typhoon, and Silk Typhoon. Their persistent access to U.S. critical infrastructure and the potential to weaponize those vulnerabilities remains, I believe, a top national priority.
There’s clearly a shift toward greater reliance on states and the private sector to handle cybersecurity on their own. That’s how I interpret the shrinking federal role in critical infrastructure protection—both in size and scope.
LP: In moving from federal to state responsibility, what security vulnerabilities at the state level concern you?
DM: It’s less about where the vulnerabilities are and more about who controls them—and what capabilities they have. In Texas, for instance, some private infrastructure operators are technically strong, but they may now bear more responsibility than they’re used to. That puts greater weight on their security plans. As the federal role recedes, the effectiveness of those plans—against everything from insider threats and ransomware to nation-state actors—matters more than ever.
A state will be only as good as the big private sector infrastructure operators are. States don’t typically have a lot of autonomous critical infrastructure protection or resilience capacity of their own. They’re very dependent on the private sector.
LP: Do you see any states modeling effective planning?
DM: Difficult to tell, because this shift toward the states also represents a move away from federal Sector Risk Management Agencies (SRMAs). Under the traditional model, CISA coordinated at the top, followed by SRMAs, then state and local governments, and finally the private-sector operators of critical infrastructure. That hierarchy now seems to be changing.
To the extent that states have technological capabilities under their control—take California, for example: a wealthy state with significant critical infrastructure and advanced industries—the state can leverage its own partnerships, fiscal tools, and regulatory mechanisms. That combination of technical expertise, preexisting capacity, and the presence of a strong technological and educational ecosystem is probably the best predictor of how well a state will perform. And across the U.S., states vary widely in how well they’re positioned to take on more of this responsibility themselves.
I think generally that richer states are going to be in a better position to do this than poorer states. But is the federal government still able to step in and pick up the slack?
LP: We saw it during the pandemic—states can act independently, but for certain challenges, coordination is essential. How do you see that playing out when it comes to cybersecurity?
DM: CISA remains the national coordinator for critical infrastructure security and resilience. Regardless of resources, it’s essential for public-private coordination and helping states leverage each other’s strengths. That’s where my concerns lie—I worry about the loss of expertise and whether CISA, especially after the Chevron deference decision (which curtailed the power of agencies), will keep the authority and capacity to coordinate effectively. I’m also concerned about whether Sector Risk Management Agencies have the authority to do their part.
Some don’t, and some won’t. I don’t expect Congress to grant federal agencies broad cyber-regulatory power anytime soon. So, regulatory approaches to cyber-risk won’t be as prominent, even though regulation has long been seen as key to federal cyber-risk management.
States do have the authority to regulate critical infrastructure within their borders. The big question is how each state will approach it. I don’t know most states well enough to predict, but generally, regulations aren’t popular—and cybersecurity rules likely won’t be either.
LP: Let’s talk about the politicization of cybersecurity and the shifting approaches of different administrations. How do you view that issue, especially in light of evolving threats arising from global conflicts?
DM: I think it makes us less coherent as a nation, since we won’t be able to articulate a single strategy or strategic approach to critical infrastructure security and resilience.
My former director, Jen Easterly, used to say that cybersecurity isn’t political—it’s a set of risks we have to address regardless of ideology. I agree. That perspective points you toward a particular approach: federal coordination of a largely private-sector-oriented, voluntary framework built around best practices carried out for the public good.
In areas like defense and nuclear power generation and distribution, there are strong mandates for infrastructure protection—and I think that’s rightly seen as nonpartisan. Whether we return to that view remains to be seen. Given current threat conditions, I’m not sure it’s sustainable to treat this as a partisan issue. We have real vulnerabilities in our critical infrastructure.
CISA continues to publish the Known Exploited Vulnerabilities (KEV) list, which was already voluminous when I was there—and it hasn’t gotten any shorter. These are vulnerabilities that have been actively exploited, sometimes for years, and they pose serious risks both individually and when chained together. These are the same vulnerabilities that adversaries from China, Iran, and Russia continue to target—and that remains true regardless of ideology. The best practice recommendations to counter them haven’t changed either.
Right now, there’s real potential for Iranian cyber countermeasures against the U.S., given the current conflict involving Israel. In the recent past, Iranian cyber actors linked to the Islamic Revolutionary Guard Corps (IRGC) have targeted U.S. critical infrastructure and even election systems. They’ve also shown interest in water systems and dams.
Iran’s cyber capabilities are serious. They’ve actively targeted regional rivals like the UAE and Saudi Arabia. If conflict escalates, could they go after U.S. infrastructure again—and would our defenses hold up? Those are key questions as we consider possible actions related to Iran’s nuclear and military capabilities.
LP: Let’s talk economics and budgets. DOGE was sold as a way to cut costs and boost efficiency. But when agencies like CISA face cuts and lose expertise, could we be risking higher costs later?
DM: We could. But I’d frame it a bit differently: the real question is whether the private sector and the states can mitigate risk at scale to meet the threat as it exists—and whether we have systems in place to actually measure that.
LP: Do we?
DM: We don’t. There’s no metrics-based framework to assess how well we’re doing on cybersecurity or critical infrastructure resilience at the national level. When we were running risk management and infrastructure assessments through CISA, we struggled—our metrics were weak, and the data was often stale.
Knowing things are fragile—and understanding how risk and vulnerability have changed between 2022 and 2025—makes responding effectively a major challenge.
On the economics, I think the key is focusing on incentives for remediation. Are the asset owners—where the risks actually exist—truly incentivized to take the necessary steps to reduce risk to their assets, their operations, and the public? It’s clear there’s an incentive misalignment—asset owners often shift the cost of risk onto others, like customers or other jurisdictions, because they rarely face accountability for breaches or for passing risks along.
Our insurance markets don’t align those incentives, and we don’t regulate to prevent companies from inadvertently imposing risks on their customers. This is especially clear with consumer technologies, which people buy and are then expected to protect on their own.
I do a lot with my computers. But at a certain level, I can’t fix hardware vulnerabilities. I can’t really do a lot about the software vulnerabilities either. If I hear about a piece of software that’s breached or something, I guess I can look for an update. But that’s about it.
What about hardware and software manufacturers? What about the secure-by-design principles we discussed during our administration? How do we measure if something truly meets those standards—and how do we make that information available to consumers so they can make informed choices?
All of that keeps market-based incentives from favoring security over insecurity, because the market isn’t properly structured for it. And since regulation is so often off the table, we don’t have enough security to protect cyberspace—whether at the national critical infrastructure level against nation-states, or at the individual level inside people’s homes.
LP: With massive password breaches hitting companies like Google, Apple, and Facebook, what security challenges are everyday people facing—and how are they changing?
DM: Problems like identity theft, ecommerce fraud, and ransomware have actually worsened over the last decade. The actors conducting these activities are sometimes criminals, and sometimes proxy groups – acting on behalf of others for ideological or foreign intelligence reasons. It is often difficult to differentiate the causes of activity. We just know we are seeing more of it, and more consequential critical infrastructure impacts. The best defense for consumers is to use best practices such as multi-factor authentication—ideally with authenticator apps, not SMS—and strong, unique passwords or passkeys. Having a playbook for incident response is a good practice as well, since it enables practice and preparation for difficult circumstances.
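The authenticator apps Mussington recommends generally implement standard TOTP (RFC 6238), which is just HOTP (RFC 4226) with the counter derived from the current 30-second time window. A minimal Python sketch of the underlying code generation (illustrative only, not a hardened implementation):

```python
import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key: bytes, period: int = 30) -> str:
    """RFC 6238 time-based variant: the counter is the current time window."""
    return hotp(key, int(time.time()) // period)
```

The practical point for consumers is that the shared secret never travels over the network at login time; an SMS code, by contrast, can be intercepted or SIM-swapped in transit.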
LP: We rely on businesses to keep us safe. Are they doing enough?
DM: Awareness is much better now, especially around ransomware. Most medium-sized companies follow basic security practices or hire experts. Small businesses still struggle but often use built-in protections from Apple, Microsoft, etc. The real issue is the mindset—some still think “it won’t happen to us,” which is dangerous. Your security depends on your weakest link—like contractors without protections.
CISA and others do a lot to share best practices, but the bigger problem is vulnerable hardware and software—like IoT devices with default passwords anyone can find. Consumers depend on companies to build secure products, but often they don’t.
LP: What risks do devices like smart speakers pose?
DM: The major risk with IoT devices — of which smart speakers are just one example — is that their provenance — the security procedures built into their hardware and software — may be entirely unknown to the consumer, and the devices may be architected to prevent user modifications of their functionality. This means that consumers may be unable to rectify flaws in shipped hardware (or software) — meaning that risk is shifted from technology producers to consumers from the standpoint of accountability and cost.
LP: With so many breaches in 2025, which worries you most?
DM: Honestly, I’m not surprised by new or repackaged breach data anymore. After a decade of massive breaches affecting hundreds of millions, how much more can be stolen? We live in a “breached” environment, which fuels cybercrime. The core issue is that we still haven’t nailed the basics.
Overall, my major concerns were and remain nation-state cyber operations targeting US critical infrastructure, and the velocity of change that new technologies – such as AI – bring to the risk challenge.
LP: What are the things that keep you up at night?
DM: Given my last job, I’m especially worried about vulnerabilities that linger unpatched for years. I use the CISA Known Exploited Vulnerability (KEV) list as a strategic litmus test—is it shrinking? Is the average time vulnerabilities stay on it going down? The answer is no. Campaigns like Salt Typhoon and Volt Typhoon show ongoing intrusions. Are we pushing those actors out? No. These persistent weaknesses let adversaries hold a foothold in critical infrastructure—a serious strategic threat.
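Mussington's KEV "litmus test" is straightforward to compute from CISA's published JSON catalog. A sketch of the two metrics he names, catalog size and average time on the list (the field names `vulnerabilities`, `cveID`, and `dateAdded` mirror the real feed but should be treated as an assumption; the two sample entries here are made up for illustration):

```python
import json
from datetime import date

# Tiny made-up sample mirroring the shape of CISA's KEV JSON feed
SAMPLE_FEED = json.loads("""
{
  "vulnerabilities": [
    {"cveID": "CVE-2021-44228", "dateAdded": "2021-12-10"},
    {"cveID": "CVE-2023-12345", "dateAdded": "2023-06-01"}
  ]
}
""")

def kev_metrics(feed: dict, today: date) -> tuple[int, float]:
    """Return (catalog size, average days each entry has been listed)."""
    ages = [(today - date.fromisoformat(v["dateAdded"])).days
            for v in feed["vulnerabilities"]]
    return len(ages), sum(ages) / len(ages)
```

Tracking these two numbers over time is exactly the strategic question he poses: a healthy defensive posture would show the list shrinking and average dwell time falling.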
We didn’t always face widespread, persistent vulnerabilities exploited by China across most critical infrastructure. Now we do. Russia’s tactics in Ukraine—using crimeware and criminal groups tied to nation-states—weren’t always seen here, but they are now. Plus, IoT is embedded everywhere in our lives, vastly expanding the attack surface and risks. I worry about identity theft too—I’ve had my data breached like many, relying on lifetime monitoring because that’s the reality today.
Nation-state techniques used to be elite and rare; now they’ve trickled down to criminals, making anyone a target. That cluster of risks is what keeps me up at night.
LP: Every administration presumably wants to keep Americans safe and avoid being blamed for a major incident. That’s a strong incentive. But are there disincentives working against our cybersecurity?
DM: Everything requires a solid plan. The best incentives and intentions don’t stop risks from being actively exploited. We’re still in the early stages of this administration’s cybersecurity plans.
We have numerous top-level executive orders and presidential directives, but CISA still doesn’t have a director. The fate of NSM-22, the successor to PPD-21, remains uncertain. Sector Risk Management Agencies are still finding their footing without new strategic guidance.
That guidance hasn’t formally arrived yet. Meanwhile, the Critical Infrastructure Partnership Advisory Council (CIPAC), which connects the federal government with private-sector leaders in critical sectors, was suspended. Its successor has yet to emerge.
So we have structures that aren’t functioning as they used to, nor have they been redesigned to fit the new administration’s priorities. Five or six months in, much hasn’t yet gelled around a new approach. What remains is the leftover framework from before, alongside a lot of disbanded structures. For example—where is the National Infrastructure Advisory Council? Where is the Cyber Safety Review Board (CSRB)? These bodies, created to tackle cyber risks, have been sidelined, supposedly awaiting better replacements.
So in these early days, even though the threat remains, we’ve effectively taken a five- or six-month hiatus from framing a renewed and fully strategic response.
LP: If you had to give one piece of advice to the current administration, what would it be?
DM: It’s easy for someone no longer in the administration to advise their successors. So I won’t do that—just share a couple of thoughts instead.
If we agree that nation-state risks are real—and I believe they are—and that CISA’s known exploited vulnerabilities and interagency concerns are valid, then we need to act with much greater urgency. Our ability to collaborate with the private sector and states, now with more responsibility, must become a higher priority.
LP: What happens if we don’t?
DM: Well, attackers hold huge advantages in this environment, and that’s largely because these exploited vulnerabilities remain unmitigated. That’s the first, and biggest, concern.
Second, if we take the China threat seriously—Salt Typhoon and Volt Typhoon—their ability to freely access and remain inside our critical infrastructure has to be a huge concern. When I was in, we worried about 2027 and threats to Taiwan, fearing China might miscalculate and disrupt our infrastructure to keep us out of ‘their business.’ That concern remains, because 2027 will still come.
Are we positioned to protect our critical infrastructure from being held hostage—so China can’t use those vulnerabilities to deter U.S. policy choices? The idea that the U.S. could be blackmailed into backing down from defending its Pacific interests is a major concern—one I had then and still have now. How well we’ll manage this remains to be seen, but if we have fewer capabilities in 2027 than if better plans had been made, we’ll be more vulnerable.
Yves, the article is all in bold.
Sorry, fixed!
So the private entities the interviewee mentions, that would be the likes of Palantir and Anduril would it not? He doesn’t mention what they may be and is not asked.
The org I used to work for, according to the security folks our membership system was pinged by the Chinese daily.
Also Crowdstrike, Lastpass, Kaspersky, etc.
I’ve been involved with computers and networks since the 70s and security since the late 90s and have authored and collaborated on FOSS encryption and security tools. Mussington talks about topics and trends that are well known and widely discussed. What’s not clear is what value federal bureaucracies such as CISA afford. This may be due to my ignorance, but Mussington does not here manage to explain the value. Indeed, talking about how the security problems have grown over time doesn’t persuade me that the department you’re defending has been very effective.
As Bill Binney speaking of NSA culture said, “But then after a few years I kind of figured it out. It really wasn’t that. The whole vision statement for them is keep the problem going so the money keeps flowing. The whole point was you had to have a problem to say you needed more money from Congress to do it, and that’s the way fundamentally almost all of them operate.”
Thank you, .Tom.
As someone who is not an expert in the space but felt a bit uneasy about the logic used throughout the article, you have captured my sentiments precisely.
“What’s not clear is what value federal bureaucracies such as CISA afford.”
Mentioned in the interview are:
USCYBERCOM
CISA = Cybersecurity and Infrastructure Security Agency
SRMA = Sector Risk Management Agencies
CIPAC = Critical Infrastructure Partnership Advisory Council
CSRB = Cyber Safety Review Board
NIAC = National Infrastructure Advisory Council
With all those bodies working on (cyber)security, the tone of the interview was that ultimately the private sector is the one to take measures because states do not have the competence, the federal government is not keen on regulatory actions, and industrial actors are the ones building the equipment that is target of cyberattacks — but they have few economic incentives to do so instead of externalizing the consequences of risk, while insurance frameworks are not conducive to correcting such behaviour.
In that nebulous organizational framework, a concrete aspect that seems to be worthwhile is the compilation and publication of KEV.
According to the interviewee, cybersecurity should not be political. Now I may have my acronyms mixed up here, but Taibbi (who briefly confused CISA and CIS in his Twitter Files reporting before correcting the minor error) came to the conclusion that both agencies were involved with censoring social media platforms for the Biden administration.
Hard to trust the spooks to ever behave themselves, and I take this guy’s concerns with a large shaker of salt.
‘the Trump administration has gutted the very agency designed to protect America’s most critical systems—cutting CISA’s budget by nearly half, eliminating nearly a third of its staff, and driving out decades of cybersecurity expertise.’
I’m thinking that it is all a part of some grand strategy. Gut a government agency until it cannot do its job anymore, sack much if not most of its workers – and then wait for the magic to happen. Some corporation will step up – one that has links with some in the Trump government – buy those government facilities and equipment on the cheap, hire those workers back for cheaper wages and fewer conditions, and then charge the government for the same work but at extortionate rates while sharing the profits with those in government who made it all happen. It’s all about the contracts, baby. And if America’s cybersecurity gets compromised because of how they gutted the CISA, well, it’s not personal, it’s just business.
IMO, computers themselves have been the grand criminal strategy, completely overlooked from CISA’s perspective. In the US, since ~1960, computers have displaced millions of white collar workers, not to mention millions of blue collar workers displaced by computer-assisted automation. The resulting wage theft dwarfs the threats Mussington worries about.
A back-of-the-envelope calculation dividing just the net worth of the Forbes 400 among 350 million USians works out to a theft of nearly $150,000 each, enough to house every family, with virtually no compensation in terms of quality of life.
I don’t know if actual humans building them, GramSci, is labor intensive; looks more like mental anguish intensive. And energy/water intensive.
The reason why kids know computers and phones better is because they are devoting their energies to quelling the onslaught of this technology? They’re fighting the fight unconsciously, and have plenty of energy? Putting people in cubes at this juncture seems to me overkill; computers put people in Kafkaesque mental labyrinths such that there is no need for physical cubes (to render them sufficiently alienated). I’m tempted to check on how many logic courses are given in this land during fall and spring. There are probably enough profs teaching it to have produced lots of critiques of software “logic” we could have been reading all along [or have they like the software become illogical as well?]. I fear what kids learn on these things amounts to wild goose chases…that will leave them with few skills to obtain, when they’re in say Ukraine, critical information directly from humans.
But, beyond all this theory, what I think for sure is true is that our civ can’t afford the AI they want to set up. Nor probably can it really afford so much bandwidth devoted to visuals. Wean the nation off them some, and we’ll be better off when outages result from heat waves, fires, ice, hurricanes, solar flares, and hackers yonder. The poor foolish souls that thought Stuxnet gave their lives meaning. Probably our nation could find something better to lure folks away from that path.
If you’re sitting in front of a screen all day surveilling others, your creativity will be shunted/jammed into your poor shadow. Which IMO could incentivize the latter to make you look silly, or alternatively make you become an actual menace…by causing you to perceive harmless beings as evil, at which point you may rail against them, jail them, starve them, shoot them, or bomb them. Projection.
Worth a mention – another area of US leadership was maintaining the list of Common Vulnerabilities and Exposures (CVE). Like many victims of DOGE cuts, it was defunded – but also maybe not – so it sits in limbo, barely functional.
Don’t get me started on states. The occasional franchise owner might have all their Burger Kings in one state but most targets of cyber threats are not plopped down in one state.
None of the comments mention elections.
I suspect that, beyond privatization (an essential aim of the Trump Administration), they want to control elections so that MAGA Republicans can win permanently. And of course I don’t think Trump wants to retire from the field at the end of his term.
I don’t think establishment types want MAGA republicans in perpetuity. They just want to make sure nobody like Bernie ever gets close to the presidency, and it’s a fully bipartisan effort.
That being said, a populace that agrees to hold elections without paper ballots hand marked and hand counted in public and allows machines to do it for them deserves whatever they get for a “result”, and they deserve it good and hard.
MAGA Republicans??? Democrats have done pretty well at controlling elections and now someone is worried about MAGA? Where have people been the last decade plus?
They control their primaries. The general elections are a different matter.
The elections have already been hacked by billionaires.
Ok. An inside-the-locker-room perspective.
1. Who is this guy.
Worked at RAND (“research teams”), then the DoD (“writing strategies”), then CISA on the management side. And it sounds like most of his work is theoretical.
The way the private infosec sector is organized is roughly as follows:
– “Blue Team” – people who defend organizations from being hacked, and mitigate any damage. Could be either internal staff, or, particularly for damage mitigation, outside consultants. Not as glamorous, but there’s a lot of blue-teamers out there. And still not enough.
– “Red Team” – a.k.a. “ethical hackers”. In the private sector, that means conducting penetration tests on organizations (“can we hack you?” “ok, we hacked you, here’s what you need to do to secure yourself”). Where the government gets involved, you’re basically designing or even deploying hacking tools for them, e.g. versus foreign governments, but the majority of red-teamers are not involved in this sort of thing. More glamorous and fewer in number than blue-teamers.
– “Researchers” – these are the people who periodically publish headlines like, we hacked every computer in the world using a piece of cheese and a specific application of String Theory. In other words, they look for vulnerabilities, including using techniques that most criminals would never bother with because of the expense or complexity involved. Fun to talk to, but can be a bit impractical.
– “Consultants” or “strategists” – every company needs a security policy. Someone has to write the thing, and tick all the regulatory boxes, and then let the blue-teamers and red-teamers in the room actually worry about practical implementation. [Or not – plenty of management teams just say “eh, it’ll never happen to us”.] Deloitte basically made a metric megaton of money mass-producing these weighty documents that not a great many people actually read.
THIS guy sounds like one of those, but with a specific government bent, so instead of thinking about how to secure a company from ransomware, or how to mesh your security policy with HIPAA requirements (for hospitals and such), it’s the Evil North Koreans(TM) trying to bring down civilization. Which is a subject I remember reading breathless books on back in high school, no joke, and yet here we still sit.
I do not mean to denigrate the “strategists” in the room, but it’s one thing if you get some practical experience in the field, and then move on up to Deloitte or whatnot for more money and less stress. When all you’ve got is the theoretical side, and coming out of RAND and the DoD…yeah.
As a side note, there are one-stop-shops like Palantir, who do it all to a greater or lesser extent. But those are the basic roles in the industry. CISA, however, in my extremely limited experience with them, has always been on the “strategy” side, but without the power to actually come in and force you to do stuff. Read into that what you will.
2. The first thing that stood out like a naked bodybuilder in the middle of a nunnery is what he did not say – the words “Zero Trust”. Which is not only a relatively “hot new” security concept in the industry, but, as of the Biden administration, literally the stated federal government policy on infosec – i.e. something CISA ought to be pushing, or at least be vaguely aware of. Interested parties can delve into (https://www.isms.online/knowledge/iso-27001-and-implementing-a-zero-trust-security-model/), but the basic idea is – the old castle-and-moat model of infosec does not work, because there are too many vulnerabilities and ways to breach. So now you ASSUME that a breach is going to happen, and implement internal controls to pre-emptively mitigate the impact. For example:
– siloing users, so that a guy in Accounting does not have write access to your customer database;
– creating and TESTING (far too few organizations do this thoroughly enough) your backup and restore procedures, should you get hit with ransomware;
– monitoring internal network traffic, so that you can respond before, and not long after, someone transfers out 100 GB of data to a server in Romania.
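To make the deny-by-default idea concrete, here is a minimal, purely illustrative sketch – the roles, resources, and grant list are all made up, not any real system:

```python
# Minimal sketch of a deny-by-default ("Zero Trust") access check.
# Role and resource names are hypothetical, for illustration only.

ALLOWED = {
    # (role, resource, action) tuples that are explicitly granted;
    # anything NOT listed here is denied by default.
    ("accounting", "invoices_db", "read"),
    ("accounting", "invoices_db", "write"),
    ("accounting", "customer_db", "read"),   # note: no write grant
    ("sales", "customer_db", "read"),
}

def is_allowed(role: str, resource: str, action: str) -> bool:
    """Deny by default: access requires an explicit grant."""
    return (role, resource, action) in ALLOWED

# The guy in Accounting can read, but not rewrite, the customer database:
assert is_allowed("accounting", "customer_db", "read")
assert not is_allowed("accounting", "customer_db", "write")
```

The point is the shape, not the ten lines of Python: there is no “inside the moat, therefore trusted” branch anywhere; every access is checked against an explicit grant.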
I can’t tell you how many times I’ve heard “Zero Trust” being discussed in industry seminars and conferences in the past couple of years, and I tend to avoid many of those! This guy ticked many of the other boxes in terms of standard verbiage – multifactor, authenticator, incident response, unpatched vulnerabilities.
It’s like going to the annual New York Auto Show, and not seeing a single BMW or a Mercedes. Wait…that’s exactly how it was this year! [The Europeans were very, very skimpily represented, for whatever reason.]
3. Let’s start by stripping away any and all political Evil North Koreans Crouching Panda Hidden Agenda stupid nonsense. Yes, state actors and state-sponsored hacks are very much a thing, and the Chinese press, by the way, makes just about as much noise about them as the American press. And yes, CISA, as a government agency, should worry less about scammers in Romania looking for a quick buck than about said Evil North Koreans. But for 98% of private organizations in the US, it’s the scammers. And the ransomware artists. And now the deepfake scams, and cloud security, and implementing Zero Trust, and what the devil is going on with quantum computing – I have literally just listed four of the seven or so webinars held by ISC(2) in the past month. [The other three are vulnerability management, industry certification, and “A Day in the Life of a Penetration Tester”.]
The other stuff he says, very wordily, actually makes sense. Yes, there are tons of unpatched vulnerabilities, and the infosec industry has been screaming about this for years. Most hacks that are actually hacks – as opposed to stealing someone’s credentials, insider attacks, or using “P@$$word123” as your password – exploit unpatched vulnerabilities. Yes, multifactor authentication, and a move away from SMS as the second factor – to biometrics, to “authenticator” apps, etc. – is a big thing that not enough people are doing. And “strong, unique passwords” has been a thing for at least a decade and a half.
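For the “authenticator apps” point: there is no magic in those rotating six-digit codes – they are just RFC 6238 time-based one-time passwords, which fit in a few lines of stdlib Python. The sketch below is checked against the RFC’s own published test vector, not any real service:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """RFC 6238 time-based one-time password, as shown by authenticator apps."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time() if t is None else t) // step   # 30-second window
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F                               # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", t=59 seconds,
# expected 8-digit SHA-1 code is 94287082.
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, t=59, digits=8))  # → 94287082
```

Which is also why the second factor only helps if the shared secret is provisioned and stored sanely – the code is deterministic given the secret and the clock.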
It’s basic stuff – we haven’t even gotten into the whole Zero Trust Architecture issue – but part of the problem is that, for decades, far too many organizations have not even done this basic stuff. So I support and endorse that part of the interview.
As for CISA…as I said, I have not personally interacted with them, and some of their “advisory” functions – like a vulnerability list – had been duplicated by private sector entities for a long time. That said, yes, I can completely see how the current administration will completely ignore an issue like infosec, while defunding agencies and groups that should ostensibly worry about it. Granted, CISA had not done itself many favors by playing up the whole Russiagate narrative, even during the Biden years – I visibly shuddered when I saw the then-head of CISA give the keynote at a CrowdStrike conference a couple of years ago, given how deeply CrowdStrike in particular was enmeshed in that whole fiasco. CISA’s job should have been, oh, I don’t know, getting all federal agencies on the same IT security standards, systems, and best practices. Actually being the government’s “Blue Team” instead of setting up a myriad of advisory bodies. But whatever.
4. Without going off for 20 pages.
The infosec landscape in the US is an ungodly mess, and it is not getting any better. To be sure, much of the infosec industry’s focus is on private organizations, because that’s where the money is(TM). But the same, I am given to understand, extends to government operations.
We have just seen how, in the context of the 12-day Israel-Iran war, cyber operations can significantly hamper someone’s ability to respond to a crisis situation. To my knowledge, this did not extend to trying to physically bring down civilian systems, but in the next round – and I fully believe there will be a next round – that is another possibility. Bombing the Israeli stock exchange is one thing; bringing its trading systems down for a week is quite another.
So now combine the two concepts. For now, the infosec industry in the US is busy trying to plug holes, a la the proverbial Dutch boy, viz. scammers from Romania, and not faring all too successfully. The possibility of a strategic actor taking advantage of the panda-monium (ha!) to wreak havoc in the US the next time it backs an Israel or a Ukraine in a shooting war, well, it exists. Especially as we seem so intent on wrecking even the mildest avocado-toast efforts to at least develop a nationwide security strategy and recommendations. [You want to shift this down to the states? Boy have I got stories…]
From this perspective, again, dismissing all the fancy code names, the guy has a point. It would not be right to dismiss him completely, even as he stays in the very shallow end of the intellectual pool as far as the overall discussion. For now, I suppose we just have to hope that scammers will be the worst of our problems.
This is great, thanks a ton!
Very interesting.
I had an acquaintance about 25 years ago who was a “Red Team” guy as you’ve described it. He told me how companies would hire him to hack into their systems, find the vulnerabilities, and then advise the companies on how to fix them. I mentioned that if he had found the flaws and prescribed the fixes, then presumably he also knew how to get around his own prescribed fixes if he ever felt like it.
He said, “Yes, of course”. Thus, apparently, “Zero Trust”.
Very illuminating indeed.
And one could see some parallels with pandemic preparedness and gaps not closed and procedures not tested and papers no one in the decision chain reads or properly drafts to begin with. Oh, the stories…
Zero Trust is just the repackaging of Defense in Depth. These agencies haven’t come up with any new strategies in the past 15 years, but the software services that manage fundamentals have gotten better. The problem is no one wants to pay attention to fundamentals … so we, at the advice of these agencies, move everything to the cloud so it is all easily accessible in one place when someone forgets to change a default password. For much, much more money, you get to be held hostage and face the same problems.
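A toy illustration of the “forgets to change a default password” failure mode described above – the service names and the credential list here are entirely made up, for illustration only:

```python
# Hypothetical sketch: flag services still running with vendor default
# credentials - the kind of fundamentals the comment says no one pays for.
KNOWN_DEFAULTS = {
    ("admin", "admin"),
    ("root", "toor"),
    ("admin", "password"),
}

def audit(services):
    """Return names of services whose credentials match a known default pair."""
    return [name for name, user, pw in services if (user, pw) in KNOWN_DEFAULTS]

deployed = [
    ("billing-db", "svc_billing", "kR7!x2q"),
    ("cloud-admin-panel", "admin", "admin"),   # never changed after setup
]
print(audit(deployed))  # → ['cloud-admin-panel']
```

Real default-credential scanners work against far larger vendor lists, but the logic is exactly this dumb – which is the point: moving everything to one cloud console makes that single unchanged password the key to everything.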
To get a more complete overview of CISA and its history, check out “The Weaponization of CISA: How a ‘Cybersecurity’ Agency Colluded with Big Tech and ‘Disinformation’ Partners to Censor Americans,” Interim Report of the Committee on the Judiciary and the Select Subcommittee on the Weaponization of the Federal Government, U.S. House of Representatives (June 26, 2023).
Brief overview of major findings:
“Founded in 2018, CISA was originally intended to be an ancillary agency designed to protect “critical infrastructure,” and guard against cybersecurity threats.
In the years since its creation, however, CISA metastasized into the nerve center of the federal government’s domestic surveillance and censorship operation on social media.
By 2020, CISA routinely reported social media posts that allegedly spread “disinformation” to social media platforms. By 2021, CISA had a formal “Mis-, Dis-, and Malinformation” (MDM) team. In 2022 and 2023, in response to growing public and private criticism of CISA’s unconstitutional behavior, CISA attempted to camouflage its activities, duplicitously claiming it serves a purely “informational role.”
At the beginning of the Executive summary of this House report, Jen Easterly, David Mussington’s boss at CISA stated:
“One could argue we’re in the business of critical infrastructure and the most critical infrastructure is our cognitive infrastructure, so building the resilience to misinformation and disinformation, I think, is incredibly important.”
In addition, see all the writings of Lee Fang, Mike Benz, and Matt Taibbi on CISA and its history.
It is also disturbing to me to see Lynn Parramore’s interview with David Mussington back on Oct. 23, 2020, in which he was asked:
“Do you see domestic disinformation threats as significant on social media this time around?”
“DM: I do, but I see it as a little bit more nuanced than what is typically reported on.”
Thanks, Gulag! Here’s the report.
Lynn Parramore’s interview of David Mussington brings to mind the following quote by Upton Sinclair: “It is difficult to get a man to understand something when his salary depends upon his not understanding it.”
After reading the article and so many comments, people (such as me) seem to agree that these are crocodile tears. With so much mistrust in our (used to be) government institutions, this is just one more batch of US agencies that have turned against their own people. It’s a tough question, but if forced to answer, I’d say good riddance. More complex than that, of course, but I find myself “pulling” for other countries – BRICS, for example. We are only a country in name.
Is there any real penalty for poor security? If a company that has your personal information on file lets hackers exfiltrate it, can they be prosecuted for allowing that? (I get that lawyers can cook up a class action suit where you the person whose data was exfiltrated get $2.17 in the end while the lawyers get multiple thousands of dollars in fees. That’s not what I am asking about.) If a company forces you to create an online account to interact with them and then lets your credentials get stolen, do they suffer any meaningful fines or other legal consequences? “Just pick a good password, our system is totally secure.” But everything is “totally secure” up to the moment it isn’t and you’ve been hacked. It’s all talk. If we had been serious about security, Microsoft Windows computers would have been banned from connecting to the internet sometime before 2000. At least until such time as they could prove they were secure enough not to cause massive problems for everyone. IOT devices with default passwords would’ve been banned. But, no, it’s all about making it easy for corporations to make money. Until there are real and sizable penalties for insecure corporate and government practices, it’s all talk.