“Effective Altruism” Network Infiltrates Congress, Federal Agencies to Create Silicon Valley Favoring AI Policies

Politico recently ran an important story on how the AI policy sausage is being made that does not appear to be getting the attention it warrants. That may be because the piece, How a billionaire-backed network of AI advisers took over Washington, tried to do too many things at once. It maps the main nodes of this sprawling network. Given that Politico is oriented towards Beltway insiders, showing how this enterprise has entrenched itself in policy-making and identifying some of the key players is no small undertaking, considering that most are Flexians, as in they wear many hats and are linked to multiple influence groups. For instance:

RAND, the influential Washington think tank, received a $5.5 million grant from Open Philanthropy in April to research “potential risks from advanced AI” and another $10 million in May to study biosecurity, which overlaps closely with concerns around the use of AI models to develop bioweapons. Both grants are to be spent at the discretion of RAND CEO Jason Matheny, a luminary in the effective altruist community who in September became one of five members on Anthropic’s new Long-Term Benefit Trust. Matheny previously oversaw the Biden administration’s policy on technology and national security at the National Security Council and Office of Science and Technology Policy…

In April, the same month Open Philanthropy granted RAND more than $5 million to research existential AI risk, Jeff Alstott, a well-known effective altruist and top information scientist at RAND, sketched out a plan to convince Congress to pass licensing requirements that would “constrain the proliferation” of advanced AI systems.

In an April 19 email sent to several members of the Omidyar Network, a network of policy groups established by billionaire eBay founder Pierre Omidyar, Alstott attached a detailed AI licensing proposal which he claimed to have shared with approximately “40 Hill staffers of both parties.”

The RAND researcher stressed that the proposal was “not a RAND report,” and asked recipients to “keep this document and attribution off the public internet.”

You can see how hard this is to keep straight.1 And the fact that people pretend to draw nice tidy boxes around their roles is hard to take seriously. Someone senior at RAND could conceivably be acting independently in writing an op-ed or giving a speech. Those are not hugely time-intensive, and the individual could think it's important to correct misperceptions, highlight certain issues under debate, or simply elevate their professional standing by giving an informative talk in an area where they have expertise. But Alstott's scheme and his promotion of it sound like they took much more effort, raising the question of how a busy professional found the time to do that much supposed freelancing.

You can see how the need to prove up the network's existence and then describe how it operates consumes a lot of real estate, particularly when further larded up with having to quote the various protests of innocence by the apparent perps.

So we’ll give short shrift to the description of the key actors and focus on the policies they are pushing, which hype the danger of AI turning into Skynet and endangering us all while ignoring real and present hazards like bias and just plain bad results that users rely on because AI.

While this is all helpful, it still does not get to what we were told months ago by a surveillance state insider about the underlying economic motivations for, of all people, diehard Silicon Valley libertarians to be acting so out of character as to seek regulation. His thesis is that AI investors have woken up and realized there is nothing natively protectable or all that skill-intensive about AI. All you need is enough computing power. And computing power is getting cheaper all the time. On top of that, users could come up with narrow applications and comparatively small training sets, like a law firm training a model on its own correspondence so as to draft certain types of client letters.
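To make the insider's "small training sets" point concrete, here is a minimal sketch of that sort of narrow fine-tuning, using the open-source Hugging Face transformers and datasets libraries. The model name and the corpus file are hypothetical stand-ins; this is an illustration of how little machinery such a project requires, not a recommended setup:

```python
# Illustrative sketch only: adapt a small open language model to a firm's
# own documents (hypothetical file "client_letters.txt", one letter per line).
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "gpt2"  # stand-in for any small open model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# The firm's private corpus -- no giant scraped training set required.
dataset = load_dataset("text", data_files={"train": "client_letters.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="letters_model",
        num_train_epochs=3,
        per_device_train_batch_size=2,
    ),
    train_dataset=tokenized,
    # mlm=False yields standard next-token (causal) language-modeling labels
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Nothing here requires frontier-scale compute, which is precisely the insider's point.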

So the promoters are creating a panic about purported AI dangers so as to restrict AI development and ownership to “safe hands,” as in big tech incumbents, and bar development and use by the small fry.

So it’s disappointing and frustrating to see such an in-depth piece get wrapped around the axle of who is doing what to whom and not get all that far in considering the critically important “why.” The why is that tech players have gotten used to having or creating barriers to entry, via scale economies and customer switching costs (who wants to learn a new spreadsheet program?). They are not used to operating in a setting where small players or even customers themselves can eat a lot of their lunch.

To recap the article: an organization called Open Philanthropy,2 funded mainly by billionaire Facebook co-founder Dustin Moskovitz and his wife Cari Tuna, is paying for “more than a dozen AI fellows” who work as Congressional staffers or in Federal agencies or think tanks. This is presented as a purely charitable activity, since the sponsor is a [squillionaire-financed] not-for-profit, even though it is clearly pushing an agenda designed to protect and increase the profits of Silicon Valley incumbents and the venture capitalists investing in AI. But there is yet another layer of indirection, in that the Open Philanthropy monies are being laundered through the Horizon Institute for Public Service, yet another not-for-profit… created by Open Philanthropy.3

Here are the guts of the story:

Horizon is one piece of a sprawling web of AI influence that Open Philanthropy has built across Washington’s power centers. The organization — which is closely aligned with “effective altruism,” a movement made famous by disgraced FTX founder Sam Bankman-Fried that emphasizes a data-driven approach to philanthropy — has also spent tens of millions of dollars on direct contributions to AI and biosecurity researchers at RAND, Georgetown’s CSET, the Center for a New American Security and other influential think tanks guiding Washington on AI.

In the high-stakes Washington debate over AI rules, Open Philanthropy has long been focused on one slice of the problem — the long-term threats that future AI systems might pose to human survival. Many AI thinkers see those as science-fiction concerns far removed from the current AI harms that Washington should address. And they worry that Open Philanthropy, in concert with its web of affiliated organizations and experts, is shifting the policy conversation away from more pressing issues — including topics some leading AI firms might prefer to keep off the policy agenda…

Despite concerns raised by ethics experts, Horizon fellows on Capitol Hill appear to be taking direct roles in writing AI bills and helping lawmakers understand the technology. An Open Philanthropy web page says its fellows will be involved in “drafting legislation” and “educating members and colleagues on technology issues.” Pictures taken inside September’s Senate AI Insight Forum — a meeting of top tech CEOs, AI researchers and senators that was closed to journalists and the public — show at least two Horizon AI fellows in attendance.

Over the course of the article, author Brendan Bordelon quotes experts who depict the “AI will soon rule humans” threat as far too speculative to worry about, particularly when contrasted with the concrete harm AI is doing now, like too often misidentifying blacks in facial recognition programs.

Perhaps your humble blogger is reading the wrong press, but I have not seen much amplification of the “AI as Skynet” meme, beyond short remarks by the likes of Elon Musk. That may be because the Big Tech movers and shakers are so confident of their takeover of the AI agenda in the Beltway that they don’t feel the need to worry about mass messaging.

Bordelon describes the policies the Open Philanthropy combine is promoting, and points out the benefits to private sector players that have close connections to major Open Philanthropy backers:

One key issue that has already emerged is licensing — the idea, now part of a legislative framework from Blumenthal and Sen. Josh Hawley (R-Mo.), that the government should require licenses for companies to work on advanced AI. [Deborah] Raji [an AI researcher at the University of California, Berkeley,] worries that Open Philanthropy-funded experts could help lock in the advantages of existing tech giants by pushing for a licensing regime. She said that would likely cement the importance of a few leading AI companies – including OpenAI and Anthropic, two firms with significant financial and personal links to Moskovitz and Open Philanthropy…

In 2016, OpenAI CEO Sam Altman led a $50 million venture-capital investment in Asana, a software company founded and led by Moskovitz. In 2017, Moskovitz’s Open Philanthropy provided a $30 million grant to OpenAI. Asana and OpenAI also share a board member in Adam D’Angelo, a former Facebook executive.

Having delineated the shape of the network, Bordelon can finally describe how the “AI is gonna get you” narrative advances the interests of the big AI incumbents:

Altman has been personally active in giving Washington advice on AI and has previously urged Congress to impose licensing regimes on companies developing advanced AI. That proposal aligns with effective-altruist concerns about the technology’s cataclysmic potential, and critics see it as a way to also protect OpenAI from competitors.

The article describes how an Open Philanthropy spokescritter, one Levine, tried claiming that a licensing regime would hobble the big players more than the small fry. That’s patent nonsense, since licensing costs are largely fixed: a giant firm can absorb the financial and administrative burden as a rounding error, while the same outlay could sink a small startup. Not surprisingly, knowledgeable parties lambasted this claim:

Many AI experts dispute Levine’s claim that well-resourced AI firms will be hardest hit by licensing rules. [Suresh] Venkatasubramanian [a professor of computer science at Brown University] said the message to lawmakers from researchers, companies and organizations aligned with Open Philanthropy’s approach to AI is simple — “‘You should be scared out of your mind, and only I can help you.’” And he said any rules placing limits on who can work on “risky” AI would put today’s leading companies in the pole position.

“There is an agenda to control the development of large language models — and more broadly, generative AI technology,” Venkatasubramanian said.

The article closes by describing how other groups like Public Citizen and the Algorithmic Justice League are trying to enlist support for addressing AI risk to civil liberties. But it concludes that they are outmatched by the well-funded and coordinated Open Philanthropy effort.

So more and more of what could be the commons is being grabbed by the rich. Welcome to capitalism in its 21st century incarnation.
_____

1 The fact that the article refers to the effective altruist community so many times is also creepy. It appears “effective altruism” still has good brand associations in Washington and Silicon Valley, even though Sam Bankman-Fried’s outsized role should have tarnished it permanently. Hearing that phrase makes me think that rich people are keen to extend their control over society to promote their goodthink and good action, and that because they do it through not-for-profits, there can’t conceivably be ulterior motives, like learning how to get policies implemented, building personal relationships with influential insiders, and ego gratification. Even the official gloss comes off as a power trip.

2 The use of “open” in the name of a not-for-profit should come to have the same negative association as restaurants called “Mom’s”. I attended a presentation at an INET conference in 2015 where Chrystia Freeland was interviewing George Soros. Soros bragged that his Open Society foundation had directly or indirectly given a grant to every major figure in the Ukraine government. Since it was known even then that Banderite neo-Nazis were disproportionately represented there, at least 15% versus about 2% of the general population, that meant Soros was touting his promotion of fascists.

3 The article contains many pious defenses of this arrangement, such as that Open Philanthropy is not promoting specific policies via the Horizon Institute, has no role in the selection of its fellows, and so on.


30 comments

  1. GDmofo

    I can’t be the only one that sees “Effective Altruism” for what it is, woke Ayn Rand-ism, right?

    1. Michaelmas

      GDmofo: I can’t be the only one that sees “Effective Altruism” for what it is, woke Ayn Rand-ism, right?

      In fact, a lot of ‘effective altruism’ recognizably derives from the work of — and maybe it’s a perversion of that work, maybe some of it isn’t — the philosopher Derek Parfit.

      https://en.wikipedia.org/wiki/Derek_Parfit

      This is not to say that “effective altruism” once it gets into the grubby little American hands of the likes of Sam Bankman-Fried, Marc Andreessen, or Eric Schmidt (the OP doesn’t mention him, but he’ll be in the mix somewhere) can’t be perverted to serve a libertarian Tech Bro agenda.

      But Parfit’s theories on personal identity, for instance, essentially recapitulate Buddhism in a secular, intellectual analysis-driven mode, which is why someone once walked into a Buddhist monastery in Tibet and found the monks chanting lines from Parfit’s REASONS AND PERSONS.
      https://www.theguardian.com/world/2017/jan/12/derek-parfit-obituary

      1. Michaelmas

        Anyway, Yves, thanks for this. And yes ….

        YS: we were told months ago by a surveillance state insider about the underlying economic motivations for of all people, diehard Silicon Valley libertarians to be acting so out of character … His thesis is that AI investors have woken up and realized there is nothing natively protectable or all that skill intensive about AI. All you need is enough computing power. And computing power is getting cheaper all the time.

        This seems broadly right.

        1. digi_owl

          Watch all the lessons learned from crypto be applied to AI, for example making things more cost-effective by moving from GPUs to ASICs (which is why Nvidia is talking up ray tracing and DLSS on its latest cards, as they act as a demo for the power of its “AI” circuitry).

      2. digi_owl

        Buddhism with the “religion” filed off seems to be all over HR departments and the like these days. Things like meditation to cope with office stress, etc.

      3. witters

        The trouble with Parfit was that he had absolutely no political intelligence. He simply bracketed it out in the way utilitarians (see Peter Singer) do. They assume – ridiculously – that what really matters is something the virtuous (i.e. dedicated) utilitarian (or enough of them, with money enough) can bring about in the already existing politico-economic order. Thus they in effect legitimate (perpetrate and perpetuate) this order, and here enters “effective altruism”…

  2. TomDority

    It comes down to grabbing first-mover advantage – at all costs and for all profits. Businesses can’t be expected to have ethics at the expense of shareholder value; that’s just not what businesses are legally required to do. Civil liberties and the commons become opportunities for profit, for imposing overhead and influence – all legal, since the legislatures made the laws. What could go wrong with using AI to alleviate the tremendous burdens of thinking, solution-finding, creativity et al. when AI is directed by ‘free market’ forces – the market free for financial rentiers and financialization? I mean, why work and think when you can just get it over the other fellow with AI first-mover advantage.

  3. flora

    I guess Congress and pols learned…uh…something from Sam Bankman-Fried’s FTX scam. They gained…”experience.” /oy

    WaPo. Oct 18th

    Effective Altruism Is as Bankrupt as Sam Bankman-Fried’s FTX

    “The movement’s leaders are surely embarrassed by their association with a man on trial for defrauding investors of billions of dollars – but they might argue that Bankman-Fried is irrelevant to their own moral and intellectual standing. ”

    https://www.washingtonpost.com/business/2023/10/18/effective-altruism-is-as-bankrupt-as-samuel-bankman-fried-s-ftx/5fc19a94-6db6-11ee-b01a-f593caa04363_story.html

    Sure, sure… I beleeve them. / ;)

    1. flora

      And from the same article:

      What’s interesting is whether Effective Altruists could do what he is accused of doing and still be in the right so long as they used the proceeds to advance human happiness. The movement’s answer to this question seems to be: No, cheating people is obviously wrong. Most sane people would agree. The problem with this answer is that it shows Effective Altruism to be something of an intellectual scam….

  4. notabanker

    Andreessen was on JRE a while back and said that big tech wants to regulate AI because they know they will be able to staff the regulator, and that their real mission is to prevent open-source AI. So the big tech lobbyists are aligning with the doom-and-gloomers, whom they don’t actually believe, to pump the narrative of “rogue actors” to be shut down at all costs. Because, of course, Google does no evil.

  5. KLG

    William MacAskill, the world’s youngest Associate Professor of Philosophy, at Oxford(!) of course, has a lot to answer for. Not that he will ever be held to account. Read this if you have a strong constitution. It would seem that SBF will answer for his misbehavior, though. But I’ll wait for the verdict.

  6. Vicky Cookies

    So, if I’m understanding this, we have some tech capitalists attempting to kick away the ladder, and to buy state charters for their market share, using ideology, ignorance, and fear as cover? Gross. Thanks for dis-and-re-assembling the original reporting, which obscured this under the weight of its own effort.

  7. Paradan

    I think they might also be concerned that the public could use AI to detect sophisticated fake videos, etc. So if they wanted to create an atrocity video and use it as a casus belli, they could get called out on it.

  8. Camelotkidd

    In The Triumph of Conservatism, Gabriel Kolko argues that business leaders, not reformers, pushed for government regulations – not to hobble monopolies but to stymie competition.
    Plus ça change…
    Also, pertaining to AI, I’ve read a couple of articles lately suggesting that US foreign policies are being implemented using AI, and that’s why they are such a hot mess.

    1. steppenwolf fetchit

      Which business leaders pushed for the Pure Food and Drug Act during the Teddy Roosevelt Administration?

      Which business leaders pushed for the Fair Wages and Hours Act during the FDR Administration?

      Which business leaders pushed for the Endangered Species Act?

      Which business leaders pushed for the Clean Water Act and for the initial formation of the EPA as it was initially formed to work during its very first 3-5 years of existence?

  9. Matthew G. Saroff

    I would note that at the core of “Effective Altruism” is a bit of mathematical/economic nonsense: they reverse the time discount rate.

    Econ 101 says that capital you have NOW is worth more than capital you have in the future, because you can use that capital in the interval between now and later to generate revenue through investment.

    This is why long-term bonds have higher interest rates than short-term bonds.

    Effective altruism says that a life (human capital) today is worth less than a life in (for example) 2525, even though that earlier life might actually create solutions to problems that would affect the person in the year 2525.
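    To put numbers on the convention being reversed (the 5% rate below is just an assumed figure for illustration):

    ```latex
    \[
      PV = \frac{FV}{(1+r)^{t}}
      \qquad\Longrightarrow\qquad
      PV = \frac{\$100}{(1.05)^{10}} \approx \$61.39
    \]
    % Standard discounting: at r = 5%, $100 arriving in ten years is worth
    % about $61 today. The longtermist move described above amounts to
    % setting r <= 0, so a far-future life counts at least as much as a
    % present one.
    ```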

    “Effective Altruism” is, to quote JK Galbraith, “One of man’s oldest exercises in moral philosophy; that is, the search for a superior moral justification for selfishness.”

  10. JonnyJames

    Thanks Yves for posting this. It’s a dilemma: we need to know what is going on, but the facts are depressing.

    >”The use of “open” in the name of a not-for-profit should come to have the same negative association as…”
    The Orwellian nomenclature is almost a parody of itself. A bit of fun: How about Effective Narcissism? Covert Sociopathy?

    The article also traces the larger issue of networks of interests (epistemic communities) disproportionately influencing our “elected officials”. These groups have converging interests and can act in concert without appearing to do so.

    They still teach kids in school that Congress members are elected by their constituents and act according to their preferences. We need some updated textbooks to explain how politics/legislation (sausage making) really works.
    Also, a new definition of “democracy” needs to be worked out.

  11. Paris

    What about other countries? Here hoping this is just a US thing and the Russians and Chinese will do as they please. So much for Amerikan kapitalism…

  12. David in Friday Harbor

    Perusing the “Horizon fellows” website where these obvious plants are indoctrinated — and the related B.S. of “Apollo Research” that was funded by “Open Philanthropy” through “Rethink Priorities” — it’s quite evident that the SBF model of how to co-opt government regulation toward favoring certain monopolists is alive and kicking.

    Thanks for breaking this down into an understandable format. I was wondering what on earth the business model for A.I. might be. Full employment for cease-and-desist trolls is clearly part of it…

  13. fjallstrom

    Since the FTX scandal I have been reading up a bit on Effective Altruism. This is a short version of how it connects to AI:

    Effective Altruism is one of the later versions of what has been called TESCREAL: transhumanism, extropianism, singularitarianism, cosmism, rationalism, Effective Altruism, and longtermism. These are overlapping futuristic movements (or cults) in Silicon Valley. One of the leaders of “rationalism” (which is as rational as scientology is scientific) is Eliezer Yudkowsky, high school dropout and self-declared genius and AI researcher. In particular, Yudkowsky has been claiming for over a decade that AI is an existential risk: it will happen and then quickly dominate the world. Now, this is AI as in Terminator or The Matrix, not as in a chatbot. A digital god, if you like. Therefore Yudkowsky has been “researching” AI alignment, which is how to make it a good god instead of an evil god. I believe Peter Thiel has bankrolled his self-made research institute.

    Now, on the surface Effective Altruism is about effective giving and earning to give and all that. But scratch the surface and the most optimal cause is saving humanity. For example, from the evil AI. And once you get there, no expense is too big. Open Philanthropy is part of this milieu, and has been since it was founded a decade ago.

    Now, some people are in cults for the grift, others for power or sex, but there are also true believers. The funders of Open Philanthropy already have money and power, and can probably buy all the sex they want, so odds are they are true believers. Remember, they have been funding these weird ideas about how to control the Matrix for a decade, long before AI was even an early bubble. Having had a decade to formulate these ideas and create policy papers and such (again, about how to control the Matrix), they have a first-mover advantage.

    It will probably not go anywhere unless it also serves the needs of capital, as you outline, but I think it is relevant to know that these are very weird people, in a cult-like movement, acting to bend the Matrix for “good”. All the more reason to oppose their ideas, of course.

    1. wsa

      I call TESCREAL the Alien Death Cult. Their definition of “altruism” is radically different from how most people use the word, focused as it is on theoretical post-humans of the far future. Contemporary humans are just some larval stage of a more glorious, silicon species. As an example, the self-driving car people aren’t particularly interested in how many pedestrians are mowed down, except as training data. The future benefits justify the death now.

      We have plenty of historical data on what happens when people with power adopt a worldview that thinks killing or immiserating people now paves the way for salvation in the future. And a non-trivial percent of people working on AI these days have such a world-view.

  14. ChrisRUEcon

    Eyes just about rolling out of my skull …

    The GPU manufacturers who were left holding the bag as crypto imploded have a new lease on life with AI. But it’s not quite a case of “build it and they will come”, rather “prove you can sell it if they come”. A colleague reminded me of the hype cycle the other day (via Wikipedia). Big money is still riding that crest of “inflated expectations”.

    Soon, though …

  15. N

    If you want a real bombshell article about effective altruism, check out this paper from the University of Antwerp: https://repository.uantwerpen.be/link/irua/197087

    > From the very beginning, EA discourse operated on two levels, one for the general public and new recruits (focused on global poverty) and one for the core EA community (focused on the transhumanist agenda articulated by Nick Bostrom, Eliezer Yudkowsky, and others, centered on AI-safety/x-risk, now lumped under the banner of ‘longtermism’). The article’s aim is narrowly focused on presenting rich qualitative data to make legible the distinction between public-facing EA and core EA.

    There is a table in the article distinguishing the differences between ‘core EA’ and ‘public-facing EA’ which really helps clarify things.

  16. hunkerdown

    Thank you, Yves, for this superb post. I don’t have anything to add to your excellent decoding, except to note that there seems to be a lot of Chinese talent working in the AI/ML field, especially on resource efficiency. US residents may benefit from their open-source research fruits regardless of whether US legislators shut down garage research here.
