Bryan’s old post on Batman vs. Superman is highly relevant to the AI x-risk debate.

https://www.econlib.org/archives/2016/12/the_dumbest_thi.html

https://twitter.com/ageofinfovores/status/1658121358763606020?s=46&t=yzu7Smeja7K2V_b0NhPGBA

Yes, I've had the same thought in different contexts. I agree with Bryan that AI leading to an extinction event is not a big worry, but when I read some of the comments from alarmists about how to deal with the AI revolution, it did occur to me that one way to make enemies of the AIs is to treat them as enemies. Bostrom makes the same point in Superintelligence, although mostly just in passing.

1. There is an extreme asymmetry between a global catastrophe and a global extinction event. Eliminating a majority of the human population preserves the possibility of a long future for humanity because we can rebound, but total extinction means all of that future human utility is lost. Parfit makes this point in Reasons and Persons. The number of possible future people is extraordinarily large, so the loss from extinction is really, really bad (see Bostrom's article "Astronomical Waste"). For this reason, I weigh a nuclear war as extraordinarily less bad than extinction by AI.

2. Using past data about existential catastrophes can be misleading because of an issue that Ćirković et al. (2010) call the "anthropic shadow." We will necessarily underestimate the frequency of world-destroying events because, had they occurred, we would not be around to record them (a rough simulation sketch of this selection effect follows below).

3. Certain new technologies (AI, bioweapons, nanobots, etc.) are potentially much more lethal than past wars and may make your poison analogy unfitting. Imagine that for the past few thousand years humanity created one new substance per decade that you had to try, but in the current decade we are making hundreds, and you have to consume them all. Eventually, one might be lethal.
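Point 2 is easy to see with a toy simulation. This is a minimal sketch only; the rates below are invented purely for illustration. In histories where an extinction-level catastrophe occurred, no observers remain to write the record, so the frequency inferred from surviving records is biased low.

```python
import random

# Minimal sketch of the "anthropic shadow" selection effect (point 2 above).
# All rates are invented for illustration only.
CATASTROPHE_RATE = 0.02   # true per-century chance of a global catastrophe
P_EXTINCTION = 0.5        # chance a given catastrophe leaves no observers
CENTURIES = 100           # length of each simulated historical record
WORLDS = 200_000          # number of simulated histories

estimates = []
for _ in range(WORLDS):
    recorded, alive = 0, True
    for _ in range(CENTURIES):
        if random.random() < CATASTROPHE_RATE:
            if random.random() < P_EXTINCTION:
                alive = False   # no one is left to write the record
                break
            recorded += 1       # survivable catastrophes do get recorded
    if alive:
        estimates.append(recorded / CENTURIES)

print(f"true catastrophe rate:                {CATASTROPHE_RATE:.4f}")
print(f"rate inferred from surviving records: {sum(estimates) / len(estimates):.4f}")
# Surviving observers see roughly half the true rate with these numbers.
```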

If Leftism achieves global dominance, would that be a global catastrophe or a global extinction event? Can we generalize from the Leftist mass murders in the Soviet Union, China, and Cambodia?

Global totalitarianism is probably a global catastrophe.

Probably?! In what context is it not?

Speaking of which,

If you see ten troubles coming down the road, you can be sure that nine will run into the ditch before they reach you.

— Calvin Coolidge

Anybody know an exact source?

Where is this lucky road?

Got to be honest, that last paragraph was a big "I feel seen" moment. I am very much one of those people who intellectually believe in AI x-risk but practically do nothing about it.

And yes I know, stated versus revealed preferences... but I'd like to point out that it's the same attitude I had about what was then called the novel coronavirus back in February 2020, and that did indeed turn out to be a case when my head was right and my heart was wrong.

It seems like you are misapplying the point of his comment. Yes, his concluding statement is not great for backing up his main point, because it applies whether tails are fat or thin, but from everything I've read of his, he has two major and related points (before the antifragile stuff): (1) people get complacent because they assume things are normally distributed without sufficient reason (and sometimes despite obvious evidence against it), and (2) they underappreciate the asymmetry of "payoffs". The concluding sentence you reference only supports point 2. That doesn't really detract from point 1.

We have a rough approximation of how likely pandemics are to occur in any year, but that approximation is based on relatively few samples considering how rare we think they are, and it's possible we have been in a "lull" without realizing it because of the small sample size. Our approximation of how likely pandemics are to occur in a given year is based on an even smaller sample if you consider the sample to be something approximating today's conditions, which would include, among other things, a world with >6B people, daily international travel, gain-of-function research, and whatever other dozens of factors might be relevant.
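To make the "lull" worry concrete, here is a minimal sketch with an assumed (not estimated) pandemic rate, showing how often a century of clean records arises even when the underlying risk is substantial:

```python
import random

# Toy illustration of the small-sample point above; the rate is assumed
# for illustration, not an actual estimate of pandemic frequency.
TRUE_RATE = 1 / 50     # assumed chance of a severe pandemic in any given year
WINDOW = 100           # years of history we happen to be looking at
TRIALS = 100_000       # simulated 100-year records

lulls = sum(
    all(random.random() >= TRUE_RATE for _ in range(WINDOW))
    for _ in range(TRIALS)
)
print(f"chance a {WINDOW}-year record shows zero severe pandemics: {lulls / TRIALS:.1%}")
# Roughly (1 - 1/50)**100 ≈ 13%: a long, quiet record is entirely consistent
# with a meaningful underlying rate.
```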

I thank whoever added my link at the bottom. That is something I started a number of months ago, but family issues and health care created unexpected problems.

Consider this analogy. The main cause of antibiotic resistance is antibiotic use. Antibiotic resistance is an example of evolutionary adaptation by natural selection; it's arguably a stochastic process with dire exponential implications. Health professionals who prescribe antibiotics are persuaded (by public health recommendations/mandates) to minimize their prescribing in order to slow or limit the general progression of antibiotic resistance, based on the fact that most infections are largely viral and self-limiting with minimal morbidity. This slows the progression of antibiotic resistance (flattens the curve). However, personally, say my daughter had an infection, was unwell, and had an exam week ahead: I would prescribe an antibiotic empirically, just in case it was bacterial (a 1 in 100 chance, say), improving her welfare and increasing the likelihood she gets better quicker, regardless of the larger community effect of overprescribing antibiotics causing increased resistance. It will not affect her health personally long term if she's generally healthy. I will definitely adhere to antibiotic usage guidelines otherwise.
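A rough expected-value sketch of that individual-versus-collective tension (every number below is invented for illustration, not a clinical estimate):

```python
# Back-of-envelope version of the prescribing dilemma described above.
# All numbers are invented for illustration.
p_bacterial = 0.01              # "1 in 100 chance" the infection is bacterial
benefit_if_bacterial = 10.0     # welfare gain (arbitrary units) when the drug helps
external_cost_per_course = 0.5  # resistance cost each extra course imposes on society
share_borne_by_patient = 1e-6   # fraction of that cost this one patient ever feels

private_gain = p_bacterial * benefit_if_bacterial                       # 0.10
internalized_cost = external_cost_per_course * share_borne_by_patient   # 0.0000005

print(f"private expected gain: {private_gain:.6f}")
print(f"internalized cost:     {internalized_cost:.6f}")
print(f"full social cost:      {external_cost_per_course:.6f}")
# Prescribing looks clearly worthwhile to the individual, even in a case where
# the full external cost outweighs the private gain once everyone reasons this way.
```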

This is analogous to the AI problem: great personal benefit, but an increasing possibility of global apocalypse if the "overprescribing" continues. The fat-tail event is only ever getting closer at an accelerating rate. Or is it? You tell me. A good case for authoritarianism, right? No. This is a dilemma, and putting numbers on all of this is very difficult, but just in case..., which is why Taleb is right in one way! Further, the possibility of an apocalyptic outcome induces fear, which in itself has ugly possibilities (see climate-change fear). This is why Taleb gets cranky! He doesn't have authoritarian tendencies, but his conclusions lead to authoritarian measures, which they did (masks during COVID). Somewhere deep in his consciousness he is struggling with this. This problem needs to be solved by markets, not by serfdom, but we are just not shrewd enough to take the hard road.

Remember that statistics is one of those wonderful sciences where two equally expert mathematicians can tease two contradictory meanings out of the data. The odds of mass extinction from a space object colliding with Earth are higher than the odds of dying in an airplane crash. That's basic arithmetic. The key number is that an event on this scale only happens once in a million human lifetimes. Forget the arithmetic, forget the statisticians, don't bother looking for an envelope to write on the back of.
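For what it's worth, the back-of-the-envelope version is short. This sketch uses only the figure given in the comment ("once in a million human lifetimes") and nothing else:

```python
# Back-of-envelope calculation using only the figure quoted above
# ("once in a million human lifetimes"); nothing here is an independent estimate.
lifetimes_between_events = 1_000_000  # assumed frequency of an extinction-scale impact
fraction_killed = 1.0                 # by definition, everyone dies in such an event

p_death_per_lifetime = fraction_killed / lifetimes_between_events
print(f"per-lifetime chance of dying in such an impact: {p_death_per_lifetime:.0e}")
# ~1e-06 per lifetime; whether that beats the airplane figure depends entirely
# on which aviation statistic you plug in next to it.
```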

"allied nuclear countries are willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs."

We don't seriously engage the crazy guy on the corner ranting about the end of the world... Are we just watching these people for entertainment?

Allied nuclear countries are "willing to run some risk of nuclear exchange" for far smaller things than the risk of total human extinction. Many of those things are plausibly even things you or I would agree are worthwhile, like aiding Ukraine or Taiwan.

If you think the above passage is crazy, it's probably because you just totally dismiss the premise that AI poses some risk to humanity. Which, sure, in that case anything people who take that risk seriously propose will seem as extreme as the claim itself. But no amount of arguing over proposals changes the object level worry.

“it's probably because you just totally dismiss the premise that AI poses some risk to humanity”. You’re correct, notwithstanding the wiggle room in “some”. Every argument I’ve read about existential AI risk starts with “assume we create a god-like entity” and goes from there. That base assumption isn’t supported. I could be persuaded by a first-principles argument, but I don’t think we can even correctly envision the form of an answer to the question “what is intelligence?” - which leads me to conclude that we’re not going to accidentally create a deity. Extraordinary claims (“we must carpet bomb datacenters”) require extraordinary evidence, and it is wholly absent.

This will pass, just like the last one. Hopefully without any asinine laws that prevent improvements to our lives.

If you believe the only arguments that there is something worth worrying about require "god-like" entities, I would humbly submit that you have not looked too hard at the arguments. There are quite a lot of them, there is quite a large range of imagined things that could go wrong, and most do not require any more than these first principles:

1) It is by definition hard to predict what something more intelligent than you will do

2) It is unreasonable to expect us to stop making more and more intelligent computers

3) There is no reason to believe that intelligence is linked to human-mapped moral systems.

4) There is no reason to believe the first people who create AI sufficiently intelligent to be dangerous will instill it with morals you or I agree with... particularly given arms-race dynamics.

Beyond that, the facts of the issue are that we are visibly already failing to align what rudimentary "intelligence" our current AIs possess, and that we are already unable to fully predict or understand all of what they do and why.

What do you think of the Christian claim, implication or suggestion that the Devil is dangerous because he is intelligent?

To directly address your four points: There's a leap from "more and more intelligent computers" to "more intelligent than us" that nobody can propose a mechanism for.

By 'first principles', I mean physics, engineering, math, maybe even biology.

How many transistors does it take to replicate a human brain? (we don't know)

Can you replicate a human brain with transistors? (we don't know)

Do we need to replicate a human brain to have super intelligence? (we don't know)

Why did evolution settle on our particular level of intelligence? (we don't know)

Why not more intelligence? (we don't know)

Is our intelligence static? (we don't know)

Why did it take so long for intelligence to evolve? (we don't know)

What is consciousness? (we don't know)

Do you need consciousness to be 'intelligent'? (we don't know)

...

And some of these questions aren't even in a form such that we have the tools to look for an answer.

And yet it's plausible that we're *this close* to creating super-intelligence?

A lot of those questions seem... confused, to me, or at least just irrelevant?

Like... evolution didn't "settle" on anything, that's not how evolution works. Human ancestors got intelligent enough to be capable of outcompeting their rivals and flourishing beyond the environment that shaped them, then natural selection stopped mattering nearly as much.

I have genuine uncertainty about what sort of "hardware" or "wetware" a superintelligence would need. But it seems fundamental to me that humans will not stop trying to make it. I'm a layman, I don't know if it's 20 years away or 100, but either way this is a problem that feels inescapably in humanity's future.

At first, I was willing to retreat from the phrase "god-like" and argue that even replacing the phrase with "more intelligent than us" doesn't diminish my argument. I think this would still be correct.

But on further thought, I don't think 'god-like' is an inappropriate characterization of any artificial intelligence that could cause plausible existential risk.

I'm not arguing that we shouldn't fund research into AI safety, or that we shouldn't consider privacy concerns - we should do those things. But polluting those policy discussions with calls to carpet bomb foreign data centers? That's crazy talk. And worse, it provides cover for terrible policy that may actually be enacted, like the recent EU proposal to practically ban access to LLMs.

Again, godlike AI seems far from necessary for existential risk. Humans are already capable of creating existential risks on our own, and there's plenty that AI can do between where it is now and "godlike" to make it easier for us to wipe ourselves out.

I won't comment on carpet bombing; I think that's just irrelevant to argue about without a clearer understanding of the object-level questions. But it's worth noting that the actual call was not for unilateral action. It was for a coalition of sufficiently concerned countries acting with clear boundaries. I may be persuaded that this is beyond the pale in concept, but only if similar coalitions, like those that currently exist for nuclear non-proliferation, are denounced first.

Great post. I'm a data scientist with experience with machine learning, and I'm not worried about AI risk. I wrote a post a couple weeks ago, trying to explain "AI" to a layman (so-to-speak). The first half is a description of what (e.g.) these LLMs really are, and the second half is speculation as to why so many nerds are freaked out. In case you're interested, enjoy! https://ipsherman.substack.com/p/ai-for-seminarians

>I'm a data scientist with experience with machine learning

Learning is a function of consciousness. Or does a pool cue ball learn to be more accurate the more it's hit?

Nice bet for you, Bryan, and cool that you did it in 2017. Btw, Taleb doesn’t seem concerned with LLMs. He says those wanting to ban them are those whose businesses are being eaten.

>We are told that we should worry more about deaths from diabetes or people tangled in their bedsheets. Let us think about it in terms of tails. But, if we were to read in the newspaper that 2 billion people have died suddenly, it is far more likely that they died of ebola than smoking or diabetes or tangled in their bedsheets?

This passage seems somewhat jumbled. Was it copied incorrectly?

Imagine we were in the grip of a perverse trend that, if uncurbed, would eventually have catastrophic, civilization-destroying consequences, but that no one of any eminence ever mentions openly, perhaps because doing so would almost certainly be punished by ostracization. Well, friends, I'm afraid we're currently enmeshed in just such a predicament. It's not global warming that I have in mind, which eminent people blab about in perfect confidence. Nor is it the possibility of nuclear holocaust or a more lethal viral pandemic, which are horrific to contemplate but not beyond the pale as topics of polite conversation. Rather, it's the problem that Mike Judge illustrated for comic effect in the prologue to his film "Idiocracy," released some seventeen years ago: namely, dysgenic fertility. Which is no laughing matter. https://www.sciencedirect.com/science/article/abs/pii/S0160289613000470

AI is pattern "recognition," like an animal responding to the familiar smell of a predator. It is NOT intelligence, which categorizes patterns. AI is no more likely to become The Terminator than stones are because Cain slew Abel.

Introduction to Objectivist Epistemology - Ayn Rand

Saying "it is not intelligent, therefore it can't kill us" seems wrong. I suppose there must be many more premises in your argument that you aren't sharing.

AI doesn't have to be conscious or intelligent to cause an extermination event. It needs the means. I think the argument should be about how it could get the means and what we could do to prevent it from using them (whether by intent or by accident), since resisting giving it some access to things that could be used to create such means will be fairly difficult.

You imply that anything, intelligent or not, can destroy man. This is the ancient, pre-rational, religious fear of reality and of man.

You misunderstood my point, or I misunderstand yours.

If AI has the means to destroy humanity, it might do so whether it is intelligent or not. Given the means, a simple program that no one would think of as intelligent could do so. For a simple program to do so would probably require intent by its programmers, but an AI has more capacity for surprising its designers.

The question is, could AI gain such means suddenly? Will it be embedded into our infrastructure or economy in a way that effectively provides it with the means?

You provide no evidence for your alleged possibility. There is nothing to consider.

Hmmm….

Knowledge requires a focused mind, not an unfocused mind. Possibility is a fact of concrete reality; e.g., it's possible for a frog to croak, but not to sing opera. You provide no evidence that "AI" is possibly destructive. There is nothing there upon which the mind can focus. And seeking alleged knowledge from an unfocused mind is destructive of the mind's function in guiding thought and action for one's life. Look out at reality, not inward. Focus your mind, through your senses, onto concrete reality. Subjectivism and mysticism are possibilities of the unfocused mind, not methods of knowing concrete reality.

"For The New Intellectual"-Ayn Rand

Off-topic: Europe holds a striking dominance in the top half of that graph of casualties. England, France and Germany in particular.

Taleb + Yudkowsky is a powerful combo in this discussion. Despite agreeing with your conclusions, I suspect that they (in this imaginary team-up) are right on the arguments. The difference is a gut-level lack of concern about the possibility of human extinction. Many fans of Taleb from his 'Black Swan' era were shocked at his tweets during COVID. Fans of Yudkowsky from his 'Sequences' era likely also find him shocking these days.

It was never about the arguments.
