Allied nuclear countries are "willing to run some risk of nuclear exchange" for far smaller things than the risk of total human extinction. Many of those things are plausibly even things you or I would agree are worthwhile, like aiding Ukraine or Taiwan.
If you think the above passage is crazy, it's probably because you just totally dismiss the premise that AI poses some risk to humanity. Which, sure, in that case anything people who take that risk seriously propose will seem as extreme as the claim itself. But no amount of arguing over proposals changes the object level worry.
“it's probably because you just totally dismiss the premise that AI poses some risk to humanity”. You’re correct, notwithstanding the wiggle room in “some”. Every argument I’ve read about existential AI risk starts with “assume we create a god-like entity” and goes from there. That base assumption isn’t supported. I could be persuaded by a first-principles argument, but I don’t think we can even correctly envision the form of an answer to the question “what is intelligence?” - which leads me to conclude that we’re not going to accidentally create a deity. Extraordinary claims (“we must carpet bomb datacenters”) require extraordinary evidence, and that evidence is wholly absent.
This will pass, just like the last one. Hopefully without any asinine laws that prevent improvements to our lives.
If you believe the only arguments that there is something worth worrying about require "god-like" entities, I would humbly submit that you have not looked too hard at the arguments. There are quite a lot of them, there is quite a large range of imagined things that could go wrong, and most do not require any more than these first principles:
1) It is by definition hard to predict what something more intelligent than you will do
2) It is unreasonable to expect us to stop making more and more intelligent computers
3) There is no reason to believe that intelligence is linked to human-mapped moral systems.
4) There is no reason to believe the first people who create AI intelligent enough to be dangerous will instill it with morals you or I agree with... particularly given arms-race dynamics.
Beyond that, the facts of the issue are that we are visibly already failing to align what rudimentary "intelligence" our current AIs possess, and that we are already unable to fully predict or understand all of what they do and why.
What do you think of the Christian claim, implication or suggestion that the Devil is dangerous because he is intelligent?
To directly address your four points: There's a leap from "more and more intelligent computers" to "more intelligent than us" that nobody can propose a mechanism for.
By 'first principles', I mean physics, engineering, math, maybe even biology.
How many transistors does it take to replicate a human brain? (we don't know)
Can you replicate a human brain with transistors? (we don't know)
Do we need to replicate a human brain to have super intelligence? (we don't know)
Why did evolution settle on our particular level of intelligence? (we don't know)
Why not more intelligence? (we don't know)
Is our intelligence static? (we don't know)
Why did it take so long for intelligence to evolve? (we don't know)
What is consciousness? (we don't know)
Do you need consciousness to be 'intelligent'? (we don't know)
...
And some of these questions aren't even in a form such that we have the tools to look for an answer.
And yet it's plausible that we're *this close* to creating super-intelligence?
A lot of those questions seem... confused, to me, or at least just irrelevant?
Like... evolution didn't "settle" on anything; that's not how evolution works. Human ancestors got intelligent enough to outcompete their rivals and flourish beyond the environment that shaped them, and then natural selection stopped mattering nearly as much.
I have genuine uncertainty about what sort of "hardware" or "wetware" a superintelligence would need. But it seems fundamental to me that humans will not stop trying to make it. I'm a layman; I don't know if it's 20 years away or 100, but either way this is a problem that feels inescapably part of humanity's future.
At first, I was willing to retreat from the phrase "god-like" and argue that even replacing the phrase with "more intelligent than us" doesn't diminish my argument. I think this would still be correct.
But on further thought, I don't think 'god-like' is an inappropriate characterization of any artificial intelligence that could cause plausible existential risk.
I'm not arguing that we shouldn't fund research into AI safety, or that we shouldn't consider privacy concerns - we should do those things. But polluting those policy discussions with calls to carpet bomb foreign data centers? That's crazy talk. And worse, it provides cover to terrible policy that may actually be enacted, like the recent EU proposal to practically ban access to LLMs.
Again, godlike AI seems far from necessary for existential risk. Humans are already capable of creating existential risks on their own, and there's plenty that AI can do between where it is now and "godlike" to make it easier for us to wipe ourselves out.
I won't comment on carpet bombing; I think that's just irrelevant to argue about without a clearer understanding of the object level questions. But it's worth noting that the actual call was not for unilateral action. It was for a coalition of sufficiently concerned countries acting with clear boundaries. I may be persuaded that this is beyond the pale in concept, but only if similar coalitions, like those that currently exist for nuclear non-proliferation, are similarly denounced.