The Doom Level in last week's AGI Doom livestream (full video above) was considerably lower than I expected. Instead, the conversation mostly focused on AI's mid-range economic effects. Perhaps the AI Doomers consider me beyond redemption? If so, they should be swarming me with offers to bet. Or at least maxing out every credit card they can get their hands on. If we're all going to die, why not live high before the end?
Gentle ribbing of fellow well-meaning nerds aside, even I agree that AGI doom is more likely than I thought decades ago. But that’s only because AI has advanced more rapidly than I expected: P(AI doom | AI) > P(AI doom). My P(AI salvation) has increased by the same multiple. And as usual, my P(any specific extreme outcome) remains low. Weird stuff happens all the time, but the vast majority of weird stuff is eternally confined to our infinite imaginations.
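To spell out the arithmetic behind "the same multiple," here's a simplifying sketch; it assumes, purely for illustration, that neither doom nor salvation is likely in a world without advanced AI:

```latex
% Simplifying sketch: treat P(doom | no AI) and P(salvation | no AI)
% as negligible, so both unconditional probabilities scale with P(AI):
P(\text{doom}) \approx P(\text{doom} \mid \text{AI}) \cdot P(\text{AI}),
\qquad
P(\text{salvation}) \approx P(\text{salvation} \mid \text{AI}) \cdot P(\text{AI})
% If faster-than-expected progress multiplies P(AI) by a factor k,
% both P(doom) and P(salvation) rise by roughly that same factor k.
```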
[37m0s] I think this comes down to the probabilities. If it's a 1% chance of human extinction against a 1% chance of curing aging, then, sure. Or at least reasonable people can disagree. If it's a 10% chance of catastrophe, we need to slow down on AI capabilities and/or speed up work on alignment.
I disagree that we can always just pull the plug on AI. The plausible disaster scenarios don't involve a literal murder spree. What's tricky is that the disaster scenarios are so multifarious that any particular one still seems far-fetched. And yet, in aggregate, it's hard to push the probability of disaster below 10%. There's a good chunk of probability mass on "things spin out of control in ways we're currently failing even to imagine".
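To see how scenarios that each seem far-fetched can still aggregate past 10%, here's a toy calculation. The individual probabilities are invented for illustration, and treating the scenarios as independent is a simplifying assumption:

```python
# Toy aggregation of disaster scenarios. The individual probabilities
# below are invented for illustration, and independence is a
# simplifying assumption (real scenarios overlap and correlate).
scenario_probs = [0.02, 0.015, 0.01, 0.01, 0.005, 0.005, 0.03]

p_none = 1.0
for p in scenario_probs:
    p_none *= 1 - p  # chance this particular scenario doesn't occur

p_any = 1 - p_none
print(f"Largest single scenario: {max(scenario_probs):.1%}")  # 3.0%
print(f"Aggregate chance of some disaster: {p_any:.1%}")      # 9.1%
```

No single line item tops 3%, yet the total already approaches the 10% threshold from the comment above.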
I think full-dive VR has more of a negative long-term societal impact than robots do. If we can interface our brains with computers seamlessly, and if photonic computing increases our computing power by a few orders of magnitude so that a VR world can be built that is indistinguishable from our own, then I think most of our interest in goods and services in the real world collapses. Presumably we are left with food, a small amount of shelter, energy, water, and healthcare. In a full-dive VR world, most human needs, including social needs, would not only be met but exceeded.

For men: sexual relationships would be limitless, and access to experiences and partners would far exceed their current access. Granted, you admit some room for this on the sex-bots point, but I would argue full-dive VR fills this role more completely and potentially more cheaply, especially when considering multiple 'partners', if we can call them that.

For both men and women, status could be simulated in a very real sense. Most people are socially irrelevant in the real world, but in this alternate reality you could be the richest and most famous person who has ever lived (perceptually, of course, but here full-dive VR implies indistinguishable from reality). For instance, you could be a famous actor adored by millions, making films 'in game'. If the AI is good enough, it can certainly emulate human interaction, and do so in a way that implies fame, making the experience indistinguishable from reality.

Ironically, this cheapens the meaning of wealth and fame in the real world, so I see charitable giving going through the roof as a possible positive. After all, why be a decamillionaire in the real world when no one cares, when you could instead be a trillionaire in the artificial world? A decamillionaire could buy a fancy car and house to attract one 9/10 partner, maybe one 10/10 with another on the side at best; but in the artificial world there is presumably access to an infinite supply of 10/10s.

This is no contest: the real world loses out on every measure except that of deeper, true human relationships. But considering what dating apps and porn have already done to normal human relationships, those seem like they would be a fringe exception in this context. Parenting included. Friendships are probably far more common than romantic relationships in this scenario, but I still think this is far more worrisome than robots in the real world, and not necessarily for material reasons (again, the super rich probably give most of it away).
I would be interested in making a bet with the same structure as your bet with Eliezer, ending at a later date (perhaps 2035 or 2040).
I am not as doomy as Eliezer, so I don't think a bet at 1:1 odds is favorable for me, but presumably there are some odds that are mutually beneficial. Do you have a preferred procedure for deciding on fair betting odds? (One simple convention is sketched below.)
(I would also make a bet for 2030 if you give me good odds.)
Also feel free to email me at bet@mdickens.me if that's easier.
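On the fair-odds question above, here's a minimal sketch of one common convention: set stakes at the midpoint of the two parties' subjective probabilities, so each side expects a gain by its own lights. The function name and the numbers are illustrative, not a procedure either bettor has proposed:

```python
# Midpoint-odds sketch: set stakes from the average of the two
# parties' subjective probabilities. This is one convention among
# several (averaging the implied odds is another); illustrative only.

def midpoint_stakes(p_a: float, p_b: float, pot: float = 100.0):
    """A thinks P(event) = p_a, B thinks P(event) = p_b, with p_a > p_b.
    A collects the pot if the event happens; B collects it otherwise."""
    m = (p_a + p_b) / 2
    stake_a, stake_b = m * pot, (1 - m) * pot
    ev_a = p_a * stake_b - (1 - p_a) * stake_a  # A's expected gain, by A's lights
    ev_b = (1 - p_b) * stake_a - p_b * stake_b  # B's expected gain, by B's lights
    return stake_a, stake_b, ev_a, ev_b

# E.g. a 30% doomer against a 10% doomer on a $100 pot:
print(midpoint_stakes(0.30, 0.10))  # A stakes $20, B stakes $80; each expects +$10
```

An extinction bet adds the wrinkle that the winner can't collect, which is why, as I understand it, the Caplan-Yudkowsky bet instead pays the doomer up front, to be repaid with interest if the world survives.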
[43m23s] It's me! No regrets! I have a million notes for continuing our debate about climate change, but lately I've been more focused on AI doom than climate doom. I know you're rolling your eyes at me, but I honestly think it's rational to focus on one's best estimate of the biggest threat to humanity. In 2021, when we started our debate, I thought that was climate change; now I think it's AGI. (It was Google's PaLM paper in 2022 that was my wake-up call. A lot of people I know, like Scott Alexander, saw it coming as early as 2019, with GPT-2.)
Relatedly, I wanted to point you at a comment I added to your "What, Me Worry?" post from 2006:
https://www.betonit.ai/p/what_me_worryhtml/comment/110773488
Or, since that post is kind of ancient history, maybe I'll just repeat it here:
Hi from the future! You said "good reason to expect global warming to be milder than models predict" and you, impressively, operationalized that as a wager with Yoram Bauman:
https://manifold.markets/dreev/will-bryan-caplan-win-his-climate-b
But, as is visually clear from the temperature graph, the climate models turned out to be spot on. A rare instance of an incorrect prediction from you!
I'm getting tempted to offer you another wager, this time for reading Toby Ord's _The Precipice_. It meticulously lays out the case for alarmism on many fronts and, in my opinion, deftly demolishes the "What, Me Worry?" stance.
PS: Also the section of Ord's book on climate change is very moderate and measured and meticulous. It's part of what convinced me to move climate change down on my list of Things Worth Worrying About (not that I *removed* it from the list). Ord himself has since downgraded climate risk further based on the latest evidence: https://www.tobyord.com/writing/the-precipice-revisited