Hey Bryan, can I get in on this bet on Matthew's side for another $500?
The countries which are not persuaded could be outcompeted.
The only question is whether this means competition on the scale of "China outpacing the US, or the US outpacing China," or more like "Singapore is the world's largest economy now."
I think you are overestimating the bottlenecks, except for the political one. I wouldn't bet any money on something like that. It's just more profitable to buy good stocks: no matter what happens, you win.
In the most optimistic case for AI, AI would just provide a (nearly) limitless supply of free, artificial replicas of human neural networks. Barnett's fallacy is thinking that an unlimited supply of any single economic resource can lead to vast gains in production: "AIs that can substitute for human labor could drastically increase the growth rate of the economy." First, AIs can't substitute for all human labor, e.g., manual labor. At most, AIs can substitute for the part of human labor consisting of computations performed by human neural networks. Second, producing myriad goods and services requires myriad economic resources: energy, raw materials, land, risk-bearing capacity, and all aspects of human labor beyond mental computation. Of land, labor, and capital, AI represents only one narrow aspect of one factor. Whenever one resource becomes cheap, it ceases to be binding and production becomes limited by the other resources. That happens far sooner than "an extremely high level of growth by current standards." In fact, there are many goods and services where human mental computation probably isn't the binding constraint even right now. Would more human mental computation allow us to frack more oil, produce more cars (which are already heavily produced by robots), grow more corn, mine more lithium? I don't know, but I doubt that the production of all these goods is currently limited primarily by computational resources.
If a single resource could lead to huge production gains, then central planning would be a lot easier. Central planners could concentrate their efforts on producing more of that one key resource. In reality, economies are too complex to understand how to allocate myriad economic resources towards the production of myriad goods and services. That's why we need markets. In turn, that implies that production gains require gains in many different resources (or gains in the efficiency of their use). That's probably why advanced economies, which don't have much low-hanging fruit left from simply reallocating existing resources, historically have managed to grow only slowly even with the invention of revolutionary technologies: nuclear power, PCs, the internet, etc.
It's not even clear how much AI would increase the production of pencils --- google "I, Pencil" if you don't understand the reference --- let alone the production of all goods and services generally.
> First, AIs can't substitute for all human labor, e.g., manual labor.
I think robotics count as "AI", for what it's worth.
This is one of Caplan's safest bets ever, and I hope he wins, as really the only way to lose would be an annus horribilis (think Covid, but this time deadly - say 5% fatality - and for all age groups), where all activity not connected directly to basic food production and healthcare would drop to zero. A shutdown, down to 30% of economic activity/energy production. Then it turns out 95% got infected, all survivors are now immune, and the year after things go back to "normal" at 70%. Voilà: 130% growth in a year. - Even a singularity could hardly end in 130% growth in ONE year. Even 30% I cannot see. In the first year some niche things might well grow "explosively". But how to get all of GDP up by even 20%? 1% is a trillion US dollars! Those power sources need to be built. Fusion would take more than a year (a decade, more likely three). Solar needs lithium. Those robot factories need to be built. And thus every year's GDP would build on the previous one. At 20% it would double in only 4 years. I would call that explosive, of course; the bet would be lost anyway - i.e., won by Bryan Caplan.
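The "double in 4 years" arithmetic above is easy to verify: at 20% annual growth, compounding gives 1.2^4 ≈ 2.07. A quick sketch (the 20% rate is the commenter's hypothetical, not a forecast):

```python
# Compound growth check: at 20% per year, GDP roughly doubles in 4 years,
# since 1.2 ** 4 ≈ 2.07.
growth_rate = 0.20  # hypothetical annual growth rate from the comment
gdp = 1.0           # baseline GDP, normalized to 1
for year in range(1, 5):
    gdp *= 1 + growth_rate
    print(f"Year {year}: {gdp:.2f}x baseline")
# Year 4 prints "2.07x baseline" -- just past doubling.
```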
The bet terms specifically exclude that edge case.
>> exceeds 130% of its previous yearly peak value for any single year
means previous yearly peak value for ANY single year.
>> Voila: 130% growth in a year.
130% (this year) - 100% (the previous year) is 30% growth, not 130%.
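The bet's trigger, as quoted above, can be written as a tiny predicate. (The function name and sample values are mine, for illustration only.)

```python
# Per the quoted terms, the bet resolves for Barnett only if real GWP
# *exceeds* 130% of its previous yearly peak -- i.e. more than 30% growth
# over the peak year, not 130% growth.
def bet_triggered(gwp_this_year: float, previous_peak: float) -> bool:
    return gwp_this_year > 1.30 * previous_peak

print(bet_triggered(130.0, 100.0))  # exactly 130% of peak, does not exceed -> False
print(bet_triggered(131.0, 100.0))  # 31% above peak -> True
```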
I don't get why people with poor reading comprehension sound so confident; if you can't even read simple sentences, you should not have much confidence in your ability to prognosticate about matters far more complex than simple English sentences.
Thank you. Linch, could you please explain to me how this sentence from the text makes sense then: "How is GWP (sic! - though I assumed it to mean "GDP," as "global warming potential" makes little sense here) supposed to more than double in a year in a world where you have to wait a decade just to assemble the right building permits?" - Actually, I assumed the bet to be about a 30% change in a year until I read that sentence, and therefore I explicitly wrote in my comment: "Even 30% I cannot see." (Btw, I once tried some SAT reading-comprehension samples - and, yep, my reading comprehension sucked.)
Geez rereading my message, I am such a dick. Apologies, I try to not be mean on the internet but sometimes my worse self gets the better of me.
"Actually, I assumed the bet to be about a 30% change in a year until I read that sentence - and therefore I explicitly wrote in my comment: 'Even 30% I can not see.'"
Actually your reading was pretty good. Rereading the blog post and the exchange, I think *Bryan Caplan* misunderstood what Matthew Barnett meant. (I had the earlier context of seeing Ted Sanders and Matthew Barnett hash out the first bet).
Apologies again for being dickish! :)
We all are, the studies say: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4359608 In my native language, on Quora Germany, I write almost daily: "If you can't get commas and orthography right, how dare you think you are smart enough to ..." (Each day I swear "I'll take a time-out next month," and then click on those trolls again.) Anyway, it seems Prof. Caplan got carried away then - probably just while writing, not while betting. I stick to my view: "Safe bet at great odds. 30% in a single year is near impossible even with the singularity coming before 2043. The only conceivable way - but highly unlikely - would be a fast recovery after a huge crash: a shutdown due to a deadly but short global pandemic." (No idea how GDP would be calculated in the aftermath of a medium-sized nuclear war; hardly a swift recovery.) Peace and love. :)
Oh actually if you like to engage with English blogs, Google Translate or GPT-4 might be pretty helpful!
My English s...., obviously. When I'm reading an English blog, I kind of delude myself into believing I can think in English. I do use Google/DeepL to translate my German writing, sure. I shall henceforth try it with my comments, too. - It gave me "delude" now instead of "fool" - and "in the future" instead of "henceforth," but I stick to "shall henceforth," fully aware it is outdated and slightly ridiculous. Just like me.
Yep, this is about right. The fact that people are expressing confidence in that 130% number shows the degree to which the present enthusiasm is basically hysterical and unquestioned in some subcultures.
My expectation is that AI will prove to be a very useful tool -- less economically transformative in the next 25 years than the power of the Internet has been in the previous 25, but still THE story in computing-driven productivity improvements. Then at some point its usefulness will plateau, long before everything in the world is automated and long before anyone builds an endlessly self-improving AI.
But if I'm supremely wrong and in the next 20 years we invent an AI that ends up conquering the Earth, building a Dyson Sphere around the sun, and reassembling our solar system into a giant robot factory -- if all that comes to pass, I'd still say 130% growth in a single year is much less likely than not, for the reasons you cite.
It seems as though even Matthew thinks that 2043 might be a bit of a squeeze: "my unconditional median TAI timeline is now something like 2047" - Matthew Barnett, Dec 2022.
2043 is some way off. I checked some historical stats on GWP; fluctuations of 10% seem very rare. I think Matthew's best chance of winning would likely come from a rebound after a large catastrophe. Even so, I figure Bryan is much more likely to win here - despite apparently misunderstanding the terms of the bet.
Maybe the question on persuasion should be more specific: which LLM, with what weights, how the data is labeled, what data it's trained on, and how it's fine-tuned. Any change in those attributes will change the output for the very same prompt or question.
Let's see what the markets say: https://manifold.markets/PeterBuyukliev/explosive-growth-bet?r=UGV0ZXJCdXl1a2xpZXY
Are you sure you're understanding "130% of the previous year" correctly? I read it as 30% growth, not 130% growth. This comment from the google docs seems to agree with my reading:
"Ideally it would also be nice to require 130% of the previous year, conditional on the previous year being the peak, to exclude rare scenarios where production falls for some years and then is able to overshoot on the way back up; in theory this shouldn’t be much of a concession as an AGI-driven growth boom should be able to hit two highs in a row and sustain 30% growth; but again, this is rare enough that I don’t mind not excluding it."
We're talking about >30% growth.
That's what I understood as well, but Bryan apparently understood it as 130% growth, not 30%:
>How is GWP supposed to more than double in a year
In that case, Bryan may lose - due to a misunderstanding.
I agree that regulation will likely dampen and delay the economic effects of AI, but I think you're exaggerating the extent of this bottleneck. AI is qualitatively different from nuclear energy. Since AI can broadly substitute for labor, we should expect its effects to be more akin to extremely rapid population growth than to those of traditional technologies. Unlike with AI, no nation can radically enrich itself relative to its neighbors simply by lifting nuclear energy regulations.
Nations that choose not to regulate AI heavily will outpace those that do. Consider this scenario: Russia is about to increase its effective population size tenfold using AI. Wouldn't it seem plausible that our government might be alarmed and attempt to match this with its own AI development program? History provides a precedent for this scenario. In prior centuries, nations that embraced the Industrial Revolution (like the UK) ended up far wealthier on a per capita basis than those that didn't initially (such as China), to the severe detriment of the latter. And it was the former set of nations that ended up establishing our contemporary framework of international law.
Yes, there's a possibility that a worldwide draconian regime could emerge soon, violating international sovereignty and completely suppressing any nation that tries to embrace AI. However, I don't believe that's the path we're currently on. Moreover, the odds are 4:1 in my favor. You'd need to be quite confident that I'm incorrect for this wager to make sense on your end.
How about apparent wealth? What I mean is: if we come out with a brain-computer interface that can trick your brain on all five senses, coupled with much greater computing power, this would allow us to implement full-immersion VR. In VR you could experience and do things, presumably, that only the richest of the rich can now. It depends on the game or "experience." I'm sort of reminded of that episode of South Park where Cartman buys his own theme park... This starts to, at the very least, blur the line of what constitutes wealthy. Even the objection that "show-off" goods wouldn't function the same way in this environment should be tempered. This technology will likely first be used in an X-rated fashion. There goes most men's reason for wanting nice cars, homes, etc...
The example of nuclear power is a bit tricky because it was never the cheapest energy source - even though I support it myself.
As an example I would give mRNA vaccines. They went into circulation pretty fast despite political bottlenecks, like the decision not to run human challenge trials and so on.