"...yet when technology virtually wiped out agricultural jobs, we still found plenty of new work to do instead." This is exactly the attitude that led to Karl Marx and Communism. Workers were displaced by machines faster than they could be absorbed in other industries. Poor-houses were a feature of every town and every large village. Marx looked around him and like William Blake, he marked in every face he met, marks of weakness, marks of woe. And thus began the anti-capitalist movement which since allying itself with socialism, has become the dominant political force today. Yes, in time technology (which as you said takes decades to mature) will provide other jobs for humans, more rewarding to the soul as well as to the wallet. The time it takes to do this is the problem.
To compete against other animals, using our superior brains humans developed first tools then machines then energy. This is the first time in our history that we have had to compete against equal or higher brain-power. I cannot see it ending well.
That scares me. We managed a really really good trick with universities. Instead of having people invest in a competition to signal fitness with no or negative externalities we somehow got people to sign up to a system which funds productive research (tuition and alumni donors) and uses the academic influence of your university as a measure of it's status.
That's an incredible trick when it basically could have been anything. Could have been sports or fanciest grounds or whatever. So I worry about upsetting that system.
> imagine a bot writing a thousand turgid pages a second about “neoliberalism,” “patriarchy,” and “anti-racism”
You forgot "late-stage capitalism".
That said, I would never condone torturing a possibly sentient AI this way.
The current academic evaluation methods are of course designed to be cheap and mindless to implement, and that is precisely where our current pseudo-AI excels. The only viable option in the short-term is proctored exams with no devices allowed.
There will be no unemployment because the government will invent jobs for people to get votes.
This already happened. Education, healthcare, etc all increased in employment and cost despite massive changes in technology.
We might not like these new jobs, and we might feel like "where did all my productivity gain go, I'm working just as hard but not getting more in return." The reason is that AI productivity gains were taxed off to pay for make-work government jobs.
There doesn't even have to be a pretense that the new jobs are useful. Teachers didn't even show up to work during COVID and they got paid anyway.
Note that this will have an affect on people. Those working in a productive reality based job and constantly interacting with reality and that affects them. They are constantly told 2 + 2 = 4 and internalize that.
People working in the government/influence/fantasy sector are also constantly receiving feedback about how 2 + 2 = whatever the party says. They internalize that too.
“If you want a picture of the future, imagine a bot writing a thousand turgid pages a second about “neoliberalism,” “patriarchy,” and “anti-racism” — forever.”
Bryan - Gen Pop seems to be entirely convinced of "revolutionary change" right around (every) corner. Gen Pop needs historical reference badly right now, yet it seems as if that project has been abandoned by the hordes of histrionic historians.
Have you ever read A Concise Economic History of the World by Rondo Cameron? His introduction has a steep logistic s-shaped curve with time on the horizontal and adoption of new technologies on the vertical. There are great examples throughout the book of the phenomenon you note regarding trans-Atlantic phones.
With regards to taking over jobs, this time it is different. Humans may "specialize in whatever AI does worst", but this is just treading water. Every year, the Venn diagram of uniquely human skills gets smaller as AI/robotics's footprint grows.
Using the examples of transatlantic phone calls or e-commerce are false equivalents because these technologies were not simultaneously available, inexpensive, and with low barriers to use. Today, a smart person can watch a 20 min video on Langchain and create an AI app to automate a task in a day. In a year, I'm guessing we'll be able to just ask the an AI app to create a task automating app. The human may have to critique it for a few rounds to get it right, but don't our bosses already do this? Sure, humans are flexible, but they can't compete with a world of motivated people cranking out apps and posting them to Github for the world to improve upon.
We currently have a safe harbor in physical tasks, but that will last 5-10 years at the most. Don't get me wrong - the end of scarcity is a good thing, assuming we can successfully make this transition and we aren't wedded to the idea that sharing the benefits == (horror of horrors) socialism.
<quote> I’ve spent much of my career arguing that the main function of education is not to teach useful skills (or even useless skills), but to certify students’ employability. </quote>
Seventy years ago, in my first job after graduation from engineering school, I remarked to an HR (Personnel, in those days) employee that while I was learning a lot about the steel business, I wasn't making much use of my college education. He replied that people like me were hired because we were not suceptible to the labor union ethic.
<quote>While we like to picture education as job training, it is largely a passport to the real training that happens on the job.</quote>
A senior engineer told me that his first industrial job required him to design a small DC power supply. He said he was clueless and his boss had to lead him step by step.
I suspect AI will have earliest impact on industries that are less regulated. Industries that are heavily regulated (for example, education and medicine) will resist AI. (you won't be able to fire teachers any time soon). Maybe in the long run AI will "end-run" these retarded industries, like smartphone technology upended the taxi industry. The worst mistake of all would be to regulate AI itself, which will, unfortunately, probably happen.
The revolutions were an application of power (gasoline, steam) to manual labor.
The manual labor AI has replaced is getting from ignorance to mid-level expert (see "Practical AI").
AI leverages the thoughts and writings of all. To the extent it's "learning" is filled with garbage, it will fail. To the extent it is curated (by great teaches, Knowledge Engineers, MLOps) it will come close to replacing the uninspired, uncreating, professors and graduate students.
Undergraduates are responsible for rote education and helping entry level students from learning too slowly: Look out for grad student unemployement.
A good professor teaches me to think more better. I am not worried about AI creating original inferences at scale. Good professors will be more in demand. Mediocre will not. But, how do graduate students learn how the unschooled (in their expertise) learn to become schooled? That is a secret sauce of an inspired expert. How will the next generation of experts come to exist?
Dear Bryan, I would recommend you listen for about two minutes of the following talk starting at 42:55 https://youtu.be/ND_AjF_KTD8?t=2575 . Highlights: "AI is a terrible term". "As long as someone is calling something 'artificial intelligence', it's because *it doesn't work*, as soon as it does work, it gets a new name". The chart proving the last point.
The best Substack on using AI in the classroom that I have found is from Professor Ethan Mollick's https://www.oneusefulthing.org/
"...yet when technology virtually wiped out agricultural jobs, we still found plenty of new work to do instead." This is exactly the attitude that led to Karl Marx and Communism. Workers were displaced by machines faster than they could be absorbed in other industries. Poor-houses were a feature of every town and every large village. Marx looked around him and like William Blake, he marked in every face he met, marks of weakness, marks of woe. And thus began the anti-capitalist movement which since allying itself with socialism, has become the dominant political force today. Yes, in time technology (which as you said takes decades to mature) will provide other jobs for humans, more rewarding to the soul as well as to the wallet. The time it takes to do this is the problem.
To compete against other animals, humans used our superior brains to develop first tools, then machines, then energy. This is the first time in our history that we have had to compete against equal or higher brain power. I cannot see it ending well.
That scares me. We managed a really, really good trick with universities. Instead of having people invest in a competition to signal fitness, with zero or negative externalities, we somehow got people to sign up for a system which funds productive research (through tuition and alumni donors) and uses the academic influence of your university as a measure of its status.
That's an incredible trick, because it basically could have been anything: sports, the fanciest grounds, whatever. So I worry about upsetting that system.
> imagine a bot writing a thousand turgid pages a second about “neoliberalism,” “patriarchy,” and “anti-racism”
You forgot "late-stage capitalism".
That said, I would never condone torturing a possibly sentient AI this way.
The current academic evaluation methods are of course designed to be cheap and mindless to implement, and that is precisely where our current pseudo-AI excels. The only viable option in the short term is proctored exams with no devices allowed.
There will be no unemployment because the government will invent jobs for people to get votes.
This already happened. Education, healthcare, etc all increased in employment and cost despite massive changes in technology.
We might not like these new jobs, and we might feel like "where did all my productivity gain go, I'm working just as hard but not getting more in return." The reason is that AI productivity gains were taxed off to pay for make-work government jobs.
There doesn't even have to be a pretense that the new jobs are useful. Teachers didn't even show up to work during COVID and they got paid anyway.
Note that this will have an effect on people. Those working in productive, reality-based jobs are constantly interacting with reality, and that shapes them. They are constantly told 2 + 2 = 4, and they internalize it.
People working in the government/influence/fantasy sector are also constantly receiving feedback about how 2 + 2 = whatever the party says. They internalize that too.
“If you want a picture of the future, imagine a bot writing a thousand turgid pages a second about “neoliberalism,” “patriarchy,” and “anti-racism” — forever.”
Loved the 1984 reference!
Bryan - Gen Pop seems to be entirely convinced of "revolutionary change" right around (every) corner. Gen Pop needs historical reference badly right now, yet it seems as if that project has been abandoned by the hordes of histrionic historians.
Have you ever read A Concise Economic History of the World by Rondo Cameron? His introduction has a steep, S-shaped logistic curve with time on the horizontal axis and adoption of new technologies on the vertical. There are great examples throughout the book of the phenomenon you note regarding transatlantic phones.
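For concreteness, the S-shaped adoption curve Cameron sketches is just the logistic function. A minimal sketch (the function and parameter names here are my own, not Cameron's):

```python
import math

def adoption_share(t, t_mid=0.0, steepness=1.0):
    """Fraction of potential users who have adopted a technology by time t.

    t_mid is the inflection point (50% adoption); steepness controls how
    quickly the curve rises from near 0 to near 1.
    """
    return 1.0 / (1.0 + math.exp(-steepness * (t - t_mid)))

# Adoption starts slowly, accelerates through the midpoint, then saturates.
curve = [round(adoption_share(t, t_mid=5, steepness=1.2), 3) for t in range(11)]
```

A larger `steepness` compresses the slow-start and saturation phases that the book's examples, like transatlantic telephony, stretched over decades.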
Year after year, a decline in the level of education can be observed at every level of schooling. The question is whether the use of AI in the educational process can change this in any way.
BR,
Jod
https://zdrowersi.pl
With regards to taking over jobs, this time it is different. Humans may "specialize in whatever AI does worst", but this is just treading water. Every year, the Venn diagram of uniquely human skills gets smaller as AI/robotics's footprint grows.
Using transatlantic phone calls or e-commerce as examples is a false equivalence, because those technologies were not simultaneously available, inexpensive, and low-barrier to use. Today, a smart person can watch a 20-minute video on LangChain and create an AI app to automate a task in a day. In a year, I'm guessing we'll be able to just ask an AI app to create a task-automating app. The human may have to critique it for a few rounds to get it right, but don't our bosses already do this? Sure, humans are flexible, but they can't compete with a world of motivated people cranking out apps and posting them to GitHub for the world to improve upon.
We currently have a safe harbor in physical tasks, but that will last 5-10 years at most. Don't get me wrong: the end of scarcity is a good thing, assuming we can successfully make this transition and we aren't wedded to the idea that sharing the benefits == (horror of horrors) socialism.
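The "critique it for a few rounds" workflow described above can be sketched in a few lines. Everything here is hypothetical: `llm` and `reviewer` stand in for any LangChain-style model wrapper and any source of feedback (a human, a test suite, or another model).

```python
def automate_task(llm, task_description, reviewer, max_rounds=3):
    """Draft an automation script with an LLM, then revise it round by round.

    llm:      callable taking a prompt string and returning generated text
    reviewer: callable taking a draft and returning feedback,
              or an empty string when satisfied
    """
    draft = llm(f"Write a script to automate this task: {task_description}")
    for _ in range(max_rounds):
        feedback = reviewer(draft)
        if not feedback:  # reviewer is satisfied; stop iterating
            break
        draft = llm(f"Revise the script below.\nFeedback: {feedback}\nScript:\n{draft}")
    return draft
```

The loop is trivial by design; the commenter's point is that the human's remaining role here looks more like a boss reviewing work than a programmer writing it.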
<quote> I’ve spent much of my career arguing that the main function of education is not to teach useful skills (or even useless skills), but to certify students’ employability. </quote>
Seventy years ago, in my first job after graduation from engineering school, I remarked to an HR (Personnel, in those days) employee that while I was learning a lot about the steel business, I wasn't making much use of my college education. He replied that people like me were hired because we were not susceptible to the labor union ethic.
<quote>While we like to picture education as job training, it is largely a passport to the real training that happens on the job.</quote>
A senior engineer told me that his first industrial job required him to design a small DC power supply. He said he was clueless and his boss had to lead him step by step.
Regards,
Bill Drissel
Marc Andreesen says it better than I can:
https://pmarca.substack.com/p/why-ai-wont-cause-unemployment
I suspect AI will have its earliest impact on industries that are less regulated. Heavily regulated industries (for example, education and medicine) will resist AI; you won't be able to fire teachers any time soon. Maybe in the long run AI will "end-run" these lagging industries, the way smartphone technology upended the taxi industry. The worst mistake of all would be to regulate AI itself, which will, unfortunately, probably happen.
Quick question for Bryan - what is “CS” in your post? Did you mean computer science, or something else?
Customer Service?
I had the same question .. so I tried looking it up on the new Bing AI chat. None the wiser .. I’m assuming it means computer science
The revolutions were an application of power (gasoline, steam) to manual labor.
The manual labor that AI has replaced is the work of getting from ignorance to mid-level expertise (see "Practical AI").
AI leverages the thoughts and writings of all. To the extent its "learning" is filled with garbage, it will fail. To the extent it is curated (by great teachers, knowledge engineers, MLOps), it will come close to replacing the uninspired, uncreative professors and graduate students.
Undergraduate education is largely rote instruction and keeping entry-level students from learning too slowly: look out for grad student unemployment.
A good professor teaches me to think better. I am not worried about AI creating original inferences at scale. Good professors will be more in demand; mediocre ones will not. But how do graduate students learn how the unschooled (in their expertise) become schooled? That is the secret sauce of an inspired expert. How will the next generation of experts come to exist?
Dear Bryan, I would recommend you listen to about two minutes of the following talk, starting at 42:55: https://youtu.be/ND_AjF_KTD8?t=2575 . Highlights: "AI is a terrible term." "As long as someone is calling something 'artificial intelligence', it's because *it doesn't work*; as soon as it does work, it gets a new name." And the chart proving that last point.
GPT-4 already gets As w/ minimal effort in "Clinical Research" exams: https://automated.beehiiv.com/p/aiimmunity-challenge-lessons-clinical-research-exam too