Even though I wrote The Case Against Education, The Chronicle of Higher Education has always treated me well. Their latest issue includes my answer to the following question:
When ChatGPT made its public debut last year, the CEO of OpenAI, ChatGPT’s parent company, predicted that its significance “will eclipse the agricultural revolution, the industrial revolution, the Internet revolution all put together.” Even discounting for hyperbole, the release of ChatGPT, followed soon after by Google’s Bard, suggests that we’re at the dawn of an era marked by rapid advances in artificial intelligence, with far-reaching consequences for nearly every facet of higher education. Observing these dizzying developments, we keep circling the same fundamental question: How will AI change higher education? So we’re organizing a forum to gather a wide spectrum of answers to that question. In your response, feel free to take a comprehensive, sector-wide view or to focus on a specific area. Among the questions you might take up: How will AI change the way professors teach and students learn? How will it change the way scholarship happens and knowledge advances? Will AI alter how we think about expertise? Will it shift the role of the university in society? Also feel free to come up with an angle of your own. Finally, you’re welcome to take issue with the premise that AI will significantly change higher ed, and/or that it should.
I’ve spent much of my career arguing that the main function of education is not to teach useful skills (or even useless skills), but to certify students’ employability. By and large, the reason our customers are on campus is to credibly show, or “signal,” their intelligence, work ethic, and sheer conformity. While we like to picture education as job training, it is largely a passport to the real training that happens on the job.
How will AI change this? To start, we should bet against any major disruption. Sheer conformity is one of the top traits students want to signal, and doing anything radically new signals… non-conformity. Few students will stop attending traditional colleges to “go” to virtual AI training centers. And since most majors are already barely job-related, few students will rethink their majors either.
What will change for students is workload and evaluation. Professors are used to assigning homework, papers, and projects. Using AI to cheat on such work will soon be child’s play. Sufficiently harsh punishments for cheating might preserve the status quo, but modern schools give cheaters a slap on the wrist, and that won’t change. Unmonitored schoolwork will become optional, or a farce. The only thing that will really matter will be exams. And unless the exams are in-person, they’ll be a farce, too. I’m known for giving tough tests of deep understanding, yet GPT-4 already gets A’s on them.
The bigger question is how AI will change the labor market itself. As a general rule, new technologies take decades to realize their full potential. The first trans-Atlantic telephone cable wasn’t laid until 1956, eighty years after the invention of the telephone. E-commerce was technically feasible in the mid-90s, but local malls still endure. But we should still expect AI to slowly disrupt a wide range of industries, including computer science itself. This will, in turn, slowly alter students’ career aspirations.
As an economist, I scoff at the idea that AI will permanently disemploy the population. Human wants are unlimited, and human skills are flexible. Agriculture was humanity’s dominant industry for millennia; yet when technology virtually wiped out agricultural jobs, we still found plenty of new work to do instead. I do not, however, scoff at the idea that AI could drastically reduce employment in CS over the next two decades. Humans will specialize in whatever AI does worst.
What about us humble professors? Those of us with tenure have nothing to worry about. Taxpayers and donors will keep funding us no matter how useless we become. Even if you don’t have tenure, students will keep coming and your job will go on… unless you’re at a mediocre private college with a small endowment. Except for cutting-edge empiricists, however, AI will make research even more of a scam than it already is. If Sokal or Boghossian-Lindsey-Pluckrose can successfully “punk” humanities journals, AI will be able to do the same on an industrial scale. If you want a picture of the future, imagine a bot writing a thousand turgid pages a second about “neoliberalism,” “patriarchy,” and “anti-racism” — forever. Outside of the most empirical subjects, the only rationing tools left to determine academic status will be uniquely human: networking and sheer charisma. My fellow scholars, now is a great time to reread Dale Carnegie’s How to Win Friends and Influence People.
The best Substack on using AI in the classroom that I have found is Professor Ethan Mollick’s: https://www.oneusefulthing.org/
"...yet when technology virtually wiped out agricultural jobs, we still found plenty of new work to do instead." This is exactly the attitude that led to Karl Marx and Communism. Workers were displaced by machines faster than they could be absorbed in other industries. Poor-houses were a feature of every town and every large village. Marx looked around him and like William Blake, he marked in every face he met, marks of weakness, marks of woe. And thus began the anti-capitalist movement which since allying itself with socialism, has become the dominant political force today. Yes, in time technology (which as you said takes decades to mature) will provide other jobs for humans, more rewarding to the soul as well as to the wallet. The time it takes to do this is the problem.
To compete against other animals, humans used our superior brains to develop first tools, then machines, then energy. This is the first time in our history that we have had to compete against equal or higher brain-power. I cannot see it ending well.