This strikes me as a lot of handwaving and cope. Do we need something that we can perfectly plot on a graph here?
In 2007 (or 2006 or something, I forget), autonomous vehicles couldn't navigate through the open desert and reach a finish line. The idea that they could be on a road with people was laughable -- it was obvious they'd kill everyone and then themselves, basically instantly. In successive DARPA challenges, they started finishing the course, then finishing harder courses, going through simulated traffic.
By 2013, we had cars that could, in certain situations, safely drive in freeway traffic. By 2015, we had cars that could handle a subset of urban traffic situations -- probably not actually as safely as human drivers, but not two orders of magnitude worse either. By 2018, Waymo launched autonomous vehicles to non-employees in Scottsdale. And then... we've inched forward. We have a small fleet driving, almost certainly deeply unprofitably, in San Francisco. The Scottsdale service area has expanded a bit.
This slowdown clearly came as a surprise to companies working in the autonomous vehicle space. Their internal metrics didn't give them any better warning that it was coming.
Does this mean that LLMs will suddenly have a giant slowdown in progress post GPT-4?
It absolutely does not.
Does this mean that people should rein in their confident predictions that LLM capabilities will improve steadily with no end in sight? It does.