4 Comments

This was a really good discussion. I especially liked the part about the deeper philosophical disagreements! I've found you and Huemer persuasive on these issues, and different from what I normally hear (from EA people), so it's interesting to see you discuss this stuff with them. I would like to hear your thoughts on Huemer's reincarnation argument!

Rob is clearly very intelligent and was asking good questions. This made for an excellent discussion.

Great conversation!

Great conversation! I find Eliezer's convictions about intelligent AI a bit... extreme?

"So Eliezer, we actually have a bet. It takes a little effort to understand the bet, because his view is so specific. He said, “Look, I want a bet on there will no longer be any human beings on the surface of the Earth on January 1, 2030.” I was willing to give him, like, “How about all of human extinction?” — “No, no, no, no, no. There could still be humans in mine shafts. That’s OK. But not the surface of the Earth.”"

Eliezer is a hella eccentric person. I love reading his ideas, and I am very concerned about AI risk myself, but even I think a FOOM scenario or anything like it by 2030 is incredibly unlikely. I haven't thought about the risk super critically, but I'd put a doomsday scenario like that at maybe... a 0.0001 chance of happening by then?

Which, if you multiply by humanity's potential and the future and all that, is bad, and worth having some people working full-time on! That's why I support AI alignment work and MIRI and such. But I would require very, very large odds in my favor to take a bet like that, and I'm not that worried about my own life; there are a billion things more likely to kill me.
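
For what it's worth, a back-of-envelope sketch of that multiplication in Python. The 0.0001 is the figure from the comment above; the size of "humanity's potential" is a purely illustrative placeholder, not a number from the thread:

```python
# Back-of-envelope expected-value arithmetic for the comment above.
# p_doom is the commenter's rough 0.0001; the stake is an illustrative
# stand-in for "humanity's potential", not a figure from the discussion.

p_doom_by_2030 = 1e-4           # commenter's guess for a FOOM-style scenario by 2030
future_lives_at_stake = 1e16    # hypothetical: far-future lives, purely illustrative

expected_loss = p_doom_by_2030 * future_lives_at_stake
print(f"expected lives lost: {expected_loss:.1e}")  # 1.0e+12 -- large even at a tiny p
```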
