Dangers of A.I. with Guest Nick Bostrom

Jan 06, 2015, 10:26 PM

On Superintelligence: Paths, Dangers, Strategies (2014) with the author, a philosophy professor at Oxford.

Just grant the hypothetical that advances in machine intelligence will eventually produce a machine capable of further improving itself and becoming much smarter than we are. Put aside the question of whether such a being could in principle be conscious or self-conscious or have a soul or whatever. None of those are necessary for it to be capable, say, of developing and manufacturing a trillion nanobots, which it could then use to remake the earth.

Bostrom thinks that we can make some predictions about the motivations of such a being, whatever goals it's programmed to achieve: e.g., its goals will entail that it won't want us to change those goals. This challenges us to figure out, in advance, how to frame and implement an A.I.'s motivational programming before it's smart enough to resist further changes. Can we in effect tell the A.I. to figure out and do whatever we would ask it to do if we were better informed and wiser? Can we offload philosophical thought to such a superior intelligence in this way? Bostrom thinks that philosophers are in a great position for well-informed speculation on topics like this.

Mark, Dylan, and Nick are also joined by former philosophy podcaster Luke Muehlhauser. Go to the blog: http://www.partiallyexaminedlife.com/2015/01/06/ep108-nick-bostrom/

#superintelligence #bostrom #ai #artificial #intelligence #muehlhauser