DOI:
Keywords: Superintelligence, Artificial general intelligence, Artificial intelligence, Intelligence cycle (target-centric approach), Humanity, Intelligence analysis, Psychology, Significant risk, Event (philosophy), Openness to experience
Abstract: If the general intelligence of artificial systems were to surpass that of humans significantly, this would constitute a significant risk for humanity, so even if we estimate the probability of this event to be fairly low, it is necessary to think about it now. We need to estimate what progress we can expect, what the impact of superintelligent machines might be, how we might design safe and controllable systems, and whether there are directions of research that should best be avoided or strengthened.

The papers in this special volume of the Journal of Experimental and Theoretical Artificial Intelligence (JETAI) are the outcome of a conference on "Impacts and Risks of Artificial General Intelligence" (AGI-Impacts), which took place at the University of Oxford, St. Anne's College, on December 10th and 11th, 2012, jointly with the fifth annual conference on "Artificial General Intelligence" (AGI-12). The conference was organised by the Future of Humanity Institute at Oxford: academically by Nick Bostrom and myself, with support from research fellows Stuart Armstrong, Toby Ord, Anders Sandberg and further members of the programme committee; organisationally by Seán Ó hÉigeartaigh, Alexandre Erler, Daniel Dewey, and others. We are grateful to the artificial general intelligence (AGI) community for its openness to these issues of safety and security, as shown by the fact that it initiated this connection and that the vast majority of the ca. 150 participants of the main AGI meeting also attended our AGI-Impacts event. Last but not least, we want to thank the "European Network for Cognitive Systems, Interaction and Robotics (EUCog)" and the organisation "Saving Humanity from Homo Sapiens" for sponsoring the event.