Jebari, Karim & Joakim Lundborg | 2018
Foresight, https://doi.org/10.1108/FS-04-2018-0042
Abstract
Purpose
The claim that superintelligent machines constitute a major existential risk was recently defended in Nick Bostrom's book Superintelligence and forms the basis of the sub-discipline of AI risk. The purpose of this paper is to critically assess the philosophical assumptions underlying the argument that AI could pose an existential risk and, if so, to characterize that risk.
Design/methodology/approach
This paper distinguishes between "intelligence", the cognitive capacity of an individual, and "techne", a more general ability to solve problems using, for example, technological artifacts. While human intelligence has not changed much over historical time, human techne has improved considerably. Moreover, the fact that human techne varies more across individuals than human intelligence suggests that if machine techne were to surpass human techne, the transition would likely be prolonged rather than explosive.
Findings
Several constraints on the intelligence explosion scenario are presented, which imply that AI could be controlled by human organizations.
Originality/value
If true, this argument suggests that efforts should focus on devising strategies to control AI rather than on strategies that assume such control is impossible.