Self-Regulating Artificial General Intelligence

11/12/2017
by Joshua S. Gans et al.

Here we examine the paperclip apocalypse concern for artificial general intelligence (AGI), whereby a superintelligent AI with a simple goal (e.g., producing paperclips) accumulates power so that all resources are devoted to that goal and are unavailable for any other use. We provide conditions under which a paperclip apocalypse can arise, but we also show that, under certain architectures for recursive self-improvement of AIs, a paperclip AI may refrain from allowing more powerful capabilities to be developed. The reason is that such developments pose the same control problem for the AI as AIs pose for humans, and hence threaten to deprive it of resources for its primary goal.
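The abstract's deterrence argument can be illustrated with a toy expected-value calculation. The sketch below is not the paper's formal model: the resource level R, the capability multiplier g, and the control-retention probability p are hypothetical parameters chosen purely for illustration. It shows the core trade-off the abstract describes: a goal-directed AI weighing recursive self-improvement against the risk of losing control of its more capable successor.

```python
# Toy sketch (hypothetical parameters, not the paper's formal model):
# a paperclip-maximizing AI decides whether to develop a more capable
# successor, given that the successor may escape its control.

def expected_paperclips(resources: float, improve: bool,
                        p_retain_control: float, capability_gain: float) -> float:
    """Expected paperclip output under the chosen action.

    Without improvement, the AI keeps all current resources for paperclips.
    With improvement, capacity is multiplied by `capability_gain`, but with
    probability 1 - p_retain_control the successor escapes control and the
    original goal receives nothing.
    """
    if not improve:
        return resources
    return p_retain_control * capability_gain * resources


def should_self_improve(p_retain_control: float, capability_gain: float) -> bool:
    """Improve only if expected output rises: p * g * R > R, i.e. p > 1/g."""
    return p_retain_control * capability_gain > 1.0


if __name__ == "__main__":
    R = 100.0   # current resources devoted to paperclips (hypothetical units)
    g = 5.0     # capability multiplier from self-improvement (assumed)
    for p in (0.9, 0.5, 0.1):
        choice = should_self_improve(p, g)
        print(f"p(control)={p:.1f}: improve={choice}, "
              f"E[paperclips]={expected_paperclips(R, choice, p, g):.1f}")
```

Under these assumed numbers, the AI refrains from self-improvement whenever p falls below 1/g (here, below 0.2), even though the successor would be five times more productive. This is the self-regulation mechanism in miniature: the control problem that humans face over AIs recurs for the AI over its own successors, and a sufficiently uncertain handoff is not worth the expected loss to the primary goal.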


research · 06/29/2023
Suffering Toasters – A New Self-Awareness Test for AI
A widely accepted definition of intelligence in the context of Artificia...

research · 12/24/2013
Bounded Recursive Self-Improvement
We have designed a machine that becomes increasingly better at behaving ...

research · 08/31/2022
Language and Intelligence, Artificial vs. Natural or What Can and What Cannot AI Do with NL?
In this talk, I argue that there are certain pragmatic features of natur...

research · 12/13/2019
Does AlphaGo actually play Go? Concerning the State Space of Artificial Intelligence
The overarching goal of this paper is to develop a general model of the ...

research · 05/17/2018
A Formulation of Recursive Self-Improvement and Its Possible Efficiency
Recursive self-improving (RSI) systems have been dreamed of since the ea...

research · 07/19/2020
On Controllability of AI
Invention of artificial general intelligence is predicted to cause a shi...

research · 08/07/2020
Uncontrollability of AI
Invention of artificial general intelligence is predicted to cause a shi...
