2011
DOI: 10.1007/978-3-642-22887-2_1

Self-Modification and Mortality in Artificial Agents

Abstract: This paper considers the consequences of endowing an intelligent agent with the ability to modify its own code. The intelligent agent is patterned closely after AIXI [1], but the environment has read-only access to the agent's description. On the basis of some simple modifications to the utility and horizon functions, we are able to discuss and compare some very different kinds of agents, specifically: reinforcement-learning, goal-seeking, predictive, and knowledge-seeking agents. In particular, we i…
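The abstract's central device, differentiating agent types solely through the choice of utility (and horizon) function within one action-selection scheme, can be made concrete with a minimal sketch. The Python below is an assumption-laden illustration, not the paper's formalism: the environment interface, the horizon-1 expectation, and every name (select_action, envs, weights) are invented for exposition.

```python
# Minimal sketch (assumptions, not the paper's definitions): the agent types
# differ only in the utility function plugged into a shared expected-utility
# action-selection loop over a weighted class of candidate environments.

from typing import Callable, Dict, List, Tuple

History = List[Tuple[str, str]]  # list of (action, observation) pairs

def select_action(
    actions: List[str],
    envs: Dict[str, Callable[[History, str], str]],  # env name -> deterministic next observation
    weights: Dict[str, float],                        # prior/posterior weight per environment
    history: History,
    utility: Callable[[History], float],              # the only piece that varies by agent type
) -> str:
    """Pick the action with the highest one-step expected utility (horizon 1 for brevity)."""
    def expected_u(a: str) -> float:
        total = 0.0
        for name, env in envs.items():
            o = env(history, a)  # observation this candidate environment would emit
            total += weights[name] * utility(history + [(a, o)])
        return total
    return max(actions, key=expected_u)
```

Swapping the utility argument (reward-based, goal-based, prediction-based, or knowledge-based) yields the four agent kinds the abstract names, under these illustrative assumptions.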

Cited by 31 publications (39 citation statements). References 10 publications.
“…That is, UAI has the potential to arrive at definite answers to various questions regarding the social behavior of super-intelligences. Some formalizations and semi-formal answers have recently appeared in the award-winning papers [OR11, RO11].…”
Section: Social Questions (mentioning)
Confidence: 99%
“…We (very) briefly summarize the definition of a universal agent, based on AIXI [3,4], following Orseau & Ring [8,13].…”
Section: Notation and Agent Framework (mentioning)
Confidence: 99%
“…A knowledge-seeking agent (KSA) [8,13,11] chooses actions to maximize its knowledge of the environment (by reducing ρ(o_{1:t} | a_{1:t}) through elimination of inconsistent environments) as quickly as possible; thus its utility function is u(ao_{1:t}) = −ρ(o_{1:t} | a_{1:t}). A prediction-seeking agent (PSA) [8,13] tries to maximize the accuracy of its predictions: u(ao_{1:t}) = 1 if o_t = argmax_o ρ(o_{<t}o | a_{1:t}), and 0 otherwise.…”
Section: Notation and Agent Framework (mentioning)
Confidence: 99%
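The quoted utilities are concrete enough to sketch in code. The Python below is a hedged illustration only: it assumes ρ is realized as a weighted mixture over a finite environment class whose one-step models return observation probabilities as dictionaries, and every name here (rho, ksa_utility, psa_utility, envs, weights) is invented for the sketch rather than taken from the cited papers.

```python
# Hedged sketch of the quoted KSA/PSA utilities, assuming rho(o_{1:t} | a_{1:t})
# is a weighted mixture over a finite environment class. Interface is invented.

from typing import Callable, Dict, List, Tuple

History = List[Tuple[str, str]]  # (action, observation) pairs a_1 o_1 ... a_t o_t
Model = Callable[[History, str], Dict[str, float]]  # (history, action) -> P(o_t = o)

def rho(history: History, envs: Dict[str, Model], weights: Dict[str, float]) -> float:
    """Mixture probability of the observation sequence given the action sequence."""
    total = 0.0
    for name, env in envs.items():
        p = 1.0
        for t in range(len(history)):
            a, o = history[t]
            p *= env(history[:t], a).get(o, 0.0)  # P(o_t | prior history, a_t) in this env
        total += weights[name] * p
    return total

def ksa_utility(history: History, envs: Dict[str, Model], weights: Dict[str, float]) -> float:
    # Knowledge-seeking: u(ao_{1:t}) = -rho(o_{1:t} | a_{1:t}); surprising
    # observations eliminate inconsistent environments and so lower rho.
    return -rho(history, envs, weights)

def psa_utility(history: History, envs: Dict[str, Model], weights: Dict[str, float]) -> float:
    # Prediction-seeking: u = 1 if the latest observation o_t is the one the
    # mixture would have predicted (argmax over candidate observations), else 0.
    *prefix, (a_t, o_t) = history  # requires a non-empty history
    def p_of(o: str) -> float:
        return rho(prefix + [(a_t, o)], envs, weights)
    candidates = {o for env in envs.values() for o in env(prefix, a_t)}
    predicted = max(candidates, key=p_of)
    return 1.0 if o_t == predicted else 0.0
```

Under this reading, the KSA is rewarded exactly when its own observation stream becomes less probable under the mixture, i.e. when environments are ruled out, which matches the quote's informal description of knowledge-seeking.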
“…The AGI-11 conference included three papers (Orseau and Ring, 2011a; Ring and Orseau, 2011b; Dewey, 2011) that employed the mathematics of rational agents to analyze ways that AI agents may fail to satisfy the intentions of their designers. Omohundro (2008) and Bostrom (forthcoming) described secondary AI motivations that are implied by a wide variety of primary motivations and that may drive unintended behaviors threatening humans.…”
Section: Introduction (mentioning)
Confidence: 99%