This chapter surveys eight research areas organized around one question: As learning systems become increasingly intelligent and autonomous, what design principles can best ensure that their behavior is aligned with the interests of the operators? The chapter focuses on two major technical obstacles to AI alignment: the challenge of specifying the right kind of objective functions and the challenge of designing AI systems that avoid unintended consequences and undesirable behavior even in cases where the objective function does not line up perfectly with the intentions of the designers. The questions surveyed include the following: How can we train reinforcement learners to take actions that are more amenable to meaningful assessment by intelligent overseers? What kinds of objective functions incentivize a system to “not have an overly large impact” or “not have many side effects”? The chapter discusses these questions, related work, and potential directions for future research, with the goal of highlighting relevant research topics in machine learning that appear tractable today.
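One family of answers to the "do not have an overly large impact" question penalizes deviation from a counterfactual baseline. As a hedged illustration (the function name, the distance-based impact measure, and the penalty weight `lam` are all assumptions for this sketch, not a construction from the chapter), a low-impact objective might look like:

```python
def penalized_reward(reward, state, baseline_state, lam=1.0):
    """Toy low-impact objective: subtract a penalty proportional to how far
    the world state drifted from an 'inaction' baseline state.

    `state` and `baseline_state` are feature vectors; the impact measure here
    is a simple L1 distance, chosen only for concreteness.
    """
    impact = sum(abs(s - b) for s, b in zip(state, baseline_state))
    return reward - lam * impact

# Example: a task reward of 10, but the agent moved one state feature
# two units away from the baseline, with penalty weight 2.0.
print(penalized_reward(10.0, [1.0, 3.0], [1.0, 1.0], lam=2.0))
```

The interesting design questions, which such a sketch deliberately glosses over, are how to choose the baseline and the impact measure so that the penalty does not itself create perverse incentives.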
We present the logical induction criterion for computable algorithms that
assign probabilities to every logical statement in a given formal language and
refine those probabilities over time. The criterion is motivated by a series of
stock trading analogies. Roughly speaking, each logical sentence phi is
associated with a stock that is worth $1 per share if phi is true and nothing
otherwise, and we interpret the belief state of a logically uncertain reasoner
as a set of market prices, where P_N(phi) = 50% means that on day N, shares of
phi may be bought or sold from the reasoner for 50¢. A market is then called a
logical inductor if (very roughly) there is no polynomial-time computable
trading strategy with finite risk tolerance that earns unbounded profits in
that market over time. We then describe how this single criterion implies a
number of desirable properties of bounded reasoners; for example, logical
inductors outpace their underlying deductive process, perform universal
empirical induction given enough time to think, and place strong trust in their
own reasoning process.

Comment: In Proceedings TARK 2017, arXiv:1707.0825
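The exploitation side of the criterion can be made concrete with a toy market over a single sentence. The sketch below is an illustrative assumption, not the logical induction algorithm itself: it only shows how a market whose price on a true sentence never converges toward 1 is exploited by a simple trader whose profit grows without bound.

```python
def trader_profit(prices, truth_value, threshold=0.5):
    """Toy trader against a market over one sentence phi.

    `prices` is the market's price for phi on each day. Shares pay $1 if phi
    is true and $0 otherwise. The trader buys one share whenever the price is
    below `threshold` and shorts one share whenever it is at or above it.
    """
    payout = 1.0 if truth_value else 0.0
    profit = 0.0
    for p in prices:
        if p < threshold:
            profit += payout - p   # buy at price p, collect the payout
        else:
            profit += p - payout   # short at price p, owe the payout
    return profit

# A market stuck at 0.2 on a true sentence loses 0.8 per day to this trader:
print(trader_profit([0.2] * 100, True))  # → 80.0
```

A logical inductor, by contrast, must eventually price sentences so that no efficiently computable trader of this kind accumulates unbounded profit.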
Classical game theory treats players as special: a description of a game contains a full, explicit enumeration of all players, even though in the real world, "players" are no more fundamentally special than rocks or clouds. It is not trivial to find a decision-theoretic foundation for game theory in which an agent's coplayers are a non-distinguished part of the agent's environment. Attempts to model both players and the environment as Turing machines, for example, fail for standard diagonalization reasons.
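The diagonalization failure can be seen in miniature with two agents that each try to simulate the other and act on the result. The agent names and the recursion-based setup below are assumptions made for illustration; the point is only that naive mutual simulation never bottoms out:

```python
def agent_A(depth=0):
    # A simulates B and does the opposite of whatever B will do.
    return 1 - agent_B(depth + 1)

def agent_B(depth=0):
    # B symmetrically simulates A and does the opposite.
    return 1 - agent_A(depth + 1)

try:
    agent_A()
    outcome = "terminated"
except RecursionError:
    outcome = "infinite regress"

print(outcome)  # the mutual simulation exhausts the stack
```

Frameworks that resolve this (for instance by giving agents access to probabilistic oracles about each other) have to break the regress without reintroducing players as a special, explicitly enumerated part of the formalism.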