Recently, a connection has been established between two branches of computability theory, namely algorithmic randomness and algorithmic learning theory, and learning-theoretic characterizations of several notions of randomness have been discovered. We study such characterizations based on the asymptotic density of positive answers. In particular, this note provides a new learning-theoretic definition of weak 2-randomness, solving a problem posed by Zaffora Blando (Review of Symbolic Logic, 2019). The note also highlights the close connection between these characterizations and the problem of convergence on random sequences.
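As a hedged sketch of the underlying notion (the notation below is ours, not the note's): the asymptotic density of a set $A \subseteq \mathbb{N}$, e.g. the set of stages at which a learner answers positively, is
\[
\rho(A) \;=\; \lim_{n\to\infty} \frac{|A \cap \{0,\dots,n-1\}|}{n},
\]
when this limit exists; lower and upper densities are obtained by taking $\liminf$ and $\limsup$ instead.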
We consider the notion of prediction functions (or predictors), studied previously in the context of randomness and stochasticity by Ko, and later by Ambos-Spies and others. A predictor is a total computable function that tries to predict the bits of an infinite binary sequence. The prediction error is defined as the limit of the number of incorrect answers divided by the number of answers given so far. We discuss the indefiniteness of prediction errors for weak 1-generics and show that this phenomenon affects certain c.e. sequences as well. On the other hand, we consider a notion of optimal predictor and show that there is a sequence for which increasingly better predictors exist but for which no predictor is optimal.
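Under the simplifying assumption that the predictor answers on every bit (the abstract allows the denominator to count only the answers actually given), the prediction error of a predictor $P$ on a sequence $X \in 2^{\omega}$ can be sketched as
\[
\mathrm{Err}(P, X) \;=\; \lim_{n\to\infty} \frac{\#\{\, i < n : P(X {\upharpoonright} i) \neq X(i) \,\}}{n},
\]
and the indefiniteness discussed above is precisely the failure of this limit to exist, i.e. the corresponding $\liminf$ and $\limsup$ differ.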