Philosophers frequently define knowledge as justified true belief. In this paper we build a mathematical framework that makes it possible to define learning (an increased degree of true belief) and knowledge of an agent in precise terms. This is achieved by phrasing belief in terms of epistemic probabilities, defined from Bayes' rule. The degree of true belief is then quantified by means of active information $I^+$: a comparison between the degree of belief of the agent and that of a completely ignorant person. Learning has occurred when either the agent's strength of belief in a true proposition has increased in comparison with the ignorant person ($I^+>0$), or the agent's strength of belief in a false proposition has decreased ($I^+<0$). Knowledge additionally requires that learning occurs for the right reason, and in this context we introduce a framework of parallel worlds, of which one is true and the others are counterfactuals. We also generalize the framework of learning and knowledge acquisition to a sequential setting, in which information and data are updated over time. The theory is illustrated with examples of coin tossing, historical events, future events, replication of studies, and causal inference.
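As a minimal worked illustration, suppose, as is common for active information, that $I^+$ is defined as a log-ratio of the two degrees of belief (this particular form, and the base-2 logarithm, are assumptions made here for the sake of the example):
\[
I^+ = \log_2 \frac{P(A)}{P_0(A)},
\]
where $P(A)$ is the agent's epistemic probability of a proposition $A$ and $P_0(A)$ is that of the completely ignorant person. If $A$ is the true proposition that a coin is biased towards heads, the ignorant person assigns $P_0(A)=1/2$, and after observing several tosses the agent's posterior belief is $P(A)=3/4$, then $I^+=\log_2(3/2)\approx 0.58 > 0$, so learning has occurred.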