We give the first agnostic, efficient, proper learning algorithm for monotone Boolean functions. Given 2^{Õ(√n/ε)} uniformly random examples of an unknown function f : {±1}^n → {±1}, our algorithm outputs a hypothesis g : {±1}^n → {±1} that is monotone and (opt + ε)-close to f, where opt is the distance from f to the closest monotone function. The running time of the algorithm (and consequently the size and evaluation time of the hypothesis) is also 2^{Õ(√n/ε)}, nearly matching the lower bound of [BCO+15]. We also give an algorithm for estimating, up to additive error ε, the distance of an unknown function f to monotone, with a running time of 2^{Õ(√n/ε)}. Previously, for both of these problems, sample-efficient algorithms were known, but these algorithms were not run-time efficient. Our work thus closes this gap in our knowledge between the run-time and sample complexity.

This work builds upon the improper learning algorithm of [BT96] and the proper semiagnostic learning algorithm of [LRV22], which obtains a non-monotone Boolean-valued hypothesis and then "corrects" it to monotone using query-efficient local computation algorithms on graphs. This black-box correction approach can achieve no error better than 2opt + ε information-theoretically; we bypass this barrier by a) augmenting the improper learner with a convex optimization step, and b) learning and correcting a real-valued function before rounding its values to Boolean.

Our real-valued correction algorithm solves the "poset sorting" problem of [LRV22] for functions over general posets with non-Boolean labels.