In this work we study active learning of homogeneous s-sparse halfspaces in ℝ^d under label noise. Even in the absence of label noise this is a challenging problem, and only recently have label complexity bounds of the form Õ(s · polylog(d, 1/ε)) been established in Zhang (2018) for computationally efficient algorithms under the broad class of isotropic log-concave distributions. In contrast, under high levels of label noise, the label complexity bounds achieved by computationally efficient algorithms are much worse. When the label noise satisfies the Massart condition (Massart and Nédélec, 2006), i.e., each label is flipped with probability at most η for a parameter η ∈ [0, 1/2), the work of Awasthi et al. (2016) provides a computationally efficient active learning algorithm under isotropic log-concave distributions with label complexity Õ(s^{poly(1/(1−2η))} · polylog(d, 1/ε)). Hence the algorithm is label-efficient only when the noise rate η is a constant. In this work, we substantially improve on the state of the art by designing a polynomial-time algorithm for active learning of s-sparse halfspaces under bounded noise and isotropic log-concave distributions, with a label complexity of Õ(s/(1−2η)^4 · polylog(d, 1/ε)). Hence our new algorithm is label-efficient even for noise rates close to 1/2. Prior to our work, such a result was not known even for the random classification noise model. Our algorithm builds on the existing margin-based algorithmic framework: at each iteration it performs a sequence of online mirror descent updates on a carefully chosen loss sequence, using a novel gradient update rule that accounts for the bounded noise.
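To make the online mirror descent ingredient concrete, the sketch below is a minimal, hypothetical illustration of entropic mirror descent (the EG± update) over an ℓ1 ball, the classical device for obtaining polylog(d) dependence in sparse settings. It uses a plain hinge surrogate for illustration only; the paper's actual loss sequence, its noise-aware gradient update rule, and the margin-based outer loop are not reproduced here, and the function name and parameters are invented for this sketch.

```python
import numpy as np

def omd_hinge_sparse(X, y, lr=0.1, radius=1.0):
    """Hypothetical sketch: entropic online mirror descent (EG+/-)
    on a hinge surrogate over the l1 ball of the given radius.
    Not the paper's exact update rule."""
    n, d = X.shape
    # EG+/- parametrization: w = radius * (p[:d] - p[d:]) with p a
    # probability vector over 2d coordinates, so every iterate stays
    # inside the l1 ball of radius `radius`.
    p = np.full(2 * d, 1.0 / (2 * d))
    for x, yi in zip(X, y):
        w = radius * (p[:d] - p[d:])
        # subgradient of the hinge surrogate max(0, 1 - y <w, x>)
        g = -yi * x if yi * w.dot(x) < 1.0 else np.zeros(d)
        # multiplicative (entropic mirror descent) step; the gradient
        # w.r.t. p is radius * (g, -g) by the chain rule
        p = p * np.exp(-lr * radius * np.concatenate([g, -g]))
        p /= p.sum()  # Bregman projection back onto the simplex
    return radius * (p[:d] - p[d:])

# Illustrative usage on synthetic data with a 3-sparse target:
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 50))
w_star = np.zeros(50)
w_star[:3] = 1.0 / np.sqrt(3)
y = np.sign(X @ w_star)
w_hat = omd_hinge_sparse(X, y)
```

The entropic mirror map is what distinguishes this from plain online gradient descent: its multiplicative updates yield regret scaling logarithmically in the dimension, which is how sparsity-aware analyses avoid paying poly(d) in the label complexity.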