Abstract-Consider a large number of nodes, which sequentially make decisions between two given hypotheses. Each node takes a measurement of the underlying truth, observes the decisions from some immediate predecessors, and makes a decision between the given hypotheses. We consider two classes of broadcast failures: 1) each node broadcasts a decision to the other nodes, subject to random erasure in the form of a binary erasure channel; 2) each node broadcasts a randomly flipped decision to the other nodes in the form of a binary symmetric channel. We are interested in conditions under which there does (or does not) exist a decision strategy consisting of a sequence of likelihood ratio tests such that the node decisions converge in probability to the underlying truth, as the number of nodes goes to infinity. In both cases, we show that if each node learns only from a bounded number of immediate predecessors, then there does not exist a decision strategy such that the decisions converge in probability to the underlying truth. However, in case 1, we show that if each node learns from an unboundedly growing number of predecessors, then there exists a decision strategy such that the decisions converge in probability to the underlying truth, even when the erasure probabilities converge to 1. We show that a locally optimal strategy, consisting of a sequence of Bayesian likelihood ratio tests, is such a strategy, and we derive the convergence rate of the error probability for this strategy. In case 2, we show that if each node learns from all of its predecessors, then there exists a decision strategy such that the decisions converge in probability to the underlying truth when the flipping probabilities of the binary symmetric channels are bounded away from 1/2. Again, we show that a locally optimal strategy achieves this, and we derive the convergence rate of the error probability for it.
In the case where the flipping probabilities converge to 1/2, we derive a necessary condition on the convergence rate of the flipping probabilities such that the decisions based on the locally optimal strategy still converge in probability to the underlying truth. We also explicitly characterize the relationship between the convergence rate of the error probability and the convergence rate of the flipping probabilities.
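The setting of case 1 can be illustrated with a small simulation sketch. The model details below are illustrative assumptions, not the paper's exact construction: under hypothesis H1 each node draws a private Gaussian measurement, observes each predecessor's decision unless the binary erasure channel drops it (probability `erase_p`), and follows a myopic likelihood-ratio rule that weights each surviving vote as if that decision were wrong with a fixed probability `q`. The function names and parameters (`simulate`, `theta`, `q`) are hypothetical.

```python
import math
import random

def simulate(n=200, theta=1.5, erase_p=0.5, q=0.3, rng=None):
    """Sketch of myopic likelihood-ratio decisions over a binary erasure channel.

    Illustrative assumptions (not the paper's exact model): under H1 each node
    measures X ~ N(theta, 1) vs. N(-theta, 1) under H0; each predecessor's
    decision is observed unless erased (prob. erase_p); each observed vote is
    treated as wrong with probability q. Returns the list of 0/1 decisions,
    simulated under H1 (so a decision of 1 is correct).
    """
    rng = rng or random.Random()
    vote_w = math.log((1 - q) / q)          # LLR contribution of one observed vote
    decisions = []
    for i in range(n):
        x = rng.gauss(theta, 1.0)           # private measurement under H1
        llr = 2.0 * theta * x               # log [N(x; theta, 1) / N(x; -theta, 1)]
        for d in decisions:
            if rng.random() > erase_p:      # broadcast survived the erasure channel
                llr += vote_w if d == 1 else -vote_w
        decisions.append(1 if llr >= 0 else 0)
    return decisions
```

Because each node here sees an unboundedly growing number of predecessors, the later decisions tend to concentrate on the true hypothesis, consistent with the convergence result stated above for case 1.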