Any number can be expanded to the base 10, leading to a sequence of digits between 0 and 9 corresponding to the number. Likewise, any number can be expanded to the base 2, leading to a sequence of digits, each either 0 or 1, corresponding to the number. It is a result due to Émile Borel in 1904 that "almost all" numbers have the property that, when expanded to the base 2, each of the digits 0 and 1 appears with an asymptotic frequency of 1/2. That is, if we regard the sequence of digits in the expansion to the base 2 as a sequence of 'heads' and 'tails' resulting from a coin-tossing experiment, then, in the language of probability theory, the probability of getting heads (that is, a 0) is 1/2, and the probability of getting tails (that is, a 1) is also 1/2. Numbers with this property are called "simply normal numbers" to the base 2. Traditionally, the proof of Borel's theorem relies on a knowledge of measure theory, which generally lies outside the undergraduate curriculum. Here, a proof of Borel's theorem is presented which requires only an introductory knowledge of sequences and series, together with a knowledge of how to integrate step functions on an interval. This makes it possible to discuss Borel's theorem at the level of a first- or second-year course in mathematical analysis.
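The notion of simple normality to the base 2 can be illustrated computationally. The sketch below (the helper names `binary_digits` and `frequency_of_ones` are introduced here for illustration only) extracts the first n binary digits of a number in [0, 1) and measures the empirical frequency of the digit 1. The example uses 1/3, whose binary expansion is the repeating pattern 0.010101..., so each of the digits 0 and 1 appears with asymptotic frequency exactly 1/2.

```python
from fractions import Fraction

def binary_digits(x, n):
    """Return the first n digits of the base-2 expansion of x in [0, 1)."""
    digits = []
    for _ in range(n):
        x *= 2
        d = int(x)      # the integer part is the next binary digit (0 or 1)
        digits.append(d)
        x -= d
    return digits

def frequency_of_ones(x, n):
    """Empirical frequency of the digit 1 among the first n binary digits of x."""
    return sum(binary_digits(x, n)) / n

# 1/3 = 0.010101... in base 2: among any even number of digits,
# exactly half are 1, so the frequency is exactly 1/2.
print(frequency_of_ones(Fraction(1, 3), 1000))  # 0.5
```

Exact rational arithmetic (`Fraction`) is used so that the digit extraction is free of floating-point rounding; for a number that is merely conjectured to be normal, such as the fractional part of the square root of 2, the same routine would show the frequency approaching 1/2 empirically without, of course, proving anything.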