This is a short overview explaining how building a large-scale, silicon-photonic quantum computer has been reduced to the creation of good sources of 3-photon entangled states (and may simplify further). Given such sources, each photon need only pass through a small, constant number of components, interfering with at most two other spatially nearby photons. Current photonics engineering has already demonstrated the manufacture of thousands of components on two-dimensional semiconductor chips, with performance that would allow the creation of tens of thousands of photons entangled in a state universal for quantum computation.

At present, the fully-integrated, silicon-photonic architecture we envisage involves creating the required entangled states by starting with single photons produced non-deterministically by pumping silicon waveguides (or cavities), combined with on-chip filters and superconducting nanowire detectors to herald that a photon has been produced. These sources are multiplexed to become near-deterministic, and the single photons are then passed through an interferometer to non-deterministically produce small entangled states, which are necessarily multiplexed to near-determinism again. This is followed by a 'ballistic' scattering of the small-scale entangled photons through an interferometer such that some photons are detected, leaving the remainder in a large-scale entangled state that is provably universal for quantum computing implemented by single-photon measurements.

There are a large number of questions regarding the optimal ways to make and use the final cluster state, dealing with static imperfections, constructing the initial entangled-photon sources, and so on, that need to be investigated before we can aim for millions of qubits capable of billions of computational time-steps. The focus of this article is on the theoretical side of such questions.
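To make the role of multiplexing concrete, the sketch below estimates how many heralded single-photon sources must be switched in parallel so that, with high probability, at least one of them fires on a given clock cycle. The per-pulse heralding probability and the target success probability used here are illustrative assumptions, not figures from the architecture described above.

```python
import math

def multiplexed_success(p_single: float, n_sources: int) -> float:
    """Probability that at least one of n independent heralded
    sources fires, each with per-pulse success probability p_single."""
    return 1.0 - (1.0 - p_single) ** n_sources

def sources_needed(p_single: float, p_target: float) -> int:
    """Smallest number of multiplexed sources whose combined
    success probability reaches p_target."""
    return math.ceil(math.log(1.0 - p_target) / math.log(1.0 - p_single))

# Illustrative numbers only: a heralded source firing 1% of the time,
# multiplexed until the combined stage succeeds 99% of the time.
p_single, p_target = 0.01, 0.99
n = sources_needed(p_single, p_target)
print(n, multiplexed_success(p_single, n))  # 459 sources -> ~0.99
```

The same arithmetic applies at the next stage: the non-deterministic interferometric generation of small entangled states is multiplexed in exactly the same way, with that stage's own success probability taking the place of p_single.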