Proving that systems satisfy hard input and state constraints is frequently desirable when designing cyber-physical systems. One method for doing so is to compute the viability kernel: the subset of the state space for which there exists a control signal guaranteed to keep the system within the constraints over some time horizon. In this paper we present a novel sampling-based algorithm for approximating the viability kernel of linear sampled-data systems, which by construction offers a direct trade-off between scalability and accuracy. We prove that the algorithm is correct and that its convergence properties are optimal, and we demonstrate it on a simple example. We conclude by briefly describing additional results omitted due to space constraints.
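For concreteness, a standard finite-horizon formalization of the viability kernel reads as follows (the notation here is illustrative rather than taken from the paper: $\mathcal{K}$ denotes the constraint set, $\tau$ the time horizon, $\mathcal{U}$ the set of admissible sampled-data control signals, and $x(t; x_0, u)$ the state trajectory from initial state $x_0$ under input $u$):
\[
\mathrm{Viab}_{\tau}(\mathcal{K}) \;=\; \bigl\{\, x_0 \in \mathcal{K} \;\big|\; \exists\, u \in \mathcal{U} \ \text{such that}\ x(t; x_0, u) \in \mathcal{K} \ \text{for all}\ t \in [0, \tau] \,\bigr\}.
\]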