Modern data science research, at the cutting edge, can involve massive computational experimentation; an ambitious PhD student in a computational field may conduct experiments consuming several million CPU hours. Traditional computing practices, in which researchers use laptops, PCs, or campus-resident resources governed by shared-use policies, are awkward or inadequate for experiments at the massive scale and varied scope that we now see in the most ambitious data science. Modern cloud computing, by contrast, promises seemingly unlimited computational resources that can be custom configured, and offers a powerful new venue for ambitious data-driven science. Were the cloud fully exploited, the amount of raw experimental work that could be completed in a fixed amount of calendar time ought to expand by several orders of magnitude.

As potentially powerful as cloud-based experimentation may be in the abstract, it has not yet become a standard option for researchers in many academic disciplines. The prospect of actually conducting massive computational experiments in today's cloud systems with today's standard approaches confronts the potential user with daunting challenges. A user schooled in traditional interactive personal computing likely expects that a cloud experiment will involve an intricate collection of moving parts requiring extensive monitoring and involvement. Leading obstacles include: (i) the seeming complexity of today's cloud computing interfaces, (ii) the difficulty of executing and managing an overwhelmingly large number of computational jobs, and (iii) the difficulty of keeping track of, collating, and combining a massive collection of separate results. Starting a massive experiment 'bare-handed' therefore seems highly problematic and prone to rapid 'researcher burnout'.

New software stacks are emerging that render massive cloud-based experiments relatively painless. Such stacks simplify experimentation by systematizing experiment definition, automating the distribution and management of all tasks, and allowing easy harvesting of results and documentation. In this article, we discuss several painless computing stacks that abstract away the difficulties of massive experimentation, thereby enabling a proliferation of ambitious experiments for scientific discovery.

* This article is based on a series of lectures given in the Stanford course Stats285 in Fall 2017.