We describe and implement a computer-assisted approach for accelerating the exploration of uncharted effective free-energy surfaces (FESs). More generally, the aim is the extraction of coarse-grained, macroscopic information from stochastic or atomistic simulations, such as molecular dynamics (MD). The approach functionally links the MD simulator with nonlinear manifold learning techniques. The added value comes from biasing the simulator toward unexplored phase-space regions by exploiting the smoothness of the gradually revealed intrinsic low-dimensional geometry of the FES.

free-energy surface | model reduction | machine learning | protein folding | enhanced sampling methods

A crucial bottleneck in extracting systems-level information from direct statistical mechanical simulations is that the simulations sample phase space "at their own pace," dictated by the shape and barriers of the effective free-energy surface (FES). This bottleneck is a particular problem in molecular dynamics (MD), where long simulation times are "wasted" revisiting already explored regions of conformation space. Over the last 20 years, a tremendous amount of effort has been invested, and many truly creative solutions have been proposed, to bias the simulations so as to circumvent this. Several techniques have now become a standard part of the simulator's toolkit, like umbrella sampling or SHAKE. Other biasing techniques, like importance sampling, milestoning, path sampling or metadynamics, and the nudged elastic band/string method, have also been ingeniously formulated to help alleviate the above problem. Also worth mentioning are more recent methods based on machine learning, like reconnaissance metadynamics or diffusion map-directed MD. An incomplete list of works reporting on these methods can be found in refs. 1-10. Moreover, a recent review on dimensionality reduction and enhanced sampling in atomistic simulations can be found in ref. 11.

A crucial assumption that underpins many of these methods is that the dynamics are, effectively, low-dimensional: there exists a "good set of a few collective variables or coordinates" (also called reduction coordinates) in which one can write an effective Langevin or Fokker-Planck equation. It is the potential of this effective Langevin representation that we are trying to identify and exploit. One generally expects this effective Langevin representation to be a higher-order, generalized one with memory terms (12). In effect, we will show here how we can construct "short-memory" approximations with the help of collective variables (CVs) detected and updated "on the fly" using manifold learning.

If we knew the right CVs and had an "easy way" to create molecular conformations consistent with given values of these variables, then creating tabulated or interpolated effective FESs with a black-box atomistic simulator and umbrella sampling would be "easy." By observing the dynamics of the MD in these few CVs, we can then straightforwardly estimate the local gradient of the effective potential...
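To make the manifold-learning ingredient above more concrete, the following is a minimal, self-contained sketch (not the authors' implementation) of how data-driven CVs could be extracted from a set of sampled conformations with a diffusion map: pairwise distances between conformations are turned into a Markov matrix whose leading nontrivial eigenvectors serve as CVs. The kernel bandwidth `epsilon`, the number of retained coordinates, and the synthetic data in the usage example are illustrative assumptions, not values from the paper.

```python
# Diffusion-map sketch for detecting collective variables from sampled
# conformations. Hedged illustration only: bandwidth and data are made up.
import numpy as np

def diffusion_map_cvs(conformations, epsilon, n_cvs=2):
    """Return `n_cvs` diffusion-map coordinates for each conformation.

    conformations : (n_samples, n_features) array of (aligned) coordinates
    epsilon       : Gaussian kernel bandwidth (problem-dependent choice)
    """
    # Pairwise squared Euclidean distances between conformations.
    diffs = conformations[:, None, :] - conformations[None, :, :]
    sq_dists = np.sum(diffs**2, axis=-1)

    # Gaussian kernel with density normalization (alpha = 1),
    # which reduces the influence of nonuniform sampling.
    K = np.exp(-sq_dists / epsilon)
    q = K.sum(axis=1)
    K_tilde = K / np.outer(q, q)

    # Row-normalize to obtain a Markov transition matrix.
    d = K_tilde.sum(axis=1)
    P = K_tilde / d[:, None]

    # Leading eigenvectors; the first (constant, eigenvalue 1) is discarded.
    evals, evecs = np.linalg.eig(P)
    order = np.argsort(-evals.real)
    evals, evecs = evals.real[order], evecs.real[:, order]
    return evecs[:, 1:1 + n_cvs] * evals[1:1 + n_cvs]

# Usage on synthetic data standing in for MD snapshots (hypothetical numbers):
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 30))                 # 200 snapshots, 30 coordinates
    cvs = diffusion_map_cvs(X, epsilon=50.0, n_cvs=2)
    print(cvs.shape)                               # (200, 2): two data-driven CVs
```

In an "on the fly" setting, such a decomposition would be recomputed as new conformations are generated, so that the CVs (and the geometry of the explored region they reveal) are gradually updated together with the sampling.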