Optimization problems arise frequently in many fields, such as the various branches of engineering. In some cases, the objective function exhibits mathematical properties that can be exploited to find exact solutions. When it does not, for instance when the objective function involves numerical simulations or sophisticated models of reality, heuristics become the practical choice. Population-based meta-heuristics, such as genetic algorithms, are widely used in this setting because they treat the objective function as a black box. Unfortunately, they have multiple parameters and generally require numerous function evaluations to find competitive solutions reliably. An attractive alternative is DIRECT, which also handles the objective function as a black box but is almost parameter-free and deterministic. However, its rectangle-division behavior is rigid, and it may require many function evaluations in degenerate cases. This work presents an optimizer that combines freedom from parameters with stochasticity for high exploration capability. The method, called Tangram, defines a self-adapting set of division rules for the search space and relies on a stochastic hill-climber to perform local searches. It is expected to be effective for low-dimensional problems (fewer than 20 variables) and small evaluation budgets. According to the results achieved, Tangram outperforms Teaching-Learning-Based Optimization (TLBO), a widespread population-based method, and a plain multi-start configuration of the same stochastic hill-climber.
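
Since the abstract only names the local-search component, the following is a minimal sketch of a generic stochastic hill-climber over a box-constrained search space, included to fix ideas. The Gaussian perturbations, the success/failure step adaptation, and all parameter names are illustrative assumptions, not the hill-climber actually embedded in Tangram.

```python
import numpy as np

def stochastic_hill_climber(f, lower, upper, budget=1000, step=0.1, rng=None):
    """Minimize f over the box [lower, upper] with a greedy stochastic search.

    Illustrative sketch only: the move distribution and step adaptation
    below are assumptions, not the hill-climber described in the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    span = upper - lower

    x = rng.uniform(lower, upper)              # random starting point
    fx = f(x)
    sigma = step                               # step size relative to the box

    for _ in range(budget - 1):                # one evaluation already spent
        cand = np.clip(x + rng.normal(0.0, sigma * span), lower, upper)
        fc = f(cand)
        if fc < fx:                            # greedy acceptance rule
            x, fx = cand, fc
            sigma = min(sigma * 1.5, 1.0)      # expand step after a success
        else:
            sigma = max(sigma * 0.95, 1e-9)    # contract step after a failure
    return x, fx

# Usage: 5-variable sphere function, global minimum 0 at the origin.
best_x, best_f = stochastic_hill_climber(
    lambda v: float(np.sum(v * v)),
    lower=[-5.0] * 5, upper=[5.0] * 5,
    budget=2000, rng=np.random.default_rng(0))
print(best_f)  # small value close to 0
```

Restarting this routine from several random points yields the plain multi-start baseline mentioned above; Tangram instead couples such a local search with its self-adapting division of the search space.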