Maintenance and evolution are accepted as integral principles of the software development life-cycle. They are essential for any system that operates in or addresses problems or activities of the real world, if it is to remain useful and profitable. Nevertheless, as time passes and modifications occur, modeling artifacts are often neglected due to the lack of proper maintenance. This neglect may render models outdated and hinder the application of model-based reasoning techniques, such as model-based testing and model checking. To address these issues, recent academic and industrial studies have turned to finite state machine (FSM) model learning techniques, which are becoming increasingly popular in software verification and testing. Despite these advances, model learning algorithms are still hampered by scalability issues, as well as by constant changes over time that may require learning from scratch. Furthermore, there is a lack of investigation into learning strategies for software product lines (SPLs), i.e., systems where variants co-exist to satisfy the needs of distinct market segments and hence incorporate variability in space. In this PhD thesis, we improve upon the state of the art of model-based software engineering by introducing theoretical and experimental contributions that address model learning in the setting of evolving systems, which incorporate modifications over time and variability in space. Our main contributions are threefold: (i) We have introduced the partial-Dynamic L*M, an adaptive algorithm that explores models of pre-existing versions on-the-fly to discard redundant and deprecated knowledge, in the form of input sequences that may not lead to state discovery. Using realistic models of the OpenSSL toolkit, we have shown that our algorithm is more efficient than state-of-the-art techniques and less sensitive to software evolution.
(ii) We have filled the gap in model learning algorithms for variability-intensive systems by introducing the FFSM Diff algorithm, an automated technique to identify similar behavior shared among product-specific FSMs, annotate states and transitions with feature constraints, and integrate them into succinct featured finite state machines (FFSMs). Using 105 FSMs derived from six SPLs from academic benchmarks, we have shown that our algorithm can effectively merge families of state machines into succinct FFSMs, especially when there is high feature reuse among products. (iii) We have built upon the FFSM Diff algorithm and reported our experiences of learning FFSMs through product sampling. Our results indicate that FFSMs learned by sampling can be as precise as those learned from exhaustive analysis and hence collectively cover the behavior of an SPL.