Many automated system analysis techniques (e.g., model checking, model-based testing) rely on first obtaining a model of the system under analysis. System modeling is often done manually, which is widely considered a hindrance to adopting model-based system analysis and development techniques. To overcome this problem, researchers have proposed to automatically "learn" models from sample system executions and have shown that the learned models can sometimes be useful. There are, however, many open questions. For instance, how much should we generalize from the observed samples, and how fast does learning converge? Would the analysis results based on the learned model be more accurate than the estimates we could have obtained by sampling many system executions within the same amount of time? Moreover, how well does learning scale to real-world applications? If the answer is negative, what are potential methods to improve the efficiency of learning? In this work, we first investigate existing algorithms for learning probabilistic models for model checking and propose an evolution-based approach for better controlling the degree of generalization. We then present existing approaches that learn abstract models in order to improve the scalability of learning. Lastly, we conduct an empirical study to answer the above questions. Our findings include that the effectiveness of learning may sometimes be limited, and that it is worth investigating how abstraction should be done properly in order to learn useful abstract models.
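To make concrete what "learning a probabilistic model from sample executions" means here, the following minimal Python sketch estimates the transition probabilities of a discrete-time Markov chain by counting transitions observed in a set of traces. The trace format and function name are illustrative assumptions, not the specific algorithm evaluated in this work; no generalization (e.g., state merging) is performed.

from collections import defaultdict

def estimate_dtmc(traces):
    """Estimate a discrete-time Markov chain from observed traces.

    Each trace is a list of (hashable) states, e.g. ["init", "req", "ok"].
    Transition probabilities are empirical frequencies; no state merging
    (i.e., no generalization beyond the observed samples) is performed.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for trace in traces:
        for src, dst in zip(trace, trace[1:]):
            counts[src][dst] += 1

    dtmc = {}
    for src, successors in counts.items():
        total = sum(successors.values())
        dtmc[src] = {dst: n / total for dst, n in successors.items()}
    return dtmc

# Example: two sampled executions of a hypothetical protocol.
traces = [
    ["init", "req", "ok", "init"],
    ["init", "req", "fail", "init"],
]
print(estimate_dtmc(traces))
# {'init': {'req': 1.0}, 'req': {'ok': 0.5, 'fail': 0.5}, 'ok': {'init': 1.0}, 'fail': {'init': 1.0}}

A learning algorithm that generalizes would go further, for example by merging states whose empirical successor distributions are similar; how aggressively to merge is precisely the degree-of-generalization question raised above.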