Along with the expanding application of integrated circuits (ICs), the risk of security vulnerabilities in them has risen dramatically. Adversaries have launched diverse types of successful attacks against ICs by exploiting these vulnerabilities to expose confidential information, e.g., intellectual property (IP) and customer data. As a remedy for the shortcomings of traditional security measures taken to protect ICs, physically unclonable functions (PUFs) appear to be promising candidates that are intended to offer instance-specific functionality. While cryptographic mechanisms enjoying this privilege have been emerging, the vulnerability of PUFs to different types of attacks has been demonstrated. Among these attacks, a great deal of attention has been paid to machine learning (ML) attacks, which aim at modeling the challenge-response behavior of PUFs. So far, the success of ML attacks has relied on trial and error, and consequently, ad hoc attacks and their corresponding countermeasures have been developed.

This thesis aims to address this issue by providing mathematical proofs of the vulnerability of various PUF families, including Arbiter, XOR Arbiter, ring-oscillator, and bistable ring PUFs, to ML attacks. To achieve this goal, a generic framework for the assessment of these PUFs is developed that includes two main approaches. First, with regard to the inherent physical characteristics of the PUFs mentioned above, fit-for-purpose mathematical representations of them are established, which adequately reflect the physical behavior of those primitives. To this end, notions and formalizations already familiar in the ML theory community are reintroduced in order to give a better understanding of why, how, and to what extent ML attacks against PUFs can be feasible in practice. Second, polynomial-time ML algorithms are explored that can learn the PUFs under the appropriate representation. More importantly, in contrast to previous ML approaches, our framework ensures not only the accuracy of the model mimicking the behavior of the PUF but also the delivery of such a model.

Besides off-the-shelf ML algorithms, we apply a set of algorithms originating in the field of property testing that can support the evaluation of the security of PUFs. They serve as a "toolbox", from which PUF designers and manufacturers can choose the indicators relevant to their requirements. Last but not least, on the basis of learning theory concepts, this thesis explicitly states that the PUF families studied here cannot be considered an ultimate solution to the problem of insecure ICs. Furthermore, we believe that this thesis can provide insight not only into academic research but also into the design and manufacturing of PUFs.