The last decade has witnessed remarkable research advances at the intersection of machine learning (ML) and hardware security. The confluence of the two technologies has created many interesting and unique opportunities, but has also left some issues in its wake. ML schemes have been extensively used to enhance the security and trust of embedded systems, for example in hardware Trojan and malware detection. On the other hand, ML-based approaches have also been adopted by adversaries to assist side-channel attacks, reverse-engineer integrated circuits, and break hardware security primitives like Physically Unclonable Functions (PUFs). Deep learning, a subfield of ML, uses layered network structures to learn continuously from large amounts of labeled data. Despite the impressive outcomes demonstrated by deep learning in many application scenarios, its dark side has not yet been fully exposed. The inability to fully understand and explain what happens inside these models can turn an inherently benevolent system into a malevolent one. Recent research has revealed that the outputs of Deep Neural Networks (DNNs) can be easily corrupted by imperceptibly small input perturbations. As computations are brought nearer to the source of data creation, the attack surface of DNNs has also extended from the input data to the edge devices. Accordingly, motivated by the opportunities of ML-assisted security and the vulnerabilities of ML implementations, in this paper we survey the applications, vulnerabilities, and fortification of ML from the perspective of hardware security. We also discuss possible future research directions, thereby sharing a roadmap for the hardware security community in general.
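To make the input-perturbation vulnerability concrete, the following is a minimal sketch of the fast gradient sign method (FGSM), one canonical way such adversarial perturbations are crafted. The toy model, input, label, and epsilon budget below are illustrative assumptions, not drawn from any specific work surveyed here.

```python
# Minimal FGSM sketch: nudge the input in the sign of the loss gradient,
# which can change a classifier's prediction while staying visually tiny.
# The untrained toy model and random input are stand-ins for illustration.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in input image
y = torch.tensor([3])                             # stand-in true label

# Compute the loss gradient with respect to the input.
loss = loss_fn(model(x), y)
loss.backward()

epsilon = 0.05  # perturbation budget (assumed, kept small to be imperceptible)
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1)

# Compare predictions on the clean and perturbed inputs.
print(model(x).argmax(dim=1), model(x_adv).argmax(dim=1))
```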