Globalization of semiconductor design and manufacturing has raised concerns about trust in the final product. Components may now be designed and manufactured anywhere in the world without the direct supervision of the buyer. As a result, hardware designs and fabricated chips may be vulnerable to malicious alteration by an adversary at any stage of the VLSI design flow, compromising the integrity of the component. The effect of any modification made by the adversary can be catastrophic in critical applications. Because of the stealthy nature of such insertions, they are extremely difficult to detect using traditional testing and verification methods. Therefore, establishing trust in hardware systems requires a new approach and has drawn much attention in the hardware security community.

For many years, researchers in the cyber-security community have developed sophisticated techniques to detect, isolate, and prevent malicious attacks, assuming that the underlying hardware platform is secure and trustworthy. However, the hardware may contain one or more backdoors that can be exploited by software at run time. Therefore, the trust of a computing system cannot be guaranteed unless the trust of the hardware platform itself can be guaranteed.

A malicious insertion can be very stealthy and may involve only a minor modification to the hardware design or the fabricated chip. The insertion may require rare or specific conditions in order to be activated. Its effect may be denial of service, a change of function, destruction of the chip, leakage of secret information from cryptographic hardware, etc.

In this thesis, we propose a novel technique for detecting malicious alteration(s) in a third-party soft intellectual property (IP) using a clever combination of sequential equivalence checking (SEC) and automatic test generation.
The use of powerful inductive invariants prunes a large illegal state space, and test generation helps to provide a sensitization path for nodes of interest. Results for a set of hard-to-verify designs show that our method can either ensure that the suspect design is free from the functional effect of any malicious change(s) or return a small set of the most likely malicious signals.
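To make the underlying idea concrete, the sketch below is a deliberately simplified, hypothetical illustration (not the thesis's SEC/ATPG flow, which operates on sequential designs with inductive invariants). It models equivalence checking as a miter-style search: a "golden" specification is compared against a suspect design containing a Trojan that fires only under one rare input condition, and exhaustive input enumeration stands in for automatic test generation by returning the distinguishing input (the Trojan trigger). The function names and the toy circuits are assumptions for illustration only.

```python
from itertools import product

# Toy 4-input "golden" specification: a simple AND-OR function.
def spec(a, b, c, d):
    return (a and b) or (c and d)

# Hypothetical suspect design: functionally identical except under one
# rare trigger condition (a=b=c=d=1), where an inserted Trojan flips
# the output.
def suspect(a, b, c, d):
    trojan_active = a and b and c and d   # rare activation condition
    out = (a and b) or (c and d)
    return (not out) if trojan_active else out

def find_trigger():
    """Miter-style check: search for an input that distinguishes the
    suspect design from the specification (a directed test). Returns
    the counterexample input, or None if the designs are functionally
    equivalent over the whole input space."""
    for bits in product([0, 1], repeat=4):
        if spec(*bits) != suspect(*bits):
            return bits          # counterexample = Trojan trigger
    return None

trigger = find_trigger()
print(trigger)  # → (1, 1, 1, 1): the single rare input exposing the Trojan
```

In a real flow, the exhaustive loop is replaced by SAT/BMC-based sequential equivalence checking, and the returned counterexample corresponds to a directed test sensitizing the suspect logic; the rarity of the trigger (one input pattern out of sixteen here, and vastly rarer in practice) is exactly why random testing tends to miss such insertions.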