For decades, computer systems have been threatened by a variety of hardware and software attacks, among which malware has been one of the most pivotal issues. Malware can steal, destroy, or contaminate data, gain unintended access, or even disrupt an entire system. Techniques exist to detect malware through static and dynamic analysis of malware files; however, stealthy malware circumvents static analysis, and although prior work has proposed a range of dynamic-analysis methods, these techniques also perform poorly on stealthy malware. Moreover, the rising trend and advancement of machine learning has led to its application in numerous domains, from computer vision and pattern recognition to securing hardware devices. Machine-learning-based malware detection techniques rely on grayscale images of malware binaries and classify malware based on the distribution of textures in these images. Despite the advancements and promising results shown by machine learning techniques, attackers can exploit their vulnerabilities by generating adversarial samples, which are crafted by intelligently adding perturbations to input samples. The majority of existing adversarial attacks and defenses are software-based. To defend against such adversaries, existing malware detection based on machine learning and grayscale images requires preprocessing of the adversarial data, which introduces additional overhead and can delay real-time malware detection. As an alternative, this thesis explores an RRAM (Resistive Random Access Memory) based defense against adversaries.

The aim of this thesis is therefore to address the above-mentioned critical system security issues by proposing techniques for designing a secure and robust cognitive system. First, a novel technique for detecting stealthy malware is proposed. The technique converts malware binaries into images, extracts features from them, and trains different ML classifiers on the resulting dataset. Results demonstrate that this technique successfully differentiates classes of malware based on the extracted features. Second, I demonstrate the effects of adversarial attacks on a reconfigurable RRAM-based neuromorphic architecture with different learning algorithms and device characteristics, and I propose an integrated solution that uses the reconfigurable RRAM architecture to mitigate the effects of adversarial attacks.
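To illustrate the first contribution, the following is a minimal sketch, not the thesis implementation, of an image-based malware classification pipeline: a malware binary is reinterpreted as a grayscale image, simple texture features are extracted, and an off-the-shelf classifier is trained. The image width, the specific features, and the choice of RandomForestClassifier are illustrative assumptions.

```python
# Minimal sketch of an image-based malware classification pipeline.
# Image width, feature choices, and classifier are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def binary_to_grayscale(path, width=256):
    """Read a malware binary and reshape its bytes into a 2-D grayscale image."""
    data = np.frombuffer(open(path, "rb").read(), dtype=np.uint8)
    height = len(data) // width
    return data[:height * width].reshape(height, width)

def texture_features(img, bins=64):
    """Simple texture descriptor: normalized intensity histogram plus mean and
    variance of the local gradient magnitude (a stand-in for richer texture
    descriptors used in image-based malware classification)."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 255), density=True)
    gx, gy = np.gradient(img.astype(np.float32))
    grad_mag = np.hypot(gx, gy)
    return np.concatenate([hist, [grad_mag.mean(), grad_mag.var()]])

# Assumed inputs: sample_paths (list of binary file paths) and y (family labels).
# X = np.stack([texture_features(binary_to_grayscale(p)) for p in sample_paths])
# X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
# clf = RandomForestClassifier(n_estimators=200).fit(X_tr, y_tr)
# print("accuracy:", clf.score(X_te, y_te))
```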