This paper describes the multimodal deep learning system proposed for SemEval 2022 Task 5: MAMI - Multimedia Automatic Misogyny Identification. We participated in both subtasks: Subtask A, misogynous meme identification, and Subtask B, identifying the type of misogyny among potentially overlapping categories (stereotype, shaming, objectification, violence). The proposed architecture uses pretrained models as feature extractors for text and images, and learns a multimodal representation from these features using methods such as concatenation and scaled dot-product attention. Classification layers are applied to the fused features according to each subtask's definition. We also ran experiments with unimodal models to establish comparative baselines. Our best-performing system achieved an F1 score of 0.757 and was ranked 3rd in Subtask A; on Subtask B, it achieved an F1 score of 0.690 and was ranked 10th on the leaderboard. We further report extensive experiments with combinations of different pretrained models, which can serve as baselines for future work.
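
To make the fusion step concrete, the sketch below shows one way to combine pretrained text and image features with scaled dot-product attention followed by concatenation and a classification head. This is a minimal illustrative PyTorch sketch, not the system's actual implementation; the feature dimensions, mean pooling, and all module names are assumptions for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossModalFusion(nn.Module):
    """Illustrative fusion module: text tokens attend to image regions,
    and the attended features are concatenated with the text features
    before classification. Dimensions are assumed, not the paper's."""

    def __init__(self, text_dim=768, image_dim=2048, fused_dim=512, num_classes=1):
        super().__init__()
        # Project both modalities into a shared space.
        self.text_proj = nn.Linear(text_dim, fused_dim)
        self.image_proj = nn.Linear(image_dim, fused_dim)
        # Classification head on the concatenated (fused) representation.
        self.classifier = nn.Linear(2 * fused_dim, num_classes)

    def forward(self, text_feats, image_feats):
        # text_feats: (B, T, text_dim) from a pretrained text encoder
        # image_feats: (B, R, image_dim) from a pretrained image encoder
        q = self.text_proj(text_feats)     # queries from text tokens
        kv = self.image_proj(image_feats)  # keys/values from image regions
        # Scaled dot-product attention (built into PyTorch >= 2.0).
        attended = F.scaled_dot_product_attention(q, kv, kv)  # (B, T, fused_dim)
        # Mean-pool each stream, concatenate, and classify.
        fused = torch.cat([q.mean(dim=1), attended.mean(dim=1)], dim=-1)
        return self.classifier(fused)

# Usage with random stand-in features (e.g. transformer tokens, CNN regions):
model = CrossModalFusion()
logits = model(torch.randn(4, 32, 768), torch.randn(4, 49, 2048))
```

For Subtask A the head would output a single misogyny logit, while for Subtask B it would output one logit per category (stereotype, shaming, objectification, violence), since the labels may overlap.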