Crowdsourcing has been widely adopted as a fast and cost-effective means of human computation and of acquiring labeled training data for machine learning models. Despite its broad applicability, crowdsourcing still faces several challenges. Workers typically access crowdsourcing platforms through simple authentication mechanisms, so malicious workers can enter the system and submit unreliable answers, undermining the platform's trustworthiness. Moreover, a worker's lack of expertise and the difficulty of the tasks affect answer accuracy. Truth inference improves both data quality and worker reliability assessment: a truth inference algorithm estimates workers' trustworthiness and the correctness of answers from the submitted responses, and it also helps filter out low-quality answers. However, ground truth inference algorithms perform poorly under adversarial attacks by malicious workers. This research investigates how to defend crowdsourcing platforms against adversarial attacks and improve the truth inference process. The proposed method estimates workers' trust and reliability scores and classifies workers as normal or malicious; tasks are then assigned to workers based on this classification. The predicted scores are also used to infer the correct answers, thereby improving ground truth inference. Experiments consistently show that the proposed truth inference method tolerates adversarial attacks while maintaining competitive accuracy.
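
To make the score-driven aggregation concrete, the following minimal Python sketch shows one way estimated reliability scores could drive both malicious-worker filtering and answer inference via a reliability-weighted vote. The worker IDs, score values, and the 0.5 classification threshold are illustrative assumptions for this sketch, not the paper's actual estimator.

```python
# Illustrative sketch (not the paper's exact algorithm): filter workers by an
# estimated reliability score, then infer each task's answer by a
# reliability-weighted majority vote over the remaining workers.
from collections import defaultdict

def infer_truth(answers, reliability, threshold=0.5):
    """Aggregate (worker, task, label) triples into one label per task.

    Workers whose reliability falls below `threshold` are treated as
    malicious and excluded; each surviving vote is weighted by the
    worker's reliability score.
    """
    votes = defaultdict(lambda: defaultdict(float))
    for worker, task, label in answers:
        if reliability[worker] >= threshold:  # keep only "normal" workers
            votes[task][label] += reliability[worker]
    # Pick the label with the highest total weight for each task.
    return {task: max(labels, key=labels.get) for task, labels in votes.items()}

# Hypothetical example: w3's low score marks it as malicious, so its
# votes are discarded before aggregation.
answers = [("w1", "t1", "cat"), ("w2", "t1", "cat"), ("w3", "t1", "dog"),
           ("w1", "t2", "dog"), ("w3", "t2", "cat")]
reliability = {"w1": 0.9, "w2": 0.8, "w3": 0.2}
print(infer_truth(answers, reliability))  # {'t1': 'cat', 't2': 'dog'}
```

Under this weighting scheme, a high-reliability worker's answer counts for more than several low-reliability ones, which is why down-weighting or excluding suspected malicious workers makes the inferred ground truth more attack-tolerant.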