Crowdsourcing is becoming the norm among Internet users, as it offers a convenient and cost-effective way of obtaining information or input for a task by enlisting a crowd of people. However, data obtained through a crowdsourcing platform may be unreliable and may lead to misinformation or misleading conclusions. Evaluating and measuring the trustworthiness of crowdsourced data is therefore of utmost importance. In this paper, we study existing methods for evaluating the trustworthiness of data gathered from crowdsourcing platforms, with the aim of investigating the different mechanisms and measurements of trust and reliability for crowdsourced data. As the implementation of trustworthiness evaluation is domain-dependent, we selected the mechanisms and measurements relevant to our proposed speech emotion annotation task on a crowdsourcing platform. After further study, we decided to adapt and integrate selected mechanisms and measurements from the incentive, participant-quality, and system-control families of methods into our future work.
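To make one such measurement concrete, the sketch below illustrates a common participant-quality check: scoring each annotator by agreement with gold-standard (control) items secretly mixed into the annotation task. It is a minimal illustration under assumed data; the worker IDs, clip IDs, emotion labels, and the 0.7 acceptance threshold are all hypothetical, and this is a sketch of the general technique rather than the method proposed in this paper.

```python
# Illustrative sketch: scoring crowd annotators by agreement with
# gold-standard (control) items, one common participant-quality
# measurement. All data below is hypothetical.

from collections import defaultdict

# Gold labels for control clips secretly mixed into the task
# (clip_id -> expected emotion label).
GOLD = {"clip_01": "anger", "clip_02": "joy", "clip_03": "sadness"}

# Crowdsourced annotations: (worker_id, clip_id, label).
annotations = [
    ("w1", "clip_01", "anger"),
    ("w1", "clip_02", "joy"),
    ("w1", "clip_03", "neutral"),
    ("w2", "clip_01", "anger"),
    ("w2", "clip_02", "joy"),
    ("w2", "clip_03", "sadness"),
]

def worker_trust_scores(annotations, gold):
    """Return each worker's accuracy on the gold (control) items."""
    correct = defaultdict(int)
    seen = defaultdict(int)
    for worker, clip, label in annotations:
        if clip in gold:                      # only control items count
            seen[worker] += 1
            correct[worker] += int(label == gold[clip])
    return {w: correct[w] / seen[w] for w in seen}

if __name__ == "__main__":
    scores = worker_trust_scores(annotations, GOLD)
    for worker, score in sorted(scores.items()):
        status = "trusted" if score >= 0.7 else "flagged"  # example threshold
        print(f"{worker}: gold accuracy = {score:.2f} ({status})")
```

In practice, such a score can also feed the other two method families: it can gate payment (incentive) or trigger automatic exclusion of low-quality annotators (system control).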