Background
New Specific Application Domain (SAD) heuristics or design principles are being developed to guide the design and evaluation of mobile applications in an effort to improve their usability. This is because existing heuristics are rather generic and often fail to reveal many usability issues related to mobile-specific interfaces and characteristics. Mobile Electronic Data Capturing Forms (MEDCFs) are one such application, used to collect health data particularly in hard-to-reach areas, but they present a number of usability challenges, especially when used in rural areas by semi-literate users. Existing SAD design principles are seldom used to evaluate mobile forms because they give minimal attention to features specific to data capture. In addition, some of these lists are extremely long, making them difficult to use during the design and development of mobile forms. The main aim of this study was therefore to generate a usability evaluation checklist that can be used to design and evaluate Mobile Electronic Data Capturing Forms in order to improve their usability. We also sought to compare the views of novice and expert developers regarding usability criteria.

Methods
We conducted a literature review in August 2016 using keywords on articles and grey literature; articles focusing on heuristics for mobile applications, user interface designs of mobile devices, and web forms were eligible for review. The databases included the ACM Digital Library, IEEE Xplore and Google Scholar. After removing duplicates we had a total of 242 papers, of which 10 articles met the criteria and were finally reviewed. This review resulted in an initial usability evaluation checklist of 125 questions that could be adopted for designing MEDCFs. The questions addressed the five main categories in data capture, namely form content, form layout, input type, error handling and form submission. A validation study was then conducted with both novice and expert developers using a validation tool in order to refine the checklist against 5 criteria. The validation criteria were utility, clarity, question naming, categorization and measurability, with utility and measurability carrying the greatest weight. We determined the proportion of participants who agreed (scored 4 or 5), disagreed (scored 1 or 2) or were neutral (scored 3) on a given criterion for each question, separately for expert and novice developers. Finally, we selected the questions that reached an average of 85% agreement (scores of 4 or 5) across all 5 criteria from both novice and expert developers. 'Agreement' here means that participants expressed the same views or sentiments about the perceived likeability of an evaluation question.

Results
The validation study reduced the initial 125 usability evaluation questions to 30 evaluation questions, with the form layout category containing the majority of the questions. Results from the validation showed higher levels of aff...
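To make the selection rule described in the Methods concrete, the following Python sketch (not the authors' code) computes the per-criterion agreement rate, i.e. the proportion of raters scoring 4 or 5, for each candidate question and keeps only the questions whose average agreement across the five criteria reaches 85% for both novice and expert developers. All question identifiers, criterion keys and scores below are hypothetical.

```python
# Illustrative sketch of the 85% agreement selection rule; data are made up.
CRITERIA = ["utility", "clarity", "question_naming", "categorization", "measurability"]

def agreement_rate(scores):
    """Proportion of raters who scored 4 or 5 on the 5-point scale."""
    return sum(1 for s in scores if s >= 4) / len(scores)

def average_agreement(ratings):
    """Mean agreement rate across the five validation criteria for one question."""
    return sum(agreement_rate(ratings[c]) for c in CRITERIA) / len(CRITERIA)

def select_questions(novice_ratings, expert_ratings, threshold=0.85):
    """Keep questions that reach the agreement threshold in both rater groups."""
    selected = []
    for qid in novice_ratings:
        if (average_agreement(novice_ratings[qid]) >= threshold
                and average_agreement(expert_ratings[qid]) >= threshold):
            selected.append(qid)
    return selected

# Hypothetical ratings for a single question from each rater group.
novice = {"Q1": {c: [5, 4, 4, 5, 4] for c in CRITERIA}}
expert = {"Q1": {c: [4, 5, 5, 4, 4] for c in CRITERIA}}
print(select_questions(novice, expert))  # -> ['Q1'], since both groups average >= 85%
```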