Information dissemination occurs through the 'word of media' in the digital world. Fraudulent and deceitful content, such as misinformation, has detrimental effects on people. An automated fact-checking technique based on implicit facts, comprising information retrieval, natural language processing, and machine learning methods, assists in assessing the credibility of content and detecting misinformation. Previous studies have focused on linguistic and textual features and on similarity-measure-based approaches. However, these approaches lack factual knowledge, and similarity measures are less accurate when dealing with sparse or zero data. To fill these gaps, we propose a 'Content Similarity Measure (CSM)' algorithm that performs automated fact-checking of URLs in the healthcare domain. We introduce a novel set of content similarity, domain-specific, and sentiment polarity score features to achieve journalistic fact-checking. An extensive analysis of the proposed algorithm against standard similarity measures and machine learning classifiers showed that the 'content similarity score' feature outperformed the other features with an accuracy of 88.26%. In the algorithmic approach, CSM achieved an improved accuracy of 91.06%, compared to 74.26% for the Jaccard similarity measure. We also observed that the algorithmic approach outperformed the feature-based method. To check the robustness of the algorithms, we tested the model on three state-of-the-art datasets, viz. CoAID, FakeHealth, and ReCOVery. With the algorithmic approach, CSM achieved the highest accuracies of 87.30%, 89.30%, 85.26%, and 88.83% on the CoAID, ReCOVery, FakeHealth (Story), and FakeHealth (Release) datasets, respectively. With the feature-based approach, the proposed CSM achieved the highest accuracies of 85.93%, 87.97%, 83.92%, and 86.80%, respectively.
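
To make the baseline comparison concrete, below is a minimal sketch of the standard token-level Jaccard similarity that the proposed CSM is compared against; the CSM algorithm itself is not specified in this abstract, so it is not reproduced here. The function name and the example texts are illustrative only.

```python
def jaccard_similarity(text_a: str, text_b: str) -> float:
    """Standard Jaccard similarity over token sets: |A ∩ B| / |A ∪ B|."""
    tokens_a = set(text_a.lower().split())
    tokens_b = set(text_b.lower().split())
    if not tokens_a or not tokens_b:
        # Sparse or empty input, the situation where the abstract notes
        # similarity measures become less reliable.
        return 0.0
    return len(tokens_a & tokens_b) / len(tokens_a | tokens_b)


if __name__ == "__main__":
    claim = "vitamin c cures the common cold"
    evidence = "studies show vitamin c does not cure the common cold"
    print(f"Jaccard similarity: {jaccard_similarity(claim, evidence):.3f}")
```

This token-overlap score is the 74.26%-accuracy baseline referenced above; the proposed CSM augments plain overlap with content similarity, domain-specific, and sentiment polarity score features, which the paper credits for the higher 91.06% accuracy.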