Comparing research performance across distinct subject domains is not recommended unless appropriate normalization is applied to account for domain-specific characteristics. Apart from limited research exploring the field-dependent behaviour of specific research performance indicators, it is difficult to find comprehensive studies examining how both conventional and altmetric indicators behave at the journal, article, and author levels across distinct subject domains. This research used Scopus and PlumX as sources for conventional and altmetric data, respectively. In addition to descriptive statistics, the Mann–Whitney U test, cluster plots, and correlation analysis were employed for data analysis. The results reveal that indicators at all three levels behave notably differently in Medicine compared with the Physical and Social Sciences. Most indicators at all three levels attain higher maximum and average values in Medicine; for instance, the maximum values of most indicators, except citations and document counts, are significantly higher in Medicine than in the Physical and Social Sciences. However, the citation counts and productivity of Physical Sciences journals surpass those of journals in the other two domains. SNIP varies only slightly across subject domains compared with other journal-level indicators. Further, citations do not influence SNIP and SJR as strongly as they do the Journal Impact Factor and CiteScore. All article-level indicators show significant differences between Medicine and the Physical Sciences. Between the Physical and Social Sciences, all indicators except page count show significant differences. Further, article-level indicators in the Social Sciences behave in much the same way as in the Physical Sciences. Citation counts positively influence capture counts. In addition, authors in Medicine are likely to have more impact and be more productive in their field than authors in other fields.
Collaboration was also found to improve both the productivity of authors and the impact of their research, irrespective of the domain in which they work. These findings are important to authors, research evaluators, and publishers from different viewpoints. Discouraging performance comparisons based on raw indicator values can protect researchers from inaccurate assessments, enabling them to fully realize their potential for conducting cutting-edge research. Finally, this research indicates several directions in which this line of research can be extended.
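As a minimal sketch of the kind of comparison the study performs, the two-sided Mann–Whitney U test below checks whether an indicator differs significantly between two subject domains. This is an illustrative pure-Python implementation using the normal approximation without continuity or tie correction, not the study's actual analysis pipeline, and the sample data are invented, not the study's data.

```python
import math

def mann_whitney_u(x, y):
    """Two-sided Mann-Whitney U test (normal approximation, no tie
    or continuity correction; adequate only as an illustration)."""
    n1, n2 = len(x), len(y)
    # U statistic by direct pairwise comparison; ties contribute 0.5 each
    u1 = sum(1.0 if xi > yj else 0.5 if xi == yj else 0.0
             for xi in x for yj in y)
    mu = n1 * n2 / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (u1 - mu) / sigma
    # two-sided p-value: p = 2 * (1 - Phi(|z|)) = 1 - erf(|z| / sqrt(2))
    p = 1.0 - math.erf(abs(z) / math.sqrt(2.0))
    return u1, p

# Hypothetical citation counts for articles in two domains
medicine = [12, 45, 3, 67, 89, 21, 34, 56]
physical = [8, 15, 2, 40, 7, 12, 25, 19]
u, p = mann_whitney_u(medicine, physical)
print(f"U = {u}, p = {p:.4f}")
```

A low p-value (conventionally below 0.05) would indicate a significant difference in the indicator's distribution between the two domains, which is the criterion the abstract refers to when reporting "significant differences" between fields.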