Since research and development (R&D) is a critical determinant of firms' productivity, growth, and competitive advantage, measuring R&D performance has become a central concern of R&D managers, and an extensive body of literature has examined different R&D measures and determinants of R&D performance. However, the common approach in existing studies, which assigns the same level of importance to every R&D measure, oversimplifies the measurement process and may lead to misinterpretation of performance and, consequently, to fallacious R&D strategies. The aim of this study is to measure R&D performance while accounting for the different levels of importance of R&D measures. A multi-criteria decision-making method called the Best Worst Method (BWM) is used to identify the weights (importance) of the R&D measures, and the R&D performance of 50 high-tech SMEs in the Netherlands is then measured using data gathered from a survey among the SMEs and from R&D experts. The results show how assigning different weights to different R&D measures (in contrast to a simple mean) produces a different ranking of the firms, and they allow R&D managers to formulate more effective strategies for improving their firm's R&D performance by applying knowledge of the importance of the different R&D measures.
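The contrast between the simple mean and a weighted aggregation can be sketched as follows. The firms, measures, scores, and weights below are illustrative assumptions, not data from the study; the weight vector stands in for one that BWM would produce.

```python
# Illustrative comparison: ranking firms by a simple mean of R&D measures
# versus a weighted sum using hypothetical BWM-derived weights.

# Normalized scores of three hypothetical firms on three R&D measures
# (e.g. patents, share of revenue from new products, R&D intensity).
scores = {
    "firm_A": [0.9, 0.3, 0.8],
    "firm_B": [0.5, 0.9, 0.5],
    "firm_C": [0.4, 0.5, 0.6],
}

# Equal weights (the "simple mean" approach).
equal_w = [1 / 3, 1 / 3, 1 / 3]
# Hypothetical BWM weights: the second measure judged most important.
bwm_w = [0.2, 0.6, 0.2]

def rank(weights):
    """Rank firms by their weighted performance score, best first."""
    perf = {f: sum(w * s for w, s in zip(weights, v)) for f, v in scores.items()}
    return sorted(perf, key=perf.get, reverse=True)

# With equal weights firm_A comes first; with the BWM weights firm_B
# overtakes it, because firm_B excels on the heavily weighted measure.
ranking_mean = rank(equal_w)
ranking_bwm = rank(bwm_w)
```

The point of the sketch is only that the same raw scores yield different rankings once the measures are weighted unequally, which is the effect the study reports.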
A collaborative Ph.D. project, carried out by a doctoral candidate, is a type of collaboration between university and industry. Given the importance of such projects, researchers have considered different ways to evaluate their success, focusing on the outputs of these projects. What has been neglected, however, is the other side of the coin: the inputs. The main aim of this study is to incorporate both the inputs and outputs of these projects into a more meaningful measure called efficiency. The efficiency of a Ph.D. project is defined as the ratio of the weighted sum of its outputs to the weighted sum of its inputs, where the weights of the inputs and outputs can be identified using a multi-criteria decision-making (MCDM) method. Data on inputs and outputs are collected from 51 Ph.D. candidates who graduated from Eindhoven University of Technology. The weights are identified using a new MCDM method called the Best Worst Method (BWM). Because Ph.D. candidates and supervisors may weight the inputs and outputs differently, data for BWM are collected from both groups. Interestingly, the efficiency levels differ between the two perspectives, because of the weight differences. Moreover, a comparison between the efficiency scores of these projects and their success scores reveals differences that may have significant implications. A sensitivity analysis identifies the inputs and outputs that contribute most to efficiency.
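The efficiency measure described above can be sketched as a small computation. The input/output categories, values, and the two weight vectors are hypothetical; in the study the weights come from BWM surveys of candidates and supervisors.

```python
# Sketch of the efficiency measure: weighted sum of outputs over
# weighted sum of inputs, evaluated under two stakeholder perspectives.

def efficiency(inputs, outputs, in_w, out_w):
    """Ratio of weighted outputs to weighted inputs."""
    weighted_out = sum(w * y for w, y in zip(out_w, outputs))
    weighted_in = sum(w * x for w, x in zip(in_w, inputs))
    return weighted_out / weighted_in

# One hypothetical project: inputs (funding index, supervision-hours index)
# and outputs (journal-papers index, patents index), normalized to [0, 1].
inputs, outputs = [0.8, 0.5], [0.6, 0.3]

# Hypothetical weight vectors elicited from the two groups.
candidate_w_in, candidate_w_out = [0.7, 0.3], [0.8, 0.2]
supervisor_w_in, supervisor_w_out = [0.4, 0.6], [0.5, 0.5]

e_candidate = efficiency(inputs, outputs, candidate_w_in, candidate_w_out)
e_supervisor = efficiency(inputs, outputs, supervisor_w_in, supervisor_w_out)
# The same project receives different efficiency scores under the two
# perspectives, purely because the weights differ.
```

This mirrors the abstract's observation that candidate and supervisor perspectives yield different efficiency levels for identical input/output data.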
The increasing involvement of industry in academic research has raised concerns about whether university-industry projects actually meet the same academic standards as in-house university projects. Looking at the academic output and impact of collaborative versus non-collaborative Ph.D. projects at Eindhoven University of Technology, we observe, unexpectedly, that doctoral candidates who conducted a collaborative Ph.D. project outperform their peers in academic performance. Less surprisingly, collaborative projects also lead to more patents and patent citations than non-collaborative projects. Science policy implications follow.
Assessing the quality of scientific outputs (i.e. research papers, books, and reports) is a challenging issue. In practice, the basic quality of scientific outputs is evaluated through peer review by committees with general knowledge and competencies; however, such assessment might not comprehensively consider the different dimensions of quality. Hence, there is a need to evaluate scientific outputs against other metrics that cover more aspects of quality after publication, which is the aim of this study. To reach this aim, first, different quality metrics are identified through an extensive literature review. Then, a recently developed multi-criteria decision-making methodology (the best worst method) is used to find the importance of each quality metric. Finally, based on the importance of each quality metric and data collected from Scopus, the quality of research papers published by the members of a university faculty is measured. The proposed model provides the opportunity to measure the quality of research papers by considering not only different aspects of quality but also the importance of each quality metric. The model can be used for assessing other scientific outputs as well.
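Because post-publication quality metrics come on very different scales (raw citation counts versus percentiles, for instance), aggregating them into one score requires normalization before the weights are applied. A minimal sketch, with assumed metric names, values, and weights (in the study, the weights come from BWM and the raw data from Scopus):

```python
# Illustrative weighted quality score for papers, with min-max
# normalization so metrics on different scales become comparable.
# All names, values, and weights here are hypothetical.

papers = {
    "paper_1": {"citations": 120, "journal_percentile": 0.95, "field_norm_impact": 1.8},
    "paper_2": {"citations": 40, "journal_percentile": 0.80, "field_norm_impact": 2.5},
}

# Hypothetical importance weights, standing in for BWM output.
weights = {"citations": 0.5, "journal_percentile": 0.2, "field_norm_impact": 0.3}

def normalize(papers, metric):
    """Min-max normalize one metric across all papers to [0, 1]."""
    vals = [p[metric] for p in papers.values()]
    lo, hi = min(vals), max(vals)
    return {name: (p[metric] - lo) / (hi - lo) if hi > lo else 1.0
            for name, p in papers.items()}

norm = {m: normalize(papers, m) for m in weights}
quality = {name: sum(weights[m] * norm[m][name] for m in weights)
           for name in papers}
# paper_1 leads on the heavily weighted citation-based metrics, so it
# receives the higher aggregate quality score here.
```

The same aggregation applies unchanged to other scientific outputs once their metrics are defined, which is why the model generalizes beyond research papers.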