The results reported by Winnik et al were similar to ours, although based on a smaller sample. We sincerely regret that we were not aware of, and therefore did not acknowledge, the fine work by Winnik et al. We find their results intriguing, particularly the link between abstract grading and subsequent publication likelihood. Evidence is accumulating that the conversion of science from abstracts to full publications in cardiology is suboptimal. We are happy to learn that Winnik et al agree with our conclusions and that more work is needed to ensure faster and more complete sharing of scientific knowledge. Key opinion leaders should encourage publication of scientific abstracts. More discussion and research are needed to break down some of these publication barriers, and we encourage collaboration to achieve this common goal.

In the letter by Cobos Gil,4 we were happy to learn that he concurs with our suggestion to grade and benchmark conferences. Cobos Gil suggests grading conferences on the potential impact factor, calculated as the sum of the impact factors of the published abstracts divided by the number of presented abstracts. In our study, we demonstrated that the impact factor of subsequently published abstracts is associated with the scientific category (basic versus clinical versus population science) of the individual abstract. Therefore, the metric suggested by Cobos Gil would tend to favor conferences with more basic science. We also believe that this metric may overemphasize the journal impact factor and thereby underestimate the specific scientific content of the individual abstracts presented at the conference. Articles published in lower-tier journals, like those published in high-tier journals, are often cited many times and can influence treatment guidelines. We therefore suggest two specific metrics in addition to the one proposed by Cobos Gil. First, the simple adjusted publication rate within a given time frame would be informative about the conference as a whole and could easily be reported by our algorithm. Second, a metric could capture more specific information about each abstract and its subsequent influence on the field; this could be assessed by counting the citations the resulting publication receives within a set time frame and indexing this number to the overall adjusted publication rate of the conference.

Cobos Gil underlines a very important point at the end of his letter. With the algorithm developed for our article in Circulation, we now have a tool and several objective metrics with which to benchmark medical conferences. This algorithm may help ensure that science remains at the center of these conferences and may serve as an incentive for conference committees.
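For readers who prefer a formal statement, the metrics discussed above could be written roughly as follows. This is only a sketch of one possible reading: the first two expressions follow directly from the prose, whereas the exact way citations are "indexed" to the adjusted publication rate is our assumption rather than a definition given in the letters.

% Potential impact factor as proposed by Cobos Gil: sum of the journal impact
% factors of the published abstracts over the number of presented abstracts.
\[
\mathrm{PIF} = \frac{\sum_{i \in \text{published}} \mathrm{IF}_i}{N_{\text{presented}}}
\]
% Adjusted publication rate within a chosen time frame T (our first suggested metric).
\[
\mathrm{APR}_T = \frac{N_{\text{published within } T}}{N_{\text{presented}}}
\]
% Citation-based metric (our second suggestion). One possible reading, and an
% assumption on our part, is to index mean citations per publication, C_i(T),
% to APR_T, which amounts to citations per presented abstract within T.
\[
\mathrm{CM}_T = \left(\frac{1}{N_{\text{published}}}\sum_{i=1}^{N_{\text{published}}} C_i(T)\right)\times \mathrm{APR}_T = \frac{\sum_i C_i(T)}{N_{\text{presented}}}
\]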