The coronavirus disease 2019 (COVID-19) pandemic has shaken the socio-economic order on a global scale, with interventions designed to curb the spread of the disease bearing multiple and mutually reinforcing impacts on economic and social life. The effects of COVID-19 were diverse and often spilled over into different or interdependent industries. Economies were hit both top-down and bottom-up, while businesses and individuals alike endured significant changes that altered national and international supply and demand trends for products and services. The primary and secondary sectors were influenced chiefly by supply shortages, while disruptions in services and education were largely demand-driven. Monetary policies were specifically targeted to ease these disruptions, while protective measures for employees in many cases constrained business competitiveness. The present study provides a cross-sectoral (primary, secondary, tertiary, and quaternary sectors) outline of the implications and challenges since the start of the crisis, centralising important information and offering a view of the current socio-economic situation.
Background: Large language models, such as ChatGPT by OpenAI, have demonstrated potential in various applications, including medical education. Previous studies have assessed ChatGPT's performance in university or professional settings. However, the model's potential in the context of standardized admission tests remains unexplored.
Objective: This study evaluated ChatGPT's performance on standardized admission tests in the United Kingdom, including the BioMedical Admissions Test (BMAT), Test of Mathematics for University Admission (TMUA), Law National Aptitude Test (LNAT), and Thinking Skills Assessment (TSA), to understand its potential as an innovative tool for education and test preparation.
Methods: Recent public resources (2019-2022) were used to compile a data set of 509 questions from the BMAT, TMUA, LNAT, and TSA covering diverse topics in aptitude, scientific knowledge and applications, mathematical thinking and reasoning, critical thinking, problem-solving, reading comprehension, and logical reasoning. The evaluation assessed ChatGPT's performance using the legacy GPT-3.5 model, focusing on multiple-choice questions for consistency. The model's performance was analyzed by question difficulty, by the proportion of correct responses when aggregating exams from all years, and by comparing test scores between papers of the same exam using binomial tests and paired-sample (2-tailed) t tests.
Results: The proportion of correct responses was significantly lower than that of incorrect ones in BMAT section 2 (P<.001) and TMUA paper 1 (P<.001) and paper 2 (P<.001). No significant differences were observed in BMAT section 1 (P=.2), TSA section 1 (P=.7), or LNAT papers 1 and 2, section A (P=.3). ChatGPT performed better in BMAT section 1 than section 2 (P=.047), with a maximum candidate ranking of 73% compared to a minimum of 1%. In the TMUA, it engaged with questions but had limited accuracy and no performance difference between papers (P=.6), with candidate rankings below 10%. In the LNAT, it demonstrated moderate success, especially on paper 2's questions; however, student performance data were unavailable. TSA performance varied across years, with generally moderate results and fluctuating candidate rankings. Similar trends were observed for easy-to-moderate difficulty questions (BMAT section 1, P=.3; BMAT section 2, P=.04; TMUA paper 1, P<.001; TMUA paper 2, P=.003; TSA section 1, P=.8; and LNAT papers 1 and 2, section A, P>.99) and for hard-to-challenging ones (BMAT section 1, P=.7; BMAT section 2, P<.001; TMUA paper 1, P=.007; TMUA paper 2, P<.001; TSA section 1, P=.3; and LNAT papers 1 and 2, section A, P=.2).
Conclusions: ChatGPT shows promise as a supplementary tool for subject areas and test formats that assess aptitude, problem-solving, critical thinking, and reading comprehension. However, its limitations in areas such as scientific and mathematical knowledge and applications highlight the need for continued development and integration with conventional learning strategies to fully harness its potential.
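As a rough illustration of the two statistical procedures named in the Methods above, the sketch below runs a two-sided binomial test and a paired-sample t test. The counts, scores, and the chance level of 0.5 are assumptions for illustration only, not the study's data.

```python
# A minimal sketch (not the authors' code) of the two tests described above.
from scipy.stats import binomtest, ttest_rel

# Binomial test: is the proportion of correct answers different from 0.5
# (correct vs incorrect)? Counts below are hypothetical.
n_correct, n_total = 18, 54
result = binomtest(n_correct, n_total, p=0.5, alternative="two-sided")
print(f"binomial test: p = {result.pvalue:.3f}")

# Paired-sample (2-tailed) t test: compare scores between two papers of the
# same exam, paired by year. Scores below are hypothetical.
paper1_scores = [12, 15, 10, 14]   # hypothetical per-year scores, paper 1
paper2_scores = [9, 11, 8, 12]     # hypothetical per-year scores, paper 2
t_stat, p_value = ttest_rel(paper1_scores, paper2_scores)
print(f"paired t test: t = {t_stat:.2f}, p = {p_value:.3f}")
```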
Background: Journal impact factor (IF) is the leading method of scholarly assessment in today's research world, influencing where scholars submit their research and where funders distribute their resources. COVID-19, one of the most serious health crises, resulted in an unprecedented surge of publications across all areas of knowledge. An important question is whether COVID-19 affected this gold standard of scholarly assessment.
Objective: In this paper, we aimed to comprehensively compare the productivity trends of COVID-19 and non–COVID-19 literature and to track their evolution and scholarly impact across 3 consecutive calendar years.
Methods: We took as an example 6 high-impact medical journals (Annals of Internal Medicine [Annals], The British Medical Journal [The BMJ], Journal of the American Medical Association [JAMA], The Lancet, Nature Medicine [NatMed], and The New England Journal of Medicine [NEJM]) and searched the Web of Science database for manuscripts published between January 1, 2019, and December 31, 2021. To assess the effect of COVID-19 and non–COVID-19 literature on scholarly impact, we calculated annual IFs and their percentage changes. We then estimated the citation probability of COVID-19 and non–COVID-19 publications along with their publication and citation rates by journal.
Results: Manuscripts on COVID-19 showed a significantly greater increase in IF change from 2019 to 2020 (P=.002; Annals: 283%; The BMJ: 199%; JAMA: 208%; The Lancet: 392%; NatMed: 111%; and NEJM: 196%) and from 2019 to 2021 (P=.007; Annals: 41%; The BMJ: 90%; JAMA: 6%; The Lancet: 22%; NatMed: 53%; and NEJM: 72%) than non–COVID-19 manuscripts. The likelihood of highly cited publications was significantly higher for COVID-19 manuscripts between 2019 and 2021 (Annals: z=3.4, P<.001; The BMJ: z=4.0, P<.001; JAMA: z=3.8, P<.001; The Lancet: z=3.5, P<.001; NatMed: z=5.2, P<.001; and NEJM: z=4.7, P<.001). The publication and citation rates of COVID-19 publications followed a positive trajectory, as opposed to those of non–COVID-19 publications. The citation rate of COVID-19 publications peaked by the second quarter of 2020, whereas the publication rate peaked approximately a year later.
Conclusions: The rapid surge of COVID-19 publications emphasized the capacity of scientific communities to respond to a global health emergency, yet inflated IFs create ambiguity in their use as benchmark tools for assessing scholarly impact. The immediate implication is a loss in the value of, and trust in, journal IFs as metrics of research quality and scientific rigor as perceived by academia and society. Loss of confidence in the procedures employed by highly reputable publishers may incentivize authors to exploit the publication process by focusing their research on COVID-19 and may encourage them to publish in predatory journals.
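The IF comparisons above rest on the standard 2-year impact factor definition; the following sketch shows that calculation and the percentage change on placeholder figures. The citation and item counts are hypothetical, not taken from the study.

```python
# A minimal sketch of the standard 2-year journal impact factor and its
# percentage change; all numbers are placeholders.

def impact_factor(citations_to_prev2_years: int, items_prev2_years: int) -> float:
    """IF for year Y = citations received in Y by items published in Y-1 and
    Y-2, divided by the number of citable items published in Y-1 and Y-2."""
    return citations_to_prev2_years / items_prev2_years

def percent_change(old: float, new: float) -> float:
    """Percentage change from old to new."""
    return 100.0 * (new - old) / old

# Hypothetical journal: citation and citable-item counts are made up.
if_2019 = impact_factor(citations_to_prev2_years=12_000, items_prev2_years=600)
if_2020 = impact_factor(citations_to_prev2_years=30_000, items_prev2_years=620)
print(f"IF 2019 = {if_2019:.1f}, IF 2020 = {if_2020:.1f}, "
      f"change = {percent_change(if_2019, if_2020):.0f}%")
```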
Accumulating research has described cognitive impairment in adults with depression; however, few studies have focused on this relationship during older adulthood. Our cross-sectional study investigated the association between cognitive function performance and clinically significant depression symptoms in older adults. We analysed data from the 2011 to 2014 National Health and Nutrition Examination Survey on older (aged 60 years and above) US adults. Cognitive function was assessed as a composite score and on a test-by-test basis using the Consortium to Establish a Registry for Alzheimer's Disease Word List Learning Test, Word List Recall Test, and Intrusion Word Count Test, the Animal Fluency Test, and the Digit Symbol Substitution Test (DSST). Depression was defined as clinically significant depression symptoms based on the standard cut-off of a Patient Health Questionnaire-9 (PHQ-9) score of 10 or greater. Adjusted logistic regression analysis with survey weights was employed to examine these relationships. Sociodemographic factors, together with medical history and status in terms of self-reported chronic illness and the occurrence of stroke or memory or cognitive function loss, were considered as covariates. Among 1622 participants, representing a survey-weighted 860,400 US older adults, cognitive performance was associated with clinically significant depression symptoms (p = 0.003) after adjustment. Most prominently, older adults with significant cognitive deficits had approximately two and a half times higher odds (OR: 2.457 [1.219–4.953]) of a PHQ-9 score above threshold compared with those with the highest performance. In particular, those with the lowest DSST scores had almost four times higher odds (OR: 3.824 [1.069–13.678]). Efforts to decipher the underlying aetiology of these disparities may help create opportunities and interventions that could alleviate the risks of depression, cognitive impairment, and their associated consequences in older adults at the population level.
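To make the modelling step concrete, the sketch below fits a weighted logistic regression of depression status on cognitive performance. All variable names and data are hypothetical; the survey weights are treated as frequency weights as a simplifying assumption, and a full NHANES analysis would additionally account for strata and primary sampling units (e.g., with R's survey package).

```python
# A minimal sketch (not the authors' code) of a weighted logistic regression.
# Data, variable names, and weights are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "phq9_ge_10": rng.integers(0, 2, n),   # 1 = PHQ-9 score >= 10
    "cognition_z": rng.normal(size=n),     # composite cognitive z score
    "age": rng.integers(60, 85, n),
    "weight": rng.uniform(0.5, 3.0, n),    # survey weight (approximation:
})                                          # used here as a frequency weight)

X = sm.add_constant(df[["cognition_z", "age"]])
model = sm.GLM(df["phq9_ge_10"], X,
               family=sm.families.Binomial(),
               freq_weights=df["weight"])
fit = model.fit()
print(np.exp(fit.params))                   # coefficients as odds ratios
```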