There is a need for continued debate and dialog to validate the proposed set of competencies, and for further research to identify the best strategies for incorporating these competencies into global health educational programs. Future research should focus on implementing and evaluating these competencies across a range of educational programs, and on further delineating the competencies needed at each of the four proposed competency levels.
Nursing professional development practitioners have the responsibility to find creative and innovative ways to teach and provide learners with the education needed to practice safely in the hospital setting. This article describes an interactive game-based learning experience as a way to engage and empower both nurse residents and experienced nurses.
The BioCreative VII Track-2 challenge consists of named entity recognition, entity linking (or entity normalization), and topic indexing tasks, with entities and topics limited to chemicals for this challenge. Named entity recognition is a well-established problem, and we achieve our best performance with BERT-based BioMegatron models. We extend our BERT-based approach to the entity-linking task. After second-stage pretraining of BioBERT with a metric-learning loss strategy called self-alignment pretraining (SAP), we link entities based on the cosine similarity between their SAP-BioBERT word embeddings. Despite the success of our named entity recognition experiments, we find the chemical indexing task generally more challenging. In addition to conventional NER methods, we attempt both named entity recognition and entity linking with a novel text-to-text or "prompt"-based method that uses generative language models such as T5 and GPT. We achieve encouraging results with this new approach.
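As a rough illustration of the embedding-based linking step described above (not the authors' code), the Python sketch below encodes a mention and a set of candidate chemical names with a SAP-style BERT encoder and picks the candidate with the highest cosine similarity. The checkpoint name and the toy candidate list are illustrative assumptions, not the challenge system's actual model or dictionary.

```python
# Minimal sketch of embedding-based entity linking: encode a mention and all
# candidate names, then select the candidate with the highest cosine similarity.
import torch
from transformers import AutoTokenizer, AutoModel

# Placeholder SAP-pretrained encoder; the paper's system uses its own SAP-BioBERT.
MODEL_NAME = "cambridgeltl/SapBERT-from-PubMedBERT-fulltext"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME).eval()

def embed(texts):
    """Return one unit-normalized [CLS] embedding per input string."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = model(**batch).last_hidden_state[:, 0, :]   # [CLS] token vectors
    return torch.nn.functional.normalize(out, dim=-1)      # normalize for cosine similarity

mention = ["acetylsalicylic acid"]
candidates = ["Aspirin", "Ibuprofen", "Paracetamol"]        # toy candidate dictionary

sims = embed(mention) @ embed(candidates).T                 # cosine similarities
print(candidates[sims.argmax().item()])                     # expected: "Aspirin"
```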
This paper provides an overview of NVIDIA NeMo's neural machine translation systems for the constrained data track of the WMT21 News and Biomedical Shared Translation Tasks. Our news task submissions for English ↔ German (En ↔ De) and English ↔ Russian (En ↔ Ru) are built on top of a baseline transformer-based sequence-to-sequence model (Vaswani et al., 2017). Specifically, we use a combination of 1) checkpoint averaging, 2) model scaling, 3) data augmentation with backtranslation and knowledge distillation from right-to-left factorized models, 4) finetuning on test sets from previous years, 5) model ensembling, 6) shallow fusion decoding with transformer language models, and 7) noisy channel re-ranking. Additionally, our biomedical task submission for English ↔ Russian uses a biomedically biased vocabulary and is trained from scratch on news task data, medically relevant text curated from the news task dataset, and biomedical data provided by the shared task. Our news system achieves a sacreBLEU score of 39.5 on the WMT'20 En → De test set, outperforming the best submission from last year's task of 38.8. Our biomedical task Ru → En and En → Ru systems reach BLEU scores of 43.8 and 40.3 respectively on the WMT'20 Biomedical Task test set, outperforming the previous year's best submissions.
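For context, checkpoint averaging (item 1 in the list above) element-wise averages the parameter tensors of several saved checkpoints to produce a single, smoother set of weights. The sketch below is a generic PyTorch illustration under assumed file names and a flat state-dict layout; it is not NeMo's implementation.

```python
# Minimal sketch of checkpoint averaging: average the parameter tensors of
# several saved checkpoints into one state dict.
import torch

def average_checkpoints(paths):
    """Element-wise average of the parameters from several PyTorch checkpoints."""
    avg = None
    for path in paths:
        state = torch.load(path, map_location="cpu")
        if avg is None:
            avg = {k: v.clone().float() for k, v in state.items()}
        else:
            for k, v in state.items():
                avg[k] += v.float()
    return {k: v / len(paths) for k, v in avg.items()}

# Hypothetical usage: average the last three checkpoints and save the result.
# averaged = average_checkpoints(["ckpt_08.pt", "ckpt_09.pt", "ckpt_10.pt"])
# torch.save(averaged, "averaged.pt")
```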