Artificial intelligence is currently a hot topic in medicine. However, medical data is often sparse and hard to obtain due to legal restrictions and a lack of medical personnel for the cumbersome and tedious process of manually labeling training data. These constraints make it difficult to develop systems for automatic analysis, such as detecting disease or other lesions. To address this, this article presents HyperKvasir, the largest image and video dataset of the gastrointestinal tract available today. The data was collected during real gastro- and colonoscopy examinations at Bærum Hospital in Norway and partly labeled by experienced gastrointestinal endoscopists. The dataset contains 110,079 images and 374 videos and represents anatomical landmarks as well as pathological and normal findings; in total, the images and video frames amount to around 1 million. Initial experiments demonstrate the potential benefits of artificial intelligence-based computer-assisted diagnosis systems. The HyperKvasir dataset can play a valuable role in developing better algorithms and computer-assisted examination systems, not only for gastro- and colonoscopy but also for other fields of medicine.
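As a rough illustration of how such a partly labeled image collection could be consumed for classifier training, the sketch below loads class-labeled images with a standard PyTorch pipeline. The directory layout, image size, and split ratio are assumptions for illustration, not the dataset's documented structure.

```python
# Minimal sketch: loading a labeled endoscopy image collection for classification.
# Assumes images sit in class-named subfolders (hypothetical layout),
# e.g. labeled-images/<class-name>/<image>.jpg -- not necessarily the
# dataset's actual on-disk structure.
from torch.utils.data import DataLoader, random_split
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),  # assumed input size for a baseline model
    transforms.ToTensor(),
])

# Hypothetical root directory for the labeled subset.
dataset = datasets.ImageFolder("labeled-images/", transform=transform)

# Simple 80/20 train/validation split for an initial baseline experiment.
n_train = int(0.8 * len(dataset))
train_set, val_set = random_split(dataset, [n_train, len(dataset) - n_train])

train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
val_loader = DataLoader(val_set, batch_size=32)

print(f"{len(dataset.classes)} classes, {len(train_set)} training images")
```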
While there is much hype around various concepts associated with the term Web 2.0 in industry, little academic research has so far been conducted on the implications of these new approaches for the domain of education. Much of what goes by the name of Web 2.0 can in fact be regarded and utilised as a new kind of learning technology. This paper explains the background of Web 2.0, investigates its implications for knowledge transfer in general, and then discusses its particular use in eLearning contexts with the help of short scenarios.
Artificial intelligence (AI) is predicted to have profound effects on the future of video capsule endoscopy (VCE) technology. Its potential lies in improving anomaly detection while reducing manual labour. Existing work demonstrates the promising benefits of AI-based computer-assisted diagnosis systems for VCE, while also showing great potential for further improvements. However, medical data is often sparse and unavailable to the research community, and qualified medical personnel rarely have time for the tedious labelling work. We present Kvasir-Capsule, a large VCE dataset collected from examinations at a Norwegian hospital. Kvasir-Capsule consists of 117 videos from which a total of 4,741,504 image frames can be extracted. We have labelled and medically verified 47,238 frames with a bounding box around findings from 14 different classes. In addition to these labelled images, the dataset includes 4,694,266 unlabelled frames. The Kvasir-Capsule dataset can play a valuable role in developing better algorithms in order to reach the true potential of VCE technology.
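To make concrete how the labelled and unlabelled portions of such a dataset might be separated for semi-supervised experiments, the following sketch reads a hypothetical metadata CSV and partitions frame paths by whether a verified class label is present. The file name and column names are assumptions for illustration, not the dataset's published annotation format.

```python
# Minimal sketch: partitioning VCE frames into labelled and unlabelled pools.
# The CSV layout (columns "frame_path" and "class") is hypothetical and used
# only to illustrate the labelled/unlabelled split described in the abstract.
import csv
from collections import defaultdict

labelled = defaultdict(list)   # class name -> list of frame paths
unlabelled = []                # frames without a verified finding label

with open("metadata.csv", newline="") as f:
    for row in csv.DictReader(f):
        label = row.get("class", "").strip()
        if label:
            labelled[label].append(row["frame_path"])
        else:
            unlabelled.append(row["frame_path"])

print(f"{sum(len(v) for v in labelled.values())} labelled frames "
      f"across {len(labelled)} classes, {len(unlabelled)} unlabelled frames")
```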
Searching for and retrieving videos in a meaningful way on the web is still an open problem. The integration of a user's context and intentions into the search process is one of the most promising approaches to enhance current search interfaces and algorithms. In this article, we present the results of two exploratory studies on online video searching, retrieving, watching, and sharing: a qualitative study in which 22 participants reported on situations when they retrieved and watched videos, and an online quantitative survey with more than 200 participants answering comparable questions. We provide a detailed analysis of the results from both studies and report on the insights they provide into video search, retrieval, watching, and sharing behavior. Our findings can be used to enhance current video retrieval systems, search interfaces, and algorithms in order to improve overall user satisfaction and experience. As an example of such improvements, we also propose a prototype that takes the user's intentions into account in the design of video retrieval interfaces.