Under the World Health Organization classification of central nervous system tumors, diffuse gliomas are graded 2, 3, or 4 according to their aggressiveness. To quantitatively evaluate a tumor's malignancy from brain magnetic resonance imaging, this study proposed a computer-aided diagnosis (CAD) system based on a deep convolutional neural network (DCNN). A multi-center database (The Cancer Imaging Archive) comprising 30 grade 2, 43 grade 3, and 57 grade 4 gliomas was used to train and evaluate the proposed CAD system. Transfer learning was used to fine-tune AlexNet, a DCNN whose internal layers and parameters, pretrained on over a million natural images, were adapted to differentiate the acquired gliomas. Data augmentation was also implemented to increase the spatial and geometric variation available for training a better model. The transferred DCNN achieved a mean accuracy of 97.9% with a standard deviation of ±1% and a mean area under the receiver operating characteristic curve (Az) of 0.9991 ± 0. This performance was superior to that of handcrafted image features and of two ablated variants: the DCNN without pretrained features achieved a mean accuracy of only 61.42% with a standard deviation of ±7% and a mean Az of 0.8222 ± 0.07, while the DCNN without data augmentation performed worst, with a mean accuracy of 59.85% with a standard deviation of ±16% and a mean Az of 0.7896 ± 0.18. The DCNN with pretrained features and data augmentation can therefore classify grade 2, 3, and 4 gliomas accurately and efficiently. Its high accuracy is promising for providing diagnostic suggestions to radiologists in the clinic.
The mobile cloud gaming industry has grown rapidly over the last decade. When gaming videos are streamed from cloud servers to customers' client devices, algorithms that can monitor distorted video quality without any reference video available are desirable tools. However, creating No-Reference Video Quality Assessment (NR VQA) models that can accurately predict the quality of streaming gaming videos rendered by computer graphics engines is a challenging problem, since gaming content generally differs statistically from naturalistic videos, often lacks detail, and contains many smooth regions. Until recently, the problem was further complicated by the lack of adequate subjective quality databases of mobile gaming content. We have created a new gaming-specific NR VQA model called the Gaming Video Quality Evaluator (GAMIVAL), which combines and leverages the advantages of spatial and temporal gaming distorted scene statistics models, a neural noise model, and deep semantic features. Using support vector regression (SVR) as the regressor, GAMIVAL achieves superior performance on the new LIVE-Meta Mobile Cloud Gaming (LIVE-Meta MCG) video quality database.
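GAMIVAL's final stage maps its per-video feature vector to a quality score with support vector regression. The sketch below shows only that regression stage in scikit-learn; the feature vectors and mean opinion scores (MOS) are random placeholders, since the abstract does not reproduce GAMIVAL's actual features, and the kernel and `C` value are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Stand-in data: in GAMIVAL these vectors would concatenate the
# spatial/temporal scene-statistics features, neural-noise features,
# and deep semantic features; here they are random placeholders.
n_videos, n_features = 120, 40
X = rng.normal(size=(n_videos, n_features))
# Hypothetical MOS labels on a 0-100 quality scale.
y = rng.uniform(0, 100, size=n_videos)

# SVR regressor mapping feature vectors to predicted quality;
# standardizing features first is standard practice for RBF kernels.
svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
svr.fit(X[:100], y[:100])

# Predict quality scores for held-out videos.
preds = svr.predict(X[100:])
print(preds.shape)  # (20,)
```

In a real evaluation the predicted scores would be compared against subjective MOS from the LIVE-Meta MCG database using correlation metrics such as SROCC and PLCC.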