The most widely used machine learning frameworks require users to carefully tune their memory usage so that the deep neural network (DNN) fits into the DRAM capacity of a GPU. This restriction hampers a researcher's flexibility to study different machine learning algorithms, forcing them to either use a less desirable network architecture or parallelize the processing across multiple GPUs. We propose a runtime memory manager that virtualizes the memory usage of DNNs such that both GPU and CPU memory can simultaneously be utilized for training larger DNNs. Our virtualized DNN (vDNN) reduces the average GPU memory usage of AlexNet by up to 89%, OverFeat by 91%, and GoogLeNet by 95%, a significant reduction in the memory requirements of DNNs. Similar experiments on VGG-16, one of the deepest and most memory-hungry DNNs to date, demonstrate the memory efficiency of our proposal. vDNN enables VGG-16 with batch size 256 (requiring 28 GB of memory) to be trained on a single NVIDIA Titan X GPU card containing 12 GB of memory, with an 18% performance loss compared to a hypothetical, oracular GPU with enough memory to hold the entire DNN.
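The mechanism the abstract alludes to, using CPU memory as a backing store for GPU memory, comes down to offloading a layer's activations to pinned host memory once the forward pass is done with them and prefetching them back shortly before the backward pass needs them. The sketch below illustrates that round trip with plain CUDA streams; the buffer size, the memset standing in for a layer's computation, and the two-stream layout are illustrative assumptions, not the paper's implementation (a real manager would overlap transfers with compute using events rather than full stream synchronizations).

// Minimal sketch of a vDNN-style offload/prefetch round trip for one
// layer's activations; all names and sizes are hypothetical.
#include <cuda_runtime.h>
#include <cstdio>

#define CHECK(call)                                                   \
  do {                                                                \
    cudaError_t err = (call);                                         \
    if (err != cudaSuccess) {                                         \
      fprintf(stderr, "CUDA error: %s\n", cudaGetErrorString(err));   \
      return 1;                                                       \
    }                                                                 \
  } while (0)

int main() {
  const size_t bytes = 64 << 20;  // 64 MiB stand-in for a layer's activations
  float *d_act, *h_act;
  cudaStream_t compute, xfer;

  CHECK(cudaStreamCreate(&compute));
  CHECK(cudaStreamCreate(&xfer));
  CHECK(cudaMalloc(&d_act, bytes));
  CHECK(cudaMallocHost(&h_act, bytes));  // pinned host memory enables async copies

  // Forward pass: once this layer's activations are produced (memset here
  // stands in for the real kernel), offload them to host memory on a side
  // stream so the compute stream can move on to later layers.
  CHECK(cudaMemsetAsync(d_act, 0, bytes, compute));
  CHECK(cudaStreamSynchronize(compute));
  CHECK(cudaMemcpyAsync(h_act, d_act, bytes, cudaMemcpyDeviceToHost, xfer));
  CHECK(cudaStreamSynchronize(xfer));
  CHECK(cudaFree(d_act));  // the GPU copy can now be released and reused

  // Backward pass: prefetch the activations into a fresh device buffer
  // just before the corresponding gradient computation needs them.
  CHECK(cudaMalloc(&d_act, bytes));
  CHECK(cudaMemcpyAsync(d_act, h_act, bytes, cudaMemcpyHostToDevice, xfer));
  CHECK(cudaStreamSynchronize(xfer));

  CHECK(cudaFree(d_act));
  CHECK(cudaFreeHost(h_act));
  CHECK(cudaStreamDestroy(compute));
  CHECK(cudaStreamDestroy(xfer));
  printf("offload/prefetch round trip complete\n");
  return 0;
}

Pinned (page-locked) host memory is the key detail here: cudaMemcpyAsync only overlaps with kernels on another stream when the host buffer is pinned, which is what lets the offload traffic hide behind the next layer's forward computation.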
The growth in mobile vision applications, coupled with the performance limitations of mobile platforms, has led to a growing need to understand computer vision applications. Computationally intensive mobile vision applications, such as augmented reality or object recognition, place significant performance and power demands on existing embedded platforms, often leading to degraded application quality. With a better understanding of this growing application space, it will be possible to more effectively optimize future embedded platforms. In this work, we introduce and evaluate a custom benchmark suite for mobile embedded vision applications named MEVBench. MEVBench provides a wide range of mobile vision applications such as face detection, feature classification, object tracking, and feature extraction. To better understand mobile vision processing characteristics at the architectural level, we analyze single- and multithreaded implementations of many algorithms to evaluate performance, scalability, and memory characteristics. We provide insights into the major areas where architecture can improve the performance of these applications in embedded systems.
Limonene is a common component found in consumer goods ranging from beverages to cleaning compounds. Limonene oxidation products, however, have a less desirable flavor and fragrance. Early detection of limonene oxide formation would aid quality control. A method is developed to determine the concentration of limonene oxide in essential oils and beverages using solid-phase microextraction (SPME). A headspace sampling technique is used to reduce or eliminate the presence of less volatile components. Several different SPME fibers are tested, varying in polymer thickness, polymer cross-linking and bonding, and polarity of the polymer. For each fiber tested, the sampling time is optimized for reproducible results. The 7-μm-thick bonded poly(dimethylsiloxane) fiber provides the best results. External standards are used for quantitation.
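For reference, quantitation against external standards generally means fitting a linear calibration of detector response to standard concentration and inverting it for the unknown; the symbols below (A for peak area, C for concentration, m and b for the fitted slope and intercept) are generic notation, not taken from the paper:

\[ A = mC + b \qquad\Longrightarrow\qquad C_{\text{sample}} = \frac{A_{\text{sample}} - b}{m} \]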