Despite the crucial role of lemmings in the Arctic ecosystem, many aspects of their ecology are still unknown. The main challenge in studying lemmings is that these rodents do not hibernate in winter and remain active under the snow. To tackle this challenge, this paper presents a near-infrared monitoring system. Designing and implementing a system that must work autonomously in the harsh Arctic environment is highly challenging. After developing the first version of the equipment, we installed three units on Bylot Island, Nunavut, Canada. The retrieved videos were promising and showed the great potential of this system for assisting ecologists in studying the subnivean ecology of the Arctic. To the best of our knowledge, these are the first videos of lemmings ever recorded under the snow in winter in the Arctic.
Subnivean life is an important part of the Arctic ecosystem, but it has been little explored. Long, harsh winters, combined with remoteness, make direct studies in these hard-to-access areas very expensive and extremely difficult. To tackle this problem, a low-power autonomous camera system (called ArcÇav) was developed for monitoring small mammals beneath the snow in the Canadian Arctic. ArcÇav is composed of several components, including a digital camera, a single-board computer, a microcontroller board, and a motion detection sensor. A limited energy source, very cold temperatures, darkness, and very long recording periods (several months) are the major challenges that ArcÇav is designed to deal with. The performance of the developed system was evaluated in a real situation in the High Arctic. The field results show that ArcÇav can function well for an extended period of time on battery power at very low temperatures during the Arctic winter. To the best of our knowledge, this is the first time that life under the snow has been filmed by a camera trap in the Arctic during winter. ArcÇav equips ecologists with a new means to explore and study subnivean life remotely. These observations can provide a foundation for answering some of the questions that have puzzled animal ecologists for decades.
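As a purely illustrative aside, the sketch below (Python) shows how a duty-cycled, motion-triggered recording loop of this kind could be organized. The sensor and camera calls are hypothetical stubs and the thresholds are assumptions; this is not the ArcÇav firmware.

    # Illustrative sketch of a motion-triggered, duty-cycled recording loop.
    # The sensor and camera calls are hypothetical stubs, not the ArcÇav firmware.
    import random
    import time

    IDLE_SLEEP_S = 1.0        # poll interval while waiting for motion (assumed)
    CLIP_LENGTH_S = 30        # length of each recorded clip (assumed)
    LOW_BATTERY_VOLTS = 11.0  # shutdown threshold for a 12 V battery (assumed)

    def motion_detected():
        """Stub for a PIR/active-IR motion sensor read (hypothetical)."""
        return random.random() < 0.05

    def battery_voltage():
        """Stub for a battery voltage measurement (hypothetical)."""
        return 12.4

    def record_clip(seconds):
        """Stub standing in for powering up the camera and saving a clip."""
        print(f"recording a {seconds} s clip")

    def run(max_polls=20):
        """Poll the motion sensor, record on motion, stop when the battery is low."""
        for _ in range(max_polls):
            if battery_voltage() <= LOW_BATTERY_VOLTS:
                break                       # protect the battery in deep cold
            if motion_detected():
                record_clip(CLIP_LENGTH_S)  # wake the camera only when needed
            time.sleep(IDLE_SLEEP_S)

    if __name__ == "__main__":
        run()

The essential design point is that the camera stays powered down until the motion sensor fires, which is what allows months-long operation from a single battery.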
Convolutional Neural Networks (CNNs) are well established as strong tools in various fields, particularly in image processing and computer vision. This paper aims to exploit the power of CNNs for transfer learning and uses them as an unsupervised feature extractor for analyzing defects in a steel specimen with flat-bottom holes and in Carbon Fiber Reinforced Plastic (CFRP) composite materials. A pre-trained CNN (ImageNet-VGG-f) is used to extract vectorized features, together with a spectral angle mapper (SAM), to provide a score for the defects present in the image. Empirical results on the two aforementioned datasets indicate promising performance for heating- and cooling-based active thermography, with a reasonable computational cost owing to the unsupervised nature of the algorithm.
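To make the general scheme concrete, the sketch below (Python/NumPy) pairs a stub feature extractor with the spectral angle mapper. The paper uses the pre-trained ImageNet-VGG-f network for the feature step; scoring a test image against a defect-free reference is an assumption made here for illustration, not necessarily the exact scoring procedure of the paper.

    # Illustrative sketch: CNN features plus spectral angle mapper (SAM) scoring.
    # cnn_features is a stub; the paper uses the pre-trained ImageNet-VGG-f network.
    import numpy as np

    def cnn_features(image):
        """Stub for a pre-trained CNN feature extractor returning a 1-D vector."""
        rng = np.random.default_rng(int(image.sum()) % 2**32)
        return rng.standard_normal(4096)   # VGG-style fully connected feature size

    def spectral_angle(x, y, eps=1e-12):
        """Angle (radians) between two feature vectors; a small angle means similar."""
        cos_sim = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y) + eps)
        return np.arccos(np.clip(cos_sim, -1.0, 1.0))

    # Score a test thermogram against a defect-free reference: a larger angle
    # suggests a larger deviation from the sound-area appearance (assumed scheme).
    reference = cnn_features(np.zeros((224, 224, 3)))
    test = cnn_features(np.ones((224, 224, 3)))
    print(f"SAM defect score: {spectral_angle(reference, test):.3f} rad")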
Efficient and less expensive techniques for the various aspects of culvert inspection are in great demand. This study assesses the potential of infrared thermography (IRT) to detect the presence of cavities in the soil around a culvert, specifically cavities adjacent to the pipe of galvanized culverts. To identify cavities, we analyze thermograms, generated via long pulse thermography, using absolute thermal contrast, principal components thermography, and a statistical approach, along with a combination of different pre- and post-processing algorithms. Through several experiments, we evaluate the performance of IRT for this task. Empirical results show a promising future for the application of this approach in culvert inspection. The size and location of cavities are among the properties that can be extracted from the thermograms. The key finding of this research is that the proposed approach can provide useful information about a certain type of problem around a culvert pipe that may indicate the early stages of cavity formation. Detecting this process at an early stage can help prevent costly incidents later.
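For concreteness, the following NumPy sketch illustrates two of the analyses named above on a synthetic thermogram sequence: absolute thermal contrast (the per-pixel deviation from a sound-area signal) and a basic principal components thermography step. The array shapes, the sound-region location, and the standardization are assumptions for illustration, not the exact pipeline used in the study.

    # Illustrative sketch of absolute thermal contrast and principal components
    # thermography (PCT) on a synthetic (frames, height, width) sequence.
    import numpy as np

    T = np.random.rand(100, 64, 64)             # synthetic thermogram sequence
    sound = T[:, :8, :8].mean(axis=(1, 2))      # mean signal of an assumed sound region

    # Absolute thermal contrast: per-pixel deviation from the sound-area signal.
    contrast = T - sound[:, None, None]

    # PCT: PCA over time on the flattened sequence; early components often
    # concentrate the defect signature.
    frames, h, w = T.shape
    A = T.reshape(frames, h * w)
    A = (A - A.mean(axis=0)) / (A.std(axis=0) + 1e-12)   # standardize each pixel
    U, S, Vt = np.linalg.svd(A, full_matrices=False)
    eof_images = Vt.reshape(-1, h, w)           # principal component images
    print(contrast.shape, eof_images.shape)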
This paper presents a fast, vision-based method for human action representation and recognition. The representation problem is addressed by constructing an action descriptor from spatiotemporal data of action silhouettes based on appearance and motion features. For action classification, a new Radial Basis Function (RBF) network, called the Time Delay Input Radial Basis Function Network (TDIRBF), is proposed by introducing time delay units to the RBF network in a novel way. The TDIRBF offers several desirable features, such as an easier learning process and greater flexibility. The representational power and speed of the proposed method were evaluated on a publicly available dataset. In experiments implemented in MATLAB on standard PCs, the average time for constructing a feature vector for a high-resolution video was about 20 ms/frame (i.e., 50 fps), and the classifier ran at more than 15 fps. Furthermore, the proposed approach demonstrated good results in terms of both execution time and overall performance, a new measure that combines accuracy and speed into a single metric.
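The exact TDIRBF formulation is the authors' contribution and is not reproduced here; the sketch below (Python/NumPy) only illustrates one plausible reading of the "time delay input" idea, namely feeding a standard RBF network the current descriptor concatenated with time-delayed copies, with all dimensions and parameters chosen arbitrarily.

    # Illustrative sketch: an RBF classifier fed with time-delayed descriptor copies.
    # This is one plausible reading of "time delay input", not the authors' TDIRBF.
    import numpy as np

    def make_delayed_input(frames, delays=(0, 1, 2)):
        """Stack the descriptor of frame t with those of frames t-1, t-2, ..."""
        t = len(frames) - 1
        return np.concatenate([frames[max(t - d, 0)] for d in delays])

    def rbf_forward(x, centers, widths, weights):
        """Standard RBF network: Gaussian activations followed by a linear layer."""
        act = np.exp(-np.sum((centers - x) ** 2, axis=1) / (2 * widths ** 2))
        return weights.T @ act                              # class scores

    # Toy usage with random per-frame descriptors, centers, and output weights.
    rng = np.random.default_rng(0)
    frames = [rng.standard_normal(50) for _ in range(5)]    # per-frame descriptors
    x = make_delayed_input(frames)                          # 150-dim delayed input
    centers = rng.standard_normal((20, x.size))
    widths = np.ones(20)
    weights = rng.standard_normal((20, 4))                  # 4 hypothetical classes
    print("predicted class:", np.argmax(rbf_forward(x, centers, widths, weights)))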