Morphological classification is a key piece of information for defining galaxy samples aimed at studying the large-scale structure of the universe. In essence, the challenge is to build a robust methodology to obtain a reliable morphological estimate from galaxy images. Here, we investigate how to substantially improve galaxy classification within large datasets by mimicking human classification. We combine accurate visual classifications from the Galaxy Zoo project with machine and deep learning methodologies. We propose two distinct approaches for galaxy morphology: one based on non-parametric morphology and traditional machine learning algorithms, and another based on deep learning. To measure the input features for the traditional machine learning methodology, we developed a system called CyMorph, with a novel non-parametric approach to study galaxy morphology. The main dataset employed comes from the Sloan Digital Sky Survey Data Release 7 (SDSS-DR7). We also discuss the class imbalance problem considering three classes. The performance of each model is measured mainly by Overall Accuracy (OA). A spectroscopic validation with astrophysical parameters is also provided for the Decision Tree models to assess the quality of our morphological classification. In all of our samples, both the deep and traditional machine learning approaches reach over 94.5% OA when classifying galaxies into two classes (elliptical and spiral). We compare our classification with state-of-the-art morphological classifications from the literature. Considering the two-class separation, our deep learning models achieve 99% overall accuracy on average, and 82% when using three classes. We provide a catalog of 670,560 galaxies containing our best results, including morphological metrics and classifications.

Early-type galaxies (ETGs) have T-Type ≤ 0 and late-type galaxies (LTGs) have T-Type > 0 (de Vaucouleurs, 1963). T-Type considers ellipticity and spiral arm strength but does not reflect the presence or absence of a bar in spirals.

Morphology reveals structural, intrinsic, and environmental properties of galaxies. In the local universe, ETGs are mostly situated at the centers of galaxy clusters and have larger masses, less gas, higher velocity dispersions, and older stellar populations than LTGs, which are rich star-forming systems (Roberts and Haynes, 1994; Blanton and Moustakas, 2009; Pozzetti et al., 2010). By mapping where the ETGs are, it is possible to map the large-scale structure of the universe. Therefore, galaxy morphology is of paramount importance for extragalactic research, as it relates to stellar properties and key aspects of the evolution and structure of the universe.

Astronomy has become an extremely data-rich field with the advance of new technologies in recent decades. Nowadays it is impossible to rely on human classification alone, given the huge flow of data produced by current research.
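As a hedged illustration of the traditional machine learning branch described above, the sketch below trains a scikit-learn Decision Tree on placeholder morphological metrics and reports Overall Accuracy for a two-class (elliptical vs. spiral) problem. The feature names, synthetic data, and hyperparameters are assumptions for illustration only, not the paper's CyMorph pipeline.

```python
# Minimal sketch (not the paper's pipeline): a decision tree trained on
# hypothetical non-parametric morphological metrics and evaluated by
# Overall Accuracy (OA) for a two-class problem.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Placeholder feature matrix: one row per galaxy, columns are assumed
# morphological metrics (e.g. concentration, asymmetry, smoothness).
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + 0.5 * rng.normal(size=1000) > 0).astype(int)  # 0 = elliptical, 1 = spiral

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42
)

# class_weight="balanced" is one common way to mitigate class imbalance.
clf = DecisionTreeClassifier(max_depth=5, class_weight="balanced", random_state=42)
clf.fit(X_train, y_train)

oa = accuracy_score(y_test, clf.predict(X_test))  # Overall Accuracy
print(f"OA = {oa:.3f}")
```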
Figure 1: JECRIPE, a game for children with special needs.

There are few initiatives in the area of game development for children with special needs, especially children with Down syndrome. The major purpose of our research is to promote the cognitive development of disabled children in the context of inclusive education. To do so, we address aspects of interaction, communication, and game design in stimulating selected cognitive abilities. By using a Human-Computer Interaction inspection-based evaluation method, it was possible to study and understand user interaction with the interface and thus examine the positive aspects as well as the communicability problems found in the JECRIPE game, a game developed especially for children with Down syndrome of preschool age.
Virtual reality (VR) and head-mounted displays are continually gaining popularity in fields such as education, the military, entertainment, and health. Although such technologies provide a high sense of immersion, they can also trigger symptoms of discomfort. This condition is called cybersickness (CS) and has become a prominent topic in recent virtual reality research. In this work we first present a review of the literature on theories of the discomfort manifestations usually attributed to virtual reality environments. Next, we review existing strategies aimed at minimizing CS and discuss how CS measurement has been conducted based on subjective, biosignal (objective), and user profile data. We also describe and discuss related works that aim to mitigate cybersickness using deep and symbolic machine learning approaches. Although some works used methods to make deep learning explainable, these methods are not strongly supported by the literature. For this reason, we argue that symbolic classifiers can be a good way to identify CS causes, since they provide human-readability, which is crucial for analyzing machine learning decision paths. In summary, from a total of 157 observed studies, 24 were excluded. We believe that this work helps researchers identify the leading causes of the most common discomfort situations in virtual reality environments, associate the most recommended strategies to minimize such discomfort, and explore different ways of conducting experiments involving machine learning to overcome cybersickness.
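To make the human-readability argument concrete, the sketch below trains a small decision tree (a symbolic classifier) on synthetic, hypothetical biosignal and profile features and prints its decision paths as plain-text rules. The feature names and threshold logic are assumptions for illustration, not findings from the reviewed studies.

```python
# Hedged illustration: a symbolic classifier (decision tree) exposes its
# decision paths as readable rules, unlike an opaque deep model.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
feature_names = ["heart_rate", "eda", "exposure_minutes"]  # assumed example features

# Synthetic data: 1 = participant reported cybersickness, 0 = did not.
X = rng.normal(loc=[75, 2.0, 10], scale=[10, 0.5, 5], size=(300, 3))
y = (X[:, 0] + 20 * X[:, 1] + X[:, 2] > 130).astype(int)

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The exported rules are the "decision paths" a researcher can inspect.
print(export_text(clf, feature_names=feature_names))
```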
This article concerns the use of a graphics processing unit (GPU) as a math co-processor in real-time applications, especially games and physics simulations. To validate this approach, we present a new game loop architecture that employs the GPU for general-purpose computations (GPGPU). A critical issue here is the process distribution between the CPU and the GPU. The architecture consists of a model for this distribution, and our implementation offers many advantages in comparison with approaches that lack the GPGPU stage. The architecture can be used either with a general-purpose language such as the Compute Unified Device Architecture (CUDA) or with shader languages such as the High-Level Shader Language (HLSL) and the OpenGL Shading Language (GLSL). Although the architecture proposed here aims at supporting mathematics and physics on the GPU, it can be adapted to any kind of generic computation. This article discusses the model's implementation in an open-source game engine and presents the results of using this platform.
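The sketch below is a language-agnostic outline, written in Python for readability, of a game loop that inserts a GPGPU stage between input handling and CPU game logic. The stage names and their ordering are an assumption about the architecture described above; a real implementation would dispatch CUDA kernels or HLSL/GLSL compute shaders inside gpgpu_physics_step.

```python
# Conceptual sketch of a game loop with a GPGPU stage; all stage bodies
# are placeholders, not the article's engine code.
import time

def read_input():                     # CPU: gather player/controller input
    return {}

def gpgpu_physics_step(state, dt):
    # Placeholder for work offloaded to the GPU (e.g. rigid-body math,
    # particle integration) via CUDA or a compute shader.
    return state

def game_logic(state, inputs, dt):    # CPU: AI, gameplay rules, events
    return state

def render(state):                    # GPU: regular graphics pipeline
    pass

def game_loop(target_dt=1 / 60):
    state, last = {}, time.perf_counter()
    while True:
        now = time.perf_counter()
        dt, last = now - last, now
        inputs = read_input()
        state = gpgpu_physics_step(state, dt)   # GPGPU stage
        state = game_logic(state, inputs, dt)
        render(state)
        time.sleep(max(0.0, target_dt - (time.perf_counter() - now)))
```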
We propose a versatile method for estimating the RMS error of depth data provided by generic 3D sensors capable of generating RGB and depth (D) data of a scene, i.e., those based on techniques such as structured light, time of flight, and stereo. A common checkerboard is used: the corners are detected and two point clouds are created, one with the real coordinates of the pattern corners and one with the corner coordinates given by the device. After registering these two clouds, the RMS error is computed. Then, using curve-fitting methods, an equation is obtained that generalizes the RMS error as a function of the distance between the sensor and the checkerboard pattern. The depth errors estimated by our method are compared with those estimated by state-of-the-art approaches, validating its accuracy and utility. This method can be used to rapidly estimate the quality of RGB-D sensors, facilitating robotics applications such as SLAM and object recognition.
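A minimal sketch of the described pipeline, assuming synthetic corner data in place of real detections: it builds the reference cloud from the known checkerboard geometry, rigidly registers the measured cloud onto it, computes the RMS error, and fits a polynomial of RMS error versus sensor-to-pattern distance. The square size, noise model, and quadratic fit are illustrative choices, not necessarily the paper's.

```python
# Sketch of: reference cloud -> registration -> RMS error -> curve fit.
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid alignment (Kabsch) of src onto dst."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - mu_s).T @ (dst - mu_d))
    D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return src @ R.T + (mu_d - R @ mu_s)

def rms_error(measured, reference):
    aligned = rigid_register(measured, reference)
    return np.sqrt(np.mean(np.sum((aligned - reference) ** 2, axis=1)))

# Hypothetical 6x9 checkerboard with 25 mm squares, simulated at several
# sensor-to-pattern distances (meters) with distance-dependent noise.
distances, rms_values = np.array([0.8, 1.2, 1.6, 2.0, 2.4]), []
for d in distances:
    grid = np.stack(np.meshgrid(np.arange(6), np.arange(9), indexing="ij"),
                    -1).reshape(-1, 2) * 0.025
    reference = np.c_[grid, np.zeros(len(grid))] + [0, 0, d]
    measured = reference + np.random.normal(0, 0.002 * d, reference.shape)
    rms_values.append(rms_error(measured, reference))

# Fit RMS error as a quadratic function of distance (one possible model).
coeffs = np.polyfit(distances, rms_values, deg=2)
print("RMS(z) ≈ {:.4e} z^2 + {:.4e} z + {:.4e}".format(*coeffs))
```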