Degradation is a technical and market hurdle in the development of novel photovoltaics and other energy devices. Understanding and addressing degradation requires complex, time-consuming measurements on multiple samples. To address this challenge, we present \textit{DeepDeg}, a machine learning model that combines deep learning, explainable machine learning, and physical modeling to: 1) forecast hundreds of hours of degradation, and 2) explain degradation in novel photovoltaics. Using a large and diverse dataset of over 785 stability tests of organic solar cells, totaling 230,000 measurement hours, DeepDeg is able to accurately predict degradation dynamics, and explain the physicochemical factors driving them, from only a few initial hours of degradation data. We use cross-validation and a held-out dataset of over 9,000 hours of degradation of PCE10:OIDTBR to evaluate our model. We demonstrate that by using DeepDeg, degradation characterization and screening can be accelerated by 5-20x.
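The core forecasting task described here, mapping a few initial hours of a stability test to hundreds of future hours, can be framed as multi-output regression. The sketch below illustrates that framing on synthetic degradation-like traces; it is not the authors' DeepDeg architecture (which combines deep learning with physical modeling), and all names and data in it are illustrative assumptions.

```python
# Minimal sketch (not the DeepDeg implementation): forecast a long degradation
# trajectory from a few initial hours of measurements via multi-output regression.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
N_CELLS, T_IN, T_OUT = 500, 24, 200  # cells, input hours, forecast hours

# Synthetic stand-in for normalized efficiency traces: a fast burn-in
# exponential plus slow linear loss, with per-cell rates and noise.
t = np.arange(T_IN + T_OUT)
k_fast = rng.uniform(0.01, 0.2, size=(N_CELLS, 1))
k_slow = rng.uniform(1e-4, 1e-3, size=(N_CELLS, 1))
traces = (0.2 * np.exp(-k_fast * t) + 0.8 - k_slow * t
          + 0.005 * rng.standard_normal((N_CELLS, t.size)))

X, y = traces[:, :T_IN], traces[:, T_IN:]  # few initial hours -> future hours
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Random forests handle multi-output targets natively in scikit-learn.
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
mae = np.abs(model.predict(X_te) - y_te).mean()
print(f"Mean absolute error over the {T_OUT}-hour forecast: {mae:.4f}")
```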
Communication with remote spacecraft is constrained due to distance, visibility constraints, and competing mission downlinks. Long missions and high-resolution, multispectral imaging devices easily produce data exceeding the available bandwidth. As an example, the HiRISE camera aboard the Mars Reconnaissance Orbiter produces images of up to 16.4 Gbits in data volume, but downlink bandwidth is limited to 6 Mbits per second (Mbps). To address this situation, the Jet Propulsion Laboratory has developed computationally efficient algorithms for analyzing science imagery onboard spacecraft. These algorithms autonomously cluster the data into classes of similar imagery. This enables selective downlink of representatives of each class and a map classifying the imaged terrain rather than the full data set, reducing downlinked data volume. This article demonstrates the method on an Earth-based aerial image data set. We examine a range of approaches, including k-means clustering using image features based on color, texture, temporal, and spatial arrangement, and compare them to the manual clustering of a field expert. In doing so, we demonstrate the potential for such summarization algorithms to enable effective exploratory science despite limited downlink bandwidth.
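To make the cluster-then-downlink idea concrete, the sketch below clusters image tiles by simple color and texture features with k-means and selects one representative tile per cluster; the feature choices, tile size, and synthetic data are illustrative assumptions, not the exact onboard feature set.

```python
# Minimal sketch of clustering for selective downlink: cluster image tiles
# with k-means, then keep one representative tile per cluster plus the
# cluster-label "map" instead of the full data set.
import numpy as np
from sklearn.cluster import KMeans

def tile_features(tile):
    """Cheap per-tile descriptor: mean color plus gradient energy (texture)."""
    mean_rgb = tile.reshape(-1, 3).mean(axis=0)
    gray = tile.mean(axis=2)
    gy, gx = np.gradient(gray)
    texture = np.sqrt(gx**2 + gy**2).mean()
    return np.append(mean_rgb, texture)

# Synthetic stand-in for tiles cut from an aerial image (H x W x RGB in [0, 1]).
rng = np.random.default_rng(0)
tiles = rng.uniform(0, 1, size=(400, 32, 32, 3))

X = np.array([tile_features(t) for t in tiles])
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)

# Representative per cluster: the tile nearest its centroid (global index).
reps = []
for c in range(km.n_clusters):
    idx = np.where(km.labels_ == c)[0]
    dists = np.linalg.norm(X[idx] - km.cluster_centers_[c], axis=1)
    reps.append(int(idx[np.argmin(dists)]))
print("cluster sizes:", np.bincount(km.labels_))
print("representative tile indices to downlink:", reps)
```

Only the representative tiles and the per-tile labels (`km.labels_`) need to be transmitted, which is the bandwidth saving the abstract describes.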
Current and proposed remote space missions, such as the proposed aerial exploration of Titan by an aerobot, can often collect more data than can be communicated back to Earth. Autonomous selective downlink algorithms can choose informative subsets of data to improve the science value of these bandwidth-limited transmissions. This requires statistical descriptors of the data that reflect very abstract and subtle distinctions in science content. We propose a metric learning strategy that teaches algorithms how best to cluster new data based on training examples supplied by domain scientists. We demonstrate that clustering informed by metric learning produces results that more closely match multiple scientists' labelings of aerial data than do clusterings based on random or periodic sampling. A new metric-learning strategy accommodates training sets produced by multiple scientists with different and potentially inconsistent mission objectives. Our methods are suitable for current spacecraft processors (e.g., the RAD750) and would further benefit from more advanced spacecraft processor architectures, such as OPERA.
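The general pattern, learning a metric from scientist-labeled examples and then clustering new data in the learned space, can be sketched as follows. The sketch uses scikit-learn's NeighborhoodComponentsAnalysis as one possible metric learner and synthetic data; it is not necessarily the metric learner from this work, and it uses a single labeling rather than the paper's strategy for reconciling multiple, potentially inconsistent scientists' labelings.

```python
# Minimal sketch of metric learning for science-driven clustering: learn a
# linear transform from expert-labeled examples, then cluster new data in
# the learned space instead of the raw feature space.
import numpy as np
from sklearn.neighbors import NeighborhoodComponentsAnalysis
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification

# Synthetic stand-in: feature vectors with scientist-assigned class labels.
X_train, y_train = make_classification(
    n_samples=300, n_features=10, n_informative=4, n_classes=3,
    n_clusters_per_class=1, random_state=0)
X_new, _ = make_classification(
    n_samples=200, n_features=10, n_informative=4, n_classes=3,
    n_clusters_per_class=1, random_state=1)

# Learn a metric (linear transform) that pulls same-label examples together.
nca = NeighborhoodComponentsAnalysis(n_components=4, random_state=0)
nca.fit(X_train, y_train)

# Cluster previously unseen data in the learned space.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(
    nca.transform(X_new))
print("cluster sizes on new data:", np.bincount(labels))
```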