To design advanced functional materials, several strategies are currently being pursued, including machine learning and high-throughput calculations. Here, a different approach is presented, which exploits the innate structure of the multidimensional property space. Clustering algorithms confirm the intricate structure of property space and relate the different property classes to different chemical bonding mechanisms. For the inorganic compounds studied here, four property classes are identified and related to ionic, metallic, covalent, and the recently identified metavalent bonding. These bonding mechanisms can be quantified by two quantum-chemical bonding descriptors: the number of electrons transferred and the number of electrons shared between adjacent atoms. Hence, these bonding descriptors can be linked to the corresponding property portfolio, turning bonding descriptors into property predictors. The close relationship between material properties and quantum-chemical bonding descriptors can be used for inverse material design, identifying particularly promising materials based on a set of target functionalities.
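The clustering idea above can be illustrated with a minimal sketch: materials are represented as points in the two-dimensional descriptor space of electrons shared (ES) and electrons transferred (ET), and an unsupervised algorithm recovers the bonding classes. The descriptor values below are synthetic illustrations, not data from the study, and plain k-means stands in for whatever clustering algorithm the work actually employed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical (ES, ET) descriptor pairs: electrons shared vs. electrons
# transferred between adjacent atoms. Four loose groups stand in for
# covalent, metallic, ionic, and metavalent compounds (values invented).
centers = np.array([[2.0, 0.1],   # covalent: high sharing, low transfer
                    [1.0, 0.2],   # metallic: moderate sharing, low transfer
                    [0.3, 0.9],   # ionic: low sharing, high transfer
                    [1.0, 0.6]])  # metavalent: intermediate in both
points = np.vstack([c + 0.05 * rng.standard_normal((25, 2)) for c in centers])

def kmeans(X, init_idx, iters=50):
    """Plain k-means: assign each point to its nearest centroid,
    then recompute centroids, until convergence or `iters` rounds."""
    cents = X[init_idx].copy()
    k = len(init_idx)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - cents[None]) ** 2).sum(-1), axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else cents[j] for j in range(k)])
        if np.allclose(new, cents):
            break
        cents = new
    return labels, cents

# Seed one centroid per synthetic group for a deterministic outcome.
labels, cents = kmeans(points, init_idx=[0, 25, 50, 75])
print(len(np.unique(labels)))  # -> 4 recovered descriptor clusters
```

With well-separated descriptor values, the four clusters map one-to-one onto the four bonding mechanisms; in practice the interesting cases are the boundaries, where the descriptors place a compound between classes.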
We study classification problems over relational background structures for hypotheses that are defined using logics with counting. The aim of this paper is to find learning algorithms running in time sublinear in the size of the background structure. We show that hypotheses defined by FOCN(P) formulas over structures of polylogarithmic degree can be learned in sublinear time. Furthermore, we prove that for structures of unbounded degree there is no sublinear learning algorithm for first-order formulas.
We analyse the complexity of learning first-order definable concepts in a model-theoretic framework for supervised learning introduced by Grohe and Turán (TOCS 2004). Previous research on the complexity of learning in this framework focussed on the question of when learning is possible in time sublinear in the background structure. Here we study the parameterized complexity of the learning problem. We obtain a hardness result showing that exactly learning first-order definable concepts is at least as hard as the corresponding model-checking problem, which implies that on general structures it is hard for the parameterized complexity class AW[*]. Our main contribution is a fixed-parameter tractable agnostic PAC learning algorithm for first-order definable concepts over effectively nowhere dense background structures.