Superhigh-ε materials, which exhibit exceptionally high dielectric permittivity, are recognized as potential candidates for a wide range of next-generation photonic and electronic devices. In general, achieving a high-ε state requires low material symmetry, as most known high-ε materials are symmetry-broken crystals; there are few reports of fluidic high-ε dielectrics. Here, we demonstrate how small molecules with high polarity, identified through rational molecular design and machine learning analyses, enable the development of superhigh-ε fluid materials (dielectric permittivity ε > 10⁴) with strong second harmonic generation and macroscopic spontaneous polar ordering. The polar structures are confirmed to be identical across all the synthesized materials. Furthermore, adapting this strategy to high–molecular weight systems generalizes the approach to polar polymeric materials, creating polar soft matter with spontaneous symmetry breaking.
Recently, a type of ferroelectric nematic fluid was discovered in liquid crystals, in which polarity at the molecular level is amplified to macroscopic scales through the ferroelectric packing of rod-shaped molecules. Here, we report experimental proof of a polar chiral liquid matter state, dubbed the helielectric nematic, stabilized by local polar ordering coupled to chiral helicity. In this helielectric structure the polar vector rotates helically, analogous to its magnetic counterpart, the helimagnet. The helielectric state can be retained down to room temperature and exhibits gigantic dielectric and nonlinear optical responses. This matter state opens a new chapter in the development of diverse polar liquid crystal devices.
An interpretable system for open-domain reasoning needs to express its reasoning process in a transparent form. Natural language is an attractive representation for this purpose: it is both highly expressive and easy for humans to understand. However, manipulating natural language statements in logically consistent ways is hard: models must cope with variation in how meaning is expressed while remaining precise. In this paper, we describe PARAPATTERN, a method for building models that generate deductive inferences from diverse natural language inputs without direct human supervision. We train BART-based models (Lewis et al., 2020) to generate the result of applying a particular logical operation to one or more premise statements. Crucially, we develop a largely automated pipeline for constructing suitable training examples from Wikipedia. We evaluate our models using out-of-domain sentence compositions from the QASC (Khot et al., 2020) and EntailmentBank (Dalvi et al., 2021) datasets as well as targeted perturbation sets. Our results show that our models are substantially more accurate and flexible than baseline systems. PARAPATTERN achieves 85% validity on examples of the 'substitution' operation from EntailmentBank without using any in-domain training data, matching the performance of a model fine-tuned on EntailmentBank. The full source code for our method is publicly available.
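To make the described setup concrete, the sketch below shows what the inference step of such a BART-based deduction model might look like using the Hugging Face transformers library. This is a minimal illustration, not the paper's released code: the premise formatting, generation settings, and the use of the generic `facebook/bart-large` checkpoint (standing in for the paper's fine-tuned weights) are all assumptions for demonstration.

```python
# Minimal sketch: generating a deductive conclusion from premise statements
# with a BART sequence-to-sequence model (Hugging Face transformers).
# NOTE: "facebook/bart-large" is the generic base checkpoint; PARAPATTERN's
# actual fine-tuned weights would be loaded here instead.
from transformers import BartForConditionalGeneration, BartTokenizer

MODEL_NAME = "facebook/bart-large"  # assumption: stand-in for fine-tuned weights

tokenizer = BartTokenizer.from_pretrained(MODEL_NAME)
model = BartForConditionalGeneration.from_pretrained(MODEL_NAME)

# Premises are concatenated into one source sequence; a model fine-tuned on
# a logical operation (e.g. substitution) is trained to emit the conclusion.
premises = "All mammals are warm-blooded. Whales are mammals."
inputs = tokenizer(premises, return_tensors="pt")

output_ids = model.generate(**inputs, max_length=64, num_beams=4)
conclusion = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(conclusion)  # with fine-tuned weights, e.g. "Whales are warm-blooded."
```

With the base checkpoint this will simply paraphrase its input; the deductive behavior comes entirely from fine-tuning on the automatically constructed Wikipedia examples described above.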