Word embedding, which encodes words into vectors, is an important starting point in natural language processing and is commonly used in many text-based machine learning tasks. However, most current word embedding approaches do not optimize the similarity in the embedding space during learning. In this paper we propose a novel neighbor embedding method that directly learns an embedding simplex in which the similarities between the mapped words are optimal in the sense of minimal discrepancy to the input neighborhoods. Our method is built upon two-step random walks between words via topics and is thus better able to reveal the topics underlying the words. Experimental results indicate that, compared with an existing word embedding approach, our method yields more favorable results for a variety of queries.
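To make the two-step random walk concrete, here is a minimal, hypothetical Python sketch: word-to-word transition probabilities are obtained by walking word → topic → word from a toy word-topic matrix, and an SNE-style neighbor-embedding loss matches the embedding's neighbor distribution to these input neighborhoods. The toy data, the plain Euclidean (non-simplex) layout, and the crude numerical optimizer are all illustrative assumptions, not the paper's actual model.

```python
import numpy as np

# Illustrative sketch only: not the paper's actual model or optimizer.
# Two-step random walk between words via topics:
#   P(w' | w) = sum_t P(t | w) * P(w' | t)
# estimated here from a toy word-topic association matrix C.

rng = np.random.default_rng(0)
n_words, n_topics, dim = 50, 5, 2

C = rng.random((n_words, n_topics))                # toy word-topic associations
P_t_given_w = C / C.sum(axis=1, keepdims=True)     # P(t | w): rows sum to 1
P_w_given_t = C / C.sum(axis=0, keepdims=True)     # P(w | t): columns sum to 1
P = P_t_given_w @ P_w_given_t.T                    # two-step transitions P(w' | w)
np.fill_diagonal(P, 0.0)                           # drop self-transitions
P /= P.sum(axis=1, keepdims=True)                  # renormalize input neighborhoods

# SNE-style neighbor embedding: choose low-dimensional coordinates Y whose
# induced neighbor distribution Q minimizes KL(P || Q). (The paper's simplex
# parametrization is replaced by a plain Euclidean layout in this sketch.)
Y = rng.normal(scale=1e-2, size=(n_words, dim))

def neighbor_q(Y):
    d2 = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)                   # exclude self-neighbors
    W = np.exp(-d2)
    return W / W.sum(axis=1, keepdims=True)

def kl_loss(P, Q, eps=1e-12):
    return float(np.sum(P * (np.log(P + eps) - np.log(Q + eps))))

# Crude finite-difference gradient descent, only to keep the sketch self-contained.
lr, h = 0.1, 1e-5
for _ in range(20):
    base = kl_loss(P, neighbor_q(Y))
    grad = np.zeros_like(Y)
    for i in range(n_words):
        for k in range(dim):
            Y_pert = Y.copy()
            Y_pert[i, k] += h
            grad[i, k] = (kl_loss(P, neighbor_q(Y_pert)) - base) / h
    Y -= lr * grad

print("final KL(P || Q):", kl_loss(P, neighbor_q(Y)))
```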
We consider a laterally confined two-dimensional electron gas (2DEG) placed inside a gyrotropic cavity. The splitting of the circularly polarized electromagnetic modes leads to the emergence of a spontaneous ground-state magnetization, an anomalous Hall effect, and chiral edge currents in the 2DEG. We examine the dependence of the magnetization and the edge current density on the system size for two particular choices of the confining potential: an infinite wall and a parabolic potential. We show that the paramagnetic and diamagnetic contributions to the edge currents have qualitatively different dependences on the system size. These findings pave the way toward quantum electrodynamic engineering of the material properties of mesoscopic electron systems.
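For orientation only, a minimal single-particle setup (an illustrative assumption, not the model actually used in this work) for a parabolically confined electron coupled to the two circularly polarized cavity modes with split frequencies $\omega_\pm$ could be written as
\[
H \;=\; \frac{\mathbf{p}^{2}}{2m} \;+\; \frac{1}{2}\, m\,\omega_0^{2}\, r^{2} \;+\; \sum_{\sigma=\pm} \hbar\,\omega_{\sigma}\, a_{\sigma}^{\dagger} a_{\sigma} \;+\; H_{\mathrm{int}},
\qquad
\hbar\Delta \;=\; \hbar\,(\omega_{+}-\omega_{-}),
\]
where the gyrotropy of the cavity is encoded in the mode splitting $\hbar\Delta$, the infinite-wall case replaces the parabolic term $\tfrac{1}{2} m\omega_0^{2} r^{2}$ with a hard boundary at some radius $R$, and the light-matter coupling $H_{\mathrm{int}}$ is deliberately left unspecified in this sketch.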