The nonlinear Schrödinger and the Schrödinger-Newton equations model many phenomena in various fields. Here, we perform an extensive numerical comparison between splitting methods (often employed to numerically solve these equations) and the integrating factor technique, also called the Lawson method. Indeed, the latter is known to perform very well for the nonlinear Schrödinger equation, but has not been thoroughly investigated for the Schrödinger-Newton equation. Comparisons are made in one and two spatial dimensions, exploring different boundary conditions and parameter values. We show that for the short-range potential of the nonlinear Schrödinger equation, the integrating factor technique performs better than splitting algorithms, while, for the long-range potential of the Schrödinger-Newton equation, the outcome depends on the particular system considered.
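To make the two families of schemes concrete, here is a minimal sketch of one Strang-splitting step and one Lawson (integrating-factor) step for the 1D periodic cubic nonlinear Schrödinger equation; the equation sign conventions, grid, and step size are assumptions chosen for illustration, not the setups compared in the paper.

```python
import numpy as np

# Illustrative setup (assumed, not the paper's): focusing cubic NLS
#   i psi_t = -(1/2) psi_xx - |psi|^2 psi
# on a periodic domain, discretized in Fourier space.
N, Ldom, dt = 256, 40.0, 1e-3
x = np.linspace(-Ldom / 2, Ldom / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=Ldom / N)
Lhat = -0.5j * k**2                      # Fourier symbol of the linear part

psi0 = np.exp(-x**2) * np.exp(1j * x)    # arbitrary smooth initial condition

def strang_step(psi, dt):
    """One Strang-splitting step: half nonlinear, full linear, half nonlinear."""
    psi = psi * np.exp(1j * np.abs(psi)**2 * dt / 2)         # nonlinear flow (|psi| constant during it)
    psi = np.fft.ifft(np.exp(Lhat * dt) * np.fft.fft(psi))   # exact linear flow in Fourier space
    psi = psi * np.exp(1j * np.abs(psi)**2 * dt / 2)
    return psi

def lawson_rk4_step(psi, dt):
    """One Lawson step: absorb the linear part with the integrating factor, then classical RK4."""
    def Nl(psi):                              # nonlinear right-hand side i*|psi|^2*psi
        return 1j * np.abs(psi)**2 * psi
    E, Eh = np.exp(Lhat * dt), np.exp(Lhat * dt / 2)
    vhat = np.fft.fft(psi)                    # v(t) = exp(-t*Lhat) * psi_hat(t), so v(0) = psi_hat(0)
    def rhs(shift, vhat):                     # exp(-t*Lhat) N_hat(exp(t*Lhat) v), with shift = exp(t*Lhat)
        return np.fft.fft(Nl(np.fft.ifft(shift * vhat))) / shift
    k1 = rhs(1.0, vhat)
    k2 = rhs(Eh, vhat + dt / 2 * k1)
    k3 = rhs(Eh, vhat + dt / 2 * k2)
    k4 = rhs(E,  vhat + dt * k3)
    vhat = vhat + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return np.fft.ifft(E * vhat)              # transform back: psi_hat(dt) = exp(dt*Lhat) * v(dt)
```

Both steps treat the stiff linear part exactly in Fourier space; they differ in how the nonlinearity is advanced, which is precisely what the comparison in the paper probes.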
Data is increasingly being collected from multiple sources and described by multiple views. These multi-view data provide richer information than traditional single-view data, and fusing the views for a specific task is an essential component of multi-view clustering. Since the goal of multi-view clustering algorithms is to discover the common latent structure shared by multiple views, the majority of proposed solutions overlook the benefit of incorporating knowledge derived from horizontal collaboration between the views and the final consensus. To fill this gap, we propose the Joint Multi-View Collaborative Clustering (JMVCC) solution, which involves the generation of basic partitions using Non-negative Matrix Factorization (NMF) and the horizontal collaboration principle, followed by the fusion of these local partitions using ensemble clustering. Furthermore, we propose a weighting method to reduce the risk of negative collaboration (i.e., the influence of low-quality views) during the generation and fusion of local partitions. The experimental results, obtained on a variety of data sets, demonstrate that JMVCC outperforms other multi-view clustering algorithms and is robust to noisy views.
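The two building blocks named above, per-view base partitions from NMF and ensemble fusion into a consensus, can be illustrated with a minimal sketch; this is a generic illustration under assumed data and parameters, not the JMVCC algorithm itself (in particular it omits the horizontal-collaboration and weighting steps).

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.cluster import SpectralClustering

def nmf_partition(X_view, n_clusters, seed=0):
    """Base partition for one view: assign each sample to its dominant NMF factor."""
    W = NMF(n_components=n_clusters, init="nndsvda", max_iter=500,
            random_state=seed).fit_transform(X_view)   # X_view must be non-negative
    return W.argmax(axis=1)

def consensus(partitions, n_clusters):
    """Fuse base partitions via a co-association matrix, then cluster that matrix."""
    n = len(partitions[0])
    co = np.zeros((n, n))
    for p in partitions:
        co += (p[:, None] == p[None, :]).astype(float)  # 1 when two samples share a cluster
    co /= len(partitions)
    return SpectralClustering(n_clusters=n_clusters, affinity="precomputed",
                              random_state=0).fit_predict(co)

# Toy usage: two synthetic "views" of the same 90 samples drawn from 3 latent groups.
rng = np.random.default_rng(0)
labels = np.repeat([0, 1, 2], 30)
views = [np.abs(rng.normal(loc=labels[:, None] * 2.0, scale=0.5, size=(90, 5)))
         for _ in range(2)]
base = [nmf_partition(v, 3, seed=s) for s, v in enumerate(views)]
print(consensus(base, 3)[:10])
```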
Information Extraction (IE) pipelines aim to extract meaningful entities and relations from documents and structure them into a knowledge graph that can then be used in downstream applications. Training and evaluating such pipelines requires a dataset annotated with entities, coreferences, relations, and entity-linking. However, existing datasets either lack entity-linking labels, are too small, are not diverse enough, or are automatically annotated (that is, without a strong guarantee of the correctness of annotations). Therefore, we propose Linked-DocRED, to the best of our knowledge the first manually-annotated, large-scale, document-level IE dataset. We enhance the existing and widely-used DocRED dataset with entity-linking labels that are generated through a semi-automatic process that guarantees high-quality annotations. In particular, we use hyperlinks in Wikipedia articles to provide disambiguation candidates. We also propose a complete framework of metrics to benchmark end-to-end IE pipelines, and we define an entity-centric metric to evaluate entity-linking. The evaluation of a baseline shows promising results while highlighting the challenges of an end-to-end IE pipeline. Linked-DocRED, the source code for the entity-linking, the baseline, and the metrics are distributed under an open-source license and can be downloaded from a public repository.
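The hyperlink-based candidate generation mentioned above can be illustrated with a minimal, hypothetical sketch: anchor texts of Wikipedia hyperlinks are counted against the pages they point to, and a mention's candidates are ranked by those counts. The anchor/page pairs below are invented for illustration, and this is not the Linked-DocRED pipeline itself.

```python
from collections import Counter, defaultdict

# Hypothetical toy anchor statistics: (anchor text, linked Wikipedia page) pairs that would
# normally be harvested from hyperlinks in a Wikipedia dump.
anchor_pairs = [
    ("Paris", "Paris"), ("Paris", "Paris"), ("Paris", "Paris,_Texas"),
    ("Washington", "Washington,_D.C."), ("Washington", "George_Washington"),
    ("Washington", "Washington_(state)"),
]

def build_candidate_index(pairs):
    """Map anchor text -> Counter of linked pages (how often each page was the target)."""
    index = defaultdict(Counter)
    for anchor, page in pairs:
        index[anchor.lower()][page] += 1
    return index

def candidates(index, mention, top_k=3):
    """Rank disambiguation candidates for a mention by the anchor-to-page link frequency."""
    counts = index.get(mention.lower(), Counter())
    total = sum(counts.values()) or 1
    return [(page, c / total) for page, c in counts.most_common(top_k)]

index = build_candidate_index(anchor_pairs)
print(candidates(index, "Washington"))   # candidate pages with their prior probabilities
```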
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations–citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.