The work of Berezinskii, Kosterlitz and Thouless in the 1970s revealed exotic phases of matter governed by the topological properties of low-dimensional materials such as thin films of superfluids and superconductors. A hallmark of this phenomenon is the appearance and interaction of vortices and antivortices in an angular degree of freedom, typified by the classical XY model, owing to thermal fluctuations. In the two-dimensional Ising model this angular degree of freedom is absent in the classical case, but with the addition of a transverse field it can emerge from the interplay between frustration and quantum fluctuations. Consequently, a Kosterlitz-Thouless phase transition has been predicted by theory and simulation in the quantum system, the two-dimensional transverse-field Ising model. Here we demonstrate a large-scale quantum simulation of this phenomenon in a network of 1,800 in situ programmable superconducting niobium flux qubits whose pairwise couplings are arranged in a fully frustrated square-octagonal lattice. Essential to the critical behaviour, we observe the emergence of a complex order parameter with continuous rotational symmetry, and the onset of quasi-long-range order as the system approaches a critical temperature. We describe and use a simple approach to statistical estimation with an annealing-based quantum processor that performs Monte Carlo sampling in a chain of reverse quantum annealing protocols. Observations are consistent with classical simulations across a range of Hamiltonian parameters. We anticipate that our approach of using a quantum processor as a programmable magnetic lattice will find widespread use in the simulation and development of exotic materials.
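To make the role of the complex order parameter concrete, below is a minimal classical XY-model Metropolis sketch: a purely classical illustration of the angular degree of freedom and of the order parameter psi = (1/N) sum_j exp(i*theta_j). It is not the reverse-annealing protocol run on the quantum hardware, and the lattice size, temperature, and sweep count are illustrative assumptions.

# Minimal classical 2D XY-model Metropolis sketch (illustrative only;
# not the reverse-annealing protocol used on the superconducting hardware).
import numpy as np

def metropolis_xy(L=32, T=0.7, sweeps=500, seed=0):
    """Sample the classical XY model on an L x L periodic lattice at temperature T."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, size=(L, L))  # spin angles
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.integers(0, L, size=2)
            new = theta[i, j] + rng.uniform(-np.pi / 2, np.pi / 2)
            # Energy change against the four nearest neighbours (J = 1, E = -J cos(dtheta)).
            nbrs = [theta[(i + 1) % L, j], theta[(i - 1) % L, j],
                    theta[i, (j + 1) % L], theta[i, (j - 1) % L]]
            dE = sum(np.cos(theta[i, j] - n) - np.cos(new - n) for n in nbrs)
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                theta[i, j] = new
    return theta

theta = metropolis_xy()
# Complex order parameter: its magnitude decays algebraically with system size
# in the quasi-long-range-ordered phase below the Kosterlitz-Thouless temperature.
psi = np.mean(np.exp(1j * theta))
print(abs(psi))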
The task of algorithm selection involves choosing an algorithm from a set of algorithms on a per-instance basis in order to exploit the varying performance of algorithms over a set of instances. The algorithm selection problem is attracting increasing attention from researchers and practitioners in AI. Years of fruitful applications in a number of domains have resulted in a large amount of data, but the community lacks a standard format or repository for this data. This situation makes it difficult to share and compare different approaches effectively, as is done in other, more established fields. It also unnecessarily hinders new researchers who want to work in this area. To address this problem, we introduce a standardized format for representing algorithm selection scenarios and a repository that contains a growing number of data sets from the literature. Our format has been designed to be able to express a wide variety of different scenarios. To demonstrate the breadth and power of our platform, we describe a study that builds and evaluates algorithm selection models through a common interface. The results display the potential of algorithm selection to achieve significant performance improvements across a broad range of problems and algorithms.

• A human-readable README file explains the origin and meaning of the scenario, as well as the process of data generation.

Optional data:
• The feature costs file contains the costs of the feature groups, i.e., sets of features computed together.
• The ground truth file specifies information on the instances and their respective solutions (e.g., SAT or UNSAT).
• The literature references file, in BibTeX format, includes information on the context in which the data set was generated and previous studies in which it was used.

Algorithm Selection Scenarios Provided in ASlib Release 2.0
The set of algorithm selection scenarios in release version 2.0 of our library, shown in Table 2, has been assembled to represent a diverse set of selection problem settings that covers a wide range of problem domains and types of algorithms (see http://www.satcompetition.org/, http://www.qbflib.org/index_eval.php and http://qbf.satisfiability.org/gallery/).
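For readers new to the area, the following is a minimal sketch of what per-instance algorithm selection amounts to: a k-nearest-neighbour selector over an instance-feature matrix and an algorithm-runtime matrix. The data layout and function names are illustrative assumptions and do not reflect the ASlib file format or any particular system from the study.

# Minimal per-instance algorithm selector (illustrative; not the ASlib format or API).
import numpy as np

def knn_selector(train_features, train_runtimes, test_features, k=3):
    """For each test instance, pick the algorithm with the lowest mean runtime
    over the k training instances closest in feature space."""
    choices = []
    for x in test_features:
        dists = np.linalg.norm(train_features - x, axis=1)
        nearest = np.argsort(dists)[:k]
        mean_rt = train_runtimes[nearest].mean(axis=0)  # one value per algorithm
        choices.append(int(np.argmin(mean_rt)))
    return choices

# Toy data: 5 training instances, 2 features each, 3 candidate algorithms.
F = np.array([[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.2], [0.5, 0.5]])
R = np.array([[1.0, 9.0, 5.0], [1.2, 8.5, 5.5], [9.0, 1.1, 5.0], [8.0, 1.0, 4.5], [4.0, 4.0, 4.0]])
print(knn_selector(F, R, np.array([[0.15, 0.85], [0.85, 0.15]])))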
The Moon and Mars are considered future targets for human space exploration. The gravity levels on the Moon and Mars amount to 16% and 38%, respectively, of Earth's gravity. Mechanical loading during the anticipated habitual activities in these hypogravity environments will most likely not be sufficient to maintain the physiological integrity of astronauts unless additional exercise countermeasures are performed. Current microgravity exercise countermeasures appear to attenuate but not prevent 'space deconditioning'. However, plyometric exercises (hopping and whole-body vibration) have shown promise in recent analogue bed-rest studies and may be options for space exploration missions where resources will be limited compared to the ISS. This paper therefore tests the hypothesis that plyometric hop exercise in hypogravity can generate sufficient mechanical stimuli to prevent musculoskeletal deconditioning. It has been suggested that hypogravity-induced reductions in peak vertical ground reaction force (peak vertical GRF) can be offset by increases in hopping height. Therefore, this study investigated the effects of simulated hypogravity (0.16G, 0.27G, 0.38G, and 0.7G) on sub-maximal plyometric hopping on the Verticalised Treadmill Facility (VTF). Results show that peak vertical GRF is negatively related to simulated gravity level but positively related to hopping height. Contact times decreased with increasing gravity level but were not influenced by hopping height. In contrast, flight time increased with decreasing gravity level and increasing hopping height (P < 0.001). The present data suggest that the anticipated hypogravity-related reductions of musculoskeletal forces during normal walking can be compensated for by performing hops, and therefore support the idea of plyometric hopping as a robust and resourceful exercise countermeasure in hypogravity. As maximal hop height was constrained on the VTF, further research is needed to determine whether similar relationships are evident during maximal hops and other forms of jumping.
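The direction of the reported flight-time and hop-height effects follows from simple ballistic mechanics. As a hedged illustration (a simplified projectile model of the flight phase, ignoring air resistance; the symbols h, g, m and t_c below are generic, not taken from the paper), the takeoff velocity and flight time for a hop of height h under gravity g are

v_{\mathrm{to}} = \sqrt{2gh}, \qquad t_{\mathrm{flight}} = \frac{2\,v_{\mathrm{to}}}{g} = 2\sqrt{\frac{2h}{g}},

so flight time increases as gravity decreases and as hop height increases. Likewise, the net upward impulse over a ground contact of duration t_c must supply at least m\,v_{\mathrm{to}} of vertical momentum, \int_0^{t_c} (F - mg)\,\mathrm{d}t \ge m\,v_{\mathrm{to}}, so reaching greater hop heights requires larger ground reaction forces; this is the mechanism by which increased hopping height can offset reduced gravitational loading.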
We investigate the problem of repacking stations in the FCC's upcoming, multi-billion-dollar "incentive auction". Early efforts to solve this problem considered mixed-integer programming formulations, which we show are unable to reliably solve realistic, national-scale problem instances. We describe the result of a multi-year investigation of alternatives: a solver, SATFC, that has been adopted by the FCC for use in the incentive auction. SATFC is based on a SAT encoding paired with a wide range of techniques: constraint graph decomposition; novel caching mechanisms that allow for reuse of partial solutions from related, solved problems; algorithm configuration; algorithm portfolios; and the marriage of local-search and complete solver strategies. We show that our approach solves virtually all of a set of problems derived from auction simulations within the short time budget required in practice.
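As a rough illustration of the kind of SAT encoding referred to above, the sketch below builds DIMACS-style clauses for a toy channel-assignment problem: one Boolean variable per (station, channel) pair, "exactly one channel per station" constraints, and clauses forbidding interfering pairs. It is a greatly simplified, hypothetical construction, not SATFC's actual encoding or the FCC interference constraint data.

# Simplified DIMACS-CNF encoding of station repacking as channel assignment
# (illustrative only; SATFC's real encoding and constraints are far richer).
def encode_repacking(stations, domains, interference):
    """stations: list of station ids; domains: {station: [allowed channels]};
    interference: set of (s1, c1, s2, c2) tuples that may not co-occur.
    Returns (num_vars, clauses), each clause a list of DIMACS literals."""
    var = {}
    for s in stations:
        for c in domains[s]:
            var[(s, c)] = len(var) + 1  # x_{s,c}: station s assigned channel c
    clauses = []
    for s in stations:
        lits = [var[(s, c)] for c in domains[s]]
        clauses.append(lits)  # at least one channel per station
        for i in range(len(lits)):
            for j in range(i + 1, len(lits)):
                clauses.append([-lits[i], -lits[j]])  # at most one channel per station
    for (s1, c1, s2, c2) in interference:
        if (s1, c1) in var and (s2, c2) in var:
            clauses.append([-var[(s1, c1)], -var[(s2, c2)]])  # forbid interfering pair
    return len(var), clauses

# Toy example: two stations that interfere if both use channel 14.
n, cnf = encode_repacking([1, 2], {1: [14, 22], 2: [14]}, {(1, 14, 2, 14)})
print(n, cnf)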
We study a class of robust network design problems motivated by the need to scale core networks to meet increasingly dynamic capacity demands. Past work has focused on designing the network to support all hose matrices (all matrices not exceeding marginal bounds at the nodes). This model may be too conservative if additional information on traffic patterns is available. Another extreme is the fixed-demand model, where one designs the network to support peak point-to-point demands. We introduce a capped hose model to explore a broader range of traffic matrices, which includes the above two as special cases. It is known that optimal designs for the hose model are always determined by single-hub routing, and those for the fixed-demand model are based on shortest-path routing. We shed light on the wider space of capped hose matrices in order to see which traffic models are more shortest-path-like as opposed to hub-like. To address the space in between, we use hierarchical multi-hub routing templates, a generalization of hub and tree routing. In particular, we show that by adding peak capacities into the hose model, the single-hub tree-routing template is no longer cost-effective. This initiates the study of a class of robust network design (RND) problems restricted to these templates. Our empirical analysis is based on a heuristic for this new hierarchical RND problem. We also propose a routing indicator that accounts for the relative strengths of the marginals and peak demands, and use this information to choose the appropriate routing template. We benchmark our approach against other well-known routing templates, using representative carrier networks and a variety of capped hose traffic demands, parameterized by the relative importance of their marginals as opposed to their point-to-point peak demands. This study also reveals conditions under which multi-hub routing gives improvements over single-hub and shortest-path routings.
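As a hedged formalization (the notation b, c and the asymmetric form below are assumptions for illustration, not taken verbatim from the paper), the capped hose model can be viewed as the polytope of demand matrices bounded both by node marginals and by per-pair peak caps:

\mathcal{T} = \Bigl\{\, D \ge 0 \;:\; \textstyle\sum_{j \neq i} D_{ij} \le b_i \ \forall i, \quad \sum_{i \neq j} D_{ij} \le b_j \ \forall j, \quad D_{ij} \le c_{ij} \ \forall i,j \,\Bigr\}.

Taking c_{ij} = \infty recovers the hose model, while making the marginal bounds b non-binding leaves only the peak caps and corresponds to designing for the fixed peak demands; this is the sense in which the two classical models arise as special cases.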