We experimentally confirm the functionality of a coupling element for flux-based superconducting qubits, with a coupling strength J whose sign and magnitude can be tuned in situ. To measure the effective J, we map the ground state of a coupled two-qubit system as a function of the local magnetic fields applied to each qubit. The state of the system is determined by directly reading out the individual qubits while tunneling is suppressed. These measurements demonstrate that J can be tuned from antiferromagnetic through zero to ferromagnetic.
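As context for how the sign and magnitude of J can be read off from ground-state maps, a minimal effective model for such flux-qubit experiments (our notation; not part of the original abstract) is a two-spin Ising Hamiltonian in the computational basis:

H_{\mathrm{eff}} = h_1\,\sigma_z^{(1)} + h_2\,\sigma_z^{(2)} + J\,\sigma_z^{(1)}\sigma_z^{(2)},

where h_1 and h_2 are set by the local magnetic fields applied to each qubit. For J > 0 (antiferromagnetic) the ground state favours anti-aligned qubits, for J < 0 (ferromagnetic) it favours aligned qubits, and the positions of the boundaries between ground-state configurations in the (h_1, h_2) plane give the magnitude of J.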
Recently, deep residual networks have been successfully applied in many computer vision and natural language processing tasks, pushing state-of-the-art performance with deeper and wider architectures. In this work, we interpret deep residual networks as ordinary differential equations (ODEs), which have long been studied in mathematics and physics and are backed by a rich body of theory and practice. From this interpretation, we develop a theoretical framework for the stability and reversibility of deep neural networks and derive three reversible neural network architectures that can, in theory, be made arbitrarily deep. The reversibility property allows a memory-efficient implementation that does not need to store the activations of most hidden layers. Together with the stability of our architectures, this enables training deeper networks with only modest computational resources. We provide both theoretical analyses and empirical results. Experiments on CIFAR-10, CIFAR-100, and STL-10 demonstrate the efficacy of our architectures against several strong baselines, with performance on par with or exceeding the state of the art. Furthermore, we show that our architectures yield superior results when trained with less training data.
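To make the memory argument concrete, here is a minimal sketch of one reversible residual scheme (an additive-coupling step in the spirit of RevNets, written in NumPy). The function names and the dense tanh residual function are illustrative assumptions, not the specific architectures derived in the paper, but they show why hidden activations never need to be stored:

import numpy as np

def f(x, w):
    # simple nonlinear residual function; a placeholder for a real conv block
    return np.tanh(w @ x)

def forward(y1, y2, w1, w2):
    # reversible (additive-coupling) residual step:
    # z1 = y1 + f(y2, w1);  z2 = y2 + f(z1, w2)
    z1 = y1 + f(y2, w1)
    z2 = y2 + f(z1, w2)
    return z1, z2

def inverse(z1, z2, w1, w2):
    # inputs are recovered exactly from the outputs, so intermediate
    # activations need not be cached during training
    y2 = z2 - f(z1, w2)
    y1 = z1 - f(y2, w1)
    return y1, y2

rng = np.random.default_rng(0)
d = 4
y1, y2 = rng.standard_normal(d), rng.standard_normal(d)
w1, w2 = rng.standard_normal((d, d)), rng.standard_normal((d, d))
z1, z2 = forward(y1, y2, w1, w2)
r1, r2 = inverse(z1, z2, w1, w2)
assert np.allclose(r1, y1) and np.allclose(r2, y2)

Because inverse() reconstructs the inputs exactly from the outputs, backpropagation can recompute activations on the fly instead of caching them, which is what makes very deep, stable variants trainable with modest memory.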
Z-Axis Tipper Electromagnetic Technique (ZTEM) data are airborne electromagnetic data that record the vertical magnetic field produced by natural sources. The data are transfer functions that relate the local vertical field to orthogonal horizontal fields measured at a reference station on the ground. The transfer functions depend on frequency and provide information about the 3-D conductivity structure of the Earth. The practical frequency range is 30-720 Hz, so structures at depths of a kilometre or more can be resolved if the earth is of moderate conductivity. This depth of penetration is significantly greater than that obtained with controlled-source EM techniques and, combined with the rapid spatial acquisition of an airborne system, means that ZTEM data can be used to map large-scale structures that are difficult to cover with ground-based surveys. We present some fundamentals for understanding the signatures obtained with ZTEM transfer functions and then develop a Gauss-Newton algorithm to invert ZTEM data. The algorithm is applied to synthetic examples and to a field data set from the Bingham Canyon region in Utah. The field data set requires a workflow procedure to estimate appropriate noise levels in individual frequency components; these noise levels can then be used to invert multiple frequencies simultaneously. ZTEM data are insensitive to 1-D conductivity structures, and hence the background can be difficult to estimate. We provide two methods to determine appropriate background models. Interestingly, topography, which is usually a hindrance in field data interpretation, provides a first-order signal in the ZTEM data and helps with this calibration.
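In standard tipper notation (our notation, consistent with the description above), the ZTEM transfer functions T_{zx} and T_{zy} relate the vertical field measured in the air at location r to the horizontal fields at the ground reference station r_0, frequency by frequency:

H_z(\mathbf{r}, \omega) = T_{zx}(\mathbf{r}, \mathbf{r}_0, \omega)\, H_x(\mathbf{r}_0, \omega) + T_{zy}(\mathbf{r}, \mathbf{r}_0, \omega)\, H_y(\mathbf{r}_0, \omega).

Over a purely 1-D earth the natural-source fields produce no anomalous vertical component, so T_{zx} and T_{zy} vanish; this is why the data are insensitive to the 1-D background and why independent means of fixing the background model are needed.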
In this work, we establish the relation between optimal control and training deep Convolutional Neural Networks (CNNs). We show that forward propagation in CNNs can be interpreted as a time-dependent nonlinear differential equation, and that learning can be seen as controlling the parameters of the differential equation such that the network approximates the data-label relation for the given training data. Using this continuous interpretation, we derive two new methods to scale CNNs along two different dimensions. The first class of multiscale methods connects low-resolution and high-resolution data using prolongation and restriction of CNN parameters, inspired by algebraic multigrid techniques. We demonstrate that this method enables classifying high-resolution images with CNNs trained on low-resolution images (and vice versa) and warm-starting the learning process. The second class of multiscale methods connects shallow and deep networks and leads to new training strategies that gradually increase the depth of the CNN while re-using parameters for initialization.
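As a concrete illustration of the continuous interpretation, the following NumPy sketch treats a residual network as forward Euler steps for dY/dt = f(Y, theta(t)) and shows one simple way to prolong parameters from a shallow network to a deeper one. The piecewise-constant interpolation and the dense tanh layer are illustrative assumptions rather than the paper's exact construction:

import numpy as np

def layer(y, theta):
    # placeholder residual function; stands in for a convolution + nonlinearity
    return np.tanh(theta @ y)

def forward_euler_net(y, thetas, h):
    # residual network viewed as forward Euler steps for dy/dt = layer(y, theta(t))
    for theta in thetas:
        y = y + h * layer(y, theta)
    return y

def prolong_in_depth(thetas, factor=2):
    # initialize a deeper network by piecewise-constant interpolation of the
    # shallow network's parameters along the artificial time (depth) axis
    return [theta for theta in thetas for _ in range(factor)]

rng = np.random.default_rng(0)
d, depth = 4, 3
thetas = [rng.standard_normal((d, d)) for _ in range(depth)]
y0 = rng.standard_normal(d)

y_shallow = forward_euler_net(y0, thetas, h=0.5)
# a deeper network with halved step size approximates the same ODE trajectory
y_deep = forward_euler_net(y0, prolong_in_depth(thetas), h=0.25)

Halving the step size while doubling the number of layers keeps the discretization close to the same underlying ODE, which is the intuition behind re-using shallow-network parameters to initialize and warm-start deeper networks.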