We study 2D fermions with a short-range interaction in the presence of a van Hove singularity. It is shown that this system can be consistently described by an effective field theory whose Fermi surface is subdivided into regions as defined by a factorization scale, and that the theory is renormalizable in the sense that all of the counterterms are well defined in the IR limit. The theory has the unusual feature that the renormalization group equation for the coupling has an explicit dependence on the renormalization scale, much as in theories of Wilson lines. In contrast to the case of a round Fermi surface, there are multiple marginal interactions with nontrivial RG flow. The Cooper instability remains strongest in the BCS channel. We also show that the marginal Fermi liquid scenario for the quasiparticle width is a robust consequence of the van Hove singularity. Our results are universal in the sense that they do not depend on the detailed properties of the Fermi surface away from the singularity.
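The explicit scale dependence mentioned above can be illustrated schematically. In the following sketch the coupling g, coefficient c, and reference scale \mu_0 are illustrative placeholders, not quantities taken from the paper: a round Fermi surface gives the familiar scale-independent one-loop flow, while the log-squared-enhanced particle-particle bubble at a van Hove point leaves a residual factor of \ln\mu in the beta function, much as described for Wilson lines.

```latex
% Ordinary marginal coupling, round Fermi surface (schematic):
\frac{dg}{d\ln\mu} = -c\, g^2
% Schematic flow near a van Hove singularity, where the
% log^2 divergence of the particle-particle bubble produces
% an explicit dependence on the renormalization scale:
\frac{dg}{d\ln\mu} = -c \ln\!\left(\frac{\mu_0}{\mu}\right) g^2
```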
Inspired by two basic mechanisms in animal visual systems, we introduce a feature transform technique that imposes invariance properties in the training of deep neural networks. The resulting algorithm requires less parameter tuning, trains well with an initial learning rate of 1.0, and generalizes easily to different tasks. We enforce scale invariance with local statistics in the data to align similar samples at diverse scales. To accelerate convergence, we enforce a GL(n)-invariance property with global statistics extracted from a batch, such that the gradient descent solution remains invariant under a change of basis. Profiling analysis shows that our proposed modifications take only 5% of the computation of the underlying convolution layer. Tested on convolutional networks and transformer networks, our proposed technique requires fewer iterations to train, surpasses all baselines by a large margin, works seamlessly with both small and large batch sizes, and applies to different computer vision and language tasks.
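A minimal sketch of the two ingredients, assuming one standard realization (per-sample standardization for scale invariance, ZCA-style batch whitening for basis-change invariance); the function names and details are illustrative, not the paper's implementation:

```python
import numpy as np

def scale_invariant(x, eps=1e-5):
    # Normalize each sample by its own local statistics, so that
    # rescaled copies of the same input map to the same feature.
    mu = x.mean(axis=1, keepdims=True)
    sigma = x.std(axis=1, keepdims=True)
    return (x - mu) / (sigma + eps)

def whiten_batch(x, eps=1e-5):
    # Decorrelate features using global batch statistics (ZCA
    # whitening): a linear basis change of the features is absorbed
    # before gradient descent sees them -- one way to realize a
    # GL(n)-invariance property.
    xc = x - x.mean(axis=0, keepdims=True)
    cov = xc.T @ xc / x.shape[0]
    eigval, eigvec = np.linalg.eigh(cov)
    w = eigvec @ np.diag(1.0 / np.sqrt(eigval + eps)) @ eigvec.T
    return xc @ w
```

After whitening, the batch covariance is approximately the identity, so the loss landscape no longer depends on the (invertible) basis in which the features happened to be expressed.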
We consider a class of variable effort human annotation tasks in which the number of labels required per item can vary greatly (e.g., finding all faces in an image, named entities in a text, bird calls in an audio recording, etc.). In such tasks, some items require far more effort than others to annotate. Furthermore, the per-item annotation effort is not known until after each item is annotated, since determining the number of labels required is an implicit part of the annotation task itself. On an image bounding-box task with crowdsourced annotators, we show that annotator accuracy and recall consistently drop as effort increases. We hypothesize reasons for this drop and investigate a set of approaches to counteract it. First, we benchmark a set of general best-practice methods for crowdsourcing quality on this task. Notably, only one of these methods actually improves quality: the use of visible gold questions that provide periodic feedback to workers on their accuracy as they work. Given these promising results, we then investigate and evaluate variants of the visible gold approach, yielding further improvement. Final results show a 7% improvement in bounding-box accuracy over the baseline. We discuss the generality of the visible gold approach and promising directions for future research.
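The visible gold mechanism can be sketched as a task-queue policy: gold questions with known answers are interleaved among regular items, and after each one the worker sees their running accuracy. Everything below (the function names, the `gold_every` spacing) is a hypothetical illustration, not the paper's protocol:

```python
import random

def build_queue(items, gold_items, gold_every=10):
    # Interleave a visible gold question after every `gold_every`
    # regular tasks; unlike hidden gold, the worker is told which
    # items are gold and receives feedback immediately afterwards.
    queue = []
    for i, item in enumerate(items, start=1):
        queue.append(("task", item))
        if i % gold_every == 0 and gold_items:
            queue.append(("gold", random.choice(gold_items)))
    return queue

def feedback(gold_history):
    # Running accuracy on gold questions, shown to the worker
    # as periodic feedback (1 = correct, 0 = incorrect).
    if not gold_history:
        return None
    return sum(gold_history) / len(gold_history)
```

The design choice being tested in the abstract is precisely the visibility of the gold items: the same questions used silently for post-hoc filtering did not improve quality, while surfacing them as feedback did.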
Quantum revivals in very-high-n (n ∼ 300) Rydberg wave packets generated from parent np states are used to examine decoherence induced by the application of 'coloured' noise from a random pulse generator and by collisions. In the absence of external perturbations, the wave packets maintain their coherence for periods of ∼1 μs, i.e. for many hundreds of orbits. This coherence can be destroyed on sub-microsecond timescales by the application of even very small amounts of electrical noise, at a rate that depends markedly on the spectral characteristics of the noise. In contrast, measurements over similar timescales with CO₂ target-gas densities of ∼10¹¹ cm⁻³ provided no evidence of collisional dephasing. The mechanisms responsible for decoherence are discussed with the aid of classical and quantum simulations, whose results are in good accord with the experimental data.
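The quoted coherence times can be sanity-checked against the classical Kepler period of an n ∼ 300 orbit, T_n = 2πn³ in atomic units, which is a standard hydrogenic result (this back-of-the-envelope check is ours, not code from the paper):

```python
import math

# CODATA atomic unit of time, in seconds.
ATOMIC_TIME = 2.4188843265e-17

def kepler_period(n):
    # Classical orbital (Kepler) period of a hydrogenic electron
    # in a state of principal quantum number n: T_n = 2*pi*n**3
    # atomic units, converted here to seconds.
    return 2 * math.pi * n**3 * ATOMIC_TIME

t_orbit = kepler_period(300)      # a few nanoseconds per orbit
orbits_per_us = 1e-6 / t_orbit    # orbits completed in 1 microsecond
```

For n = 300 this gives an orbital period of roughly 4 ns, so a ∼1 μs coherence time indeed corresponds to hundreds of completed orbits.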