Program slicing provides explanations that illustrate how program outputs were produced from inputs. We build on an approach introduced in prior work by Perera et al. [2012], where dynamic slicing was defined for pure higher-order functional programs as a Galois connection between lattices of partial inputs and partial outputs. We extend this approach to imperative functional programs that combine higher-order programming with references and exceptions. We present proofs of correctness and optimality of our approach, along with a proof-of-concept implementation and an experimental evaluation.
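To make the Galois-connection view of slicing concrete, here is a minimal Haskell sketch; the names and encoding are illustrative assumptions, not the formalism of Perera et al. [2012] or of this paper.

```haskell
-- A minimal sketch, assuming partial inputs i and partial outputs o each carry
-- a lattice ordering; the field names (fwd, bwd) are illustrative only.
data GaloisConnection i o = GaloisConnection
  { fwd :: i -> o  -- forward slicing: the part of the output computable from a partial input
  , bwd :: o -> i  -- backward slicing: the least partial input needed for a partial output
  }

-- The pair forms a Galois connection when, writing <=i and <=o for the
-- orderings on partial inputs and partial outputs:
--   bwd gc o <=i x   iff   o <=o fwd gc x
```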
Haskell, as implemented by the Glasgow Haskell Compiler (GHC), allows expressive type-level programming. The most popular type-level programming extension is TypeFamilies, which allows users to write functions on types. Yet, using type functions can cripple type inference in certain situations. In particular, lack of injectivity in type functions means that GHC can never infer an instantiation of a type variable appearing only under type functions. In this paper, we describe a small modification to GHC that allows type functions to be annotated as injective. GHC naturally must check the validity of the injectivity annotations. The algorithm to do so is surprisingly subtle. We prove soundness for a simplification of our algorithm, and state and prove a completeness property, though the algorithm is not fully complete. As much of our reasoning surrounds functions defined by a simple pattern-matching structure, we believe our results extend beyond just Haskell. We have implemented our solution on a branch of GHC and plan to make it available to regular users with the next stable release of the compiler.
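As a concrete illustration of the feature described above, the sketch below uses the injectivity annotation syntax that GHC exposes via the TypeFamilyDependencies extension; the family itself is a made-up example.

```haskell
{-# LANGUAGE TypeFamilies           #-}
{-# LANGUAGE TypeFamilyDependencies #-}

-- 'Swap' is a made-up closed type family. The annotation "= r | r -> a"
-- declares that the result r determines the argument a, i.e. the family is
-- injective, so GHC may infer 'a' from an occurrence of 'Swap a' alone.
type family Swap a = r | r -> a where
  Swap Int  = Bool
  Swap Bool = Int
```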
Scientific progress increasingly depends on data management, particularly to clean and curate data so that it can be systematically analyzed and reused. A wealth of techniques for managing and curating data (and its provenance) have been proposed, largely in the database community. In particular, a number of influential papers have proposed collecting provenance information explaining where a piece of data was copied from, or what other records were used to derive it. Most of these techniques, however, exist only as research prototypes and are not available in mainstream database systems. This means scientists must either implement such techniques themselves or (all too often) go without. This is essentially a code reuse problem: provenance techniques currently cannot be implemented reusably, only as ad hoc, usually unmaintained extensions to standard databases. An alternative, relatively unexplored approach is to support such techniques at a higher abstraction level, using metaprogramming or reflection techniques. Can advanced programming techniques make it easier to transfer provenance research results into practice? We build on a recent approach called language-integrated provenance, which extends language-integrated query techniques with source-to-source query translations that record provenance. In previous work, a proof of concept was developed in a research programming language called Links, which supports sophisticated Web and database programming. In this paper, we show how to adapt this approach to work in Haskell, building on top of the Database-Supported Haskell (DSH) library. Even though it seemed clear in principle that Haskell's rich programming features ought to be sufficient, implementing language-integrated provenance in Haskell required overcoming a number of technical challenges due to interactions between these capabilities. Our implementation serves as a proof of concept showing how this combination of metaprogramming features can, for the first time, make data provenance facilities available to programmers as a library in a widely used, general-purpose language. In our work we were successful in implementing the forms of provenance known as where-provenance and lineage. We have tested our implementation using a simple database and query set and established that the resulting queries are executed correctly on the database. Our implementation is publicly available on GitHub. Our work makes provenance tracking available to users of DSH at little cost. Although Haskell is not widely used for scientific database development, our work suggests which language features are necessary to support provenance as a library. We also highlight how combining Haskell's advanced type-level programming features can lead to unexpected complications, which may motivate further research into type system expressiveness.
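The sketch below shows the general shape of where-provenance as a plain Haskell data type; it is a simplified stand-in for illustration, not the representation or API used by DSH or by our implementation.

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Data.Text (Text)

-- A simplified stand-in for where-provenance: a value paired with the table,
-- column, and row key it was copied from. All names here are illustrative.
data WhereProv a = WhereProv
  { provValue  :: a
  , provTable  :: Text
  , provColumn :: Text
  , provKey    :: Int
  } deriving Show

-- Example: a string value together with where it came from in the database.
exampleCity :: WhereProv Text
exampleCity = WhereProv "Edinburgh" "agencies" "based_in" 2
```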
This paper introduces a new method for the adaptive synthesis of a wavelet transform using a fast neural network with a topology based on the lattice structure. The lattice structure and the orthogonal lattice structure are presented and their properties are discussed. A novel method for unsupervised training of the neural network is introduced. The proposed approach is tested by synthesizing new wavelets with an expected energy distribution between low- and high-pass filters. The energy compaction of the proposed method and of Daubechies wavelets is compared. Tests are performed using sound and image signals.
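To indicate what a single stage of the orthogonal lattice structure computes, here is a small Haskell sketch assuming the usual plane-rotation formulation; the function name and parameterisation are illustrative, not taken from the paper.

```haskell
-- A minimal sketch, assuming each orthogonal lattice stage acts on a pair of
-- samples as a plane (Givens) rotation whose angle is the trainable weight.
latticeStage :: Double -> (Double, Double) -> (Double, Double)
latticeStage theta (x1, x2) =
  ( c * x1 + s * x2
  , negate s * x1 + c * x2 )
  where
    c = cos theta
    s = sin theta
```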
Improving energy compaction of a wavelet transform using genetic algorithm and fast neural network
In this paper a new method for the adaptive synthesis of a smooth orthogonal wavelet, using a fast neural network and a genetic algorithm, is introduced. The orthogonal lattice structure is presented. A new method of supervised training of the fast neural network is introduced to synthesize a wavelet with a desired energy distribution between the output signals of the low-pass and high-pass filters on subsequent levels of a Discrete Wavelet Transform. A genetic algorithm is proposed as a global optimization method for the defined objective function, while the neural network is used as a local optimization method to further improve the result. The proposed approach is tested by synthesizing wavelets with an expected energy distribution between low- and high-pass filters. The energy compaction of the proposed method and of Daubechies wavelets is compared. Tests are performed using image signals.
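The sketch below spells out one plausible reading of the energy-distribution objective: the fraction of total signal energy captured by the low-pass band at one decomposition level. The actual objective function optimized by the genetic algorithm may differ; this is an assumption for illustration.

```haskell
-- Assumed illustrative objective: fraction of total energy in the low-pass
-- band after one DWT level (values near 1 indicate strong energy compaction).
lowPassEnergyFraction :: [Double] -> [Double] -> Double
lowPassEnergyFraction lowPass highPass = eLow / (eLow + eHigh)
  where
    energy = sum . map (^ (2 :: Int))
    eLow   = energy lowPass
    eHigh  = energy highPass
```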