Standard ML is a general-purpose programming language designed for large projects. This book provides a formal definition of Standard ML for the benefit of all concerned with the language, including users and implementers. Because computer programs are increasingly required to withstand rigorous analysis, it is all the more important that the language in which they are written be defined with full rigor. One purpose of a language definition is to establish a theory of meanings upon which the understanding of particular programs may rest. To properly define a programming language, it is necessary to use some form of notation other than a programming language. Given a concern for rigor, mathematical notation is an obvious choice. The authors have defined their semantic objects in mathematical notation that is completely independent of Standard ML. In defining a language one must also define the rules of evaluation precisely—that is, define what meaning results from evaluating any phrase of the language. The definition thus constitutes a formal specification for an implementation. The authors have developed enough of their theory to give sense to their rules of evaluation. The Definition of Standard ML is the essential point of reference for Standard ML. Since its publication in 1990, the implementation technology of the language has advanced enormously and the number of users has grown. The revised edition includes a number of new features, omits little-used features, and corrects mistakes of definition.
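For a concrete sense of what the rules of evaluation govern, consider a small Standard ML fragment (our own illustration, not an example drawn from the Definition): the static semantics assigns each phrase a type, and the dynamic semantics assigns it a meaning when evaluated.

    (* Illustrative only: the names length and three are ours, not the book's.
       The static semantics assigns length the type 'a list -> int; the
       dynamic semantics (the rules of evaluation) determines that the
       phrase bound to three evaluates to the value 3. *)
    fun length nil       = 0
      | length (_ :: xs) = 1 + length xs

    val three = length [10, 20, 30]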
The Edinburgh Logical Framework (LF) provides a means to define (or present) logics. It is based on a general treatment of syntax, rules, and proofs by means of a typed λ-calculus with dependent types. Syntax is treated in a style similar to, but more general than, Martin-Löf's system of arities. The treatment of rules and proofs focuses on his notion of a judgement. Logics are represented in LF via a new principle, the judgements as types principle, whereby each judgement is identified with the type of its proofs. This allows for a smooth treatment of discharge and variable occurrence conditions and leads to a uniform treatment of rules and proofs whereby rules are viewed as proofs of higher-order judgements and proof checking is reduced to type checking. The practical benefit of our treatment of formal systems is that logic-independent tools such as proof editors and proof checkers can be constructed.
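A minimal sketch of the judgements-as-types principle, in our own notation rather than an example taken from the paper: for minimal implicational logic, the judgement "A true" is identified with the type of its proofs, and the implication-introduction rule, which discharges the hypothesis A true, becomes a constant whose argument type uses the framework's function space:

\[
\begin{array}{ll}
o : \mathsf{Type} & \text{(propositions)}\\
{\supset} : o \to o \to o & \text{(implication)}\\
\mathsf{true} : o \to \mathsf{Type} & \text{(the judgement ``$A$ true'' as the type of its proofs)}\\
{\supset}\mathrm{I} : \Pi A{:}o.\ \Pi B{:}o.\ (\mathsf{true}\,A \to \mathsf{true}\,B) \to \mathsf{true}\,(A \supset B)
\end{array}
\]

A derivation of A ⊃ B true is then an object of type true (A ⊃ B), and checking that a purported proof is valid amounts to type checking in LF.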
Traditional techniques for implementing polymorphism use a universal representation for objects of unknown type. Often, this forces a compiler to use universal representations even if the types of objects are known. We examine an alternative approach for compiling polymorphism where types are passed as arguments to polymorphic routines in order to determine the representation of an object. This approach allows monomorphic code to use natural, efficient representations, supports separate compilation of polymorphic definitions, and, unlike coercion-based implementations of polymorphism, allows natural representations to be used for mutable objects such as refs and arrays. We are particularly interested in the typing properties of an intermediate language that allows run-time type analysis to be coded within the language. This allows us to compile many representation transformations and many language features without adding new primitive operations to the language. In this paper, we provide a core target language where type-analysis operators can be coded within the language and the types of such operators can be accurately tracked. The target language is powerful enough to code a variety of useful features, yet type checking remains decidable. We show how to translate an ML-like language into the target language so that primitive operators can analyze types to produce efficient representations. We demonstrate the power of the "user-level" operators by coding flattened tuples, marshalling, type classes, and a form of type dynamic within the language.
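To make the type-passing idea concrete, here is a small Standard ML sketch of run-time type analysis. It is ours and only an approximation: SML has no typecase, so the type argument is an ordinary datatype of type representations and the static connection between the representation and the value is lost, whereas the paper's target language tracks that connection in the types.

    (* A value-level representation of types is passed to a routine that
       analyzes it at run time to choose representation-specific behaviour,
       here flattening nested pairs of integers into a single list.  The
       Fail case would be ruled out statically in a typed target language. *)
    datatype ty    = INT | PAIR of ty * ty
    datatype value = VI of int | VP of value * value

    fun flatten INT           (VI n)     = [n]
      | flatten (PAIR (a, b)) (VP (x, y)) = flatten a x @ flatten b y
      | flatten _             _          = raise Fail "representation mismatch"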
Decidability of definitional equality and conversion of terms into canonical form play a central role in the meta-theory of a type-theoretic logical framework. Most studies of definitional equality are based on a confluent, strongly-normalizing notion of reduction. Coquand has considered a different approach, directly proving the correctness of a practical equivalence algorithm based on the shape of terms. Neither approach appears to scale well to richer languages with unit types or subtyping, and neither directly addresses the problem of conversion to canonical form. In this paper we present a new, type-directed equivalence algorithm for the LF type theory that overcomes the weaknesses of previous approaches. The algorithm is practical, scales to more expressive languages, and yields a new notion of canonical form sufficient for adequate encodings of logical systems. The algorithm is proved complete by a Kripke-style logical relations argument similar to that suggested by Coquand. Crucially, both the algorithm itself and the logical relations rely only on the shapes of types, ignoring dependencies on terms.

At present the mechanization of constructive reasoning relies almost entirely on type theories of various forms. The principal reason is that the computational meaning of constructive proofs is an integral part of the type theory itself. The main computational mechanism in such type theories is reduction, which has therefore been studied extensively. For logical frameworks the case for type-theoretic meta-languages is also compelling, since they allow us to internalize deductions as objects. The validity of a deduction is then verified by type-checking in the meta-language. To ensure that proof checking remains decidable under this representation, the type checking problem for the meta-language must also be decidable. To support deductive systems of practical interest, the type theory must support dependent types, that is, types that depend on objects. The correctness of the representation of a logic in type theory is given by an adequacy theorem that correlates the syntax and deductions of the logic with canonical forms of suitable type. To establish a precise correspondence, canonical forms are taken to be β-normal, η-long forms. In particular, it is important that canonical forms enjoy the property that constants and variables of higher type are "fully applied", that is, each occurrence is applied to enough arguments to reach a base type. Thus we see that the methodology of logical frameworks relies on two fundamental metatheoretic results: the decidability of type checking, and the existence of canonical forms. For many type theories the decidability of type checking is easily seen to reduce to the decidability of definitional equality of types and terms. Therefore we focus attention on the decision problem for definitional equality and on the conversion of terms to canonical form. Traditionally, both problems have been treated by considering normal forms for β, and possibly η, reduction. If we ta...
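A small illustration of the "fully applied" requirement (our own example, not one taken from the paper): suppose a base type ι, a variable f : ι → ι, and a constant c : (ι → ι) → ι. The application c f is β-normal but not canonical, because f occurs at higher type without being applied; its η-long, β-normal (canonical) form is

\[
c\,f \;\longrightarrow_{\eta\text{-expansion}}\; c\,(\lambda x{:}\iota.\ f\,x),
\]

in which every occurrence of a constant or variable of higher type is applied to enough arguments to reach the base type ι.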