Factors that govern the temporal integration of spatial information were examined in a group of five experiments. A series of high-pass and low-pass spatially filtered versions of a visual scene was generated. Observers' ratings of these filtered versions of the scene for perceived image quality indicated that quality was determined both by the bandwidth of spatial information and by the presence of high-spatial-frequency edge information. When sequences of three different versions of the scene were presented over an interval of 120 ms, the perceived quality of the resulting composite image was determined both by the ratings of the individual components of that sequence and by the order in which these components were presented. When the order of spatial information in a sequence moved from coarse to fine detail, the perceived quality of the composite image was significantly better than when the order moved from fine to coarse. This evidence of a coarse-to-fine bias in pattern integration was further investigated with a detection paradigm. The pattern of errors once again indicated that temporal integration of spatial information was superior when a coarse-to-fine mode of information delivery was employed. Taken together, the data indicate that the pattern-integration mechanism has an inherent order bias and does not accumulate spatial information as efficiently when the 'natural' coarse-to-fine order is violated.
Literature is a form of expression whose temporal structure, both in content and style, provides a historical record of the evolution of culture. In this work we take on a quantitative analysis of literary style and conduct the first large-scale temporal stylometric study of literature by using the vast holdings in the Project Gutenberg Digital Library corpus. We find temporal stylistic localization among authors through the analysis of the similarity structure in feature vectors derived from content-free word usage, nonhomogeneous decay rates of stylistic influence, and an accelerating rate of decay of influence among modern authors. Within a given time period we also find evidence for stylistic coherence with a given literary topic, such that writers in different fields adopt different literary styles. This study gives quantitative support to the notion of a literary "style of a time" with a strong trend toward increasingly contemporaneous stylistic influence.

cultural evolution | stylometry | culture | complexity | big data

Written works, or literature, provide one of the great bodies of cultural artifacts. The analysis of literature typically involves the aggregation of information on several levels, ranging from words to sentences and even larger-scale properties of temporal narratives such as structure, plot, and the use of irony and metaphor (1-3). Quantitative methods have long been applied to literature, most notably in the analysis of style, which can be traced back to a comment by the mathematician Augustus de Morgan regarding the attribution of the Pauline epistles (4) and the late nineteenth-century work of the historian of philosophy Wincenty Lutosławski, who brought basic statistical ideas of word usage to the problem of dating the dialogues of Plato (5). It was Lutosławski who coined the word "stylometry" to describe such an approach to investigating questions of literary style.
Since then, a wide range of statistical techniques have been developed toward this end (6), generally with the goal of settling questions of author attribution (see, e.g., refs. 6-11). Stylometric studies have also been pursued in the study of visual art (12, 13) and music [both in composition (14-16) and performance (17)], and are part of a growing body of work in the quantitative analysis of cultural artifacts (18). In this paper we report our findings from the first large-scale stylometric analysis of literature. The goal of this work is not author attribution (the authorship of all the works is well known) but instead to articulate, in a quantitative fashion, large-scale temporal trends in literary (i.e., writing) style. This type of study has been, until now, impossible to undertake, but the advent of mass digitization has created dramatic new opportunities for scholarly studies in literature as well as in other disciplines (19). Our literature sample is obtained from the Project Gutenberg Digital Library (http://www.gutenberg.org/wiki/Gutenberg:About). Project Gutenberg consists of more than 30,000 public domai...
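The feature vectors described above are derived from "content-free" (function) word usage. As a minimal illustrative sketch of how such vectors make stylistic similarity computable (the toy word list and plain cosine similarity here are assumptions for illustration, not the study's actual feature set or similarity measure):

```python
import math
from collections import Counter

# A tiny illustrative set of function ("content-free") words;
# a real stylometric study uses a much larger standardized list.
FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is", "it", "for"]

def style_vector(text):
    """Relative frequencies of the function words in a text."""
    words = text.lower().split()
    counts = Counter(words)
    total = len(words) or 1
    return [counts[w] / total for w in FUNCTION_WORDS]

def cosine_similarity(u, v):
    """Cosine of the angle between two style vectors (1.0 = identical direction)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    if nu == 0 or nv == 0:
        return 0.0
    return dot / (nu * nv)
```

Pairwise similarities of such vectors, computed across many authors and dates, are what allow the temporal similarity structure described above to be analyzed at scale.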
Recently, statistical techniques have been used to assist art historians in the analysis of works of art. We present a novel technique for the quantification of artistic style that utilizes a sparse coding model. Originally developed in vision research, sparse coding models can be trained to represent any image space by maximizing the kurtosis of a representation of an arbitrarily selected image from that space. We apply such an analysis to successfully distinguish a set of authentic drawings by Pieter Bruegel the Elder from another set of well-known Bruegel imitations. We show that our approach, which involves a direct comparison based on a single relevant statistic, offers a natural and potentially more germane alternative to wavelet-based classification techniques that rely on more complicated statistical frameworks. Specifically, we show that our model provides a method capable of discriminating between authentic and imitation Bruegel drawings that numerically outperforms well-known existing approaches. Finally, we discuss the applications and constraints of our technique.

The statistical approaches that can be applied to the analysis of artistic style are varied, as are the potential applications of these approaches. Wavelet-based techniques are often used [e.g., (2)], as are fractals (3) and multiresolution hidden Markov methods (11). In this paper, we instead bring the adaptive technique of sparse coding to bear on the problem. Although sparse coding was originally developed for vision research (12), we show that its central principle (finding a set of basis functions that is well adapted to the representation of a given class of images) is useful for accomplishing an image classification task important in the analysis of art. In particular, we show that a sparse coding model is appropriate for distinguishing the styles of different artists.
This kind of discriminatory ability could be used to provide statistical evidence for, or against, a particular attribution, a task usually known as "authentication." In this paper, we consider the application of sparse coding to a particular authentication task, looking at a problem that has already been attacked by statistical techniques (2, 10): distinguishing a set of secure drawings by the great Flemish artist Pieter Bruegel the Elder (1525-1569) from a set of imitation Bruegels, each of whose attribution is generally accepted among art historians. The drawings in the group of imitations were long thought to be by Bruegel (13), so their comparison to secure Bruegels is especially interesting. The sparse coding model attempts to create the sparsest possible representation of a given image (or set of images). Thus, a useful statistic for the attribution task is the kurtosis of each drawing's representation, which can be compared across the authentic and imitation Bruegels to determine their similarity to a control set of authentic Bruegel drawings. Fig. 1 shows the steps involved in our analysis. We find that a sparse coding approach successfully distinguishes the secure...
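The kurtosis statistic at the heart of this comparison measures how heavy-tailed a distribution of representation coefficients is: a sparse code (mostly near-zero coefficients with a few large ones) has high kurtosis, while a dense code does not. A minimal sketch of the sample kurtosis itself, leaving aside the sparse-coding basis learning that produces the coefficients:

```python
def kurtosis(xs):
    """Sample kurtosis (fourth standardized moment).

    For a Gaussian this is about 3; sparse, heavy-tailed
    coefficient distributions score much higher.
    """
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    return m4 / (var ** 2)
```

A "sparse" coefficient vector such as 98 zeros plus two large values scores far higher than an evenly spread one, which is why kurtosis serves as a single summary statistic for how well a learned basis fits a drawing.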
Many real-world networks tend to be very dense. Particular examples of interest arise in the construction of networks that represent pairwise similarities between objects. In these cases, the networks under consideration are weighted, generally with positive weights between any two nodes. Visualization and analysis of such networks, especially when the number of nodes is large, can pose significant challenges, which are often met by reducing the edge set. Any effective “sparsification” must retain and reflect the important structure in the network. A common method is to simply apply a hard threshold, keeping only those edges whose weight exceeds some predetermined value. A more principled approach is to extract the multiscale “backbone” of a network by retaining statistically significant edges through hypothesis testing on a specific null model, or by appropriately transforming the original weight matrix before applying some sort of threshold. Unfortunately, approaches such as these can fail to capture multiscale structure in which there can be small but locally statistically significant similarity between nodes. In this paper, we introduce a new method for backbone extraction that does not rely on any particular null model, but instead uses the empirical distribution of similarity weight to determine and then retain statistically significant edges. We show that our method adapts to the heterogeneity of local edge weight distributions in several paradigmatic real-world networks, and in doing so retains their multiscale structure with relatively insignificant additional computational costs. We anticipate that this simple approach will be of great use in the analysis of massive, highly connected weighted networks.
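As a rough sketch of the idea (an illustrative local-quantile rule, not the authors' exact significance criterion), each edge can be tested against the empirical weight distribution local to its endpoints, so that a weight that is globally small but locally large is still retained:

```python
def extract_backbone(weights, quantile=0.9):
    """Keep an edge if its weight strictly exceeds the empirical
    `quantile` of the edge-weight distribution of either endpoint.

    weights: dict of node -> dict of neighbor -> positive weight
             (assumed symmetric).
    Returns a set of frozenset({u, v}) edges.
    """
    def local_threshold(node):
        # Lower empirical quantile of this node's own edge weights.
        w = sorted(weights[node].values())
        return w[int(quantile * (len(w) - 1))]

    thresholds = {n: local_threshold(n) for n in weights}
    backbone = set()
    for u in weights:
        for v, w in weights[u].items():
            # An edge survives if it is locally significant for
            # at least one of its two endpoints.
            if w > thresholds[u] or w > thresholds[v]:
                backbone.add(frozenset((u, v)))
    return backbone
```

Because thresholds are computed per node rather than globally, an edge of weight 0.5 can survive in a weakly connected neighborhood while an edge of weight 1.0 is pruned in a strongly connected one, which is the multiscale behavior the abstract describes.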