This paper describes using wearable computing devices to perform "sousveillance" (inverse surveillance) as a counter to organizational surveillance. A variety of wearable computing devices generated different kinds of responses, and allowed for the collection of data in different situations. Visible sousveillance often evoked counter-performances by front-line surveillance workers. The juxtaposition of sousveillance with surveillance generates new kinds of information in a social surveillance situation.
We consider a multidimensional parameter space formed by inner products of a parameterizable family of chirp functions with a signal under analysis. We propose the use of quadratic chirp functions (which we will call q-chirps for short), giving rise to a parameter space that includes both the time-frequency plane and the time-scale plane as 2-D subspaces. The parameter space contains a "time-frequency-scale volume" and thus encompasses both the short-time Fourier transform (as a slice along the time and frequency axes) and the wavelet transform (as a slice along the time and scale axes). In addition to time, frequency, and scale, there are two other coordinate axes within this transform space: shear in time (obtained through convolution with a q-chirp) and shear in frequency (obtained through multiplication by a q-chirp). Signals in this multidimensional space can be obtained by a new transform, which we call the "q-chirplet transform" or simply the "chirplet transform." The proposed chirplets are generalizations of wavelets related to each other by 2-D affine coordinate transformations (translations, dilations, rotations, and shears) in the time-frequency plane, as opposed to wavelets, which are related to each other by 1-D affine coordinate transformations (translations and dilations) in the time domain only.
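The chirplet coefficients described above are inner products of the signal with windowed quadratic-phase chirps. The following sketch computes a single such coefficient for a Gaussian-windowed q-chirp; the parameter names (`t0`, `fc`, `c`, `sigma`) and the Gaussian window are illustrative assumptions, not the paper's notation, and this shows one atom's inner product rather than the full transform over the parameter space.

```python
import numpy as np

def q_chirplet(t, t0, fc, c, sigma):
    """Gaussian-windowed quadratic-phase chirp (q-chirp).
    t0: time center, fc: center frequency at t0,
    c: chirp rate (frequency shear), sigma: window width.
    These parameter names are illustrative, not canonical."""
    window = np.exp(-0.5 * ((t - t0) / sigma) ** 2)
    phase = 2 * np.pi * (fc * (t - t0) + 0.5 * c * (t - t0) ** 2)
    return window * np.exp(1j * phase)

def chirplet_coefficient(signal, t, **params):
    """Inner product of the signal with one unit-energy chirplet atom."""
    atom = q_chirplet(t, **params)
    atom /= np.linalg.norm(atom)
    return np.vdot(atom, signal)  # vdot conjugates the atom

# A linearly chirping test signal (instantaneous frequency 50 + 40 t)
# responds most strongly to an atom with the matching chirp rate.
t = np.linspace(0.0, 1.0, 1000)
sig = np.exp(1j * 2 * np.pi * (50 * t + 0.5 * 40 * t ** 2))
matched = abs(chirplet_coefficient(sig, t, t0=0.5, fc=70.0, c=40.0, sigma=0.1))
unmatched = abs(chirplet_coefficient(sig, t, t0=0.5, fc=70.0, c=-40.0, sigma=0.1))
```

Sweeping such atoms over time, frequency, scale, and the two shear parameters would populate the multidimensional space the abstract describes.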
Abstract: It is argued that, hidden within the flow of signals from typical cameras, through image processing, to display media, is a homomorphic filter. While homomorphic filtering is often desirable, there are some occasions where it is not. Thus, cancellation of this implicit homomorphic filter is proposed, through the introduction of an antihomomorphic filter. This concept gives rise to the principle of quantigraphic image processing, wherein it is argued that most cameras can be modeled as an array of idealized light meters, each linearly responsive to a semi-monotonic function of the quantity of light received, integrated over a fixed spectral response profile. This quantity is neither radiometric nor photometric but, rather, depends only on the spectral response of the sensor elements in the camera. A particular class of functional equations, called comparametric equations, is introduced as a basis for quantigraphic image processing. Comparametric equations are fundamental to the analysis and processing of multiple images differing only in exposure. The well-known "gamma correction" of an image is presented as a simple example of a comparametric equation, for which it is shown that the underlying quantigraphic function does not pass through the origin. For this reason it is argued that exposure adjustment by gamma correction is inherently flawed, and alternatives are provided. These alternatives, when applied to a plurality of images that differ only in exposure, give rise to a new kind of processing in the "amplitude domain" (as opposed to the time domain or the frequency domain). While the theoretical framework presented in this paper originated within the field of wearable cybernetics (wearable photographic apparatus) in the 1970s and early 1980s, it is applicable to the processing of images from nearly all types of modern cameras, wearable or otherwise.
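A comparametric equation relates two images of the same scene that differ only in exposure: g(f(q)) = f(kq), where f is the camera's response to the photoquantity q and k is the exposure ratio. The sketch below numerically checks the gamma-correction case under one assumed response, f(q) = exp(q^a), for which g(f) = f^gamma with gamma = k^a; this specific f is a hypothetical choice for illustration, not taken from the paper, but it exhibits the abstract's point that the underlying function does not pass through the origin (f(0) = 1).

```python
import numpy as np

# Assumed (hypothetical) camera response: f(q) = exp(q**a).
def f(q, a=0.5):
    return np.exp(q ** a)

a = 0.5
k = 2.0          # exposure ratio between the two images
gamma = k ** a   # gamma that maps the darker image onto the brighter one

q = np.linspace(0.01, 10.0, 500)   # photoquantity samples
lhs = f(q, a) ** gamma             # gamma-correct the darker image: g(f(q))
rhs = f(k * q, a)                  # the brighter image directly: f(k*q)

# The comparametric equation g(f(q)) = f(k*q) holds for this response,
# yet f(0) = exp(0) = 1, so the response does not pass through the origin.
```

This is why the abstract argues gamma correction is flawed as an exposure adjustment: the response function it implies cannot map zero light to zero output.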
This paper is a much revised draft of a 1992 peer-reviewed but unpublished report by the author, entitled "Lightspace and the Wyckoff principle."
Wearable computing moves computation from the desktop to the user. We are forming a community of networked, wearable-computer users to explore, over a long period, the augmented realities that these systems can provide. By adapting its behavior to the user's changing environment, a body-worn computer can assist the user more intelligently, consistently, and continuously than a desktop system. A text-based augmented reality, the Remembrance Agent, is presented to illustrate this approach. Video cameras are used both to warp the visual input (mediated reality) and to sense the user's world for graphical overlay. With a camera, the computer can track the user's finger to act as the system's mouse, perform face recognition, and detect passive objects to overlay 2.5D and 3D graphics onto the real world. Additional apparatus such as audio systems, infrared beacons for sensing location, and biosensors for learning about the wearer's affect are described. With input from these interface devices and sensors, a long-term goal of this project is to model the user's actions, anticipate his or her needs, and provide seamless interaction between the virtual and physical environments.