Recently, several camera designs have been proposed for either making defocus blur invariant to scene depth or making motion blur invariant to object motion. The benefit of such invariant capture is that no depth or motion estimation is required to remove the resulting spatially uniform blur. So far, these techniques have been studied separately for defocus and motion blur, and object motion has been assumed to be 1D (e.g., horizontal). This paper explores a more general capture method that makes both defocus blur and motion blur nearly invariant to scene depth and in-plane 2D object motion. We formulate the problem as capturing a time-varying light field through a time-varying light field modulator at the lens aperture, and perform a 5D (4D light field + 1D time) analysis of all existing computational cameras for defocus-only and motion-only deblurring as well as their hybrids. This leads to the surprising conclusion that focus sweep, previously known as a depth-invariant capture method that moves the plane of focus through a range of scene depths during exposure, is near-optimal both in terms of depth and 2D motion invariance and in terms of high-frequency preservation for certain combinations of depth and motion ranges. Using our prototype camera, we demonstrate joint defocus and motion deblurring for moving scenes with depth variation.
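To make the focus-sweep intuition concrete, here is a minimal numerical sketch (not the paper's implementation): it models defocus as a pillbox PSF whose radius grows linearly with focus error, integrates that PSF while the focal plane sweeps during the exposure, and shows that points at different depths end up with nearly identical integrated blur kernels, so a single deconvolution kernel can deblur the whole depth range. The sweep range and blur rate are illustrative assumptions.

```python
import numpy as np

def disk_psf(radius, size=65):
    """Normalized pillbox (disk) PSF of the given radius."""
    y, x = np.mgrid[-(size // 2):size // 2 + 1, -(size // 2):size // 2 + 1]
    mask = np.hypot(x, y) <= max(radius, 0.5)  # clamp: an in-focus point is ~a delta
    return mask / mask.sum()

def focus_sweep_psf(depth, sweep=(0.0, 10.0), steps=200, blur_per_unit=1.5):
    """Integrate the defocus PSF of a point at `depth` while the plane of
    focus sweeps linearly across `sweep` during the exposure (thin-lens
    approximation: blur radius grows linearly with focus error)."""
    settings = np.linspace(sweep[0], sweep[1], steps)
    return sum(disk_psf(blur_per_unit * abs(s - depth)) for s in settings) / steps

# Points at two different depths inside the swept range receive nearly
# identical integrated PSFs -- the near depth-invariance the paper exploits.
p_near, p_far = focus_sweep_psf(3.0), focus_sweep_psf(6.0)
print("L1 difference between kernels:", np.abs(p_near - p_far).sum())
```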
ShAir is a middleware infrastructure that allows mobile applications to share their devices' resources (e.g., data, storage, connectivity, computation) in a transparent way. The goals of ShAir are: (i) abstracting the creation and maintenance of opportunistic delay-tolerant peer-to-peer networks; (ii) being decoupled from the actual hardware and network platform; (iii) being extensible in terms of supported hardware, protocols, and the types of resources that can be shared; (iv) being capable of self-adapting at run-time; and (v) enabling the development of applications that are easier to design, test, and simulate. In this paper we discuss the design, extensibility, and maintainability of the ShAir middleware, and how to use it as a platform for collaborative resource-sharing applications. Finally, we describe our experience in designing and testing a file-sharing application.
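As a rough illustration of goals (ii) and (iii), the sketch below shows how a middleware core can route peer requests to pluggable resource handlers, so new resource types are added without touching the networking layer. It is written in Python for consistency with the other sketches here (ShAir's actual implementation language and APIs are not stated in the abstract), and all class and method names are hypothetical.

```python
from abc import ABC, abstractmethod

class SharedResource(ABC):
    """Hypothetical plugin interface: each shareable resource type
    (data, storage, connectivity, computation) implements this."""

    @abstractmethod
    def resource_type(self) -> str: ...

    @abstractmethod
    def serve(self, request: dict) -> bytes: ...

class Middleware:
    """Minimal sketch of the registration layer: applications stay
    decoupled from transport and hardware, and the middleware routes
    incoming peer requests to whichever resource plugins are registered."""

    def __init__(self):
        self._plugins: dict[str, SharedResource] = {}

    def register(self, plugin: SharedResource) -> None:
        self._plugins[plugin.resource_type()] = plugin

    def handle_peer_request(self, request: dict) -> bytes:
        plugin = self._plugins.get(request["type"])
        if plugin is None:
            raise KeyError(f"no plugin for resource type {request['type']!r}")
        return plugin.serve(request)

class FileShare(SharedResource):
    """Example plugin in the spirit of the paper's file-sharing application."""
    def resource_type(self) -> str:
        return "file"
    def serve(self, request: dict) -> bytes:
        with open(request["path"], "rb") as f:
            return f.read()

mw = Middleware()
mw.register(FileShare())
```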
In this paper, we propose CoCam, a framework for mobile phones that enables uncoordinated real-time image and video collaboration between users sharing the same context in the same physical location. CoCam addresses the complexity that arises when users wish to collaborate, create, and share media content at the same time. CoCam is based on a middleware that creates a self-organizing ad-hoc network within the context of a shared event in a proximal physical space (a common scene). This middleware automatically handles context detection as well as network configuration and peer discovery. It also enables real-time content sharing while sparing users the burden of complicated settings and configurations. Through operational tests and a user study with the prototype implementation, we verified that CoCam is feasible and has the potential to enrich users' experience when sharing content in an event scenario.
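The abstract does not specify CoCam's discovery protocol, so the following is only a hypothetical sketch of one common way context-scoped peer discovery can work: peers broadcast their presence on the LAN tagged with an event-context identifier, and a device joins only peers announcing the same context. The port number and message fields are assumptions.

```python
import json
import socket
import time

DISCOVERY_PORT = 49152          # hypothetical; any agreed-upon port works
BROADCAST_ADDR = "255.255.255.255"

def announce(context_id: str, device_id: str) -> None:
    """Broadcast this device's presence and shared-event context on the LAN."""
    msg = json.dumps({"ctx": context_id, "dev": device_id, "ts": time.time()})
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(msg.encode(), (BROADCAST_ADDR, DISCOVERY_PORT))

def listen_for_peers(context_id: str, timeout: float = 5.0) -> set[str]:
    """Collect device ids of peers announcing the same context (same event)."""
    peers: set[str] = set()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.bind(("", DISCOVERY_PORT))
        s.settimeout(timeout)
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                data, _ = s.recvfrom(4096)
            except socket.timeout:
                break
            msg = json.loads(data)
            if msg.get("ctx") == context_id:  # ignore peers at other events
                peers.add(msg["dev"])
    return peers
```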
Video content has become available on an increasingly diverse set of devices and from an ever-growing number of sources, creating a vast amount of choice for viewers. At the same time, the methods of viewing, interacting with, and sharing content have diverged. This paper introduces neXtream, a new approach to delivering video by integrating multiple devices, content sources, and social networks. The concept builds on research in social television and converged applications, providing both personalization features and social interaction. NeXtream delivers video by dynamically generating streams of video customized to each viewer, while facilitating a common dialog between users around the content, creating a viewing experience that is both user- and community-centric. NeXtream integrates smartphones, PCs, and TVs to deliver video content to viewers. The paper presents the system concept, theory, and architecture, and describes the developed prototype.
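As a toy illustration of the "dynamically generated, socially informed stream" idea (not neXtream's actual algorithm, which the abstract does not detail), the sketch below blends a viewer's own tag interests with engagement signals from their social network to order a content stream. All names, fields, and weights are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    tags: frozenset[str]

def score(video: Video, interests: dict[str, float],
          friend_votes: dict[str, int], social_weight: float = 0.5) -> float:
    """Blend the viewer's own tag interests (personalization) with how
    often their social network engaged with the video (community)."""
    personal = sum(interests.get(t, 0.0) for t in video.tags)
    social = friend_votes.get(video.title, 0)
    return (1 - social_weight) * personal + social_weight * social

def build_stream(catalog: list[Video], interests: dict[str, float],
                 friend_votes: dict[str, int], n: int = 10) -> list[Video]:
    """Order the catalog into a personalized stream for one viewer."""
    return sorted(catalog, key=lambda v: score(v, interests, friend_votes),
                  reverse=True)[:n]
```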