Grayscale logo watermarking is a well-developed area of digital image watermarking in which a smaller logo image is embedded into a host image. The key advantage of this approach is that the extracted logo can be analyzed visually for rapid authentication and other visual tasks. However, logos pose new challenges for invisible watermarking, which must keep the watermark imperceptible within the host image while maintaining robustness to attacks. This paper presents an algorithm for invisible grayscale logo watermarking that operates via adaptive texturization of the logo. The central idea of our approach is to recast the watermarking task as a texture-similarity task. We first separate the host image into sufficiently textured and poorly textured regions. For textured regions, we transform the logo into a visually similar texture via the Arnold transform and one lossless rotation, whereas for poorly textured regions we use only a lossless rotation. The number of Arnold-transform iterations and the angle of the lossless rotation are determined by a model of visual texture similarity. Finally, for each region, we embed the transformed logo into that region via a standard wavelet-based embedding scheme. Extraction proceeds in multiple steps, beginning with affine parameter estimation to compensate for possible geometric transformations. Testing with multiple logos on a database of host images and under a variety of attacks demonstrates that the proposed algorithm yields better overall performance than competing methods.
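To make the scrambling step concrete, the following is a minimal sketch, not the authors' implementation, of the Arnold (cat-map) transform that the algorithm applies to the logo before embedding; the iteration count would be supplied by the texture-similarity model described above, and the lossless rotation and wavelet embedding stages are omitted.

```python
import numpy as np

def arnold_transform(img: np.ndarray, iterations: int) -> np.ndarray:
    """Scramble a square grayscale image with the Arnold cat map.

    Each iteration maps pixel (x, y) to ((x + y) mod n, (x + 2y) mod n),
    which progressively texturizes the logo while remaining invertible.
    """
    n = img.shape[0]
    assert img.shape[0] == img.shape[1], "Arnold transform requires a square image"
    out = img.copy()
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out
```

Because the map is a bijection on the pixel grid, the logo can be recovered at extraction time by iterating until the map returns to the identity (or by applying the inverse map the same number of times).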
We have designed and prototyped a small UAS (Unmanned Aerial System) detection, tracking, classification, and identification system for UTM (UAS Traffic Management) compliance verification and for counter-UAS security against non-compliant UAS. The system design, known as Drone Net, which continues to be developed and improved, is a network of sensors covering a square-kilometer area that can be used instead of, or in addition to, RADAR (Radio Detection and Ranging). Previous system tests have shown feasibility for lower-cost UTM and counter-UAS operation, as well as enhanced classification and identification capabilities compared to RADAR alone. The partially demonstrated and hypothesized advantages rest not only on track data but also on target shape, texture, and spectral data from EO/IR (Electro-Optical Infrared) sensing that can be exploited with MV/ML (Machine Vision and Machine Learning). For EO/IR to provide effective data to MV/ML, a narrow-field camera system must track small UAS to provide effective imagery of targets with cross-sections of less than 1 meter, along with the track. To address this challenge, we use an All-sky camera system with a hemispherical wide field of view and six cameras of 2 million pixels each (12 million total) for coarse detection of a potential target with azimuth and elevation estimation. The estimated azimuth and elevation in turn cue the slew of the narrow-field EO/IR instrument for re-detection of the same target and tracking with much higher optical zoom at similar or better resolution. This yields a target pixel neighborhood of at least nine pixels within an operating range comparable to RADAR purpose-built for small UAS detection at similar kilometer ranges. Further, the paper provides an initial evaluation of the potential to reduce false-alarm cues generated by the All-sky camera using supplementary acoustic cues, along with future-work concepts for visual and acoustic data fusion. In this paper, we present experimental results establishing the feasibility of using the All-sky camera system to narrow the EO/IR re-detection search space, with and without acoustic data fusion, for slew-to-cue message generation. The All-sky camera thus enables narrowed-search-space re-detection and tracking with a high-optical-gain EO/IR instrument, used as an alternative or complement to RADAR depending on site needs, costs, and constraints.
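As an illustration of the slew-to-cue idea described above, the sketch below shows a hypothetical cue-generation routine; the names, message fields, thresholds, and fusion rule are all assumptions, not the Drone Net implementation. An All-sky detection with estimated azimuth and elevation is optionally combined with an acoustic bearing to tighten or widen the re-detection search window handed to the narrow-field EO/IR gimbal.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Detection:
    azimuth_deg: float     # coarse bearing of the candidate target from the All-sky camera
    elevation_deg: float
    confidence: float      # detector score in [0, 1]

@dataclass
class SlewCue:
    azimuth_deg: float
    elevation_deg: float
    search_halfwidth_deg: float  # re-detection search window for the narrow-field EO/IR

def make_slew_cue(allsky: Detection,
                  acoustic_azimuth_deg: Optional[float] = None,
                  base_halfwidth_deg: float = 5.0) -> Optional[SlewCue]:
    """Generate a slew-to-cue message from an All-sky detection, optionally
    refined by an acoustic bearing (simple, assumed fusion rule)."""
    if allsky.confidence < 0.2:
        return None  # suppress low-confidence cues (false-alarm reduction)
    az = allsky.azimuth_deg
    halfwidth = base_halfwidth_deg / max(allsky.confidence, 0.2)
    if acoustic_azimuth_deg is not None:
        # Wrapped angular difference between acoustic and visual bearings.
        delta = (acoustic_azimuth_deg - az + 180.0) % 360.0 - 180.0
        az = (az + 0.5 * delta) % 360.0
        # An agreeing acoustic bearing narrows the search window; disagreement widens it.
        halfwidth = max(1.0, halfwidth * (0.5 if abs(delta) < 5.0 else 1.5))
    return SlewCue(az % 360.0, allsky.elevation_deg, halfwidth)
```

Under this assumed rule, a corroborating acoustic bearing shrinks the EO/IR re-detection search space, while a conflicting bearing flags a likely false alarm and widens it instead.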
Glare due to sunlight, moonlight, or other light sources can be a serious impediment during autonomous or manual driving. Automatically detecting the presence, location, and severity of such glare can be critically important for an autonomous driving system, which may then give greater priority to other sensors or to unaffected parts of the scene. We present an algorithm for automatic real-time glare detection that combines: (1) the intensity, saturation, and local contrast of the input frame; (2) shape detection; and (3) the solar azimuth and elevation computed from GPS position and heading information (used under daylight conditions). These data are used to generate a glare occurrence map that indicates the center location(s) and extent(s) of the glare region(s). Testing on a variety of daytime and nighttime scenes demonstrates that the proposed system detects glare effectively and is capable of real-time operation.
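As a rough illustration of cue (1) in this combination, the sketch below thresholds intensity, saturation, and local contrast to produce a binary map of glare candidates. It is OpenCV-based, the thresholds and window sizes are placeholder assumptions rather than the paper's tuned values, and the shape-detection and solar-geometry stages are not shown.

```python
import cv2
import numpy as np

def glare_candidate_map(frame_bgr: np.ndarray,
                        intensity_thresh: int = 230,
                        saturation_thresh: int = 60,
                        contrast_thresh: float = 10.0) -> np.ndarray:
    """Return a binary map of candidate glare pixels: bright, desaturated
    regions with low local contrast (illustrative thresholds only)."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    _, s, v = cv2.split(hsv)
    # Local contrast: deviation of intensity from its blurred neighborhood.
    local_mean = cv2.blur(v.astype(np.float32), (15, 15))
    local_contrast = np.abs(v.astype(np.float32) - local_mean)
    candidates = ((v >= intensity_thresh) &
                  (s <= saturation_thresh) &
                  (local_contrast <= contrast_thresh)).astype(np.uint8) * 255
    # Remove speckle before estimating glare centers and extents.
    return cv2.morphologyEx(candidates, cv2.MORPH_OPEN,
                            np.ones((5, 5), np.uint8))
```

The resulting mask would then be intersected with the shape-detection and solar-position cues to form the glare occurrence map described above.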