The Sensor Hosting Autonomous and Remote Craft (SHARC) Wave Glider system is an autonomous surface vehicle widely used for long-duration at-sea data collection and acoustic monitoring. The system is manufactured by Liquid Robotics Inc. (LRI) and consists of two components: (1) a surface float housing sensors, batteries, solar panels, and communication systems, and (2) a submerged propulsor (glider) containing six hinged hydrofoils. A third component, a submerged body housing additional sensors, may be towed behind the propulsor. While launching the system is relatively straightforward, recovering it is challenging and incurs high risk: (1) the Wave Glider system comprises multiple components, (2) the system's forward motion cannot easily be arrested, and (3) the ship conducting recovery operations often has a high freeboard. An improved recovery system is needed, particularly by the United States Naval Oceanographic Office (NAVOCEANO), because current recovery methods require personnel in small boats, or swimmers in the water, to control the vehicle and attach lifting lines. These procedures are hazardous to both personnel and equipment and are limited to low sea states. This project, conducted by a team of five undergraduate students and two advisers, designed and evaluated multiple alternatives as viable recovery solutions. Based on design and operational requirements, the final proposed solution has two parts: a remotely actuated inflatable lift bag attached to the stern of the submerged propulsor that halts forward movement when inflated, and a vertical cable loop mounted on the surface float to facilitate lifting of the float, propulsor, and towed payloads. The proposed solution was demonstrated to be feasible and met all design requirements, with an emphasis on simplicity.
This paper reports results to date from a program of study to better understand the factors that influence the long-term float behaviour of lead-acid batteries. Data from high-resolution, long-term, real-time logging of the float performance of in-service VRLA batteries in the network are also presented.
Artificial intelligence (AI) is expanding into every niche of human life, organizing our activity, expanding our agency, and interacting with us to an exponentially increasing extent. At the same time, AI's efficiency, complexity, and refinement are growing at an accelerating speed. An expanding, ubiquitous intelligence that has no means to care about us poses a species-level risk. Justifiably, there is growing concern with the immediate problem of how to engineer an AI that is aligned with human interests. Computational approaches to the alignment problem currently focus on engineering AI systems to (i) parameterize human values such as harm and flourishing, and (ii) avoid overly drastic solutions, even if these are seemingly optimal. In parallel, ongoing work in applied AI (caregiving, consumer care) is concerned with developing artificial empathy: teaching AIs to decode human feelings and behavior and to evince appropriate emotional responses. We propose that in the absence of affective empathy (which allows us to share in the states of others), existing approaches to artificial empathy may fail to reliably produce the pro-social, caring component of empathy, potentially resulting in increasingly cognitively complex sociopaths. We adopt the colloquial usage of the term "sociopath" to signify an intelligence possessing cognitive empathy (i.e., the ability to decode, infer, and model the mental and affective states of others) but crucially lacking the pro-social, empathic concern that arises from shared affect and embodiment. It is widely acknowledged that aversion to causing harm is foundational to the formation of empathy and moral behavior. However, harm aversion is itself predicated on the experience of harm, within the context of the preservation of physical integrity.
Following from this, we argue that a "top-down," rule-based approach to achieving caring AI may be inherently unable to anticipate and adapt to the inevitable novel moral and logistical dilemmas faced by an expanding AI. Crucially, it may be more effective to coax caring to emerge from the bottom up, baked into an embodied, vulnerable artificial intelligence with an incentive to preserve its physical integrity. This may be achieved via iterative optimization within a series of tailored environments, with incentives and contingencies inspired by the development of empathic concern in humans. Here we attempt an outline of what these training steps might look like. We speculate that work of this kind may allow for AI that surpasses empathic fatigue and the idiosyncrasies, biases, and computational limits that restrict human empathy. While for us "a single death is a tragedy, a million deaths are a statistic," the scalable complexity of AI may allow it to deal proportionately with complex, large-scale ethical dilemmas. Hopefully, by addressing this problem seriously in the early stages of AI's integration with society, we may one day be accompanied by AI that plans and behaves with a deeply ingrained weight placed on the welfare of others, coupled with the cognitive complexity necessary to understand and solve extraordinary problems.