Ben Shneiderman is a long-time proponent of direct manipulation for user interfaces. Direct manipulation affords the user control and predictability in their interfaces. Pattie Maes believes direct manipulation will have to give way to some form of delegation, namely software agents. Should users give up complete control of their interaction with interfaces? Will users want to risk depending on "agents" that learn their likes and dislikes and act on a user's behalf? Ben and Pattie debated these issues and more at both IUI 97 (the Intelligent User Interfaces conference, January 6-9, 1997) and again at CHI 97 in Atlanta (March 22-27, 1997). Read on and decide for yourself where the future of interfaces should be headed, and why.
The idea of computers generating content has been around since the 1950s. Some of the earliest attempts focused on replicating human creativity by having computers generate visual art and music [1]. Unlike today's synthesized media, computer-generated content from that early era was far from realistic and easily distinguishable from content created by humans. It has taken decades and major leaps in artificial intelligence (AI) for generated content to reach a high level of realism.

Generative and discriminative models are two different approaches to machines learning from data. Whereas a discriminative model can identify the person in an image, a generative model can produce a new image of a person who has never existed before. Recent leaps in generative modelling include generative adversarial networks (GANs) [2]. Since their introduction, models for AI-generated media, such as GANs, have enabled the hyper-realistic synthesis of digital content, including the generation of photorealistic images, the cloning of voices, the animation of faces and the translation of images from one form to another [3-6].

The GAN architecture consists of two neural networks, a generator and a discriminator. The generator is responsible for producing new content that resembles the input data, while the discriminator's job is to distinguish the generated (fake) output from the real data. The two networks compete and try to outperform each other in a closed feedback loop, gradually increasing the realism of the generated output.

GAN architectures can generate images of things that have never existed before, such as human faces [3,4]. StyleGAN, for example, is a modifiable GAN that enables intuitive control of the facial details of generated images by separating high-level attributes, such as the identity of a person, from low-level features, such as hair or freckles, with few visible artefacts [4]. Researchers have also proposed an in-domain GAN inversion approach that enables the editing of GAN-generated images, allowing for de-aging or the addition of new facial expressions to existing photographs [7]. Meanwhile, transformers such as those used in the massive generative GPT-3 language model have already been shown to be successful at text-to-image generation [8].
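To make the adversarial loop concrete, the following is a minimal training sketch in Python using PyTorch. It is an illustration of the generator-versus-discriminator scheme described above, not the setup of any model cited here: the network sizes, learning rates and the toy two-dimensional "real" distribution are all placeholder assumptions.

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 8, 2  # assumed toy dimensions

# Generator maps random noise to candidate "real-looking" samples.
generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(),
    nn.Linear(32, data_dim),
)
# Discriminator outputs the probability that a sample is real.
discriminator = nn.Sequential(
    nn.Linear(data_dim, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(1000):
    # Toy "real" data: a shifted Gaussian stands in for training images.
    real = torch.randn(64, data_dim) * 0.5 + 2.0
    fake = generator(torch.randn(64, latent_dim))

    # Discriminator step: label real samples 1 and generated samples 0.
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: try to make the discriminator call fakes real.
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

The opposing loss targets (1 for the discriminator on real data, 1 for the generator on fakes) are what create the closed feedback loop: each network's improvement raises the bar for the other.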
Information about a person's engagement and attention can be a valuable asset in many settings, including work situations, driving and learning environments. To this end, we propose a first prototype of a device called AttentivU, a wearable system that consists of two main components. The first component is an EEG headband used to measure a person's engagement in real time. The second is a scarf that provides subtle haptic feedback (vibrations) in real time when a drop in engagement is detected. We tested AttentivU in two separate studies with 48 adults. Participants were engaged in a learning scenario, either watching three video lectures on different subjects or attending a set of three face-to-face lectures with a professor. Three conditions were administered in both studies: (1) biofeedback, in which the scarf (the second component of the system) vibrated each time the EEG headband detected a drop in engagement; (2) random feedback, in which the vibrations did not correlate with or depend on the engagement level detected by the system; and (3) no feedback, in which no vibrations were administered. The results show that the biofeedback condition redirected participants' engagement to the task at hand and improved their performance on comprehension tests.
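As a rough illustration of the biofeedback condition, the Python sketch below polls an engagement score and triggers a vibration whenever it falls below a threshold. Everything here is a hypothetical stand-in: read_engagement, vibrate_scarf, the 0.5 threshold and the one-second polling interval are illustrative assumptions, since the abstract does not specify the actual engagement metric or detection logic.

```python
import random
import time

ENGAGEMENT_THRESHOLD = 0.5  # hypothetical cutoff on a normalized 0..1 engagement score
POLL_INTERVAL_S = 1.0       # hypothetical polling rate

def read_engagement() -> float:
    """Stand-in for the EEG headband's real-time engagement estimate."""
    return random.random()  # replace with the actual headband readout

def vibrate_scarf() -> None:
    """Stand-in for driving the scarf's haptic actuator."""
    print("vibration pulse")

def biofeedback_loop(n_steps: int = 60) -> None:
    # Condition (1), biofeedback: vibrate each time engagement drops below the threshold.
    for _ in range(n_steps):
        if read_engagement() < ENGAGEMENT_THRESHOLD:
            vibrate_scarf()
        time.sleep(POLL_INTERVAL_S)

if __name__ == "__main__":
    biofeedback_loop()
```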