2000
DOI: 10.1145/348941.348998

Embodied user interfaces for really direct manipulation

Cited by 65 publications (30 citation statements)
References 7 publications
“…This result is fairly intuitive since a direct manipulation task such as a drag-and-drop requires a high concentration of visual attention on a display (Fishkin et al 2000). Therefore visual-only feedback does not add much value compared to other forms of feedback.…”
Section: Visual Effect (citation type: mentioning)
confidence: 98%
“…When integrated sensors enable devices to react to physical manipulation, designers can make use of the 'physical effects principle' (Fishkin et al, 2000), where system effects are analogous to the real-world effects of similar actions and adhere to naïve physics. For example, a handheld calendar can scroll to the next day if tilted.…”
Section: Motivation and Background (citation type: mentioning)
confidence: 99%
“…Given that the system provides users with a 3D input tool, this three-dimensionality should be acknowledged to some extent. Some researchers have successfully developed interfaces based on the 'physical effects principle', where system effects are analogous to the real-world effect of similar actions (Fishkin et al, 2000). But a more cautionary lesson is also recommended.…”
Section: Physical Input and Intuitive Use (citation type: mentioning)
confidence: 99%
“…As one's viewing angle changes, one sequentially sees different strips of the image underneath the lenses, showing an animation. We used this effect to design a user-friendly technique that helps users to perform gestures (as advised in embodied user interface [4]). By analyzing the data provided by a Gsensor (i.e.…”
Section: TimeTilt Technique (citation type: mentioning)
confidence: 99%