Proceedings. IEEE 1998 Virtual Reality Annual International Symposium (Cat. No.98CB36180)
DOI: 10.1109/vrais.1998.658482
Extendible object-centric tracking for augmented reality

Cited by 21 publications (9 citation statements)
References 32 publications
“…Today, annotation of newly discovered knowledge from data mining is required for effective research (3), (4). Taking this into account, we consider the annotation function very useful and important in a CVE for promoting better understanding.…”
Section: Hybrid P2P-based CVE (mentioning)
confidence: 99%
“…They are defined by a 2D screen position p and a type g that encodes characteristics such as colour and shape. Our fiducial design is a coloured circle or triangle [19,25], but other designs such as concentric circles or coded squares are equally valid [9,12,24,34]. We use the three primary and three secondary colours along with the triangle and circle shapes to provide 12 unique fiducial types.…”
Section: Segmentation and Feature Detection (mentioning)
confidence: 99%
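
The quoted passage describes a small fiducial vocabulary: six colours crossed with two shapes giving 12 distinct types, with each detection carrying a 2D screen position p and a type g. Below is a minimal sketch of that data structure; all names (Colour, Shape, Fiducial, and so on) are illustrative assumptions, not identifiers from the cited paper.

```python
from dataclasses import dataclass
from enum import Enum
from itertools import product

# Hypothetical names: the quoted passage only says a fiducial has a 2D
# screen position p and a type g that combines colour and shape.
class Colour(Enum):
    RED = 1      # primary colours
    GREEN = 2
    BLUE = 3
    CYAN = 4     # secondary colours
    MAGENTA = 5
    YELLOW = 6

class Shape(Enum):
    CIRCLE = 1
    TRIANGLE = 2

@dataclass(frozen=True)
class FiducialType:
    colour: Colour
    shape: Shape

@dataclass
class Fiducial:
    p: tuple          # 2D screen position (u, v) in pixels
    g: FiducialType   # type encoding colour and shape

# Three primary + three secondary colours x two shapes = 12 unique types,
# matching the count stated in the quotation above.
ALL_TYPES = [FiducialType(c, s) for c, s in product(Colour, Shape)]
assert len(ALL_TYPES) == 12
```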
“…These applications are particularly suited to wearable computers because of the mobility afforded to the user. A more appropriate tracking solution for these highly mobile applications is one that is object-centric, or in other words, based on viewing the object itself [1, 18-25]. Such tracking is possible with the pose estimation methods developed in the fields of computer vision and photogrammetry [26,27].…”
Section: Introduction (mentioning)
confidence: 99%
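
The pose estimation step this citation refers to can be illustrated with a perspective-n-point solver. The sketch below uses OpenCV's solvePnP purely as a modern stand-in for the computer-vision and photogrammetric pose methods cited; the point coordinates and camera intrinsics are invented example values, not data from the paper.

```python
import numpy as np
import cv2

# Known 3D fiducial positions on the tracked object (object frame, metres)
# and their corresponding 2D detections in the video frame (pixels).
# All values here are made-up examples for illustration only.
object_points = np.array([[0.00, 0.00, 0.00],
                          [0.10, 0.00, 0.00],
                          [0.10, 0.10, 0.00],
                          [0.00, 0.10, 0.00],
                          [0.05, 0.05, 0.05]], dtype=np.float64)
image_points = np.array([[320.0, 240.0],
                         [420.0, 238.0],
                         [418.0, 150.0],
                         [322.0, 152.0],
                         [371.0, 195.0]], dtype=np.float64)

# Assumed intrinsic calibration (focal length and principal point in pixels).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
dist = np.zeros(5)  # assume negligible lens distortion

# Solve for the object's pose in the camera frame from the 2D-3D matches.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
if ok:
    R, _ = cv2.Rodrigues(rvec)  # rotation matrix, camera-from-object
    print("rotation:\n", R)
    print("translation (m):", tvec.ravel())
```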
“…Much previous image-based research into methods for aligning video images of real environments and annotation information has relied on the use of artificial markers (fiducials) that identify locations within the actual environment; by detecting the location of such markers within a video sequence, the camera position and orientation can be estimated [3-5]. However, since such methods require markers to be physically associated with each object that is to be annotated, they struggle to provide coverage for large environments.…”
Section: Introduction (mentioning)
confidence: 99%
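
Once the camera position and orientation have been estimated from detected markers, annotation information can be aligned with the video by projecting its 3D anchor point into the current frame. The sketch below illustrates that overlay step with OpenCV's projectPoints; the function draw_annotation and its parameters are hypothetical, and the pose (rvec, tvec) is assumed to come from a solver such as the one sketched above.

```python
import numpy as np
import cv2

# Hypothetical helper: project a 3D annotation anchor into the image
# using an estimated pose, then draw a dot and a text label there.
def draw_annotation(frame, anchor_3d, text, rvec, tvec, K, dist):
    pts, _ = cv2.projectPoints(np.float64([anchor_3d]), rvec, tvec, K, dist)
    u, v = pts[0, 0]
    cv2.circle(frame, (int(u), int(v)), 4, (0, 255, 0), -1)
    cv2.putText(frame, text, (int(u) + 6, int(v) - 6),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    return frame
```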