Proceedings of the 10th International Conference on Computational Linguistics - 1984
DOI: 10.3115/980431.980597
Natural language driven image generation

Abstract: In this paper the experience gained through the development of a NAtural Language driven Image Generation system is discussed. The system is able to imagine a static scene described by means of a sequence of simple phrases. In particular, a theory of equilibrium and support is outlined, together with the problem of object positioning.

Cited by 19 publications (13 citation statements) · References: 0 publications
“…the Mirai© 3D animation system from IZware, and uses 3D models from Viewpoint. An individual semantic representation fragment as currently used in WordsEye may seem relatively simple when compared, say, with the PAR (("node2" (:ENTITY :3D-OBJECTS ("mr_happy") :LEXICAL-SOURCE "John" :SOURCE SELF)) ("node1" (:ACTION "say" :SUBJECT "node2" :DIRECT-OBJECT ("node5" "node4" "node7")...)) ("node5" (:ENTITY :3D-OBJECTS ("cat-vp2842"))) ("node4" (:STATIVE-RELATION "on" :FIGURE "node5"…”
Section: Linguistic Analysis
confidence: 99%
“…Natural language input has been investigated in a number of 3D graphics systems including an early system by [2] and the oft-cited Put system [8]; the Put system shared our goal of making graphics creation easier, but was limited to spatial arrangements of existing objects. Also, input was restricted to an artificial subset of English consisting of expressions of the form Put (X P Y) ¡ , where X and Y are objects, and P is a spatial preposition.…”
Section: Introduction
confidence: 99%
“…In this case, intelligent systems facilitating cross-media references are helpful and worth developing. In this research area so far, it has been most conventional that conceptual contents conveyed by information media such as languages and pictures are represented in computable forms independent of each other and translated via 'transfer' processes so called which are often ad hoc and very specific to task domains [1][2][3].…”
Section: Introduction
confidence: 99%
“…The authors pointed out that there is a possible way to improve the visualization to be more dynamic. They suggested directly creating the scene rather than showing representative pictures; this can be done via text-toscene systems such as NALIG [1] and WordsEye [9], or text-to-animation systems such as animated pictures like text-to-picture Synthesis [21] and animations like Carsim [14], the latter of which converts narrative text about car accidents into 3D scenes using techniques for information extraction coupled with a planning and a visualization module. The CONFUCIUS system is also capable of converting single sentences into corresponding 3D animations [38].…”
Section: Introduction
confidence: 99%