Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology 2023
DOI: 10.1145/3586183.3606723
Style2Fab: Functionality-Aware Segmentation for Fabricating Personalized 3D Models with Generative AI

Faraz Faruqi,
Ahmed Katary,
Tarik Hasic
et al.
Cited by 6 publications (3 citation statements)
References 52 publications
“…A designer could utilize this tool, which is called Style2Fab, to personalize 3D models of objects using only natural language prompts to describe their desired design. The user could then fabricate the objects with a 3D printer [99]. It is only a matter of time until AI tools will support doctors in 3D designing, personalization and improving efficiency of individualized appliances for biocompatible 3D printing.…”
Section: Discussion (mentioning)
confidence: 99%
“…In the wake of strikingly realistic randomly generated 2D images [1, 2], there is a mounting expectation for generative models to replicate the same success when automatically synthesizing 3D objects. Demand for such models arises in various domains, ranging from rapid design and prototyping for the manufacturing sector to the entertainment industry [3, 4, 5, 6, 7]. Hence, the research focus is shifting towards 3D deep generative models, as rich and flexible 3D representations are rapidly emerging.…”
Section: Introduction (mentioning)
confidence: 99%
“…Real-time computation allows the random generation of items or characters during the play. Tweaking the design of a 3D-printable object [7] may also require multiple iterations of a generative model and would also benefit from quick inference. Any application in extended reality, such as generating human models from motion capture data [4], requires the model to run at the same frequency as the frames.…”
Section: Introduction (mentioning)
confidence: 99%