This paper presents a web-based 3D game project that demonstrates the process of using building information modeling (BIM) to create an interactive online 3D 'Green' training environment. The system architecture, implementation process, and major components of this virtual training environment are discussed. Existing studies on BIM-based collaboration have mainly focused on local file-sharing approaches using proprietary applications; limited research has examined using BIM as an online gaming platform to create a web-browser-based interactive 3D virtual environment for collaboration, learning, and training. This gap stems partly from a lack of understanding of how to implement a BIM-based game in a web browser environment. The authors provide an implementation example that uses a hospital BIM model to create an interactive web-based 3D BIM game environment, allowing users to visualize and interact with BIM components in regular web browsers. The intention of this project is to create a proprietary-independent training environment for conducting energy re-commissioning training for hospital facility management staff. This virtual BIM environment can potentially be customized for engineering student learning and project collaboration as well. The conclusion is that current BIM and game technologies are mature enough to support serious web-based interactive virtual learning/training environments. The successful integration of BIM and web browsers paves the way for many learning and training applications that need the built environment as context.
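The abstract does not spell out the project's toolchain, so the following is only a minimal sketch of one plausible way to get BIM geometry into a regular web browser: convert an exported mesh to glTF and serve it over HTTP for a WebGL viewer to render. The file names, the OBJ export step, and the use of the trimesh library are all assumptions, not the authors' method.

import http.server
import socketserver

import trimesh

# Hypothetical input: the hospital BIM model exported from the BIM
# authoring tool as a mesh (OBJ); the real project may use a
# different export path entirely.
scene = trimesh.load("hospital_model.obj")

# Convert to binary glTF (GLB), a format WebGL-based viewers can
# load directly in any regular web browser.
scene.export("hospital_model.glb")

# Serve the converted model (plus whatever HTML/JS viewer page sits
# in this directory) over plain HTTP for browser access.
with socketserver.TCPServer(("", 8000), http.server.SimpleHTTPRequestHandler) as httpd:
    httpd.serve_forever()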
Food is essential for human life and has been the focus of many healthcare conventions. New dietary assessment and nutrition analysis tools now offer more opportunities to help people understand their daily eating habits, explore nutrition patterns, and maintain a healthy diet. In this paper, we develop a deep-model-based food recognition and dietary assessment system to study and analyze food items in daily meal images (e.g., captured by smartphone). Specifically, we propose a three-step algorithm that recognizes multi-item food images by detecting candidate regions and classifying objects with a deep convolutional neural network (CNN). The system first generates multiple region proposals on input images by applying the Region Proposal Network (RPN) derived from the Faster R-CNN model. It then identifies each region proposal by mapping it into the feature maps, classifies it into a food category, and locates it in the original image. Finally, the system analyzes the nutritional ingredients based on the recognition results and generates a dietary assessment report by calculating the amounts of calories, fat, carbohydrate, and protein. In the evaluation, we conduct extensive experiments on two popular food image datasets, UEC-FOOD100 and UEC-FOOD256. We also construct a new food-item dataset based on FOOD101 with bounding box annotations. The model is evaluated with different evaluation metrics. The experimental results show that our system recognizes food items accurately and generates dietary assessment reports efficiently, giving users clear insight into healthy dietary habits and guiding their daily recipes to improve body health and wellness.
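A minimal sketch of this detect-then-tally pipeline appears below. It assumes a stock torchvision Faster R-CNN (whose RPN plays the role described above) rather than the authors' model fine-tuned on the food datasets; the class_names list and the NUTRITION_PER_ITEM table are illustrative placeholders, not the authors' trained system or nutrition database.

import torch
import torchvision
from PIL import Image
from torchvision.transforms.functional import to_tensor

# Illustrative per-serving nutrition facts for each recognized class:
# (calories kcal, fat g, carbohydrate g, protein g). A real system
# would back this with a proper nutrition database.
NUTRITION_PER_ITEM = {
    "rice": (205.0, 0.4, 44.5, 4.3),
    "miso_soup": (40.0, 1.5, 4.0, 3.0),
}

# Stand-in detector; the paper's system is fine-tuned on food images.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def assess_meal(image_path, class_names, score_threshold=0.7):
    """Detect food items in one meal image and total their nutrients."""
    image = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        # The detector returns boxes, integer labels, and confidence
        # scores for each region proposal it keeps.
        detections = model([image])[0]
    report = {"calories": 0.0, "fat": 0.0, "carbohydrate": 0.0, "protein": 0.0}
    for label, score in zip(detections["labels"], detections["scores"]):
        if score < score_threshold:
            continue
        name = class_names[int(label)]
        if name in NUTRITION_PER_ITEM:
            kcal, fat, carb, protein = NUTRITION_PER_ITEM[name]
            report["calories"] += kcal
            report["fat"] += fat
            report["carbohydrate"] += carb
            report["protein"] += protein
    return report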
New media and communication technologies such as mobile devices are now widely used to provide rich functionality and highly personalized services. However, operating such a device while driving remains inconvenient and unsafe. Touchscreen operation is one major obstacle, since multi-touch screens are optimized for hand-held usage scenarios. To overcome this limitation, we propose replacing the most frequently used touch operations with gesture controls for mobile devices in a driving environment. Gesture control is simpler, more flexible, and requires less visual attention, which makes it more suitable for in-vehicle use. In this paper, we design Givs, a fully functional gesture control system for mobile devices in a driving environment. Givs leverages the latest motion sensing technology to enable ubiquitous and driving-friendly gestures. Compared with other off-the-shelf gesture recognition solutions, Givs is optimized for in-vehicle use cases and is designed to overcome various limitations imposed by real driving conditions, including bumpy roads, significant noise introduced by car vibration, and the technical limitations of motion sensors. Our extensive in-vehicle tests and participant experience experiments demonstrate that Givs effectively assists users in accomplishing various types of tasks and supports human-machine interaction in driving environments such as personal vehicles and public transport, with high accuracy and fast responsiveness, while promoting driving convenience and safety.
INDEX TERMS: Human-machine interaction, smart sensing, mobile computing, driving safety.
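The abstract does not disclose Givs' actual recognition pipeline, so the sketch below only illustrates one plausible approach to the vibration problem it names: low-pass filtering an accelerometer trace to suppress high-frequency car-vibration noise, then matching the cleaned trace against stored gesture templates. The sampling rate, cutoff frequency, template format, and distance-based classifier are all assumptions.

import numpy as np
from scipy.signal import butter, filtfilt, resample

FS = 100.0    # assumed accelerometer sampling rate (Hz)
CUTOFF = 5.0  # assumed cutoff: deliberate hand gestures sit below
              # ~5 Hz, while engine/road vibration concentrates higher

def denoise(samples):
    """Low-pass filter a (T, 3) accelerometer trace to remove vibration."""
    b, a = butter(4, CUTOFF / (FS / 2.0), btype="low")
    return filtfilt(b, a, samples, axis=0)

def classify(trace, templates, length=64):
    """Match a raw trace against templates (name -> (length, 3) array)."""
    trace = resample(denoise(np.asarray(trace, dtype=float)), length, axis=0)
    # Normalize so amplitude and offset differences between users
    # matter less than the gesture's overall shape.
    trace = (trace - trace.mean(axis=0)) / (trace.std(axis=0) + 1e-8)
    best_name, best_dist = None, np.inf
    for name, template in templates.items():
        dist = np.linalg.norm(trace - template)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name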