Incorporating the individual and collective problem-solving skills of non-experts into the scientific discovery process could potentially accelerate the advancement of science. This paper discusses the design process used for Foldit, a multiplayer online biochemistry game that presents players with computationally difficult protein folding problems in the form of puzzles, allowing ordinary players to gain expertise and help solve these problems. The principal challenge of designing such scientific discovery games is harnessing the enormous collective problem-solving potential of the game-playing population, who have not been previously introduced to the specific problem, or, often, to the entire scientific discipline. To address this challenge, we took an iterative approach to designing the game, incorporating feedback from players and biochemical experts alike. Feedback was gathered both before and after releasing the game, to create the rules, interactions, and visualizations in Foldit that maximize contributions from game players. We present several examples of how this approach guided the game's design, and allowed us to improve both the quality of the gameplay and the application of player problem-solving.
Large-scale, ground-level urban imagery has recently emerged as an important element of online mapping tools such as Google's Street View. Such imagery is extremely valuable in a number of potential applications, ranging from augmented reality to 3D modeling, and from urban planning to monitoring city infrastructure. While such imagery is already available from many sources, including Street View and tourist photos on photo-sharing sites, these collections suffer from high cost, incomplete coverage, and limited accuracy. A potential solution is to leverage the community of photographers around the world to collaboratively acquire large-scale image collections. This work explores this approach through PhotoCity, an online game that trains its players to become "experts" at taking photos at targeted locations and in great density, for the purposes of creating 3D building models. To evaluate our approach, we ran a competition between two universities that resulted in the submission of over 100,000 photos, many of which were highly relevant for the 3D modeling task at hand. Although the number of players was small, we found that this was compensated for by incentives that drove players to become experts at photo collection, often capturing thousands of useful photos each.
We are interested in reconstructing real world locations as detailed 3D models, but to achieve this goal, we require a large quantity of photographic data. We designed a game to employ the efforts and digital cameras of everyday people to not only collect this data, but to do so in a fun and effective way. The result is PhotoCity, a game played outdoors with a camera, in which players take photos to capture flags and take over virtual models of real buildings. The game falls into the genres of both games with a purpose (GWAPs) and alternate reality games (ARGs). Each type of game comes with its own inherent challenges, but as a hybrid of both, PhotoCity presented us with a unique combination of obstacles. This paper describes the design decisions made to address these obstacles, and seeks to answer the question: Can games be used to achieve massive data-acquisition tasks when played in the real world, away from standard game consoles? We conclude with a report on player experiences and showcase some 3D reconstructions built by players during gameplay.
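For readers unfamiliar with how player photos turn into 3D geometry, the sketch below shows a minimal two-view reconstruction step of the kind that underlies photo-based 3D modeling pipelines. It is not PhotoCity's actual pipeline (which incrementally registers many photos against an existing model); it is an illustrative Python/OpenCV sketch that assumes a known camera intrinsic matrix K and a pair of photos with substantial overlap.

```python
# Hypothetical two-view reconstruction sketch (OpenCV), not the PhotoCity system.
import cv2
import numpy as np

def reconstruct_pair(img1_path, img2_path, K):
    """Triangulate a sparse 3D point cloud from two overlapping photos."""
    img1 = cv2.imread(img1_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(img2_path, cv2.IMREAD_GRAYSCALE)

    # Detect and match SIFT features between the two photos.
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)

    # Lowe's ratio test keeps only distinctive matches.
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # Estimate the relative camera pose from the essential matrix.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

    # Triangulate matched points into 3D (homogeneous -> Euclidean).
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    return (pts4d[:3] / pts4d[3]).T  # N x 3 point cloud
```

In a full system, each new player photo would be matched and registered against the growing model in this manner, which is why photos taken at targeted locations and in great density are so valuable.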
We propose an Augmented Reality (AR) system that helps users take a picture from a designated pose, such as the position and camera angle of an earlier photo. Repeat photography is frequently used to observe and document changes in an object. Our system uses AR technology to estimate camera poses in real time. When a user takes a photo, the camera pose is saved as a "view bookmark". To support a user in taking a repeat photo, two simple graphics are rendered in an AR viewer on the camera's screen to guide the user to this bookmarked view. The system then applies image adjustment techniques to the user's repeat photo to bring it even closer to the original.
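The abstract does not specify the image adjustment technique, so the following is only a minimal sketch of one plausible final step: warping the repeat photo onto the original with a feature-based homography so that small residual pose differences are removed. It assumes OpenCV and a roughly planar or distant scene for which a homography is an adequate model; the function name and parameters are illustrative, not the authors' implementation.

```python
# Hypothetical post-capture alignment sketch (OpenCV), not the authors' method.
import cv2
import numpy as np

def align_repeat_photo(original_path, repeat_path):
    """Warp the repeat photo into the frame of the original photo."""
    original = cv2.imread(original_path)
    repeat = cv2.imread(repeat_path)
    gray1 = cv2.cvtColor(original, cv2.COLOR_BGR2GRAY)
    gray2 = cv2.cvtColor(repeat, cv2.COLOR_BGR2GRAY)

    # Match ORB features between the original and the repeat photo.
    orb = cv2.ORB_create(4000)
    kp1, des1 = orb.detectAndCompute(gray1, None)
    kp2, des2 = orb.detectAndCompute(gray2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des2, des1), key=lambda m: m.distance)[:500]

    src = np.float32([kp2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Robustly estimate the homography mapping the repeat photo onto the original.
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    h, w = original.shape[:2]
    return cv2.warpPerspective(repeat, H, (w, h))
```

Because the AR guidance already brings the user close to the bookmarked view, a simple robust alignment of this kind only needs to correct a small remaining offset.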