Touchscreens are the primary input devices for smartphones and tablets. Although widely used, the output of touchscreen controllers is still limited to the two-dimensional position of the contacting finger. Finger angle (or orientation) estimation from touchscreen images has been studied as a way to enrich touch input. However, existing methods usually estimate only pitch and yaw, and the estimation error remains large, mainly because touchscreens provide very limited information about the finger. With the development of under-screen fingerprint sensing technology, fingerprint images, which contain much richer finger information than touchscreen images, can be captured when a finger touches the screen. In this paper, we construct a dataset of fingerprint images with corresponding ground-truth finger angles. We contribute a network architecture and training strategy that harness the strong dependencies among finger angle, finger region, finger type, and fingerprint ridge orientation to produce a top-performing model for finger angle estimation. The experimental results demonstrate the superiority of our method over previous state-of-the-art methods: the mean absolute errors of the three angles are 6.6 degrees for yaw, 7.1 degrees for pitch, and 9.1 degrees for roll, markedly smaller than previously reported errors. Extensive experiments were conducted to examine important factors including image resolution, image size, and finger type. Evaluations on a set of under-screen fingerprints were also performed to explore feasibility in real-world applications. Code and a subset of the data are publicly available.
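The abstract does not detail the architecture, so the following is only a minimal, hypothetical PyTorch sketch of how a shared backbone with auxiliary heads could exploit the stated dependencies among finger angle, finger region, finger type, and ridge orientation. All layer sizes, head designs, and loss weights here are illustrative assumptions, not the paper's actual model.

```python
# Minimal multi-task sketch (PyTorch): a shared CNN backbone with a main head
# for yaw/pitch/roll regression and auxiliary heads for finger-type
# classification, a coarse finger-region mask, and ridge orientation.
# All sizes and loss weights are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FingerAngleNet(nn.Module):
    def __init__(self, num_finger_types=10):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.angle_head = nn.Linear(128, 3)                 # yaw, pitch, roll
        self.type_head = nn.Linear(128, num_finger_types)   # which finger
        self.region_head = nn.Conv2d(128, 1, 1)             # coarse region mask logits
        self.orient_head = nn.Conv2d(128, 2, 1)             # ridge orientation as (cos, sin)

    def forward(self, x):
        feat = self.backbone(x)
        vec = self.pool(feat).flatten(1)
        return {
            "angles": self.angle_head(vec),
            "finger_type": self.type_head(vec),
            "region": self.region_head(feat),
            "orientation": self.orient_head(feat),
        }

def joint_loss(pred, target, w_type=0.1, w_region=0.1, w_orient=0.1):
    # Hypothetical joint objective: the auxiliary losses regularize the shared
    # features so the angle regressor benefits from the related tasks.
    l_angle = F.l1_loss(pred["angles"], target["angles"])
    l_type = F.cross_entropy(pred["finger_type"], target["finger_type"])
    l_region = F.binary_cross_entropy_with_logits(pred["region"], target["region"])
    l_orient = F.mse_loss(pred["orientation"], target["orientation"])
    return l_angle + w_type * l_type + w_region * l_region + w_orient * l_orient
```

In such a multi-task setup the auxiliary weights would typically be tuned or scheduled during training; the sketch only illustrates the idea of sharing features across the related tasks named in the abstract.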
Summary
DeepKG is an end-to-end deep learning-based workflow that helps researchers automatically mine valuable knowledge from the biomedical literature. Users can apply it to build customized knowledge graphs in specified domains, facilitating in-depth understanding of disease mechanisms and applications in drug repurposing and clinical research. To improve the performance of DeepKG, a cascaded hybrid information extraction framework is developed for training the 3-tuple extraction model, and a novel AutoML-based knowledge representation algorithm (AutoTransX) is proposed for knowledge representation and inference. The system has been deployed in dozens of hospitals, and extensive experiments strongly demonstrate its effectiveness. From 144 900 full-text COVID-19 scholarly articles, DeepKG generates a high-quality knowledge graph with 7980 entities and 43 760 3-tuples as well as a candidate drug list, and relevant animal experimental studies are being carried out. To accelerate further studies, we make DeepKG publicly available and provide an online tool that includes the 3-tuple data, the candidate drug list, a question answering system, and a visualization platform.
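AutoTransX is described as an AutoML-based algorithm over translational (TransX-family) knowledge embeddings; the summary does not give its details, so the following is only a minimal PyTorch sketch of the TransE-style scoring idea that such methods build on, not the DeepKG implementation. The embedding dimension, relation count, and margin are assumed values.

```python
# TransE-style scoring sketch: a plausible 3-tuple (h, r, t) should satisfy
# h + r ≈ t in embedding space, so smaller distance means a more plausible fact.
import torch
import torch.nn as nn

class TransEScorer(nn.Module):
    def __init__(self, num_entities, num_relations, dim=100):
        super().__init__()
        self.ent = nn.Embedding(num_entities, dim)
        self.rel = nn.Embedding(num_relations, dim)
        nn.init.xavier_uniform_(self.ent.weight)
        nn.init.xavier_uniform_(self.rel.weight)

    def score(self, h, r, t):
        # L1 distance between (head + relation) and tail embeddings.
        return torch.norm(self.ent(h) + self.rel(r) - self.ent(t), p=1, dim=-1)

    def margin_loss(self, pos, neg, margin=1.0):
        # Margin ranking loss: true triples should score lower (closer) than
        # corrupted triples by at least the margin.
        return torch.relu(margin + self.score(*pos) - self.score(*neg)).mean()

# Toy usage: entity count matches the graph size reported above; the relation
# count and the indices are placeholders, since real ones would come from the
# extracted 3-tuples.
model = TransEScorer(num_entities=7980, num_relations=50)
h, r, t = torch.tensor([0]), torch.tensor([1]), torch.tensor([2])
h_corrupt = torch.tensor([3])
loss = model.margin_loss((h, r, t), (h_corrupt, r, t))
```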
Availability and implementation
All the results are publicly available at the website (http://covidkg.ai/).
Supplementary information
Supplementary data are available at Bioinformatics online.