Confocal microscope imaging has become popular in biotechnology labs. Confocal imaging utilizes fluorescence optics, focusing laser light onto a specific spot at a defined depth in the sample. Research routinely produces a large number of such images, which require unbiased quantification methods for meaningful analysis. Increasing efforts to tie reimbursement to outcomes will likely increase the need for objective data in analyzing confocal microscope images in the coming years. Visual quantification of confocal images with the naked eye is an essential but often underreported outcome measure because of the time required for manual counting and estimation. This visual method is time-consuming and cumbersome, and manual measurement is imprecise because of natural differences in human visual ability. Objective outcome evaluation can obviate these drawbacks and facilitate recording for documentation and research purposes. To achieve a fast and valuable objective estimation of the fluorescence in each image, an algorithm was designed based on machine-vision techniques to extract the targeted objects from confocal images and then estimate the covered area, producing a percentage value comparable to the outcome of the current method. This approach is expected to contribute to sustainable biotechnology image analysis by reducing time and labor. The results show strong evidence that the designed objective algorithm can replace the current method of manual, visual quantification, with an Intraclass Correlation Coefficient (ICC) of 0.9.
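The area-coverage estimation described above can be illustrated with a minimal sketch: threshold each pixel's intensity, count the pixels classified as signal, and report the covered fraction as a percentage. This is an illustration of the general technique only, not the thesis's actual machine-vision pipeline; the function name, the list-of-lists image representation, and the threshold value are assumptions for the example.

```python
def fluorescence_coverage(image, threshold=50):
    """Estimate the percentage of an image covered by fluorescent signal.

    `image` is a 2-D list of grayscale intensities (0-255). Pixels at or
    above `threshold` are counted as signal; the threshold of 50 is an
    illustrative choice, not a value from the original work.
    """
    total = 0
    signal = 0
    for row in image:
        for pixel in row:
            total += 1
            if pixel >= threshold:
                signal += 1
    return 100.0 * signal / total if total else 0.0
```

A real pipeline would typically add background subtraction and adaptive thresholding (e.g. Otsu's method) before counting, since a fixed global threshold is sensitive to illumination differences between images.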
Measurement of finger active range of motion (ROM) is essential for monitoring the effectiveness of rehabilitative treatments and for accurately evaluating patients' functional impairment. Currently, finger ROM is measured through a labor-intensive process of applying a hand-held goniometer to each finger joint and recording the value. This method is subject to error and is time-consuming, leading many surgeons not to collect the data to avoid delays in the clinic. To speed up and simplify this process, we proposed a system that measures the ROM of each finger joint automatically. The system is based on extracting a 3D hand model using the low-cost Intel RealSense SR300 camera, which can produce an accurate 3D model. Segmentation methods are developed to extract each finger individually, and several algorithms are proposed to estimate joint angles. To evaluate the proposed system, we collected data on 30 healthy volunteers and 22 hand therapy patients, with University of Missouri IRB approval. The system was tested and compared against manual goniometer measurements made by a fellowship-trained hand surgeon. First, using data from the healthy subjects, the mean absolute difference in measurement was 8 degrees across all finger joints. These differences compare favorably to the variability reported for manual measurements in published studies. Moreover, the proposed system accurately estimates MCP and PIP joint angles in all long fingers with a single automated, non-contact scan. The system was then improved and further tested with patients in the hand therapy clinic, who may have finger swelling, attachments, and/or unusual finger ROM. In this group, the system showed a mean absolute difference of 7 degrees with respect to the goniometer readings. Finally, analysis of clinical use yielded recommendations for further development of the proposed system.
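The joint-angle estimation step can be sketched with a small example: given three 3-D landmarks along a finger (a proximal point, the joint center, and a distal point), the included angle at the joint follows from the dot product of the two bone vectors. The function name and landmark representation are assumptions for illustration, not the thesis's actual segmentation-based pipeline.

```python
import math

def joint_angle(p_prox, p_joint, p_dist):
    """Included angle (degrees) at a joint defined by three 3-D landmarks.

    A fully straight finger yields 180 degrees; clinical flexion is then
    commonly reported as 180 minus this included angle.
    """
    # Bone vectors pointing away from the joint center.
    v1 = [a - b for a, b in zip(p_prox, p_joint)]
    v2 = [a - b for a, b in zip(p_dist, p_joint)]
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(a * a for a in v2))
    # Clamp to guard against floating-point drift outside [-1, 1].
    cos_theta = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.degrees(math.acos(cos_theta))
```

In practice, the landmarks would come from the segmented 3D hand model rather than being supplied by hand, and noise in the depth data makes the clamping of the cosine value necessary.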
Introduction: Automated measurement of digital range of motion (ROM) may improve the accuracy of reporting and increase clinical efficiency. We hypothesize that a 3-D camera on a custom gantry will produce ROM measurements similar to those obtained with a manual goniometer.

Methods: A 3-D camera mounted on a custom gantry was mechanized to rotate 200° around a platform. The video was processed to segment each digit and calculate joint angles. The system was first validated in people with no history of any hand condition or surgery; a second-generation prototype was then assessed in people with various hand conditions. Metacarpophalangeal (MCP) and proximal interphalangeal (PIP) joint flexion were measured repeatedly with a goniometer and with the automated system. The average difference between manual and automated measurements was calculated, along with intraclass correlation coefficients (ICC).

Results: In the initial validation, 1,488 manual and 1,488 automated joint measurements were obtained, and the measurement algorithm was refined. In people with hand conditions, 688 manual and 688 automated joint measurements were compared. Average acquisition time was 7 s per hand, with an additional 2–3 s required for data processing. In the clinical study, ICC between manual and automated data ranged from 0.65 to 0.85 for the MCP joints and from 0.22 to 0.66 for the PIP joints.

Discussion: The automated system provided rapid data acquisition, with reliability varying by joint type and location. It has the potential to improve efficiency in the collection of physical exam findings. Further development of the system is needed to measure thumb and distal phalangeal motion.
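The agreement analysis reported above relies on the intraclass correlation coefficient. As a minimal sketch, the one-way random-effects form ICC(1,1) can be computed from the between-subject and within-subject mean squares of an n-subjects x k-raters table. This is the textbook Shrout-Fleiss formula for ICC(1,1); the study may well have used a different ICC variant (e.g. two-way models), so treat this purely as an illustration of the statistic.

```python
def icc_oneway(ratings):
    """One-way random-effects ICC(1,1) for an n-subjects x k-raters table.

    `ratings` is a list of rows, one per subject, each holding k readings
    (e.g. [goniometer_reading, automated_reading]). Returns a value in
    [-1, 1]; 1.0 indicates perfect agreement.
    """
    n = len(ratings)
    k = len(ratings[0])
    grand = sum(sum(row) for row in ratings) / (n * k)
    subj_means = [sum(row) / k for row in ratings]
    # Between-subject mean square.
    msb = k * sum((m - grand) ** 2 for m in subj_means) / (n - 1)
    # Within-subject mean square.
    msw = sum((x - m) ** 2
              for row, m in zip(ratings, subj_means)
              for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

For example, three joints rated identically by both methods give an ICC of exactly 1.0, while systematically reversed ratings drive the coefficient negative.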
The path-planning algorithm is the central part of most assistive robots. Assistive robots encounter a challenging and complex environment with various obstacles during daily work: the algorithm must account for fixed obstacles, such as furniture and building layout, as well as dynamic obstacles, such as humans and pets. In addition, to maximize the service per hour, the robot has to select the optimum path. These challenges motivate the work toward an efficient path-planning algorithm that can handle complex environments. The proposed algorithm employs a designed genetic algorithm to search for the best path that maximizes the service area per hour. This genetic algorithm is then combined with a dynamic-obstacle-detection fuzzy system based on fuzzy membership zones, which decides whether an obstacle is dynamic or static according to its speed, direction, and size. The Geno-fuzzy path-planning algorithm was implemented in an assistive robot and tested in an actual environment. Implementation in a simulated environment modeling a 100-bed hospital in Iraq revealed high performance. A large-scale test without obstacles showed the algorithm's ability to handle more than 300 service points successfully. The local experiment in Webots demonstrated the algorithm's ability to overcome dynamic obstacles and travel safely.
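The genetic-algorithm core of such a planner can be sketched as evolving the visiting order of service points to minimize total travel distance (equivalently, maximizing service points per hour). Everything here is an illustrative assumption: the function names, the selection/crossover/mutation choices, and the parameter values are not taken from the paper, which additionally couples the GA with the fuzzy obstacle-classification system.

```python
import math
import random

def tour_length(points, order):
    """Total length of the closed tour visiting `points` in `order`."""
    return sum(math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def ga_plan(points, pop_size=40, generations=200, seed=0):
    """Evolve a visiting order over service points with a simple GA.

    Uses truncation selection, one-cut order crossover, and swap mutation;
    all hyperparameters are illustrative defaults.
    """
    rng = random.Random(seed)
    n = len(points)
    population = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda order: tour_length(points, order))
        survivors = population[:pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)
            # Order crossover: prefix from parent a, remainder in b's order.
            child = a[:cut] + [g for g in b if g not in a[:cut]]
            if rng.random() < 0.2:  # swap mutation
                i, j = rng.randrange(n), rng.randrange(n)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        population = survivors + children
    return min(population, key=lambda order: tour_length(points, order))
```

On four service points arranged in a unit square, the GA recovers the optimal perimeter tour of length 4; scaling to hundreds of points, as in the reported tests, mainly requires larger populations and better crossover operators.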