SUMMARY

The study of transgenic Arabidopsis lines with altered vascular patterns has revealed key players in the venation process, but details of the vascularization process remain unclear, partly because most lines have been assessed only qualitatively. Quantitative analyses are therefore required to identify subtle perturbations in the pattern and to test dynamic modeling hypotheses against biological measurements. We developed an online framework, designated Leaf Image Analysis Interface (LIMANI), in which venation patterns are automatically segmented and measured on dark-field images. Image segmentation may be manually corrected through an interactive interface, allowing supervision and rectification steps in the automated image analysis pipeline and ensuring high-fidelity analysis. This online approach benefits the user in terms of installation, software updates, computer load and data storage. The framework was used to study vascular differentiation during leaf development and to analyze the venation pattern in transgenic lines with contrasting cellular and leaf size traits. The results show the evolution of vascular traits during leaf development, suggest a self-organizing mechanism for leaf venation patterning, and reveal a tight balance between the number of end-points and branching points within the leaf vascular network that depends not on the leaf developmental stage or cellular content, but on the leaf position on the rosette. These findings indicate that LIMANI improves understanding of the interaction between vascular patterning and leaf growth.
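The abstract does not describe how LIMANI measures the network traits internally; the following is a minimal, hypothetical sketch (in Python with NumPy, SciPy and scikit-image, which are assumptions rather than the tools used by LIMANI) of how end-points and branching points could be counted from a binary vein segmentation mask.

```python
# Hypothetical sketch: counting end-points and branching points of a
# segmented vein network. NumPy/SciPy/scikit-image are assumed here;
# they are not necessarily the tools used by LIMANI itself.
import numpy as np
from scipy import ndimage
from skimage.morphology import skeletonize

def venation_node_counts(vein_mask):
    """vein_mask: 2D boolean array, True on vein pixels of the segmentation."""
    skel = skeletonize(vein_mask)                 # one-pixel-wide vein skeleton
    kernel = np.array([[1, 1, 1],
                       [1, 0, 1],
                       [1, 1, 1]])
    # Number of 8-connected skeleton neighbours of each skeleton pixel.
    neighbours = ndimage.convolve(skel.astype(int), kernel, mode="constant")
    neighbours[~skel] = 0
    end_points = int(np.sum(skel & (neighbours == 1)))     # free vein endings
    branch_points = int(np.sum(skel & (neighbours >= 3)))  # vein junctions
    # Note: a junction can span several adjacent skeleton pixels; a real
    # pipeline would cluster neighbouring branch pixels into one node.
    return end_points, branch_points
```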
In this paper, we propose a novel extrinsic calibration method for camera networks based on analyzing the tracks of pedestrians. First, we extract the center lines of walking persons by detecting their heads and feet in the camera images. We propose an easy and accurate method to estimate the 3D positions of the head and feet with respect to a local camera coordinate system from these center lines. We also propose a RANSAC-based orthogonal Procrustes approach to compute the relative extrinsic parameters connecting the coordinate systems of cameras in a pairwise fashion. Finally, we refine the extrinsic calibration matrices by minimizing the reprojection error. While existing state-of-the-art calibration methods exploit epipolar geometry and use image positions directly, the proposed method first computes 3D positions per camera and then fuses the data. This results in simpler computations and a more flexible and accurate calibration method. Another advantage of our method is that it can also handle persons walking along straight lines, a situation that frequently occurs in practice but that most existing state-of-the-art calibration methods cannot handle, because all head and feet positions are then coplanar.
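The abstract does not spell out the RANSAC-based orthogonal Procrustes step; below is a minimal sketch (Python with NumPy, an assumption) of how the relative rotation and translation between two cameras could be estimated from corresponding 3D head/feet positions expressed in each camera's local coordinate system.

```python
# Hypothetical sketch of the pairwise step: align 3D points measured by camera B
# onto the same points measured by camera A with a rigid transform (R, t),
# using orthogonal Procrustes (SVD, Kabsch-style) inside a simple RANSAC loop.
import numpy as np

def procrustes_rigid(P, Q):
    """Rigid transform (R, t) such that R @ Q[i] + t ~= P[i]. P, Q: (N, 3)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (Q - cQ).T @ (P - cP)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = cP - R @ cQ
    return R, t

def ransac_procrustes(P, Q, iters=500, thresh=0.05, rng=None):
    """Robust variant: sample minimal 3-point sets, keep the model with most inliers.
    thresh is in the same metric units as the 3D points."""
    rng = rng or np.random.default_rng(0)
    best_inliers = np.zeros(len(P), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(P), size=3, replace=False)
        R, t = procrustes_rigid(P[idx], Q[idx])
        err = np.linalg.norm((Q @ R.T + t) - P, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit on all inliers of the best minimal model.
    return procrustes_rigid(P[best_inliers], Q[best_inliers]), best_inliers
```

In the method described by the abstract, a final joint refinement minimizing the reprojection error would follow this pairwise initialization; that step is not sketched here.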
In this paper, we propose a novel extrinsic calibration method for camera networks using a sphere as the calibration object. First, we propose an easy and accurate method to estimate the 3D position of the sphere center with respect to the local camera coordinate system. Then, we use orthogonal Procrustes analysis to estimate the initial relative extrinsic parameters between pairs of cameras from these 3D position estimates. Finally, an optimization routine is applied to jointly refine the extrinsic parameters for all cameras. Compared with existing sphere-based 3D position estimators, which need to trace and analyze the outline of the sphere projection in the image, the proposed method requires only very simple image processing: estimating the area and the center of mass of the sphere projection. Our results demonstrate that this yields a more accurate estimate of the extrinsic parameters than other sphere-based methods. While existing state-of-the-art calibration methods use point-like features and epipolar geometry, the proposed method relies on the sphere-based 3D position estimates. This results in simpler computations and a more flexible and accurate calibration method. Experimental results show that the proposed approach is accurate, robust, flexible and easy to use.
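The abstract only states that the sphere center is estimated from the area and center of mass of its projection; the following is a hypothetical sketch (Python/NumPy, an assumption, not the authors' exact formulation) of what such an estimate could look like under a simple pinhole approximation, where the projected sphere is treated as a circle of radius f·R/z.

```python
# Hypothetical sketch: recover the 3D sphere center from the area and
# centroid of its image projection, under a first-order pinhole
# approximation (the projection is treated as a circle of radius f*R/z;
# the exact projection is a conic and slightly biased for off-axis spheres).
import numpy as np

def sphere_center_from_blob(area_px, centroid_px, K, sphere_radius):
    """
    area_px      : area of the segmented sphere blob, in pixels
    centroid_px  : (u, v) center of mass of the blob, in pixel coordinates
    K            : 3x3 intrinsic camera matrix (assumed known)
    sphere_radius: physical radius of the calibration sphere (same unit as output)
    """
    fx, fy = K[0, 0], K[1, 1]
    f = 0.5 * (fx + fy)                 # mean focal length in pixels
    r_px = np.sqrt(area_px / np.pi)     # apparent circle radius in pixels
    z = f * sphere_radius / r_px        # depth of the sphere center
    # Back-project the centroid as the viewing ray of the sphere center.
    uv1 = np.array([centroid_px[0], centroid_px[1], 1.0])
    ray = np.linalg.inv(K) @ uv1        # ray direction at unit depth
    return ray * z                      # 3D center in the camera frame
```

Per-camera 3D centers collected over several sphere placements would then serve as the point correspondences for the pairwise orthogonal Procrustes step and the subsequent joint refinement described in the abstract.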