In open abdominal image-guided liver surgery, sparse measurements of the organ surface can be taken intraoperatively via a laser-range scanning device or a tracked stylus with relatively little impact on surgical workflow. We propose a novel nonrigid registration method which uses sparse surface data to reconstruct a mapping between the preoperative CT volume and the intraoperative patient space. The mapping is generated using a tissue mechanics model subject to boundary conditions consistent with surgical supportive packing during liver resection therapy. Our approach iteratively chooses parameters which define these boundary conditions such that the deformed tissue model best fits the intraoperative surface data. Using two liver phantoms, we gathered a total of five deformation datasets with conditions comparable to open surgery. The proposed nonrigid method achieved a mean target registration error (TRE) of 3.3 mm for targets dispersed throughout the phantom volume, using a limited region of surface data to drive the nonrigid registration algorithm, while rigid registration resulted in a mean TRE of 9.5 mm. In addition, we studied the effect of surface data extent, the inclusion of subsurface data, the trade-offs of using a nonlinear tissue model, robustness to rigid misalignments, and the feasibility in five clinical datasets.
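The iterative scheme described above can be illustrated with a minimal, hypothetical sketch. The `deform` function below is a toy stand-in for the tissue-mechanics model (a real implementation would solve a finite-element problem), and names such as `support_param` are invented for illustration; only the outer loop — choose boundary-condition parameters so the deformed model best fits the sparse surface data — mirrors the method.

```python
import numpy as np

def deform(model_pts, support_param):
    # Hypothetical stand-in for the tissue-mechanics solve: displace the
    # model surface along z in proportion to the packing parameter.
    out = model_pts.copy()
    out[:, 2] += support_param * np.exp(-model_pts[:, 0] ** 2)
    return out

def surface_misfit(model_pts, sparse_pts):
    # Mean closest-point distance from each sparse intraoperative point
    # to the deformed model surface.
    d = np.linalg.norm(sparse_pts[:, None, :] - model_pts[None, :, :], axis=2)
    return d.min(axis=1).mean()

def fit_support_param(model_pts, sparse_pts, candidates):
    # Iteratively choose the boundary-condition parameter whose deformed
    # model best fits the sparse surface data (coarse 1-D search here).
    errs = [surface_misfit(deform(model_pts, p), sparse_pts) for p in candidates]
    return candidates[int(np.argmin(errs))]

# Synthetic check: recover a known packing parameter from noiseless data.
rng = np.random.default_rng(0)
model = np.column_stack([rng.uniform(-1, 1, 200),
                         rng.uniform(-1, 1, 200),
                         np.zeros(200)])
observed = deform(model, 0.5)[::10]   # sparse subset of the true deformed surface
best = fit_support_param(model, observed, np.linspace(0, 1, 21))
print(best)   # expect 0.5
```

In practice the parameter space is multidimensional and the search would use a proper optimizer rather than a grid, but the fit-to-sparse-data objective is the same.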
Purpose: Binocular alignment typically includes motor fusion compensating for heterophoria. This study evaluated heterophoria and then accommodation and vergence responses during measurement of fusional ranges in infants and preschoolers.

Methods: Purkinje image eye tracking and eccentric photorefraction (MCS PowerRefractor) were used to record the eye alignment and accommodation of uncorrected infants (n = 17; 3–5 months old), preschoolers (n = 19; 2.5–5 years), and naïve functionally emmetropic adults (n = 14; 20–32 years; spherical equivalent [SE], +1 to −1 diopters [D]). Heterophoria was derived from the difference between monocular and binocular alignments while participants viewed naturalistic images at 80 cm. The presence or absence of fusion was then assessed after base-in (BI) and base-out (BO) prisms (2–40 prism diopters [pd]) were introduced.

Results: Mean (±SD) SE refractions were hyperopic in infants (+2.4 ± 1.2 D) and preschoolers (+1.1 ± 0.6 D). The average exophoria was similar (P = 0.11) across groups (Infants, −0.79 ± 2.5 pd; Preschool, −2.43 ± 2.0 pd; Adults, −1.0 ± 2.7 pd). Mean fusional vergence range also was similar (P = 0.1) for BI (Infants, 11.2 ± 2.5 pd; Preschool, 8.8 ± 2.8 pd; Adults, 11.8 ± 5.2 pd) and BO (Infants, 14 ± 6.6 pd; Preschool, 15.3 ± 8.3 pd; Adults, 20 ± 9.2 pd). Maximum change in accommodation to the highest fusible prism was positive (increased accommodation) for BO (Infants, 1.69 ± 1.4 D; Preschool, 1.35 ± 1.6 D; Adults, 1.22 ± 1.0 D) and negative for BI (Infants, −0.96 ± 1.0 D; Preschool, −0.78 ± 0.6 D; Adults, −0.62 ± 0.3 D), with a similar magnitude across groups (BO, P = 0.6; BI, P = 0.4).

Conclusions: Despite typical uncorrected hyperopia, infants and preschoolers exhibited small exophorias at 80 cm, similar to adults. All participants demonstrated substantial fusional ranges, providing evidence that even 3- to 5-month-old infants can respond to a large range of image disparities.
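The heterophoria derivation used in the study is a simple difference between the dissociated (monocular) and fused (binocular) alignments. A minimal sketch, with hypothetical alignment values and an assumed sign convention (negative = exophoria, an outward deviation):

```python
# Heterophoria in prism dioptres (pd) as the difference between
# monocular (dissociated) and binocular (fused) eye alignment.
# Sign convention assumed here: negative = exophoria.
def heterophoria_pd(monocular_alignment_pd, binocular_alignment_pd):
    return monocular_alignment_pd - binocular_alignment_pd

# Hypothetical example: an eye whose dissociated position is 2.4 pd
# outward relative to its fused position is a 2.4 pd exophore.
print(heterophoria_pd(-1.4, 1.0))   # -2.4
```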
Computational modelling demonstrates the principles and limitations of photorefraction to help users avoid potential measurement errors. Factors that could cause clinically significant errors in photorefraction estimates include high refractive error, vertex distance and magnification effects of a spectacle lens, increased higher-order monochromatic aberrations, and changes in primary spherical aberration with accommodation. The impact of these errors increases with increasing defocus.
To obtain a panoramic image that is clearer and richer in layers and texture features, we propose a novel multi-focus image fusion algorithm that combines the non-subsampled shearlet transform (NSST) with a residual network (ResNet). First, NSST decomposes a pair of input images into subband coefficients at different frequencies for subsequent feature processing. Then, ResNet is applied to fuse the low-frequency subband coefficients, while an improved gradient sum of Laplace energy (IGSML) processes the high-frequency feature information. Finally, the inverse NSST is applied to the fused coefficients of the different frequencies to obtain the final fused image. By using NSST, our method fully considers both the low-frequency global features and the high-frequency detail information in the images. For low-frequency coefficient fusion, the deep network structure of ResNet also captures the spatial features of the low-frequency coefficient images. IGSML uses directional gradients to process high-frequency subband coefficients at different levels and directions, which is more conducive to coefficient fusion. Experimental results show that the proposed method improves the structural features and edge texture of the fused images. INDEX TERMS Image fusion, multi-focus image fusion, NSST, ResNet.
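The decompose–fuse–reconstruct pipeline above can be sketched with deliberately simplified stand-ins: a box blur replaces the NSST low-frequency subband, plain averaging replaces the ResNet low-frequency fusion, and a local-energy maximum rule stands in for the IGSML focus measure. None of these stand-ins is the paper's method; only the pipeline shape is illustrated.

```python
import numpy as np

def box_blur(img, k=5):
    # Crude separable low-pass filter, standing in for the NSST
    # low-frequency subband (the paper uses a true shearlet transform).
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    kernel = np.ones(k) / k
    p = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="valid"), 1, p)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="valid"), 0, p)

def fuse(img_a, img_b):
    # 1) Decompose each input into low- and high-frequency parts.
    low_a, low_b = box_blur(img_a), box_blur(img_b)
    high_a, high_b = img_a - low_a, img_b - low_b
    # 2) Low-frequency fusion: plain averaging (the paper uses ResNet features).
    low = 0.5 * (low_a + low_b)
    # 3) High-frequency fusion: keep, per pixel, the coefficient with larger
    #    local energy (a crude proxy for the IGSML focus measure).
    e_a, e_b = box_blur(high_a ** 2), box_blur(high_b ** 2)
    high = np.where(e_a >= e_b, high_a, high_b)
    # 4) "Inverse transform": recombine the fused subbands.
    return low + high

rng = np.random.default_rng(1)
sharp = rng.uniform(0, 1, (32, 32))                # synthetic all-in-focus image
left_focus = sharp.copy();  left_focus[:, 16:] = box_blur(sharp)[:, 16:]
right_focus = sharp.copy(); right_focus[:, :16] = box_blur(sharp)[:, :16]
fused = fuse(left_focus, right_focus)
# The fused image should be closer to the all-in-focus reference than
# either partially defocused input.
print(np.abs(fused - sharp).mean() < np.abs(left_focus - sharp).mean())
```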
Sentiment analysis, including aspect-level sentiment classification, is a fundamental natural language processing (NLP) task. Aspect-level analysis can provide complete and in-depth results: words influence the sentiment polarity of a sentence differently depending on their context, and polarity varies across the different aspects of a sentence. Recurrent neural networks (RNNs) are regarded as effective models for NLP and have performed well in aspect-level sentiment classification. Extensive literature exists on sentiment classification using convolutional neural networks (CNNs); however, no prior work applies CNNs to aspect-level sentiment classification. In the present study, we develop a CNN model for aspect-level sentiment classification. In our model, attention-based input layers are incorporated into the CNN to introduce aspect information. In experiments comparing our model with others on a benchmark Twitter dataset, incorporating aspect information into the CNN improves aspect-level sentiment classification performance without using a syntactic parser or other linguistic features.
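The idea of an attention-based input layer feeding a text CNN can be sketched in a few lines of numpy. All dimensions, the dot-product form of the attention, and the single-layer architecture below are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, emb_dim, n_filters, kernel = 8, 16, 4, 3

words = rng.normal(size=(seq_len, emb_dim))   # word embeddings for one sentence
aspect = rng.normal(size=emb_dim)             # embedding of the aspect term

# Attention-based input layer: score each word against the aspect, then
# reweight the word embeddings so aspect-relevant words dominate.
scores = words @ aspect                        # (seq_len,)
weights = np.exp(scores - scores.max())
weights /= weights.sum()                       # softmax over positions
attended = words * weights[:, None]            # aspect-weighted inputs

# 1-D convolution over the attended sequence, then ReLU and
# max-over-time pooling, as in a standard text CNN.
filters = rng.normal(size=(n_filters, kernel, emb_dim))
conv = np.array([
    [np.sum(attended[t:t + kernel] * f) for t in range(seq_len - kernel + 1)]
    for f in filters
])                                             # (n_filters, seq_len - kernel + 1)
features = np.maximum(conv, 0).max(axis=1)     # (n_filters,)

# Final classification layer (3 polarities: negative / neutral / positive).
w_out = rng.normal(size=(3, n_filters))
logits = w_out @ features
probs = np.exp(logits - logits.max()); probs /= probs.sum()
print(probs.shape, round(float(probs.sum()), 6))   # (3,) 1.0
```

A trained model would learn `filters` and `w_out` by backpropagation; here they are random, since only the data flow is being shown.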