We present a fully automated process for segmentation and classification of multispectral magnetic resonance (MR) images. This hybrid neural network method uses a Kohonen self-organizing neural network for segmentation and a multilayer backpropagation neural network for classification. To separate different tissue types, the process uses the standard T1-, T2-, and PD-weighted MR images acquired in clinical examinations. Volumetric measurements of brain structures, relative to intracranial volume, were calculated for an index transverse section in 14 normal subjects (median age 25 years; seven male, seven female). This index slice was at the level of the basal ganglia, included both the genu and splenium of the corpus callosum, and generally showed the putamen and lateral ventricle. An intraclass correlation of this automated segmentation and classification of tissues with the accepted standard of radiologist identification for the index slice in the 14 volunteers demonstrated coefficients (ri) of 0.91, 0.95, and 0.98 for white matter, gray matter, and ventricular cerebrospinal fluid (CSF), respectively. An analysis of variance for estimates of brain parenchyma volumes in five volunteers imaged five times each demonstrated high intrasubject reproducibility with a significance of at least p < 0.05 for white matter, gray matter, and white/gray partial volumes. The population variation, across 14 volunteers, demonstrated little deviation from the averages for gray and white matter, while partial volume classes exhibited a slightly higher degree of variability. This fully automated technique produces reliable and reproducible MR image segmentation and classification while eliminating intra- and interobserver variability.
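To make the two-stage method concrete, here is a minimal sketch, assuming nothing beyond the description above: a small one-dimensional Kohonen self-organizing map clusters multispectral voxel vectors (T1, T2, PD intensities), and a backpropagation multilayer perceptron would then label the resulting map nodes as tissue classes. This is not the authors' implementation; the map size, learning schedule, and the hypothetical node_features/node_labels used to train the classifier are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier  # used in the commented classification step below

def train_som(voxels, n_nodes=16, epochs=20, lr0=0.5):
    """Train a 1-D Kohonen map; voxels is an (N, 3) array of T1/T2/PD intensities."""
    rng = np.random.default_rng(0)
    weights = voxels[rng.choice(len(voxels), n_nodes, replace=False)].astype(float)
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)                          # decaying learning rate
        radius = max(1.0, (n_nodes / 2) * (1 - epoch / epochs))  # shrinking neighborhood
        for x in voxels[rng.permutation(len(voxels))[:2000]]:    # subsample voxels each epoch
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1)) # best-matching unit
            h = np.exp(-((np.arange(n_nodes) - bmu) ** 2) / (2 * radius ** 2))
            weights += lr * h[:, None] * (x - weights)           # pull neighborhood toward x
    return weights

def segment(voxels, weights):
    """Assign each voxel to its nearest SOM node (the unsupervised segmentation step)."""
    return np.argmin(np.linalg.norm(voxels[:, None, :] - weights[None, :, :], axis=2), axis=1)

# Classification step (hypothetical labelled data): a backpropagation MLP maps the SOM
# codebook vectors to tissue classes (white matter, gray matter, CSF, partial volumes).
# node_features, node_labels = ...   # e.g. expert-labelled nodes from an index slice
# mlp = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000).fit(node_features, node_labels)
# tissue_per_voxel = mlp.predict(weights)[segment(voxels, weights)]
```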
One step at a time: DNA linkers were placed at defined locations and in defined 3D orientations on a colloidal nanoparticle. Because the ligand‐replacement strategy was carried out sequentially, the DNA linkers segregated maximally, producing nanoparticles with linkers at 90° or 180° angles (see picture). These building blocks should enable the assembly of anisotropic nanostructures with precisely designed geometry and complex functionality.
The assembly of nanoparticles into complicated, anisotropic shapes holds much promise for advanced materials and devices. Developing effective and efficient anisotropic mono‐functionalization strategies is an essential step toward realizing this potential. By attaching DNA strands to the nanoparticle one at a time, a DNA‐nanoparticle building block can carry distinct DNA sequences at different locations on the particle surface. Because this technology can incorporate nanoparticles of different compositions, generating toolboxes of nanoparticle building blocks (“nano‐toolboxes”) with DNA at defined locations and in defined 3D orientations, it promises not only complicated shapes but also the ability to tune the function of the assembly. The challenges of programmable and scalable self‐assembly of multifunctional nanostructures from DNA conjugated to nanoparticles are reviewed. The first challenge is to control the assembly process so that designed products are formed and unwanted products are minimized. The design problem for nanostructure construction is both physically and computationally complex; thus, the other major challenge is to devise design methodologies that move nanostructure construction from trial and error to principled approaches. Strategies to overcome these challenges, which afford greater control over the final shapes and functions of the self‐assembled nanostructures, are also presented. Finally, future perspectives on nano‐toolboxes and their promise in applications such as multifunctional, multicolor, and multimodal contrast nanoagents for medical therapy and diagnostics (theranostics) are described.
DNA-based computing uses the tendency of nucleotide bases to bind (hybridize) in preferred combinations to perform computation. Depending on reaction conditions, oligonucleotides can bind despite noncomplementary base pairs. These mismatched hybridizations are a source of false positives and false negatives, which limit the efficiency and scalability of DNA-based computing. The ability of specific base sequences to support error-tolerant Adleman-style computation is analyzed, and criteria are proposed to increase reliability and efficiency. A method is given to calculate reaction conditions from estimates of DNA melting.

[S0031-9007(97)04987-9] PACS numbers: 89.70.+c, 87.15.By, 89.80.+h

Adleman [1] introduced a way to do computations with DNA, and applied the technique to the solution of an NP-complete problem, the Hamiltonian path problem (HPP) [2]. In general, a DNA-based computation involves three steps. First, the problem instance is encoded in a collection of DNA oligonucleotides. Second, template-matching reactions, or hybridizations, between oligonucleotides produce double-stranded molecules, which ligase joins into longer molecules. These long molecules potentially represent the result of the computation. Third, the results are extracted with techniques such as polymerase chain reaction (PCR) and gel electrophoresis. The basic processing power of a DNA-based computation, as suggested by Adleman [1], lies in the massive number of string comparisons that occur during the template-matching reactions between DNA oligonucleotides. Thus, a fundamental step in a DNA computation is the hybridization between oligonucleotides. Other proposals for DNA computation [3][4][5] continue to rely on the mechanism of the template-matching hybridization reaction. Most assume that the hybridizations between oligonucleotides occur error free. Nevertheless, errors, i.e., double strands which are not fully Watson-Crick complementary, are a consequence of the cooperative and uncertain nature of the chemistry on which the technique is based, and cannot be eliminated entirely.

To make DNA-based computing a reliable technique, the first step is to ensure that false positives and negatives occur with negligible probabilities. If many incorrect or mismatched hybridizations are possible, then false positives (i.e., DNA strands which appear to be valid solutions, but actually are not) can occur. Likewise, if DNA oligonucleotides are used up in unproductive mismatches, there will be fewer available for formation of the result, and a false negative, or the failure to detect a correct answer when one is present, is possible. The probability of a less-than-perfect hybridization depends on the reaction conditions of the hybridization, with temperature being the most significant [6,7]. In this paper, the Hamming distance between oligonucleotides is explored as a criterion for reliable DNA solution of HPP. As a first estimate for a reliable encoding, the required distance can be estimated from the melting temperature, which is the temperature at which ...
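As a concrete illustration of the kind of encoding criterion discussed above, the following sketch (not the paper's procedure) rejects oligonucleotide code-word sets in which any pair of words, or a word and the reverse complement of another, falls below a chosen Hamming-distance threshold, and uses the simple Wallace rule, 2(A+T) + 4(G+C), as a rough melting-temperature estimate. The minimum-distance threshold and the example 8-mers are illustrative assumptions.

```python
from itertools import combinations

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def hamming(a, b):
    """Number of mismatched positions between two equal-length sequences."""
    return sum(x != y for x, y in zip(a, b))

def reverse_complement(seq):
    return "".join(COMPLEMENT[base] for base in reversed(seq))

def wallace_tm(seq):
    """Rough melting temperature (degrees C) via the Wallace rule: 2(A+T) + 4(G+C)."""
    return 2 * (seq.count("A") + seq.count("T")) + 4 * (seq.count("G") + seq.count("C"))

def encoding_is_reliable(words, min_dist=4):
    """Reject encodings in which any pair of code words, or a word and the reverse
    complement of another, differ in fewer than min_dist positions (mismatch risk)."""
    for a, b in combinations(words, 2):
        if hamming(a, b) < min_dist or hamming(a, reverse_complement(b)) < min_dist:
            return False
    return True

codes = ["ATCGGCTA", "TTGACCGT", "CAGTTAGC"]          # illustrative 8-mer code words
print(encoding_is_reliable(codes), [wallace_tm(w) for w in codes])
```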