Support for multimedia applications by general purpose computing platforms has been the subject of considerable research. Much of this work is based on an evolutionary strategy in which small changes to existing systems are made. The approach adopted here is to start ab initio with no backward compatibility constraints. This leads to a novel structure for an operating system. The structure aims to decouple applications from one another and to provide multiplexing of all resources, not just the CPU, at a low level. The motivation for this structure, a design based on the structure, and its implementation on a number of hardware platforms are described.

Each virtual processor sees a performance which is influenced by the load on the other virtual processors, and mechanisms to control this interference are generally not available. Multimedia applications require such mechanisms. One way of controlling this interference is by providing multiple real processors. For example, many multimedia applications (or parts thereof) run on processors on peripheral cards so that the main processor is not involved. Moreover, the code running on the peripheral is likely to be embedded, and there is no danger of competing applications using the peripheral at the same time. The same approach is also used in mainframes, where the use of channel processors reduces the I/O demands on the central processors, in particular ensuring that the central processors do not get overloaded by I/O interrupts.

Our aim in Nemesis is to allow a general purpose processor to be used to provide the functions one would find in a specialised DSP peripheral, while providing the same control of interference across virtual processors as can be achieved with distinct hardware. We wish to retain the flexibility of the virtual processor system so that resources can be used more efficiently than in a dedicated-peripheral approach.

In approaching the design of an operating system with these goals, the immediate question of revolution versus evolution arises. Should one attempt to migrate a current operating system (or indeed use a current operating system) in order to meet these goals, or should one start afresh? The reasons why current general purpose operating systems are not appropriate are well established. Similarly, hard real-time solutions which require static analysis are not appropriate in a situation where the application mix is dynamic. General purpose operating systems with "real-time threads", in which the real-time behaviour is provided by static priority, are also inappropriate, unless one is running a single multimedia application or can afford to perform an analysis of the complete system in order to assign priorities. A better solution might be to take an existing operating system and modify its scheduling system to support multimedia applications; perhaps one reason for the difficulty in performing such a scheduler transplant is that knowledge of the characteristics of the scheduler often migrates to other components, making the effect of replacement unpredictable. This, toget...
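To make the idea of bounding interference between virtual processors concrete, the sketch below simulates a simple proportional-share CPU scheduler in which each domain is guaranteed a fixed fraction of processor time, so load in one domain cannot erode another's allocation. This is only an illustration of the general technique, not the Nemesis scheduler; the Domain/schedule names and the share values are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Domain:
    name: str
    share: float         # guaranteed fraction of the CPU (shares sum to <= 1)
    credit: float = 0.0  # entitlement accrued but not yet consumed

def schedule(domains, slice_ms=1.0, ticks=10):
    """Each tick, every domain accrues credit in proportion to its share;
    the domain with the most outstanding credit runs for one slice."""
    for _ in range(ticks):
        for d in domains:
            d.credit += d.share * slice_ms
        chosen = max(domains, key=lambda d: d.credit)
        chosen.credit -= slice_ms
        yield chosen.name

# Three domains with 50/30/20 shares: over time each receives roughly that
# proportion of slices, regardless of how greedy the others are.
doms = [Domain("audio", 0.5), Domain("video", 0.3), Domain("batch", 0.2)]
print(list(schedule(doms, ticks=10)))
```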
Bugs in kernel extensions remain one of the main causes of poor operating system reliability despite proposed techniques that isolate extensions in separate protection domains to contain faults. We believe that previous fault isolation techniques are not widely used because they cannot isolate existing kernel extensions with low overhead on standard hardware. This is a hard problem because these extensions communicate with the kernel using a complex interface and they communicate frequently. We present BGI (Byte-Granularity Isolation), a new software fault isolation technique that addresses this problem. BGI uses efficient byte-granularity memory protection to isolate kernel extensions in separate protection domains that share the same address space. BGI ensures type safety for kernel objects and it can detect common types of errors inside domains. Our results show that BGI is practical: it can isolate Windows drivers without requiring changes to the source code and it introduces a CPU overhead between 0 and 16%. BGI can also find bugs during driver testing. We found 28 new bugs in widely used Windows drivers.
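The abstract describes isolating extensions by associating an access right with each byte of the shared address space. The toy model below illustrates that idea: a rights table grants a driver write access to one buffer and rejects any access outside it. BGI itself is implemented for C drivers with compact rights tables and compiler-inserted checks; the RightsTable class, the single write right, and the addresses here are hypothetical and purely illustrative.

```python
from enum import Enum

class Right(Enum):
    NONE = 0
    WRITE = 1   # one "write" right stands in for BGI's richer set of rights

class RightsTable:
    """Toy byte-granularity rights table: (domain, byte address) -> right."""
    def __init__(self):
        self._rights = {}

    def grant(self, domain, addr, size, right):
        for a in range(addr, addr + size):
            self._rights[(domain, a)] = right

    def revoke(self, domain, addr, size):
        for a in range(addr, addr + size):
            self._rights.pop((domain, a), None)

    def check_write(self, domain, addr, size):
        for a in range(addr, addr + size):
            if self._rights.get((domain, a), Right.NONE) is not Right.WRITE:
                raise PermissionError(f"{domain}: illegal write at {hex(a)}")

# The kernel grants a driver write access to one 64-byte buffer; a write that
# strays a single byte past the buffer is detected.
table = RightsTable()
table.grant("driver.sys", 0x1000, 64, Right.WRITE)
table.check_write("driver.sys", 0x1000, 64)      # within the grant: allowed
try:
    table.check_write("driver.sys", 0x1000, 65)  # one byte too far: rejected
except PermissionError as err:
    print(err)
```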
Background: Mammographic density has been shown to be a strong independent predictor of breast cancer and a causative factor in reducing the sensitivity of mammography. There remain questions as to the use of mammographic density information in the context of screening and risk management, and of the association with cancer in populations known to be at increased risk of breast cancer.
Aim: To assess the association of breast density with presence of cancer by measuring mammographic density visually as a percentage, and with two automated volumetric methods, Quantra™ and VolparaDensity™.
Methods: The TOMosynthesis with digital MammographY (TOMMY) study of digital breast tomosynthesis in the Breast Screening Programme of the National Health Service (NHS) of the United Kingdom (UK) included 6020 breast screening assessment cases (of whom 1158 had breast cancer) and 1040 screened women with a family history of breast cancer (of whom two had breast cancer). We assessed the association of each measure with breast cancer risk in these populations at enhanced risk, using logistic regression adjusted for age and total breast volume as a surrogate for body mass index (BMI).
Results: All density measures showed a positive association with presence of cancer, and all declined with age. The strongest effect was seen with Volpara absolute density, with a significant 3% (95% CI 1–5%) increase in risk per 10 cm3 of dense tissue. The effect of Volpara volumetric density on risk was stronger for large and grade 3 tumours.
Conclusions: Automated absolute breast density is a predictor of breast cancer risk in populations at enhanced risk due to either positive mammographic findings or family history. In the screening context, density could be a trigger for more intensive imaging.
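As a sketch of the analysis described in the Methods above (logistic regression for presence of cancer, adjusted for age and total breast volume, with the dense-tissue effect expressed per 10 cm3), the following example fits such a model on synthetic data with statsmodels. The variable names, effect sizes, and simulated cohort are assumptions for illustration only, not the TOMMY data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000

# Synthetic cohort: age, total breast volume (surrogate for BMI) and absolute
# dense (fibroglandular) volume, both in cm^3.
age = rng.uniform(47, 73, n)
total_volume = rng.lognormal(6.5, 0.5, n)
dense_volume = rng.lognormal(3.5, 0.6, n)

# Simulate case/control status with a small positive effect of dense volume.
logit = -2.0 + 0.003 * dense_volume + 0.01 * (age - 60)
cancer = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

# Logistic regression adjusted for age and total volume; scaling the dense
# volume by 10 makes the fitted odds ratio read "per 10 cm^3 of dense tissue".
X = sm.add_constant(pd.DataFrame({
    "dense_per_10cm3": dense_volume / 10.0,
    "age": age,
    "total_volume": total_volume,
}))
fit = sm.Logit(cancer, X).fit(disp=False)

or_per_10 = np.exp(fit.params["dense_per_10cm3"])
lo, hi = np.exp(fit.conf_int().loc["dense_per_10cm3"])
print(f"OR per 10 cm^3 dense tissue: {or_per_10:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```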
Purpose: To evaluate the results from two software tools for measurement of mammographic breast density and compare them with observer-based scores in a large cohort of women.
Materials and Methods: Following written informed consent, a data set of 36 281 mammograms from 8867 women was collected from six United Kingdom centers in an ethically approved trial. Breast density was assessed by one of 26 readers on a visual analog scale and with two automated density tools. Mean differences were calculated as the mean of all the individual percentage differences between each measurement for each case (woman). Agreement in total breast volume, fibroglandular volume, and percentage density was assessed with the Bland-Altman method. Association with observers' scores was calculated by using the Pearson correlation coefficient (r).
Results: Correlation between the Quantra and Volpara outputs for total breast volume was r = 0.97 (P < .001), with a mean difference of 43.5 cm3 for all cases, representing 5.0% of the mean total breast volume. Correlation of the two measures was lower for fibroglandular volume (r = 0.86, P < .001). The mean difference was 30.3 cm3 for all cases, representing 21.2% of the mean fibroglandular tissue volume; Quantra gave the larger value, and the difference tended to increase with volume. For the two measures of percentage volume density, the mean difference was 1.61 percentage points (r = 0.78, P < .001). Comparison of observers' scores with the area-based density given by Quantra yielded a low correlation (r = 0.55, P < .001). Correlations of observers' scores with the volumetric density results gave r values of 0.60 (P < .001) and 0.63 (P < .001) for Quantra and Volpara, respectively.
Conclusion: Automated techniques for measuring breast density show good correlation with each other, but they are poorly correlated with observers' scores. However, the automated techniques do give different results, which should be considered when informing personalized patient imaging.

Breast density can be assessed visually, as a percentage (11) or within discrete ranges, such as the four-point Breast Imaging Reporting and Data System (BI-RADS) (12) scale or the Boyd five-point scale (5). Studies suggest that training and experience are essential in ensuring that the scores are accurate and reproducible (12,13). The introduction of full-field digital mammography technologies has provided an opportunity to implement automated breast density measurement algorithms, which had initially been developed for digitized analog mammograms (11,14). These algorithms work by applying thresholds to the pixel values within the digital image to identify the area of the image that contains the breast, and then determining the proportion of that breast which contains fibroglandular tissue. For example, the pixel values with the highest signal (radiation dose detected by the pixel) can identify the areas of the image where ...
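A minimal sketch of the agreement analysis described above: the Bland-Altman mean difference with 95% limits of agreement, plus the Pearson correlation coefficient, for two paired volumetric measurements. The numbers below are hypothetical, not study data.

```python
import numpy as np
from scipy.stats import pearsonr

def bland_altman(a, b):
    """Mean difference and 95% limits of agreement between two methods."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    md, sd = diff.mean(), diff.std(ddof=1)
    return md, (md - 1.96 * sd, md + 1.96 * sd)

# Hypothetical paired fibroglandular-volume readings (cm^3) for six women.
quantra_fgv = np.array([120.0, 95.0, 160.0, 80.0, 140.0, 110.0])
volpara_fgv = np.array([ 90.0, 70.0, 125.0, 65.0, 105.0,  85.0])

r, p = pearsonr(quantra_fgv, volpara_fgv)
md, (low, high) = bland_altman(quantra_fgv, volpara_fgv)
print(f"Pearson r = {r:.2f} (P = {p:.4f})")
print(f"Mean difference = {md:.1f} cm^3, "
      f"95% limits of agreement [{low:.1f}, {high:.1f}]")
```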