Waiting times have been a central concern in the English NHS, where care is provided free at the point of delivery and is rationed by waiting time. Pro-market reforms introduced in the NHS in the 1990s were not accompanied by large drops in waiting times. In response, the English government in 2000 adopted an aggressive policy of targets, coupled with publication of waiting times data at hospital level and strong sanctions for poorly performing hospital managers. This regime has been dubbed 'targets and terror'. We estimate the effect of the English target regime on waiting times for hospital care after 2001 through a comparative analysis with Scotland, a neighbouring country with the same healthcare system that did not adopt the target regime. We estimate difference-in-differences models of the proportion of people on the waiting list who waited over 6, 9 and 12 months. Comparisons between England and Scotland are sensitive to whether published or unpublished data are used but, regardless of the data source, the 'targets and terror' regime in England lowered the proportion of people waiting for elective treatment relative to Scotland.
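For readers unfamiliar with the method, the canonical two-group, two-period difference-in-differences specification underlying such an analysis (a sketch only; the study's exact covariates, time periods and error structure are not reproduced here) can be written as

\[ y_{it} = \alpha + \beta\,\mathrm{England}_i + \gamma\,\mathrm{Post}_t + \delta\,(\mathrm{England}_i \times \mathrm{Post}_t) + \varepsilon_{it} \]

where \(y_{it}\) is the proportion of the waiting list in country \(i\) waiting longer than the chosen threshold (6, 9 or 12 months) in period \(t\), \(\mathrm{Post}_t\) indicates periods after the 2001 introduction of the target regime, and \(\delta\) is the difference-in-differences estimate of the regime's effect.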
Performance targets are commonly used in the public sector, despite their well-known problems when organisations have multiple objectives and performance is difficult to measure. It is possible that such targets may work where there is considerable consensus that performance needs to be improved. We investigate this possibility by examining the response of the English National Health Service (NHS) to waiting time targets. Long waiting times have been a key issue for the NHS for many years. Using a natural policy experiment exploiting differences between countries of the UK, supplemented with a panel of data on English hospitals, we examine whether high-profile targets to reduce waiting times met their goals of reducing waiting times without diverting activity from other less well monitored aspects of health care. Using this robust design, we find that targets led to a fall in waiting times without apparent reductions in other aspects of patient care.
Abstract. A generic DPA strategy is one which is able to recover secret information from physically observable device leakage without any a priori knowledge about the device's leakage characteristics. Here we provide much-needed clarification on results emerging from the existing literature, demonstrating precisely that such methods (strictly defined) are inherently restricted to a very limited selection of target functions. Continuing to search related techniques for a 'silver bullet' generic attack appears a bootless errand. However, we find that a minor relaxation of the strict definition (the incorporation of some minimal non-device-specific intuition) produces scope for generic-emulating strategies, able to succeed against a far wider range of targets. We present stepwise regression as an example of such, and demonstrate its effectiveness in a variety of scenarios. We also give some evidence that its practical performance matches that of 'best bit' DoM attacks, which we take as further indication of the necessity of performing profiled attacks in the context of device evaluations. Keywords: side-channel analysis, differential power analysis
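As an illustration of a regression-based distinguisher of the generic-emulating kind described above, the following minimal sketch scores each key guess by how well a linear model in the bits of the predicted intermediate value explains the measured traces. It is not the authors' construction: the single-byte key space, the bitwise design matrix and the toy target function are assumptions of this sketch, and the stepwise term selection discussed in the paper is omitted for brevity.

    import numpy as np

    def lra_distinguisher(traces, plaintexts, target, n_bits=8):
        # Score each 8-bit key guess by the best R^2 obtained when regressing the
        # traces (shape: n_traces x n_samples) on a constant plus the individual
        # bits of the predicted intermediate value.
        n = traces.shape[0]
        scores = np.zeros(256)
        for k in range(256):
            v = target(plaintexts, k)
            X = np.column_stack([np.ones(n)] + [(v >> b) & 1 for b in range(n_bits)])
            beta, *_ = np.linalg.lstsq(X, traces, rcond=None)
            resid = traces - X @ beta
            ss_res = (resid ** 2).sum(axis=0)
            ss_tot = ((traces - traces.mean(axis=0)) ** 2).sum(axis=0)
            scores[k] = np.max(1.0 - ss_res / ss_tot)
        return scores

    # Toy usage: a real attack would use, e.g., the first-round AES S-box output
    # S(p XOR k) as the intermediate value; a placeholder target is assumed here.
    # scores = lra_distinguisher(traces, plaintexts, target=lambda p, k: p ^ k)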
Abstract. The resistance of cryptographic implementations to side-channel analysis is a matter of considerable interest to those concerned with information security. It is particularly desirable to identify the attack methodology (e.g. differential power analysis using correlation or distance-of-means as the distinguisher) able to produce the best results. Such attempts are complicated by the many and varied factors contributing to attack success: the device power consumption characteristics, an attacker's power model, the distinguisher by which measurements and model predictions are compared, the quality of the estimations, and so on. Previous work has delivered partial answers for certain restricted scenarios. In this paper we assess the effectiveness of mutual information-based differential power analysis within a generic and comprehensive evaluation framework. Complementary to existing work, we present several notions/characterisations of attack success with direct implications for the amount of data required. We are thus able to identify scenarios in which mutual information offers performance advantages over other distinguishers. Furthermore, we observe an interesting feature, unique to the mutual information-based distinguisher, resembling a type of stochastic resonance, which could potentially enhance the effectiveness of such attacks over other methods in certain noisy scenarios.
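To make the mutual information-based distinguisher concrete, the following minimal sketch estimates, for each key guess, the mutual information between a single leakage sample per trace and a Hamming-weight prediction of the targeted intermediate value, using a simple histogram estimator. The single-sample leakage, the Hamming-weight power model and the fixed bin count are assumptions of this sketch, not the estimator or evaluation framework used in the paper.

    import numpy as np

    def hamming_weight(x):
        # Hamming weight of each byte value.
        return np.unpackbits(np.asarray(x, dtype=np.uint8)[:, None], axis=1).sum(axis=1)

    def mi_distinguisher(leakage, plaintexts, target, n_bins=16):
        # Histogram estimate of I(L; M_k) for each key guess k, where L is one
        # leakage sample per trace and M_k is the Hamming weight of the
        # predicted intermediate value target(plaintext, k).
        edges = np.histogram_bin_edges(leakage, bins=n_bins)
        l_idx = np.clip(np.digitize(leakage, edges) - 1, 0, n_bins - 1)
        scores = np.zeros(256)
        for k in range(256):
            m = hamming_weight(target(plaintexts, k))        # values in 0..8
            joint = np.zeros((n_bins, 9))
            np.add.at(joint, (l_idx, m), 1.0)
            joint /= joint.sum()
            p_l = joint.sum(axis=1, keepdims=True)
            p_m = joint.sum(axis=0, keepdims=True)
            nz = joint > 0
            scores[k] = np.sum(joint[nz] * np.log2(joint[nz] / (p_l @ p_m)[nz]))
        return scores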
Abstract. The ability to make meaningful comparisons between side-channel distinguishers is important both to attackers seeking an optimal strategy and to designers wishing to secure a device against the strongest possible threat. The usual experimental approach requires the distinguishing vectors to be estimated: outcomes do not fully represent the inherent theoretic capabilities of distinguishers and do not provide a basis for conclusive, like-for-like comparisons. This is particularly problematic in the case of mutual information-based side-channel analysis (MIA), which is notoriously sensitive to the choice of estimator. We propose an evaluation framework which captures those theoretic characteristics of attack distinguishers having the strongest bearing on an attacker's general ability to estimate with practical success, thus enabling like-for-like comparisons between different distinguishers in various leakage scenarios. We apply our framework to an evaluation of MIA relative to its rather more well-established correlation-based predecessor and a proposed variant inspired by the Kolmogorov-Smirnov distance. Our analysis makes sense of the rift between the a priori reasoning in favour of MIA and the disappointing empirical findings of previous comparative studies, and moreover reveals several unprecedented features of the attack distinguishers in terms of their sensitivity to noise. It also explores, to our knowledge for the first time, theoretic properties of near-generic power models previously proposed (and experimentally verified) for use in attacks targeting injective functions.
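To illustrate the distinguishers being compared, minimal versions of the correlation score and a Kolmogorov-Smirnov-style score are sketched below. The size-weighted partition-versus-global form of the KS score and the implicit power model are assumptions of this sketch rather than the exact variants evaluated in the paper.

    import numpy as np
    from scipy.stats import ks_2samp

    def cpa_score(leakage, model):
        # Absolute Pearson correlation between a 1-D leakage sample and the
        # model predictions for one key guess.
        l = leakage - leakage.mean()
        m = model - model.mean()
        return np.abs((l * m).mean() / (l.std() * m.std()))

    def ks_score(leakage, model):
        # Size-weighted Kolmogorov-Smirnov distance between each model-conditioned
        # leakage distribution and the overall leakage distribution.
        score = 0.0
        for v in np.unique(model):
            subset = leakage[model == v]
            score += (len(subset) / len(leakage)) * ks_2samp(subset, leakage).statistic
        return score

    # Either score is used to rank key guesses: evaluate it with
    # model = power_model(target(plaintexts, k)) for each candidate k and keep
    # the guess that maximises the score.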