To assess maturity distributions of shelled-stock peanut lots, a method was developed to characterize peanut kernels into one of three possible maturity classes based on testa texture and color and kernel shape. Kernels having a testa with longitudinal wrinkles, a raisin-like texture, light color, and a slightly elongated shape were classed Immature and were predominately shelled from pods in the Hull-Scrape categories White, Yellow I, and early-Yellow II. Kernels with a smooth testa, pink to dark pink in color, and a more rounded appearance were classed Mid-mature and were predominately shelled from pods in the late-Yellow II, Orange, and early-Brown Hull-Scrape classes. Kernels with a waffle-like surface texture, a dark pink to brown testa, and a more rounded appearance were classed Mature and were predominately shelled from pods in the mid- and late-Brown and the Black Hull-Scrape categories. Attempts to automate the system using color alone were unsuccessful; to be a reliable maturity sorting technique, both testa texture and color pattern had to be considered. (Equipment brands and manufacturers are given as information for the reader and are not an endorsement to the exclusion of other products which may perform the same function.)

A 1987 study reported that maturity and size of kernels within a cultivar are related. Therefore, the current size-related market classes of shelled-stock peanut (Jumbo, Medium, No. 1) reflect a degree of maturity. However, kernel size and maturity are not perfectly correlated (Sanders, 1989); varying environmental conditions can result in small mature kernels or large immature kernels. Tollner and Hung (1993) used NMR readings of moist and dried peanuts to assess peanut maturity, and Whitaker et al. (1987) found that near-infrared reflectance (NIR) could be used to measure kernel maturity.

Past research has determined that "shriveled" or "wrinkled" testa are indicators of immaturity (Parham, 1942; Mixon, 1963; Aristizabal et al., 1969). Pickett (1950) noted that a reliable and simple method of determining maturity of developing peanut kernels included a combination of seed texture and testa color. Schenk (1961) also used seed surface texture (wrinkled, smooth) and testa color (white to pink to red with brown splotches) to describe the seed maturation process. Pattee et al. (1970, refined in 1974) gave a detailed description of the characteristics associated with kernel maturation.
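The decision logic described above could be encoded as a simple rule-based classifier. The sketch below is purely illustrative (the Kernel type, feature encoding, and category names are hypothetical, not the authors' implementation); it reflects the finding that texture must be considered alongside color:

```python
# Hypothetical sketch of the rule-based maturity classes described above.
# Feature names and categories are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Kernel:
    texture: str      # "wrinkled", "smooth", or "waffle"
    testa_color: str  # "light", "pink", "dark_pink", or "brown"
    shape: str        # "elongated" or "rounded"

def classify_maturity(k: Kernel) -> str:
    # Color alone proved unreliable, so texture is checked in every rule.
    if k.texture == "wrinkled" and k.testa_color == "light" and k.shape == "elongated":
        return "Immature"
    if k.texture == "smooth" and k.testa_color in ("pink", "dark_pink"):
        return "Mid-mature"
    if k.texture == "waffle" and k.testa_color in ("dark_pink", "brown"):
        return "Mature"
    return "Unclassified"

print(classify_maturity(Kernel("waffle", "brown", "rounded")))  # -> Mature
```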
This paper is a report from the Extreme Events Working Party. It considers some of the difficulties in calculating capital buffers to cover potential losses, and the reasons why a purely mechanical approach to calculating capital buffers may not be possible or justified. A range of tools and techniques is presented to help address some of the difficulties identified.
Under the European Union’s Solvency II regulations, insurance firms are required to use a one-year VaR (Value at Risk) approach. This involves a one-year projection of the balance sheet and requires sufficient capital to remain solvent in 99.5% of outcomes. The Solvency II Internal Model risk calibrations require annual changes in market indices/term structures to estimate the risk distribution for each Internal Model risk driver. This presents a significant challenge for calibrators in terms of (i) robustness of the calibration, which must be relevant to current market regimes while still representing the historically observed worst crises, and (ii) stability of the calibration model year on year as new information arrives. Both points need careful consideration to avoid credibility issues with the Solvency Capital Requirement (SCR) calculation, whose results are subject to high levels of uncertainty.

For market risks, common industry practice to compensate for the limited number of historic annual data points is to use overlapping annual changes. Overlapping changes are dependent on each other, and this dependence can cause issues in estimation, statistical testing, and communication of uncertainty levels around risk calibrations. This paper discusses the issues with the use of overlapping data when producing risk calibrations for an Internal Model, and compares the overlapping data approach with the alternative non-overlapping data approach. The bias and mean squared error of the first four cumulants are compared under four different statistical models. For some statistical models it is found that overlapping data can be used with bias corrections to obtain results as unbiased as those from non-overlapping data, but with significantly lower mean squared errors. For more complex statistical models (e.g. GARCH), it is found that published bias corrections for non-overlapping and overlapping datasets do not result in unbiased cumulant estimates and/or lead to increased variance of the process.

To test the goodness of fit of probability distributions to the datasets, it is common to use statistical tests. Most of these tests do not function with overlapping data, as overlapping data breach the independence assumption underlying them. We present and test an adjustment to one of these tests (the Kolmogorov-Smirnov goodness-of-fit test) to allow for overlapping data. Finally, we explore methods of converting high-frequency (e.g. monthly) data to low-frequency (e.g. annual) data. As an alternative to using overlapping data, a statistical model can be fitted to monthly data and then aggregated over 12 time steps to model annual returns. We explore two widely used approaches for aggregating the time series.
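To make the overlapping-data trade-off concrete, the following minimal sketch (not the paper's code; the i.i.d. normal monthly returns and all parameter values are assumptions) simulates monthly log-returns, builds annual changes both ways, and compares the bias and mean squared error of the resulting annual-variance estimates:

```python
# Sketch: bias/MSE of annual-variance estimates from overlapping vs
# non-overlapping annual changes, using simulated i.i.d. monthly log-returns.
import numpy as np

rng = np.random.default_rng(42)
n_years, n_sims = 10, 5000
monthly_sigma = 0.04                      # assumed monthly volatility
true_annual_var = 12 * monthly_sigma**2   # i.i.d. months: variances add

var_olap, var_nolap = np.empty(n_sims), np.empty(n_sims)
for i in range(n_sims):
    monthly = rng.normal(0.0, monthly_sigma, size=12 * n_years)
    cum = np.concatenate(([0.0], np.cumsum(monthly)))  # cumulative log-index
    annual_olap = cum[12:] - cum[:-12]    # rolling 12-month changes (overlapping)
    annual_nolap = np.diff(cum[::12])     # calendar-year changes (non-overlapping)
    var_olap[i] = np.var(annual_olap, ddof=1)
    var_nolap[i] = np.var(annual_nolap, ddof=1)

for name, est in [("overlapping", var_olap), ("non-overlapping", var_nolap)]:
    bias = est.mean() - true_annual_var
    mse = np.mean((est - true_annual_var) ** 2)
    print(f"{name:>15}: bias={bias:+.5f}  MSE={mse:.2e}")
```

Under this simple setup the overlapping estimator is biased downwards (the usual ddof=1 correction does not account for the serial dependence of rolling changes) but typically shows the lower mean squared error, which is the trade-off the bias corrections discussed in the paper are designed to address.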
We study two Bayesian (Reference Intrinsic and Jeffreys prior) approaches, two frequentist (MLE and PWM) approaches, and the nonparametric Hill estimator for the Pareto and related distributions. Three of these approaches are compared in a simulation study, and all four are applied to investigate how much equity risk capital banks subject to the Basel II banking regulations must hold. The Reference Intrinsic approach, which is invariant under one-to-one transformations of the data and parameter, performs better when fitting a generalized Pareto distribution to data simulated from a Pareto distribution, and is competitive in the case study on equity capital requirements.
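As a concrete illustration of two of these estimators (a sketch under assumed parameters, not the paper's code), the snippet below simulates data from a classical Pareto distribution and computes the maximum-likelihood and Hill estimates of the tail index:

```python
# Sketch: MLE and Hill estimates of the Pareto tail index from simulated data.
import numpy as np

rng = np.random.default_rng(0)
alpha_true, x_min, n = 3.0, 1.0, 2000
# numpy's pareto() samples the Lomax form; shifting by 1 and scaling by x_min
# yields the classical Pareto(alpha, x_min).
x = x_min * (rng.pareto(alpha_true, size=n) + 1.0)

# MLE for Pareto(alpha, x_min), with x_min estimated by the sample minimum:
#   alpha_hat = n / sum(log(x_i / min(x)))
alpha_mle = n / np.log(x / x.min()).sum()

# Hill estimator on the k largest observations (order statistics descending):
#   alpha_hat = k / sum_{i<=k} log(x_(i) / x_(k+1))
k = 200
xs = np.sort(x)[::-1]
alpha_hill = k / np.log(xs[:k] / xs[k]).sum()

print(f"true alpha: {alpha_true},  MLE: {alpha_mle:.3f},  Hill(k={k}): {alpha_hill:.3f}")
```

Note that the Hill estimate depends on the choice of k (the number of upper order statistics used); inspecting the estimate across a range of k values is the usual diagnostic.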