In comparison with healthy persons, chronic uremic patients on regular hemodialysis treatment had significantly higher blood serum concentrations of alpha 1-acid glycoprotein, ceruloplasmin and the C4 complement component, while levels of haptoglobin, C3 and transferrin were lower. Serum alpha 2-macroglobulin and alpha 1-antitrypsin levels were similar in both groups. Hemodialysis with a cuprophan membrane induced only slight changes in some of these glycoproteins during a 48-hour follow-up period. Seven hours after termination of hemodialysis, slight but significant decreases in blood serum transferrin and alpha 1-antitrypsin concentrations were observed. Hemodialysis thus does not seem to induce a conspicuous acute-phase reaction.
Abstract

The delta rule is a standard, well-established approach to training the perceptron recognition model. However, the mean squared error on which it is based is not a suitable objective for some problems, such as information retrieval or automatic data annotation. F-score, a combination of precision and recall, is one of the major quality measures in these areas and can be used as an alternative. In this paper we present a perceptron training method based on F-score. An approximation of the F-score is proposed, built from components that are both continuous and differentiable. This allows us to formulate a gradient-descent training routine conceptually similar to the standard delta rule.

To make matters even more difficult, automatic data annotation problems are often considered high-class-imbalance problems. In this paper we address the basic recognition model: the linear perceptron. On top of it, many other, more complex solutions may be proposed. The presented research is done from the perspective of automatic data annotation.

1.1 Linear recognition models

Training of linear models has a long history. One should note the classic Fisher Linear Discriminant Analysis (LDA, e.g., [2]). The existence of a closed-form, analytical solution is the main advantage of discriminant analysis (both linear and quadratic). A disadvantage of linear discriminant analysis is the assumption of equal covariance matrices for both classes. It can also lead to the typical difficulties related to zero or near-zero generalized variance [6] and to covariance-matrix inversion problems, especially for data with a large number of attributes. One possible solution is to filter out attributes associated with zero eigenvalues [6]. Another is to use Regularized Discriminant Analysis (RDA) [3]. Its basic assumption is that some recognition problems may be ill-posed due to an insufficient amount of data compared to the number of attributes. RDA combines the covariance matrix, the diagonal variance matrix and the identity matrix, and thus makes the training process solvable. An interesting solution for the LDA covariance-matrix calculation is given by Fukunaga [4]: it combines the two class covariance matrices using a weighted average instead of the plain average originally proposed by Fisher. An extension of LDA is Kernel-LDA [5], which uses the kernel trick known from Support Vector Machines to address linearly non-separable problems. The second family of approaches to training linear models is logistic regression (e.g., [6]).
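The training scheme described in the abstract can be sketched in a few lines of numpy. The paper's exact F-score approximation is not reproduced here; this sketch uses a common "soft" relaxation (replacing the perceptron's step function with a sigmoid, so that soft true-positive, false-positive and false-negative counts become continuous and differentiable), and the function names `soft_f_score` and `train_perceptron_fscore` are illustrative, not taken from the paper.

```python
import numpy as np

def soft_f_score(p, y):
    # Continuous F1: with soft TP = sum(p*y), FP = sum(p) - TP, FN = sum(y) - TP,
    # F1 = 2*TP / (2*TP + FP + FN) simplifies to 2*sum(p*y) / (sum(p) + sum(y)).
    return 2.0 * np.sum(p * y) / (np.sum(p) + np.sum(y))

def train_perceptron_fscore(X, y, lr=0.5, epochs=500, seed=0):
    """Gradient-ascent training of a linear model against a soft F-score.

    X: (n, d) feature matrix; y: (n,) labels in {0, 1}.
    A sigmoid replaces the perceptron's step activation so that the
    F-score approximation is continuous and differentiable in w.
    """
    rng = np.random.default_rng(seed)
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])   # append a bias column
    w = rng.normal(scale=0.01, size=Xb.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))           # sigmoid outputs in (0, 1)
        tp = np.sum(p * y)
        denom = np.sum(p) + np.sum(y)
        # dF/dp_i = 2 * (y_i * denom - tp) / denom**2  (quotient rule)
        df_dp = 2.0 * (y * denom - tp) / denom**2
        grad = Xb.T @ (df_dp * p * (1.0 - p))       # chain rule through the sigmoid
        w += lr * grad                              # ascent: maximise the F-score
    return w
```

Because every example contributes to the shared soft counts, each gradient step depends on the whole batch, unlike the per-example updates of the standard delta rule; this is the price of optimising a set-level measure such as F-score directly.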