A weighted random survival forest is presented in the paper. It can be regarded as a modification of the random survival forest that improves its performance. The main idea underlying the proposed model is to replace the standard averaging procedure used to estimate the random survival forest hazard function with weighted averaging, where a weight is assigned to every tree. The weights can be viewed as training parameters, computed in an optimal way by solving a standard quadratic optimization problem that maximizes Harrell's C-index. Numerical examples with real data illustrate that the proposed model outperforms the original random survival forest.
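The two ingredients of the model can be sketched in a few lines: weighted averaging of per-tree hazard estimates, and Harrell's C-index as the quantity the tree weights are tuned to maximize. The sketch below is illustrative only; the function names, the array layout, and the uniform example weights are assumptions, and the actual quadratic optimization of the weights is not reproduced here.

```python
import numpy as np

def weighted_forest_hazard(tree_hazards, weights):
    """Weighted average of per-tree cumulative hazard estimates.

    tree_hazards: array of shape (n_trees, n_samples, n_times),
        the hazard estimate of each tree for each sample.
    weights: array of shape (n_trees,), the per-tree weights
        (in the paper these are found by quadratic optimization).
    Returns an (n_samples, n_times) combined estimate.
    """
    weights = np.asarray(weights, dtype=float)
    # Contract the tree axis against the weight vector.
    return np.tensordot(weights, tree_hazards, axes=1)

def harrell_c_index(risk, time, event):
    """Harrell's C-index: fraction of comparable pairs whose predicted
    risks are ordered consistently with their observed survival times."""
    concordant, comparable = 0.0, 0.0
    n = len(time)
    for i in range(n):
        if not event[i]:        # censored subjects cannot start a pair
            continue
        for j in range(n):
            if time[i] < time[j]:   # i failed before j's observed time
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / comparable
```

With uniform weights this reduces to the standard random survival forest average; the paper's contribution is choosing non-uniform weights so that the resulting risk ranking maximizes the C-index above.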
A new meta-algorithm for estimating conditional average treatment effects is proposed in the paper. The basic idea behind the algorithm is to construct a new dataset whose feature vectors are produced by concatenating examples from the control and treatment groups that are close to each other. The outcome of each new example is defined as the difference between the outcomes of the corresponding examples that make up the concatenated feature vector. The second idea is based on the assumption that the number of controls is rather large, so that the control outcome function can be determined precisely. This assumption allows us to augment the treatment group by generating feature vectors close to the available treatments. The outcome regression function constructed on the augmented set of concatenated feature vectors can be viewed as an estimator of the conditional average treatment effects. A simple modification of the Co-learner based on the random subspace method (feature bagging) is also proposed. Various numerical simulation experiments illustrate the proposed algorithm and show that it outperforms the well-known T-learner and X-learner for several types of control and treatment outcome functions.
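The pairing step described above can be sketched as follows: for each treated example, find nearby controls, concatenate the two feature vectors into one, and use the outcome difference as the regression target. This is a minimal illustration only; the Euclidean distance, the number of neighbors k, and the function name are assumptions not specified here, and the treatment-augmentation step is omitted.

```python
import numpy as np

def build_colearner_dataset(X_control, y_control, X_treat, y_treat, k=1):
    """Pair each treated example with its k nearest controls.

    Returns:
      pairs:   concatenated [control_features, treatment_features] rows,
      targets: treatment outcome minus control outcome for each pair
               (a pairwise proxy for the treatment effect).
    """
    pairs, targets = [], []
    for xt, yt in zip(X_treat, y_treat):
        # Distance from this treated example to every control.
        d = np.linalg.norm(X_control - xt, axis=1)
        for j in np.argsort(d)[:k]:
            pairs.append(np.concatenate([X_control[j], xt]))
            targets.append(yt - y_control[j])
    return np.array(pairs), np.array(targets)
```

A regression model fitted on (pairs, targets) then plays the role of the outcome regression function over concatenated feature vectors described in the abstract.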
A modification of the confidence screening mechanism, based on adaptive weighting of every training instance at each cascade level of the Deep Forest, is proposed. The idea underlying the modification is very simple and stems from the confidence screening mechanism proposed by Pang et al. to simplify the Deep Forest classifier by updating the training set at each level according to the classification accuracy on every training instance. However, whereas the confidence screening mechanism simply removes instances from the training and testing processes, the proposed modification is more flexible and assigns weights that take the classification accuracy into account. In this respect the modification is similar to AdaBoost. Numerical experiments illustrate the good performance of the proposed modification in comparison with the original Deep Forest proposed by Zhou and Feng.
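The AdaBoost-like flavor of the idea can be illustrated with a per-level reweighting rule: instead of dropping confidently classified instances, their weights are decreased while misclassified instances are up-weighted. The exponential form, the step size eta, and the function name below are illustrative assumptions, not the paper's exact update.

```python
import numpy as np

def update_instance_weights(weights, correct, eta=0.5):
    """AdaBoost-style reweighting applied at one cascade level.

    weights: current per-instance weights (positive).
    correct: 1 if the level classified the instance correctly, else 0.
    Correctly classified instances are down-weighted (factor exp(-eta)),
    misclassified ones are up-weighted (factor exp(+eta)); the result
    is renormalized to sum to one.
    """
    correct = np.asarray(correct, dtype=float)
    w = np.asarray(weights, dtype=float) * np.exp(eta * (1.0 - 2.0 * correct))
    return w / w.sum()
```

Unlike confidence screening, no instance ever receives weight zero, so every example can still influence the deeper cascade levels, which is the flexibility the abstract refers to.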