We describe the R package rmcfs, which implements an algorithm for ranking features from high-dimensional data according to their importance for a given supervised classification task. The ranking is performed prior to addressing the classification task per se. The package implements a new and extended version of the MCFS (Monte Carlo feature selection) algorithm, an early version of which was published in 2005. It provides an easy-to-use R interface, a set of tools for reviewing results, and the new ID (interdependency discovery) component. The algorithm can be applied to continuous and/or categorical features (e.g., gene expression and phenotypic data) to produce an objective ranking of features, with a statistically well-defined cutoff between informative and non-informative ones. Moreover, a directed ID graph that presents the interdependencies between informative features is provided.

Keywords: MCFS-ID, feature selection, high-dimensional problems, Java, R, ID graph.

... 2003). More recently, and within non-filter approaches, a Bayesian technique of automatic relevance determination, the use of support vector machines, and the use of ensembles of classifiers, either alone or in combination, have all proved promising. For further details see Li, Campbell, and Tipping (2002), Lu, Devos, Suykens, Arús, and Huffel (2007), Chrysostomou, Chen, and Liu (2008), and the literature therein.

Moreover, the last developments by the late Leo Breiman deserve special attention. In his random forests (RFs), he proposed making use of the so-called variable (i.e., feature) importance for feature selection. Determining variable importance is not necessary for constructing a random forest; rather, it is a subroutine performed in parallel to building the forest (cf. Breiman and Cutler 2008). Ranking features by variable importance can thus be considered a by-product of building the classifier. At the same time, nothing prevents one from using such variable importances within, say, the embedded approach; cf., e.g., Díaz-Uriarte and De Andres (2006). In any case, feature selection by measuring variable importance in random forests should be seen as a very promising method, albeit with one proviso: variable importance as originally defined is biased towards variables with many categories; cf. Strobl, Boulesteix, Zeileis, and Hothorn (2007), Archer and Kimes (2008), and Nicodemus, Malley, Strobl, and Ziegler (2010). Accordingly, proper debiasing is needed in order to obtain a true ranking of features; cf. Strobl, Boulesteix, Kneib, Augustin, and Zeileis (2008). However sound such debiasing may be, it incurs considerable additional computational cost. For an excellent recent survey of RFs, their properties and capabilities, see Ziegler and König (2014).

Most recently, much work has been done to: (i) give embedded feature selection procedures, in particular those used within RFs (whether biased or unbiased), a clear statistical meaning;...
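As a concrete illustration of the workflow summarized in the abstract above, the following minimal sketch shows how the package's main entry points can be called from R. It assumes the mcfs() and build.idgraph() functions exported by rmcfs and the alizadeh example data shipped with the package; the particular argument values (cutoffPermutations, seed) and the decision attribute name class are illustrative assumptions, not recommendations from this section.

R> library("rmcfs")
R> data(alizadeh)
R> ## rank all features with respect to the decision attribute 'class';
R> ## permutation runs yield the cutoff between informative and
R> ## non-informative features
R> result <- mcfs(class ~ ., alizadeh, cutoffPermutations = 20, seed = 2)
R> head(result$RI)
R> ## build and plot the directed ID graph of feature interdependencies
R> idgraph <- build.idgraph(result)
R> plot(idgraph)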