The hierarchical risk parity (HRP) approach to portfolio allocation, introduced by Lopez de Prado (2016), applies graph theory and machine learning to build a diversified portfolio. Like traditional risk-based allocation methods, HRP is a function of the covariance matrix estimate; unlike them, however, it does not require the matrix to be invertible. In this paper, we first study the impact of covariance misspecification on the performance of the different allocation methods. Next, we study, under an appropriate covariance forecast model, whether the machine-learning-based HRP outperforms the traditional risk-based portfolios. For our analysis, we apply the test for superior predictive ability to out-of-sample portfolio performance to determine whether the observed excess performance is significant or occurred by chance. We find that when the covariance estimates are crude, inverse-volatility-weighted portfolios are the most robust, followed by the machine-learning-based portfolios. Minimum variance and maximum diversification are the most sensitive to covariance misspecification. HRP occupies the middle ground: it is less sensitive to covariance misspecification than the minimum variance or maximum diversification portfolios, but not as robust as the inverse-volatility-weighted portfolio. We also study the impact of different rebalancing horizons and how the portfolios compare against a market-capitalization-weighted portfolio.
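For readers unfamiliar with HRP, a minimal sketch of its three steps (tree clustering on a correlation-based distance, quasi-diagonalization via the leaf order, and recursive bisection with inverse-variance splits) is given below. This is an illustrative reconstruction following Lopez de Prado's published algorithm, not the implementation used in this paper; the covariance numbers are hypothetical.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, leaves_list
from scipy.spatial.distance import squareform

def hrp_weights(cov):
    """Hierarchical risk parity weights from a covariance matrix (sketch)."""
    # Step 1: correlation-based distance d_ij = sqrt((1 - rho_ij) / 2)
    std = np.sqrt(np.diag(cov))
    corr = cov / np.outer(std, std)
    dist = np.sqrt(np.clip((1.0 - corr) / 2.0, 0.0, 1.0))
    # Step 2: single-linkage clustering; the leaf order quasi-diagonalizes cov
    order = list(leaves_list(linkage(squareform(dist, checks=False),
                                     method="single")))

    def cluster_var(items):
        # Variance of an inverse-variance-weighted sub-portfolio
        sub = cov[np.ix_(items, items)]
        ivp = 1.0 / np.diag(sub)
        ivp /= ivp.sum()
        return ivp @ sub @ ivp

    # Step 3: recursive bisection of the ordered list of assets
    w = np.ones(len(cov))
    clusters = [order]
    while clusters:
        c = clusters.pop()
        if len(c) < 2:
            continue
        left, right = c[: len(c) // 2], c[len(c) // 2:]
        v_l, v_r = cluster_var(left), cluster_var(right)
        alpha = 1.0 - v_l / (v_l + v_r)  # lower-variance half gets more weight
        w[left] *= alpha
        w[right] *= 1.0 - alpha
        clusters += [left, right]
    return w

# Toy 4-asset covariance matrix (hypothetical numbers)
cov = np.array([[0.0400, 0.0180, 0.0020, 0.0010],
                [0.0180, 0.0900, 0.0010, 0.0020],
                [0.0020, 0.0010, 0.0225, 0.0090],
                [0.0010, 0.0020, 0.0090, 0.0100]])
w = hrp_weights(cov)
print(np.round(w, 3))
```

Note that no matrix inversion appears anywhere: only diagonal reciprocals are used, which is why HRP remains well defined when the covariance estimate is singular.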