Power consumption is the dominant cost of running a cloud. Under-utilization of resources that are kept powered on while idle, over-allocation of resources, and similar inefficiencies are among the main causes of excessive power use in data centers. To optimize power consumption, the future resource usage of virtual machines (VMs) can be forecast from their trace logs; based on this prediction, excess VM resources can be freed, reducing both the number of active physical machines and the carbon footprint. In this work, we present a comparative study of deep learning techniques, namely the multilayer perceptron (MLP), autoregressive neural network (ARNN), convolutional neural network (CNN), and long short-term memory (LSTM) network, for forecasting the CPU and memory usage of many VMs. We use the GWA-T-12 Bitbrains data-center dataset, which contains workload traces of 1250 VMs. The main goal is to avoid the VM underload/overload that can occur while optimizing resource allocation. We achieved up to 98% accuracy in forecasting future resource requirements; among all models, the MLP attained the highest accuracy.
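All of the forecasters compared above (MLP, ARNN, CNN, LSTM) share the same supervised framing: a VM's usage trace is cut into fixed-length sliding windows, with each window predicting the reading one horizon ahead. The sketch below, a hypothetical illustration rather than the paper's actual preprocessing code (the window size, horizon, and toy trace values are assumptions), shows this framing in NumPy:

```python
import numpy as np

def make_windows(series, window=4, horizon=1):
    """Turn a univariate usage trace into (X, y) pairs:
    each row of X holds `window` past readings, and y holds the
    reading `horizon` steps ahead -- the input/target framing
    shared by MLP, ARNN, CNN, and LSTM forecasters."""
    X, y = [], []
    for i in range(len(series) - window - horizon + 1):
        X.append(series[i:i + window])
        y.append(series[i + window + horizon - 1])
    return np.array(X), np.array(y)

# Toy CPU-usage trace (percent); a real trace would come from
# the GWA-T-12 Bitbrains workload logs.
trace = np.array([10.0, 12.0, 15.0, 20.0, 18.0, 16.0, 22.0, 25.0])
X, y = make_windows(trace, window=4)
print(X.shape, y.shape)  # → (4, 4) (4,)
```

Each (X, y) pair can then be fed to any of the compared models; only the network consuming the windows changes, which is what makes the accuracy comparison across architectures meaningful.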