Customer churn is a critical problem in the business world, particularly in the telecommunications industry, because it directly affects company profits. Acquiring new customers is far more difficult and expensive than retaining existing ones. Machine learning, a sub-field of artificial intelligence closely related to data mining, is widely used for prediction tasks, including customer churn prediction. Deep neural networks (DNNs) have been applied to churn prediction, but selecting hyperparameters manually requires considerable time and effort, making the modeling process challenging for researchers. Therefore, the purpose of this study is to propose a better DNN architecture by using a hyperparameter tuner to obtain more optimal hyperparameters. Random search is used as the tuning strategy to determine the number of nodes in each hidden layer, the dropout rate, and the learning rate. In addition, this study evaluates three variations of the number of hidden layers, two activation functions, namely the rectified linear unit (ReLU) and sigmoid, and five optimizers: stochastic gradient descent (SGD), adaptive moment estimation (Adam), the adaptive gradient algorithm (Adagrad), Adadelta, and root mean square propagation (RMSprop). Experiments show that the DNN tuned with random search achieves 83.09% accuracy using three hidden layers with [20, 35, 15] nodes, the RMSprop optimizer, a dropout rate of 0.1, and a learning rate of 0.01, with the fastest tuning time of 21 seconds. This outperforms the comparison algorithms k-nearest neighbor (K-NN), random forest (RF), and decision tree (DT).
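To illustrate the tuning setup described above, the following is a minimal sketch, assuming a TensorFlow/Keras implementation and the Keras Tuner library for random search (the study does not specify its tooling). The feature count, search ranges, trial budget, and the synthetic stand-in data are assumptions for illustration only, not values taken from the study.

```python
# Minimal sketch (not the authors' code): random-search tuning of a churn DNN
# over the number of nodes per hidden layer, dropout rate, and learning rate.
import numpy as np
import keras_tuner as kt
from tensorflow import keras

n_features = 19  # assumed number of input features after preprocessing

# Synthetic stand-in data; in the study this would be the preprocessed churn dataset.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(800, n_features)), rng.integers(0, 2, 800)
X_val, y_val = rng.normal(size=(200, n_features)), rng.integers(0, 2, 200)

def build_model(hp):
    """Build a DNN whose nodes per layer, dropout, and learning rate are tuned."""
    model = keras.Sequential()
    model.add(keras.layers.Input(shape=(n_features,)))
    # Three hidden layers; random search picks the number of nodes in each.
    for i in range(3):
        model.add(keras.layers.Dense(
            units=hp.Int(f"units_{i}", min_value=10, max_value=50, step=5),
            activation="relu"))  # ReLU; sigmoid was the other activation tested
        model.add(keras.layers.Dropout(hp.Choice("dropout", [0.1, 0.2, 0.3])))
    model.add(keras.layers.Dense(1, activation="sigmoid"))  # binary churn output
    model.compile(
        optimizer=keras.optimizers.RMSprop(  # best-performing of the five optimizers
            learning_rate=hp.Choice("learning_rate", [0.01, 0.001, 0.0001])),
        loss="binary_crossentropy",
        metrics=["accuracy"])
    return model

tuner = kt.RandomSearch(build_model, objective="val_accuracy",
                        max_trials=10, overwrite=True,
                        directory="tuning", project_name="churn_dnn")
tuner.search(X_train, y_train, validation_data=(X_val, y_val), epochs=20)
best_model = tuner.get_best_models(num_models=1)[0]
```

Under this sketch, the configuration reported as best in the abstract would correspond to random search selecting units of 20, 35, and 15 for the three hidden layers, a dropout rate of 0.1, and a learning rate of 0.01 with the RMSprop optimizer.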