Facial expressions can establish communication between physically disabled people and assistive devices. Different facial expressions, including winking, smiling, blinking, looking up, and looking down, can be detected from brain signals. In this study, the possibility of controlling assistive devices using an individual's wink has been investigated. Brain signals from five subjects have been captured to recognize left wink, right wink, and no wink. The signals have been recorded using the Emotiv Insight headset, which consists of five channels. The fast Fourier transform and the sample range have been computed to extract features. The extracted features have been classified using several machine learning algorithms: support vector machine (SVM), linear discriminant analysis (LDA), and K-nearest neighbor (K-NN) have been employed to classify the feature sets. Classifier performance has been evaluated in terms of accuracy, confusion matrix, true positive and false positive rates, and the area under the receiver operating characteristic curve (AUC-ROC). For the sample range features, the highest training and testing accuracies are 98.9% and 96.7%, respectively, achieved by two classifiers, SVM and K-NN. The achieved results indicate that a person's wink can be utilized to control assistive devices.
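A minimal sketch of the kind of pipeline the abstract describes, written in Python with scikit-learn. The synthetic five-channel epochs, the channels chosen to carry class-dependent amplitude, and the number of FFT bins are all illustrative assumptions, not the study's actual recordings or parameters; the sample-range feature (per-channel max minus min) and an SVM classifier follow the abstract's description.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic stand-in for 5-channel EEG epochs: (n_trials, n_channels, n_samples).
n_trials, n_channels, n_samples = 300, 5, 128
X_raw = rng.normal(size=(n_trials, n_channels, n_samples))
y = rng.integers(0, 3, size=n_trials)  # 0 = no wink, 1 = left wink, 2 = right wink

# Inject a class-dependent amplitude change on assumed channels so the
# synthetic classes are separable (purely for demonstration).
for cls, ch in [(1, 0), (2, 4)]:
    X_raw[y == cls, ch, :] *= 3.0

def sample_range_features(epochs):
    """Sample range (max - min) per channel -> one feature per channel."""
    return epochs.max(axis=2) - epochs.min(axis=2)

def fft_features(epochs, n_bins=10):
    """Mean FFT magnitudes of a few low-frequency bins per channel."""
    mag = np.abs(np.fft.rfft(epochs, axis=2))[:, :, :n_bins]
    return mag.reshape(len(epochs), -1)

# Combine both feature types, as in the abstract's two feature sets.
X = np.hstack([sample_range_features(X_raw), fft_features(X_raw)])
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print(f"test accuracy: {accuracy_score(y_te, clf.predict(X_te)):.2f}")
```

With real data, the same interface would take epoched Emotiv Insight recordings in place of the synthetic array, and LDA or K-NN classifiers could be swapped in for the SVM via the same `fit`/`predict` API.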