Object detection methods based on Convolutional Neural Networks (CNNs) require a large number of annotated images for training. In aircraft detection from remote sensing images (RSIs), aircraft targets are usually small and the cost of manual annotation is very high. In this paper, we tackle the problem of weakly supervised aircraft detection from RSIs, which aims to learn detectors with only image-level annotations, i.e., without bounding-box labels during the training stage. Based on the fact that the feature maps learned by a CNN are inherently localizable, we propose a simple yet efficient aircraft detection algorithm called Weakly Supervised Learning in AlexNet (AlexNet-WSL). In AlexNet-WSL, we adopt AlexNet as the backbone network but replace the last two fully connected layers with a Global Average Pooling (GAP) layer and two convolutional layers. Based on the class activation maps, we generate heat maps via reverse weighting to localize the target objects. Unlike object detection methods that require object-location annotations for training, our method needs only image-level labels. We further build a set of remote sensing aircraft images, the Weakly Supervised Aircraft Detection Dataset (WSADD), for algorithm benchmarking. Experimental results on the WSADD show that AlexNet-WSL detects aircraft effectively and achieves detection performance comparable to Faster R-CNN and YOLOv3, both of which require bounding-box labels for training, while attaining a lower false alarm rate and a shorter training time.
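
The class-activation-map mechanism mentioned above can be sketched as follows; this is a minimal illustrative example, not the paper's implementation, and all shapes, names, and the reverse-weighting details are assumptions.

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """Weight each final-layer feature map by the classifier weight
    for one class and sum, yielding a coarse localization heat map.

    feature_maps: (K, H, W) activations from the last conv layer
    fc_weights:   (num_classes, K) weights of the layer after GAP
    class_idx:    index of the target class (e.g., "aircraft")
    """
    w = fc_weights[class_idx]                    # (K,) per-map weights
    cam = np.tensordot(w, feature_maps, axes=1)  # weighted sum -> (H, W)
    cam -= cam.min()                             # shift to non-negative
    if cam.max() > 0:
        cam /= cam.max()                         # normalize to [0, 1]
    return cam

# Toy example: 4 feature maps of size 6x6, 2 classes (values are random)
rng = np.random.default_rng(0)
feats = rng.random((4, 6, 6))
weights = rng.random((2, 4))
heat = class_activation_map(feats, weights, class_idx=1)
print(heat.shape)  # (6, 6)
```

High values in the resulting map indicate image regions that most strongly drove the classifier's decision for the chosen class, which is what makes image-level labels sufficient for localization.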