This paper proposes an efficient, massively parallel GPU implementation of the edge-directed adaptive intra-field deinterlacing method, which interpolates missing pixels using the deinterlaced covariance estimated from the interlaced covariance via the geometric duality between the two. Although edge-directed adaptive intra-field deinterlacing achieves better visual quality than conventional intra-field deinterlacing methods, its time-consuming computation is usually the bottleneck. Graphics Processing Units (GPUs), as opposed to traditional CPU architectures, are well suited to accelerating this computation. The proposed method interpolates more than one missing pixel at a time, which yields a significant speedup over interpolating a single missing pixel at a time. Experimental results show a speedup of 94.6, with I/O transfer time taken into account, over the original single-threaded C CPU code compiled with the -O2 optimization.
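The sketch below illustrates only the parallel decomposition described above: each CUDA thread fills a small group of adjacent missing pixels in a missing line rather than a single pixel. The kernel name, the block and group sizes, and the simplified edge-directed line average used here are assumptions for illustration; the paper's actual method replaces that average with weights solved from the local interlaced covariance.

```cuda
// Illustrative sketch only -- not the paper's implementation.
// Each thread interpolates PIXELS_PER_THREAD adjacent missing pixels.
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

#define PIXELS_PER_THREAD 4   // assumed group size per thread

// Fill the missing (odd) lines of a frame whose even lines hold the field.
// A simple edge-directed line average stands in for the covariance-based
// estimator derived from geometric duality in the actual method.
__global__ void interpolateMissingLines(const unsigned char *in, unsigned char *out,
                                        int width, int height)
{
    int x0  = (blockIdx.x * blockDim.x + threadIdx.x) * PIXELS_PER_THREAD;
    int row = 2 * (blockIdx.y * blockDim.y + threadIdx.y) + 1;  // missing rows only
    if (row + 1 >= height) return;

    const unsigned char *up = in + (row - 1) * width;  // line above
    const unsigned char *dn = in + (row + 1) * width;  // line below

    for (int i = 0; i < PIXELS_PER_THREAD; ++i) {
        int x = x0 + i;
        if (x <= 0 || x >= width - 1) continue;         // skip frame borders

        // Gradients along the three candidate interpolation directions.
        int d45  = abs(up[x - 1] - dn[x + 1]);
        int dVer = abs(up[x]     - dn[x]);
        int d135 = abs(up[x + 1] - dn[x - 1]);

        int val;
        if (d45 < dVer && d45 < d135)        val = (up[x - 1] + dn[x + 1]) / 2;
        else if (d135 < dVer && d135 < d45)  val = (up[x + 1] + dn[x - 1]) / 2;
        else                                 val = (up[x] + dn[x]) / 2;

        out[row * width + x] = (unsigned char)val;
    }
}

int main(void)
{
    const int width = 1920, height = 1080;             // assumed frame size
    size_t bytes = (size_t)width * height;

    unsigned char *h_frame = (unsigned char *)calloc(bytes, 1);  // even lines carry the field
    unsigned char *d_in, *d_out;
    cudaMalloc(&d_in, bytes);
    cudaMalloc(&d_out, bytes);
    cudaMemcpy(d_in, h_frame, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_out, d_in, bytes, cudaMemcpyDeviceToDevice);    // keep existing lines

    dim3 block(64, 4);
    dim3 grid((width / PIXELS_PER_THREAD + block.x - 1) / block.x,
              (height / 2 + block.y - 1) / block.y);
    interpolateMissingLines<<<grid, block>>>(d_in, d_out, width, height);
    cudaDeviceSynchronize();

    cudaMemcpy(h_frame, d_out, bytes, cudaMemcpyDeviceToHost);
    cudaFree(d_in); cudaFree(d_out); free(h_frame);
    return 0;
}
```

Grouping several missing pixels per thread amortizes the index arithmetic and neighbouring-line loads across the group, which is one plausible reason the multi-pixel mapping outperforms a one-pixel-per-thread mapping; the exact grouping used in the reported 94.6 speedup is described in the body of the paper.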