Constructing a nearest neighbor graph is a necessary step in many machine learning applications, but it is computationally expensive, especially when the data is high dimensional. Python's open source machine learning library Scikit-learn uses k-d trees and ball trees to implement nearest neighbor graph construction; however, this implementation is inefficient for large datasets. In this work, we focus on exploiting these underlying tree-based data structures to optimize parallel execution of the nearest neighbor algorithm. We present parallel implementations of nearest neighbor graph construction using such tree structures, with parallelism provided by OpenMP and the Galois framework. We empirically show that our parallel and exact approach is both efficient and scalable compared to the Scikit-learn implementation. We present the first implementation of k-d trees and ball trees using Galois. Our results show that k-d trees are faster when the number of dimensions is small ($2^d \ll N$); ball trees, on the other hand, scale well with the number of dimensions. Our ball tree implementation in Galois achieves almost linear speedup on a number of datasets, irrespective of the size and dimensionality of the data.
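
For reference, the Scikit-learn baseline described above can be invoked as sketched below. This is a minimal illustrative example, not the configuration used in our experiments: the dataset, the value of k, and the choice of tree are placeholders, and the tree type is selected via Scikit-learn's `algorithm` parameter.

```python
# Nearest neighbor graph construction with Scikit-learn's tree-based backends.
import numpy as np
from sklearn.neighbors import NearestNeighbors

X = np.random.rand(10_000, 3)               # illustrative toy dataset: 10,000 points in 3 dimensions

nn = NearestNeighbors(n_neighbors=5,        # k nearest neighbors per point
                      algorithm='kd_tree')  # or 'ball_tree' for higher-dimensional data
nn.fit(X)                                   # builds the tree over X

# Sparse adjacency matrix of the k-nearest-neighbor graph,
# with edge weights equal to pairwise distances.
graph = nn.kneighbors_graph(X, mode='distance')
```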