Semi-supervised learning holds promise for cost-effective neuron segmentation in electron microscopy (EM) volumes: it leverages abundant unlabeled data to regularize supervised training and thereby yields more robust predictions. However, diverse neuronal patterns and limited annotation budgets can cause a distribution mismatch between labeled and unlabeled data, hindering the generalization of semi-supervised models. To address this issue, we propose an improved pipeline for cost-effective neuron segmentation that integrates selective labeling with semi-supervised training. For selective labeling, we present an unsupervised heuristic tailored to choosing a labeled dataset from EM volumes: guided by self-supervised learning on local patches, representative sub-volumes composed of spatially associated patches are greedily selected under a coverage-based criterion. These sub-volumes effectively reflect the distribution of the unlabeled data within a limited budget. For semi-supervised training, we introduce spatial mixing into neuron segmentation and integrate it into a Siamese architecture, which lets us reinforce cross-view consistency through intra- and inter-mixing of the labeled and unlabeled datasets. Together, these strategies bridge the distribution gap and encourage the model to learn semantics shared across datasets, enabling more effective semi-supervised learning. Extensive experiments on public datasets consistently demonstrate the effectiveness of the proposed pipeline, which offers a practical and efficient solution for large-scale neuron reconstruction. Code and data will be made available.
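
To make the coverage-based selection concrete, the following is a minimal sketch of what a greedy, coverage-driven sub-volume selection might look like. It assumes patch embeddings have already been produced by a self-supervised encoder; the facility-location-style cosine-similarity criterion, the function name `greedy_coverage_selection`, and its arguments are illustrative assumptions, not the exact formulation used in the paper.

```python
import numpy as np

def greedy_coverage_selection(subvol_embeddings, budget):
    """Greedily pick sub-volumes whose patches best cover the unlabeled pool.

    subvol_embeddings: list of (n_i, d) arrays, one per candidate sub-volume,
        holding self-supervised embeddings of its local patches (assumed given).
    budget: number of sub-volumes the annotation budget allows.
    Returns the indices of the selected sub-volumes.
    """
    # L2-normalise so that dot products equal cosine similarity.
    norm = lambda e: e / np.linalg.norm(e, axis=1, keepdims=True)
    cands = [norm(e) for e in subvol_embeddings]
    pool = np.concatenate(cands, axis=0)  # every patch in the unlabeled pool

    # coverage[j]: similarity of pool patch j to its closest selected patch.
    coverage = np.zeros(len(pool))
    selected = []
    for _ in range(budget):
        gains = []
        for i, emb in enumerate(cands):
            if i in selected:
                gains.append(-np.inf)
                continue
            # Coverage gain if sub-volume i were added to the labeled set.
            sim = emb @ pool.T                          # (n_i, n_pool)
            new_cov = np.maximum(coverage, sim.max(axis=0))
            gains.append(new_cov.sum() - coverage.sum())
        best = int(np.argmax(gains))
        selected.append(best)
        coverage = np.maximum(coverage, (cands[best] @ pool.T).max(axis=0))
    return selected

# Toy usage with random embeddings (50 candidate sub-volumes, 32 patches each):
rng = np.random.default_rng(0)
embs = [rng.standard_normal((32, 128)) for _ in range(50)]
print(greedy_coverage_selection(embs, budget=5))
```

Because the coverage objective is monotone and submodular, this greedy procedure is a standard way to approximate the best subset under a fixed budget.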
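Likewise, the spatial-mixing consistency can be illustrated with a short PyTorch sketch. It shows one plausible instantiation under stated assumptions: a CutMix-style random cuboid mask, a frozen teacher paired with a trainable student (a common Siamese setup), and an MSE consistency term between the student's prediction on a mixed input and the mixed teacher predictions. The helpers `random_cuboid_mask` and `mixed_consistency_loss` are hypothetical names, not the authors' API.

```python
import torch
import torch.nn.functional as F

def random_cuboid_mask(shape, ratio=0.5, device="cpu"):
    """Binary mask selecting a random cuboid covering ~ratio of the volume."""
    d, h, w = shape
    # Scale each side by ratio^(1/3) so the cuboid volume is ~ratio.
    cd, ch, cw = (max(1, int(s * ratio ** (1 / 3))) for s in (d, h, w))
    z = torch.randint(0, d - cd + 1, (1,)).item()
    y = torch.randint(0, h - ch + 1, (1,)).item()
    x = torch.randint(0, w - cw + 1, (1,)).item()
    m = torch.zeros(shape, device=device)
    m[z:z + cd, y:y + ch, x:x + cw] = 1.0
    return m

def mixed_consistency_loss(student, teacher, vol_a, vol_b):
    """Cross-view consistency: the student's prediction on the spatially
    mixed input should match the same mixing of the teacher's predictions.
    vol_a, vol_b: (B, 1, D, H, W) volumes.
    """
    mask = random_cuboid_mask(vol_a.shape[2:], device=vol_a.device)
    mixed = mask * vol_a + (1 - mask) * vol_b            # spatial mixing
    with torch.no_grad():                                # teacher is frozen
        target = mask * teacher(vol_a) + (1 - mask) * teacher(vol_b)
    return F.mse_loss(student(mixed), target)

# Toy usage with a stand-in 3D model; intra-mixing pairs two unlabeled
# volumes, while inter-mixing pairs a labeled volume with an unlabeled one.
student = torch.nn.Conv3d(1, 1, kernel_size=3, padding=1)
teacher = torch.nn.Conv3d(1, 1, kernel_size=3, padding=1)
unlab_1 = torch.randn(2, 1, 16, 64, 64)
unlab_2 = torch.randn(2, 1, 16, 64, 64)
print(mixed_consistency_loss(student, teacher, unlab_1, unlab_2).item())
```

In the inter-mixing case, the mask splices labeled content into unlabeled volumes (and vice versa), which is one way such mixing can expose the model to semantics shared across the two datasets.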