Nebular-phase spectra of core-collapse supernovae (SNe) provide critical and unique information on the progenitor massive star and its explosion. We present a set of 1-D, steady-state, non-local thermodynamic equilibrium radiative-transfer calculations for type II SNe at 300 d after explosion. Guided by results from a large set of stellar evolution simulations, we craft ejecta models for type II SNe arising from the explosion of 12, 15, 20, and 25 M⊙ stars. The ejecta density structure and kinetic energy, the 56Ni mass, and the level of chemical mixing are parametrized. Our model spectra are sensitive to the adopted line Doppler width, a phenomenon we associate with the overlap of Fe II and O I lines with Lyα and Lyβ. Our spectra also show a strong sensitivity to 56Ni mixing, since it determines where the decay power is absorbed. Even at 300 d after explosion, the H-rich layers reprocess the radiation from the inner metal-rich layers. In a given progenitor model, variations in the 56Ni mass and its distribution affect the ejecta ionization, which can modulate the strength of all lines. Such ionization shifts can quench Ca II line emission. In our set of models, the strength of the [O I] λλ6300, 6364 doublet is the most robust signature of progenitor mass. However, we emphasize that convective shell merging in the interior of the progenitor massive star can pollute the O-rich shell with Ca, weakening the [O I] doublet flux in the resulting nebular SN II spectrum. This process may occur in Nature, with a greater occurrence in higher-mass progenitors, and may explain in part the preponderance of progenitor masses below 17 M⊙ inferred from nebular spectra.