We review research in both Construction Grammar (CxG) and Natural Language Processing showing that recent advances in probing Large Language Models (LLMs) for certain types of linguistic knowledge align with the tenets of CxG. Our survey, however, leads us to hypothesize that the constructional information available to LLMs may be limited to constructions at the lower levels of the taxonomic “constructicons” postulated to enumerate a particular language’s constructions. Specifically, probing studies show that constructions at the lower levels of the taxonomy, that is, more substantive constructions whose fixed elements correspond to words frequently used in that construction, are a type of linguistic information accessible to LLMs. In contrast, more general, abstract constructions with schematic slots that can be filled by a variety of different words are not part of the linguistic knowledge of LLMs. We test this hypothesis on a collection of 10 distinct constructions, each attested in 50 or more corpus instances. Our experimental results strongly support the hypothesis and lead us to conclude that, for LLMs to generalize to the point where purely schematic constructions can be recognized regardless of the frequency of the instantiating words (as psycholinguistic experimentation has shown people can), additional semantic resources are needed to make the semantic role of the schematic slot explicit. To ensure transparency and reproducibility, we publicly release our experimental data, including the prompts used with the model.