In implicit cognition research generally, one standard strategy is to measure the conscious status of knowledge on each trial (e.g. with confidence ratings, structural knowledge attributions, or visual clarity ratings) and then sub-select the trials on which the knowledge is measured to be unconscious. If accuracy is above chance on the latter trials, that is taken as evidence for unconscious knowledge. David Shanks (2017) has pointed out the problem of regression to the mean when trials (or people) are sub-selected on a variable: because measurement error is ubiquitous, the estimated value of the variable for the subset selected on the basis of pre-selection scores will be biased. Thus, for example, trials selected as being based on unconscious knowledge will in fact sometimes be based on conscious knowledge. Does this critique undermine the separate analyses of categories obtained on every trial, such as structural knowledge attributions in implicit learning research, or confidence or PAS ratings in subliminal perception research? I show how to quantify the maximum effect that regression to the mean could produce in a given situation; how that effect may be a real problem, but may also be so small as to be meaningless; and how to deal with it when it is of moderate size, using Bayes factors with an interval null hypothesis.
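To make the logic of the critique concrete, the following minimal sketch (in Python, with arbitrary illustrative parameters such as `p_conscious`, `acc_conscious`, and `misreport`, none of which come from the studies discussed here) simulates a case in which every trial is driven either by conscious knowledge or by no knowledge at all, yet the noisy awareness measure assigns some conscious trials a "guess" attribution. Accuracy on the sub-selected "guess" trials then exceeds chance even though no unconscious knowledge exists, and the size of that inflation can be bounded from the assumed misclassification rate.

```python
import numpy as np

rng = np.random.default_rng(2017)

n_trials = 100_000       # number of simulated trials
p_conscious = 0.3        # proportion of trials with genuinely conscious knowledge (assumed)
acc_conscious = 0.8      # accuracy when knowledge is conscious (assumed)
chance = 0.5             # accuracy when there is no knowledge at all
misreport = 0.2          # probability a conscious trial is misrated as a "guess" (assumed)

# True state of each trial: either conscious knowledge or no knowledge
# (crucially, the simulation contains no unconscious knowledge at all).
conscious = rng.random(n_trials) < p_conscious

# Accuracy is generated from the true state only.
correct = np.where(conscious,
                   rng.random(n_trials) < acc_conscious,
                   rng.random(n_trials) < chance)

# Noisy awareness measure: conscious trials sometimes attract a "guess" attribution;
# no-knowledge trials are always rated "guess".
rated_guess = np.where(conscious,
                       rng.random(n_trials) < misreport,
                       np.ones(n_trials, dtype=bool))

# Sub-selecting the "guess" trials yields above-chance accuracy
# despite the complete absence of unconscious knowledge.
print(f"Accuracy on 'guess' trials: {correct[rated_guess].mean():.3f}")

# Maximum inflation implied by the assumed misclassification rate:
# the proportion of "guess" trials that are really conscious,
# times the accuracy advantage conscious knowledge confers.
p_guess_and_conscious = p_conscious * misreport
p_guess = p_guess_and_conscious + (1 - p_conscious)
max_inflation = (p_guess_and_conscious / p_guess) * (acc_conscious - chance)
print(f"Expected inflation above chance: {max_inflation:.3f}")
```

Under these illustrative numbers the inflation is only about 0.02 above chance, showing how the artefact can be quantified and may turn out to be negligible; the computed bound could then serve as the upper limit of an interval null hypothesis against which observed accuracy is evaluated with a Bayes factor, in the spirit of the approach described above.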