With advances in sequencing technology, forensic workers can access genetic information from increasingly challenging samples. A recently published computational approach, IBDGem, analyzes sequencing reads, including from low-coverage samples, in order to arrive at likelihood ratios for tests of identity. Here, we show that likelihood ratios produced by IBDGem test a null hypothesis different from the traditional one used in a forensic genetics context. In particular, rather than testing the hypothesis that the sample comes from a person unrelated to the person of interest, IBDGem tests the hypothesis that the sample comes from an individual who is included in the reference sample used to run the method. This null hypothesis is not generally of forensic interest, because the defense hypothesis is not that the evidence comes from an individual included in a reference panel. Further, it does not take into account genetic variation outside the reference panel, and as a result, the computed likelihood ratios can be much larger than likelihood ratios computed for the standard forensic null hypothesis, often by many orders of magnitude, thus potentially creating an impression of stronger evidence for identity than is warranted. We lay out this result and illustrate it with examples, giving suggestions for directions that might lead to likelihood ratios that have the traditional interpretation.
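To make the distinction concrete, the two ratios can be written schematically as follows; the notation here is illustrative and paraphrases the abstract's description rather than reproducing IBDGem's exact formulation. The standard forensic likelihood ratio compares the hypothesis that the evidence donor is the person of interest against the hypothesis that the donor is an unrelated member of the population,
\[
\mathrm{LR}_{\text{forensic}} = \frac{P(\text{sequencing data} \mid \text{donor is the person of interest})}{P(\text{sequencing data} \mid \text{donor is an unrelated individual from the population})},
\]
whereas the quantity discussed here conditions the denominator on the donor being one of the individuals in the reference panel,
\[
\mathrm{LR}_{\text{panel}} = \frac{P(\text{sequencing data} \mid \text{donor is the person of interest})}{P(\text{sequencing data} \mid \text{donor is a member of the reference panel})}.
\]
Because the denominator of $\mathrm{LR}_{\text{panel}}$ ignores genetic variation outside the panel, it can be far smaller than the denominator of $\mathrm{LR}_{\text{forensic}}$, which is how the reported ratios can exceed the traditional ones by orders of magnitude.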