Tests and test items can be expensive, unique, or performed in only a few laboratories. In some cases the assigned value is unknown, and there is no information, or only poor information, about the probability density function attributed to the test result. Sometimes neither reference materials nor consensus values are available, owing to a lack of experts. It may be impossible to repeat a test on the same item because the item is destroyed during the test itself, or the homogeneity of the tested items is unknown and no acceptance criteria can be established. The technical requirements specified for proficiency testing and interlaboratory comparison schemes are generally not applicable in this situation. Nevertheless, interlaboratory comparison can give laboratories more confidence in their results. The present paper discusses three statistical methods for assessing interlaboratory comparison results obtained under such conditions. Two methods are based on an assigned value determined from the participants' results through robust analysis. The third is based on the compatibility of results assessed with the f parameter. The paper focuses on an interlaboratory comparison between two laboratories, each testing three samples. Using these statistical methods turns out to carry a high risk, particularly of falsely accepting results. It is also shown that methods dedicated to small samples are not efficient at detecting discrepancies between test results.
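The abstract does not specify which robust analysis is used to derive the assigned value. For orientation only, a minimal Python sketch follows, assuming a commonly used procedure of this kind, Algorithm A of ISO 13528 (iterative winsorization to a robust mean and standard deviation); it is not necessarily the paper's method, and the example data are hypothetical.

```python
import numpy as np

def algorithm_a(x, tol=1e-6, max_iter=100):
    """Robust mean and standard deviation in the style of ISO 13528 Algorithm A.

    Results lying more than 1.5*s* from the current robust mean x* are
    pulled in to those limits (winsorized), then x* and s* are updated;
    the process repeats until the estimates stabilize.
    """
    x = np.asarray(x, dtype=float)
    x_star = np.median(x)                            # initial robust mean
    s_star = 1.483 * np.median(np.abs(x - x_star))   # initial robust scale (MAD-based)
    for _ in range(max_iter):
        delta = 1.5 * s_star
        # Winsorize: clip outlying results to x* +/- delta
        x_w = np.clip(x, x_star - delta, x_star + delta)
        new_x = x_w.mean()
        new_s = 1.134 * x_w.std(ddof=1)              # 1.134 restores consistency for normal data
        converged = abs(new_x - x_star) < tol and abs(new_s - s_star) < tol
        x_star, s_star = new_x, new_s
        if converged:
            break
    return x_star, s_star

# Hypothetical data: pooled results from two laboratories, three samples each
results = [10.2, 10.5, 9.9, 10.1, 12.0, 10.3]
assigned_value, robust_sd = algorithm_a(results)
print(f"assigned value = {assigned_value:.3f}, robust sd = {robust_sd:.3f}")
```

With only six results, as in the two-laboratory, three-sample scheme the paper studies, such robust estimates are poorly determined, which is consistent with the paper's conclusion that these methods carry a high risk of falsely accepting results.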