Standard methods for differential expression and differential abundance analysis rely on normalization to address sample-to-sample variation in sequencing depth. However, normalizations impose strict, often unrealistic assumptions about the unmeasured scale of biological systems (e.g., microbial load or total cellular transcription), introducing bias that can lead to both false positives and false negatives. To overcome these limitations, we propose replacing normalizations with interval assumptions. This approach lets researchers explicitly specify plausible lower and upper bounds on the unmeasured scale of the biological system, making these assumptions more realistic, transparent, and flexible than those imposed by traditional normalizations. Compared to recent alternatives such as scale models and sensitivity analyses, interval assumptions are easier to use, can reduce both false positives and false negatives, and carry stronger guarantees of Type-I error control. We make interval assumptions accessible by providing a modified version of ALDEx2 as a publicly available software package. Through simulations and real data studies, we show that these methods can reduce false positives and false negatives compared to normalization-based tools.