Abstractions are significant domain terms that have assisted in requirements elicitation and modeling. To extend this assistance toward requirements validation, we present in this paper an automated approach to identifying abstractions that support requirements-based testing. We select relevant Wikipedia pages to serve as a domain corpus that is independent of any specific software system. We further define five novel patterns based on part-of-speech tagging and dependency parsing, and frame our candidate abstractions as key–value pairs for better testability, where the “key” helps locate “what to test” and the “value” helps guide “how to test it” by feeding in concrete data. We evaluate our approach on six software systems in two application domains: electronic health records and web conferencing. The results show that our abstractions are more accurate than those generated by a state-of-the-art technique. The initial findings indicate our abstractions’ capability to reveal bugs and to match manually created environmental assumptions; building on these findings, we articulate a new way to perform requirements-based testing that focuses on a software system’s changing features. Specifically, we hypothesize that the same feature would behave differently under a pair of opposing environmental conditions, and we assess our abstractions’ applicability to this new form of feature testing.
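To illustrate the flavor of pattern-based extraction, the following is a minimal sketch, not the paper’s implementation: it assumes spaCy’s part-of-speech tagger and dependency parser, and uses a single hypothetical verb–direct-object pattern as a stand-in for the five patterns defined in the approach.

```python
import spacy

# Assumes the small English model is installed:
#   python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

def extract_candidates(sentence: str):
    """Return candidate (key, value) pairs from a hypothetical
    verb--direct-object pattern: the governing verb serves as the
    "key" (what to test), and the object phrase serves as the
    "value" (concrete data guiding how to test it)."""
    doc = nlp(sentence)
    pairs = []
    for token in doc:
        # "dobj": token is the direct object of a verb head.
        if token.dep_ == "dobj" and token.head.pos_ == "VERB":
            key = token.head.lemma_
            # token.subtree yields the full object phrase in order.
            value = " ".join(w.text for w in token.subtree)
            pairs.append((key, value))
    return pairs

print(extract_candidates("The physician prescribes a new medication."))
# e.g., [('prescribe', 'a new medication')]
```

The sentence and the extracted pair here are illustrative only; the approach described in the abstract additionally filters candidates against the Wikipedia-based domain corpus.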