Many adaptive educational systems and other artificial intelligence applications rely on high-quality knowledge representations. Yet knowledge acquisition remains the primary bottleneck hindering the large-scale deployment and adoption of knowledge-based systems. One path to scalable knowledge extraction is to use digital textbooks, given their domain-oriented content, structure, and availability. This dissertation presents a unified approach for automatically extracting high-quality, domain-specific knowledge models from digital textbooks. The proposed approach leverages the authors’ knowledge encoded in the textbook elements that facilitate navigation and understanding of the material (the table of contents, the index, and formatting styles) to create knowledge models. The proposed workflow first extracts initial information elements from the textbooks: the structure of chapters and subchapters from the table of contents, the content of each section, and domain terminology from the back-of-the-book index. Then, new information is added: domain terms are linked to external entities in a knowledge graph (DBpedia) and enriched with semantic content (e.g., abstracts and categories). Finally, the domain knowledge is refined by identifying how relevant each concept is to the target domain. The extracted knowledge is represented in a model based on the Text Encoding Initiative (TEI) format. Multiple evaluations show that the extracted knowledge models exhibit high quality across several properties: accuracy, semantics, coverage, specificity, cognitive validity, and granularity. The approach is also effective across multiple domains, including statistics, ancient philosophy, and Python programming. Finally, the extracted knowledge models have many potential applications; this dissertation presents three educational systems that they support.
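To make the enrichment step concrete, the sketch below illustrates one way a domain term from a back-of-the-book index could be linked to DBpedia entities and enriched with abstracts and categories. It queries DBpedia's public SPARQL endpoint; the function name enrich_term and the exact-label matching strategy are illustrative assumptions, not the dissertation's actual implementation.

```python
# Illustrative sketch (not the dissertation's code): linking an index term to
# candidate DBpedia entities and retrieving their abstracts and categories.
from SPARQLWrapper import SPARQLWrapper, JSON

DBPEDIA_ENDPOINT = "https://dbpedia.org/sparql"  # public DBpedia SPARQL endpoint


def enrich_term(term: str, lang: str = "en") -> list[dict]:
    """Return candidate DBpedia entities whose label exactly matches `term`,
    each with its URI, abstract, and Wikipedia categories."""
    sparql = SPARQLWrapper(DBPEDIA_ENDPOINT)
    sparql.setReturnFormat(JSON)
    sparql.setQuery(f"""
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        PREFIX dbo:  <http://dbpedia.org/ontology/>
        PREFIX dct:  <http://purl.org/dc/terms/>
        SELECT ?entity ?abstract
               (GROUP_CONCAT(DISTINCT ?cat; separator="|") AS ?categories)
        WHERE {{
            ?entity rdfs:label "{term}"@{lang} ;
                    dbo:abstract ?abstract ;
                    dct:subject ?cat .
            FILTER (lang(?abstract) = "{lang}")
        }}
        GROUP BY ?entity ?abstract
        LIMIT 5
    """)
    rows = sparql.query().convert()["results"]["bindings"]
    return [
        {
            "uri": row["entity"]["value"],
            "abstract": row["abstract"]["value"],
            "categories": row["categories"]["value"].split("|"),
        }
        for row in rows
    ]


if __name__ == "__main__":
    # Example: enrich an index term from a statistics textbook.
    for candidate in enrich_term("Standard deviation"):
        print(candidate["uri"], candidate["categories"][:3])
```

In practice, a disambiguation step would be needed when several entities share a label; the retrieved categories and abstracts could then feed the relevance-refinement stage described above.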