The quantity of personal data that is collected, stored, and subsequently processed continues to grow at a rapid pace. Given its potential sensitivity, ensuring privacy protections has become a necessary component of database management. To enhance protection, a number of mechanisms have been developed, such as audit logging and alert triggers, which notify administrators about suspicious activities that may require investigation. However, this approach to auditing is limited in several respects. First, the volume of such alerts grows with the size of the database and often far exceeds the investigative capacity of resource-constrained organizations. Second, strategic attackers can attempt to disguise their actions by carefully choosing which records they touch, such as by limiting the number of database accesses they commit, thus potentially hiding illicit activity in plain sight. In this paper, we introduce a novel approach to database auditing that explicitly accounts for adversarial behavior by 1) prioritizing the order in which types of alerts are investigated and 2) providing an upper bound on the resources allocated to each type. Specifically, we model the interaction between a database auditor and potential attackers as a Stackelberg game in which the auditor commits to a (possibly randomized) auditing policy and attackers choose which records, if any, to target. We then show that even a highly constrained version of the auditing problem is NP-Hard. Based on this finding, we introduce an approach that combines linear programming, column generation, and heuristic search to derive an auditing policy. Using synthetic data, we extensively evaluate both how closely our policies approximate the optimal solution and the computational cost of our approach. We further assess policy-search performance on two real datasets: 1) 1.5 months of audit logs from the electronic medical record system of Vanderbilt University Medical Center and 2) a publicly available credit card application dataset of 1,000 records. The findings illustrate that our methods produce high-quality mixed strategies as database audit policies, and that our general approach significantly outperforms non-game-theoretic baselines.
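To make the Stackelberg formulation above more concrete, the following is a minimal sketch of a toy audit game, not the paper's exact model: the auditor commits to per-alert-type coverage probabilities under an audit budget, and a rational attacker best-responds by targeting the most attractive alert type. The solver uses the classic "multiple LPs" method for Stackelberg games rather than the paper's column-generation approach, and all parameter names (gain, penalty, loss, budget) are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog


def solve_audit_game(gain, penalty, loss, budget):
    """Return (coverage, defender_utility) for a toy Stackelberg audit game.

    gain[t]    : attacker's payoff if an attack on alert type t goes unaudited
    penalty[t] : attacker's cost if an attack on alert type t is audited
    loss[t]    : defender's loss if an attack on alert type t goes unaudited
    budget     : expected number of audits the defender can perform
    """
    n = len(gain)
    best_cov, best_util = None, -np.inf
    # Solve one LP per candidate attacker target k.
    for k in range(n):
        # Variables: coverage probabilities c_0..c_{n-1}.
        # Maximize loss[k] * c_k (i.e., minimize -loss[k] * c_k):
        # the defender wants the targeted type audited as often as possible.
        obj = np.zeros(n)
        obj[k] = -loss[k]
        A_ub, b_ub = [], []
        # Attacker must weakly prefer target k over every other type t:
        # gain[k] - c_k*(gain[k]+penalty[k]) >= gain[t] - c_t*(gain[t]+penalty[t])
        for t in range(n):
            if t == k:
                continue
            row = np.zeros(n)
            row[k] = gain[k] + penalty[k]
            row[t] = -(gain[t] + penalty[t])
            A_ub.append(row)
            b_ub.append(gain[k] - gain[t])
        # Audit budget: total expected coverage cannot exceed the budget.
        A_ub.append(np.ones(n))
        b_ub.append(budget)
        res = linprog(obj, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                      bounds=[(0.0, 1.0)] * n, method="highs")
        if not res.success:
            continue  # target k cannot be induced as a best response
        cov = res.x
        util = -(1.0 - cov[k]) * loss[k]  # defender's expected utility
        if util > best_util:
            best_cov, best_util = cov, util
    return best_cov, best_util


if __name__ == "__main__":
    gain = np.array([5.0, 3.0, 1.0])     # attacker benefit per alert type
    penalty = np.array([4.0, 4.0, 4.0])  # attacker cost when caught
    loss = np.array([6.0, 2.0, 1.0])     # defender loss when an attack is missed
    cov, util = solve_audit_game(gain, penalty, loss, budget=1.0)
    print("coverage per alert type:", np.round(cov, 3))
    print("defender expected utility:", round(util, 3))
```

The sketch returns a randomized coverage vector (a mixed audit strategy) in the same spirit as the policies described above, but it omits the structure that makes the full problem NP-Hard and that motivates the column-generation and heuristic-search machinery in the paper.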