Bayesian belief networks (BNs) are well-suited to capturing vague and uncertain knowledge. However, eliciting this knowledge and the associated reasoning from human domain experts often requires specialized knowledge engineers to translate the experts' communications into BN-based models. Across application domains, we have analyzed how these models are constructed, refined, and validated with domain experts. From this analysis, we identified key user-centered complexities and challenges, which drove our selection of simplifying assumptions. These assumptions, in turn, led us to develop computational techniques and user interface methods aimed at improving the efficiency and ease with which expert knowledge can be expressed, verified, validated, and encoded. In this paper, we present the results of our analysis of BN construction, validation, and use. We discuss how these results motivated the design of a simplified variant of BNs called Causal Influence Models (CIMs), and we detail how CIMs enable user interface mechanisms that address the complexities identified in our analysis.