It is widely held that debugging cyber-physical systems (CPS) is challenging, and many strongly held beliefs exist about how CPS are currently debugged and tested and about the suitability of various techniques. For instance, opinions differ on whether formal methods (including static analysis, theorem proving, and model checking) are appropriate for CPS verification and validation. Simulation tools and simulation-based testing are likewise often considered insufficient for CPS. Many "experts" posit that high-level programming languages (e.g., Java or C#) are unsuitable for CPS because they cannot address (significant) resource constraints at a high level of abstraction. To date, no empirical studies have investigated these questions. In this paper, we qualitatively and quantitatively analyze why debugging CPS remains challenging and, along the way, either dispel or confirm these strongly held beliefs. Specifically, we report on a structured online survey of 25 CPS researchers (10 of whom classified themselves as CPS developers), semi-structured interviews with nine practitioners across four continents, and a qualitative literature review. We present the results and discuss several implications for research and practice related to CPS.