Normative monitoring of black-box AI systems entails detecting whether input-output combinations of AI systems are acceptable in specific contexts. To this end, we build on an existing approach that uses Bayesian networks and a tailored conflict measure called IOconfl. In this paper, we argue that the default fixed threshold associated with this measure is not necessarily suitable for the purpose of normative monitoring. We subsequently study the bounds imposed on the measure by the normative setting and, based on our analyses, propose a dynamic threshold that depends on the context in which the AI system is applied. Finally, we show that the measure and the dynamic threshold are effective by experimentally evaluating them on an existing Bayesian network.
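To make the role of such a measure concrete, the following minimal Python sketch computes a conflict score of the generic form log(P(i)·P(o) / P(i,o)) over a hand-coded joint distribution and flags an input-output pair when the score exceeds a threshold. This is an illustrative assumption about the shape of a conflict measure in the style of IOconfl, not the paper's definition: the toy joint distribution, the variable names, and the use of 0 as the default fixed threshold are all stand-ins for what, in the monitoring setting, would be derived from a Bayesian network encoding the norms.

```python
import math
from itertools import product

# Illustrative joint distribution P(I, O) over a binary input I and a
# binary output O; in the normative-monitoring setting these
# probabilities would come from a Bayesian network (assumption).
joint = {
    ("i0", "o0"): 0.40,
    ("i0", "o1"): 0.10,
    ("i1", "o0"): 0.05,
    ("i1", "o1"): 0.45,
}

def marginal_input(i):
    return sum(p for (iv, _), p in joint.items() if iv == i)

def marginal_output(o):
    return sum(p for (_, ov), p in joint.items() if ov == o)

def io_conflict(i, o):
    """Conflict score log(P(i)P(o) / P(i,o)): positive values indicate
    the pair is less likely jointly than it would be under independence."""
    return math.log(marginal_input(i) * marginal_output(o) / joint[(i, o)])

def flag(i, o, threshold=0.0):
    # A fixed threshold of 0 stands in for the default here (assumption);
    # the paper argues for a dynamic, context-dependent threshold instead.
    return io_conflict(i, o) > threshold

for i, o in product(["i0", "i1"], ["o0", "o1"]):
    score = io_conflict(i, o)
    print(i, o, round(score, 3), "flagged" if flag(i, o) else "ok")
```

Under this toy distribution, the pair ("i0", "o1") yields a positive score (about 1.01) and is flagged, while ("i0", "o0") yields a negative score and passes; replacing the constant in flag with a context-dependent value is where a dynamic threshold would enter.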