A system is said to be current-state opaque if the entrance of the system state into a set of secret states remains opaque (uncertain) to an intruder, at least until the system leaves the set of secret states. This notion of opacity has been studied in nondeterministic finite automaton settings (where the intruder observes a subset of events, e.g., via some natural projection mapping) and has been shown to be useful in characterizing security requirements in many applications. One limitation of the majority of existing analyses is that they fail to provide a quantifiable measure of opacity for a given system; instead, they simply provide a binary characterization of the system (as opaque or not opaque). In this paper, we address this limitation by extending current-state opacity formulations to systems that can be modeled as probabilistic finite automata under partial observation. We introduce three notions of opacity, namely, step-based almost current-state opacity, almost current-state opacity, and probabilistic current-state opacity, and propose corresponding verification methods.

In this context, the notion of current-state opacity can be used to characterize all trajectories that the agent can follow without fully exposing that she/he is currently visiting certain strategic (secret) locations (cells) (see [5] for more details). However, consider the following two diametrically opposite scenarios: (i) the probability that the agent follows a trajectory that exposes its current strategic (secret) location (cell) is $10^{-6}$; (ii) the probability that the agent follows a trajectory that exposes its current strategic (secret) location (cell) is $1 - 10^{-6}$. In both scenarios, the system is classified as one that violates current-state opacity (as defined in [3]), despite the huge discrepancy in the likelihood of observing a sequence of observations that reveals that the current state of the system belongs to the set of secret states.

Another shortcoming of the definition of current-state opacity in [3] is that it does not attempt to characterize the confidence of the intruder when current-state opacity is not violated (i.e., the probability of the system state belonging to the set of secret states $S$). In the sensor network example above, assume that a particular sequence of sensor readings reveals, with (at least) 99% (but not absolute) confidence, that the current location (cell) of the agent is a strategic (secret) one, for any consistent trajectory (i.e., given the sensor readings, the probability that the current state of the system lies within the set of secret states is at least 0.99, but not one). Such a case is not considered a violation of current-state opacity (because the intruder is not absolutely certain that the current state of the system belongs to the set of secret states). These types of situations call into question the appropriateness of the notion of current-state opacity in applications where the confidence of the intruder can serve as a measure of opacity. Areas where such confidence concerns have been co...
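To make the intruder's confidence in the 99%-confidence scenario above concrete, one natural formalization is the conditional probability that the current state lies in the secret set given the observed sequence. The following is a sketch in notation of our own choosing ($\omega$ denotes the observed sequence of sensor readings and $X$ the current state of the system), not the formal definition developed later in the paper:
\[
\Pr(X \in S \mid \omega) \;=\; \frac{\sum_{x \in S} \Pr(X = x,\ \omega)}{\Pr(\omega)} \;\geq\; 0.99 ,
\]
i.e., every observation sequence $\omega$ consistent with the agent's trajectory assigns probability at least $0.99$ (but strictly less than one) to the current state belonging to $S$, even though no such sequence reveals this membership with certainty.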