Possibilistic networks, which are compact representations of possibility distributions, are powerful tools for representing and reasoning with uncertain and incomplete information in the framework of possibility theory. They are similar to Bayesian networks, but rely on possibility theory to deal with uncertainty, imprecision and incompleteness. Although classification is a very useful task in many real-world applications, possibilistic network-based classification has not been well investigated in general, and inference in possibilistic classifiers under uncertain observations in particular. In this paper, we address, on the one hand, the theoretical foundations of inference in possibilistic classifiers under uncertain inputs and, on the other hand, propose a novel and efficient algorithm for inference in possibilistic network-based classification under uncertain observations. We start by studying and analyzing the counterpart of Jeffrey's rule in the framework of possibility theory. We then address the validity of the Markov-blanket criterion for possibilistic networks used for classification under uncertain inputs. Finally, we propose a novel algorithm suitable for possibilistic classifiers with uncertain observations that does not assume any independence relations between the observations. This algorithm guarantees the same results as if classification were performed using the possibilistic counterpart of Jeffrey's rule, and classification is achieved in polynomial time when the target variable is binary. The basic idea of our algorithm is to search only for totally plausible class instances through a series of equivalent, polynomial transformations applied to the possibilistic classifier, taking the uncertain observations into account.
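To make the rule concrete, the following is a sketch of the product-based possibilistic counterpart of Jeffrey's rule as it is commonly stated in the possibility theory literature; it is given for illustration only, and the exact variants analyzed in the paper may differ. Given a possibility distribution $\pi$ and uncertain evidence bearing on a partition $\{\lambda_1,\dots,\lambda_n\}$ of the universe with prescribed possibility degrees $\alpha_1,\dots,\alpha_n$ (with $\max_i \alpha_i = 1$), the revised distribution $\pi'$ is
\[
\forall \omega \models \lambda_i : \quad
\pi'(\omega) \;=\; \alpha_i \cdot \pi(\omega \mid \lambda_i)
\;=\; \alpha_i \cdot \frac{\pi(\omega)}{\Pi(\lambda_i)} ,
\]
which guarantees $\Pi'(\lambda_i) = \alpha_i$ for every $i$ while preserving the conditional degrees $\Pi'(\phi \mid \lambda_i) = \Pi(\phi \mid \lambda_i)$ for $\phi \subseteq \lambda_i$, in direct analogy with the probabilistic rule. A min-based counterpart can be written similarly with min in place of product, although whether both conditions can always be satisfied in the qualitative setting is one of the questions such an analysis has to address.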