We address the problem of real-time remote tracking of a partially
observable Markov source in an energy harvesting system with an
unreliable communication channel. We consider both sampling and
transmission costs. Unlike most prior studies, which assume a fully
observable source, the sampling cost renders the source only partially
observable. The goal is to jointly optimize the sampling and
transmission policies for two semantic-aware metrics: i) a general
distortion measure and ii) the age of incorrect information (AoII). We
formulate a stochastic control problem. To solve the problem for each
metric, we cast the problem as a partially observable Markov decision
process (POMDP), which we transform into a belief MDP. Then, for the
distortion metric and for AoII under a perfect channel, we express the
belief as a
function of the age of information (AoI). This expression enables us to
effectively truncate the corresponding belief space and formulate a
finite-state MDP problem, which is solved using the relative value
iteration algorithm. For the AoII metric in the general setup, we
propose a deep reinforcement learning method to solve the belief MDP
problem. Simulation results demonstrate the effectiveness of the derived
policies and, in particular, reveal a non-monotonic switching-type
structure of the real-time optimal policy with respect to AoI.
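The relative value iteration step mentioned above can be sketched as follows. This is a minimal illustration of RVI for a generic average-cost finite-state MDP, not the paper's truncated belief MDP; the two-state toy model (a hypothetical "stay vs. reset" system with state-dependent holding costs) is an assumption introduced purely for demonstration.

```python
import numpy as np

# Relative value iteration (RVI) for an average-cost finite-state MDP.
# Iterates the Bellman operator and subtracts the value at a reference
# state each sweep, which keeps the iterates bounded and yields both the
# optimal average cost (gain) and a greedy stationary policy.
def relative_value_iteration(P, c, ref=0, tol=1e-9, max_iter=10_000):
    """P: (A, S, S) transition kernels; c: (S, A) one-step costs.
    Returns (gain estimate, relative value function h, greedy policy)."""
    S = c.shape[0]
    h = np.zeros(S)
    for _ in range(max_iter):
        # Q[s, a] = c[s, a] + E[h(next state) | s, a]
        Q = c + np.einsum('asj,j->sa', P, h)
        T = Q.min(axis=1)
        h_new = T - T[ref]  # pin the reference state's value at zero
        if np.max(np.abs(h_new - h)) < tol:
            h = h_new
            break
        h = h_new
    gain = T[ref]           # at the fixed point, T[ref] = g + h[ref] = g
    policy = Q.argmin(axis=1)
    return gain, h, policy

# Hypothetical toy MDP: action 0 = stay, action 1 = reset to state 0
# (extra cost 1); the per-step holding cost equals the state index.
P = np.array([[[1., 0.], [0., 1.]],   # action 0: stay put
              [[1., 0.], [1., 0.]]])  # action 1: reset to state 0
c = np.array([[0., 1.],
              [1., 2.]])
gain, h, policy = relative_value_iteration(P, c)
# Optimal behavior: reset once from state 1, then stay in state 0 forever,
# so the long-run average cost (gain) is 0.
```

The reference-state subtraction is what distinguishes RVI from plain value iteration: without it, the value function would grow linearly at rate equal to the average cost and never converge.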