This paper investigates the active fault tolerant control problem via an H∞ state feedback controller. Because of the limitations of Markov processes, we apply semi-Markov processes in the system modeling. Two random processes are involved in the system: the failure process and the fault detection process. Therefore, two corresponding semi-Markov processes are integrated into the closed-loop system model. This framework can accommodate various types of system faults, including randomly occurring sensor faults and actuator faults. A controller is designed to guarantee closed-loop stability with a prescribed noise/disturbance attenuation level, and it can be readily computed using convex optimization techniques. A vertical take-off and landing vehicle example with actuation faults is used to demonstrate the effectiveness of the proposed technique.

Stochastic jump processes are commonly used to represent the random variations of the system parameters. Normally, continuous-time finite-state Markov processes are used in continuous-time systems, where the system may jump at any time instant, whereas in discrete-time systems, discrete-time finite-state Markov processes are applied (to be precise, only the Markov transition kernels are used). Finite-state processes are sometimes also called discrete-state or countable-state processes. To practically formulate active fault tolerant control problems, two stochastic processes are involved in the system model: one is used to model the system faults and the other is used to represent the fault detection and identification (FDI) process [7]. The rationale behind the two-process model is that the FDI process brings random variations into the control law; in other words, the FDI process modifies the system dynamics by realizing the control strategy reconfiguration [8]. The two-process model was proposed in [9], where necessary and sufficient conditions were provided for systems with single component failures. For systems with multiple failure occurrences, a stochastic stability analysis was presented in [10]. Further results accounting for system uncertainties, detection delays, and noise/disturbances have been reported in [11]. Besides the stability results, optimal controller design problems have been studied for fault tolerant control systems modeled by Markov processes (see [12] and the references therein).

Most of the aforementioned works deploy continuous-time or discrete-time Markov processes to model the system failures. To satisfy the requirements of Markov processes, the lifetimes of the system components must be assumed to be exponentially distributed. However, such an assumption may not be appropriate in practice for two reasons. Firstly, in reliability engineering, a typical transition/failure rate function has a bathtub shape rather than a constant value [13,14]. With such transition/failure rates, the system components would be more likely to fail in the 'infant' or 'senior' stage. For a more comprehensive literature review...
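To make this point explicit, the standard relation between a mode's failure (hazard) rate and its sojourn-time distribution can be written out; the notation below ($\lambda_i$, $F_i$, $h_i$, and the Weibull parameters $k$, $\eta$) is generic and is not tied to the system matrices of this paper. For a continuous-time Markov chain, the transition rate out of mode $i$ is a constant $\lambda_i$, which forces the sojourn time $T_i$ in that mode to be exponentially distributed:
$$\Pr\{T_i > t\} = e^{-\lambda_i t}, \qquad h_i(t) = \frac{f_i(t)}{1 - F_i(t)} \equiv \lambda_i .$$
A semi-Markov process instead admits an arbitrary sojourn distribution $F_i(t)$ and hence a time-varying hazard rate. For example, the Weibull distribution $F_i(t) = 1 - e^{-(t/\eta)^k}$ gives
$$h_i(t) = \frac{k}{\eta}\Big(\frac{t}{\eta}\Big)^{k-1},$$
which is decreasing for $k < 1$ (the 'infant' stage) and increasing for $k > 1$ (the 'senior', wear-out stage), so combining such phases reproduces the bathtub profile that a constant rate cannot capture.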
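As a purely illustrative companion to the two-process description above, the following minimal Python sketch simulates a hypothetical fault process with Weibull (non-exponential) sojourn times together with an FDI process that tracks it after a random detection delay. The modes, transition probabilities, sojourn parameters, and detection model are all placeholder assumptions, not the construction used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

FAULT_MODES = ["nominal", "actuator_fault", "sensor_fault"]
# Hypothetical transition probabilities of the embedded chain of the fault process.
P_FAULT = np.array([[0.0, 0.6, 0.4],
                    [0.7, 0.0, 0.3],
                    [0.8, 0.2, 0.0]])
# Hypothetical Weibull sojourn parameters (shape k, scale lam) per fault mode;
# k != 1 gives the non-exponential holding times that motivate the semi-Markov model.
WEIBULL = {0: (0.8, 50.0), 1: (1.5, 10.0), 2: (1.5, 10.0)}

def simulate(t_end=200.0):
    t, fault, fdi = 0.0, 0, 0
    history = []                                 # (time, true fault mode, FDI-declared mode)
    while t < t_end:
        k, lam = WEIBULL[fault]
        sojourn = lam * rng.weibull(k)           # semi-Markov holding time in the current mode
        delay = rng.exponential(1.0)             # random FDI detection delay
        history.append((t, fault, fdi))          # before detection, control still uses old FDI mode
        if delay < sojourn:
            fdi = fault                          # FDI catches up (perfect detection assumed here)
            history.append((t + delay, fault, fdi))
        t += sojourn
        fault = rng.choice(3, p=P_FAULT[fault])  # embedded-chain jump to the next fault mode
    return history

for time, fault, fdi in simulate(60.0):
    print(f"t = {time:7.2f}  fault = {FAULT_MODES[fault]:15s}  FDI mode = {FAULT_MODES[fdi]}")
```

The printed trace shows the key feature of the two-process model: the controller's mode (the FDI process) lags behind and can differ from the true fault mode, which is exactly the random variation in the control law described above.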
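To illustrate what "readily solved by convex optimization techniques" means in the H∞ state-feedback setting, the sketch below solves the standard bounded-real-lemma synthesis LMI, with the change of variables $X = P^{-1}$, $W = KX$, for a single fixed mode. This is only an assumed, generic illustration: the plant matrices are hypothetical placeholders, and the paper's actual conditions are mode-dependent and coupled across the semi-Markov modes.

```python
import cvxpy as cp
import numpy as np

# Hypothetical plant: dx/dt = A x + B1 w + B2 u,  z = C1 x + D12 u
A   = np.array([[0.0, 1.0], [-2.0, -0.5]])
B1  = np.array([[0.0], [1.0]])           # disturbance input
B2  = np.array([[0.0], [1.0]])           # control input
C1  = np.array([[1.0, 0.0]])
D12 = np.array([[0.1]])

n, m, p, q = 2, 1, 1, 1                  # states, inputs, disturbances, performance outputs
X = cp.Variable((n, n), symmetric=True)  # X = P^{-1}
W = cp.Variable((m, n))                  # W = K X
gamma = cp.Variable(nonneg=True)         # H-infinity attenuation level

# Bounded-real-lemma synthesis LMI after the change of variables; u = K x with K = W X^{-1}.
M = cp.bmat([
    [A @ X + X @ A.T + B2 @ W + W.T @ B2.T, B1,                 (C1 @ X + D12 @ W).T],
    [B1.T,                                  -gamma * np.eye(p), np.zeros((p, q))    ],
    [C1 @ X + D12 @ W,                      np.zeros((q, p)),   -gamma * np.eye(q)  ],
])
eps = 1e-6
constraints = [
    X >> eps * np.eye(n),
    0.5 * (M + M.T) << -eps * np.eye(n + p + q),  # M is symmetric by construction; symmetrizing is a no-op
]

prob = cp.Problem(cp.Minimize(gamma), constraints)
prob.solve()                              # requires an SDP-capable solver, e.g. SCS
K = W.value @ np.linalg.inv(X.value)
print("attenuation level gamma =", gamma.value)
print("state-feedback gain K =", K)
```

Because gamma enters the LMI linearly, minimizing the attenuation level is itself a semidefinite program, which is the sense in which such controllers are obtained by off-the-shelf convex optimization.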