The method of resolving functions is used to prove general sufficient conditions for the finiteness of the guaranteed time of reaching a cylindrical terminal set by a quasilinear conflict-controlled process with random perturbations. Conditions for the finiteness almost everywhere and the finiteness with positive probability of the guaranteed reaching time are obtained for a process with a simple matrix.

The theory of dynamic games is well developed. The available approaches, such as Pontryagin's direct methods [1], Krasovskii's extremal aiming principle [2], Pshenichnyi's method of semigroup operators [3], the technique based on the basic equations of Isaacs' differential-game theory [4], the method of resolving functions [5], and other efficient procedures, make it possible to analyze wide classes of conflict-controlled processes for the game approach of trajectories. The evasion methods are also diverse and well established: the Pontryagin-Mishchenko method of evasion maneuver, the methods of constant and variable directions, the method of invariant subspaces, and the recursive method (see [6] for a review).

Therefore, there naturally arises the issue of introducing not only deterministic [7] but also stochastic uncertainty into the model of a conflict-controlled process. This can be done in different ways. For example, the studies [8-10] consider the situation where not the initial state of a process but only its distribution function is known, which leads to the Fokker-Planck-Kolmogorov equation [8, 10, 11], whose solution is the distribution function of the current state of the process. The optimization of such continuous processes is exemplified in [8] (based on Pontryagin's maximum principle) and in [10], where the Milyutin-Dubovitsky necessary extremum conditions are employed. Since solving the Fokker-Planck-Kolmogorov equation in the continuous case is a challenge, the initial distribution or the time is often discretized. The complete pattern of possible formalizations in this class is presented in [11]. Such studies for game formulations are exemplified in [12, 13].

The above-mentioned problems with stochastic uncertainty are often called search problems [14]. The studies [15-18] propose a bilinear search model in which the stochastic transition matrix acts as the control, the number of states is finite, and the time is discrete. The performance criterion is the detection probability or the average time of detection of an object. The discrete maximum principle or dynamic programming makes it possible to optimize the search process, including the participation of groups of moving objects under various information assumptions.

Uncertainty can also be introduced into the model through mixed strategies of the players [19, 20]. Various classes of stochastic games are studied in [21-26]. Apparently, one of the most natural ways to study stochastic conflict-controlled processes (stochastic differential games) is to consider the stochastic Ito equation with control in the original formulation or to introduce a random perturbation into the right-hand side of the deterministic ...
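To fix the setting, the following is a minimal illustrative sketch of such a perturbed conflict-controlled process, assuming the quasilinear form standard in the resolving-function literature; the symbols z, A, φ, U, V, σ, W, M₀, M are illustrative notation rather than that of the works cited:

\[
dz(t) = \bigl(Az(t) + \varphi(u(t), v(t))\bigr)\,dt + \sigma(t)\,dW(t), \qquad u(t)\in U,\ v(t)\in V,
\]

where W(t) is a standard Wiener process modeling the random perturbation, u is the control of the pursuer, and v is the control of the evader. The cylindrical terminal set is of the form \(M^{*} = M_{0} + M\), where \(M_{0}\) is a linear subspace and \(M\) is a convex compact set in its orthogonal complement. In these terms, the guaranteed reaching time is the first moment at which the pursuer can ensure \(z(t)\in M^{*}\) against any admissible behavior of the evader, and the conditions discussed above concern its finiteness almost everywhere or with positive probability.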