Abstract. In this paper a control problem is solved for a linear stochastic system driven by a noise process that is an arbitrary zero mean, square integrable stochastic process with continuous sample paths, with a cost functional that is quadratic in the system state and the control. An optimal control is given explicitly as the sum of the well-known linear feedback control for the associated deterministic linear-quadratic control problem and the prediction of the response of the system to the future noise process. The optimal cost is also given. The special case of a noise process that is an arbitrary standard fractional Brownian motion is treated explicitly, with an explicit expression for the prediction of the future response of the system to the noise process that is used in the optimal control.

1. Introduction. The control of a linear stochastic system with a Brownian motion and a cost functional that is quadratic in the state and the control, which is often called the linear-quadratic Gaussian (LQG) control problem, is probably the best-known stochastic control problem for continuous time systems (e.g., [5]). The discrete time LQG control problem was solved in the late 1950s and early 1960s (e.g., [10, 20]), and shortly afterward the continuous time LQG problem was solved (e.g., [21]). These solutions are closely related to the corresponding deterministic linear-quadratic control problem, whose solution has its origins in the 19th century in the work of Lagrange and others (cf. [6]). For the continuous time LQG problem an optimal control is a linear feedback control that is identical to an optimal control for the corresponding deterministic linear-quadratic control problem in which the Brownian motion is replaced by the zero process. The optimal cost differs from the deterministic problem's optimal cost only by the integral of a function of time. However, the usual methods for solving these two problems are quite distinct. The deterministic linear-quadratic control problem is often solved via a first order nonlinear partial differential equation (the Hamilton-Jacobi equation), while the stochastic (LQG) control problem is often solved via a second order nonlinear partial differential equation (the Hamilton-Jacobi-Bellman equation). From the form of these equations, or by conjecturing a solution for both of them, it follows that both solutions are essentially the same quadratic function. However, no intrinsic reason is provided for why the two optimal controls are the same and why the optimal costs differ only by the integral of a function of time. In this paper this asymmetry of approaches for the deterministic and the stochastic problems is removed by applying a method of completion of squares from deterministic linear control and the use of conditional expectation to solve the stochastic problem.
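To indicate the form of the result described above, the following is a minimal sketch in standard linear-quadratic notation; the symbols $A$, $B$, $Q$, $R$, $M$, $P$, and $\varphi$ are illustrative assumptions here and are not taken from the notation fixed later in the paper. Consider a state equation
\[
dX(t) = \big( A X(t) + B U(t) \big)\, dt + dW(t), \qquad X(0) = x_0,
\]
where $(W(t),\, t \geq 0)$ is a zero mean, square integrable noise process with continuous sample paths, together with the quadratic cost
\[
J(U) = E\left[ \int_0^T \big( \langle Q X(t), X(t) \rangle + \langle R U(t), U(t) \rangle \big)\, dt + \langle M X(T), X(T) \rangle \right].
\]
The optimal control then has the two-term structure
\[
U^*(t) = -R^{-1} B^{\mathsf{T}} \big( P(t) X(t) + \varphi(t) \big),
\]
where $P$ solves the Riccati equation of the associated deterministic linear-quadratic problem and $\varphi(t)$ is a conditional expectation, given the information available at time $t$, that predicts the response of the system to the future noise. When the noise is identically zero the prediction term vanishes and the classical deterministic feedback is recovered, consistent with the observation above that the two optimal controls coincide.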