Abstract: Fluid motions are highly nonlinear and non-stationary, with turbulence occurring and developing at different length and time scales. In real-life observation, the multiscale flow creates different visual impacts depending on its distance to the viewer. We propose a new fluid simulation framework that adaptively allocates computational resources according to human visual perception. First, a 3D empirical mode decomposition scheme is developed to obtain the velocity spectrum of the turbulent flow. Then, depending on the distance to the viewer, the fluid domain is divided into a sequence of nested simulation partitions. Finally, the multiscale fluid motions revealed by the velocity spectrum are distributed non-uniformly across these view-dependent partitions, and the velocity fields defined on the different partitions are solved separately with different grid sizes and time steps. The flow is thus solved at mixed spatial and temporal resolutions: higher-frequency motions closer to the viewer are solved at higher resolutions, and vice versa. The new simulator makes better use of the available computing power, producing visually plausible results with realistic fine-scale details more efficiently. It is particularly suitable for large scenes with the viewer inside the fluid domain. Moreover, because high-frequency fluid motions are distinguished from low-frequency motions during the simulation, numerical dissipation is effectively reduced.

Index Terms: fluid simulation, Hilbert-Huang transform, fluid velocity spectrum, view-dependent partition.
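To make the view-dependent allocation concrete, the following is a minimal sketch, not the paper's implementation: it assigns grid cells to nested partitions by viewer distance and coarsens both grid spacing and time step with each level. The function names, the radius-doubling rule, the fixed dx/dt ratio, and all parameter values are illustrative assumptions; the paper's 3D empirical mode decomposition and the actual solver are not shown.

import numpy as np

def build_view_partitions(cell_centers, viewer_pos, r0, num_levels):
    """Return a partition index per cell; level 0 is nearest the viewer.

    Cells within radius r0 go to level 0; each successive level doubles
    the radius, so coarser levels cover exponentially larger shells.
    (Hypothetical scheme, chosen only for illustration.)
    """
    dist = np.linalg.norm(cell_centers - viewer_pos, axis=1)
    # log2(dist/r0) is 0 for dist <= r0, then one level per doubling.
    levels = np.floor(np.log2(np.maximum(dist / r0, 1.0))).astype(int)
    return np.clip(levels, 0, num_levels - 1)

def partition_resolution(level, dx0, dt0):
    """Coarsen space and time together with distance: each level doubles
    the grid spacing and the time step, keeping a fixed CFL-like ratio."""
    return dx0 * 2**level, dt0 * 2**level

# Example: cell centers of a small 3D grid, viewer inside the domain.
axes = [np.linspace(0.0, 1.0, 16)] * 3
cells = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, 3)
levels = build_view_partitions(cells, viewer_pos=np.array([0.5, 0.5, 0.5]),
                               r0=0.1, num_levels=4)
for lv in range(4):
    dx, dt = partition_resolution(lv, dx0=1 / 128, dt0=1 / 240)
    print(f"level {lv}: {np.sum(levels == lv):5d} cells, dx={dx:.4f}, dt={dt:.5f}")

Under these assumptions, the low-level (near-viewer) partitions receive the fine grid and small time step while distant shells run coarser and less often, which is the resource-allocation pattern the abstract describes.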